Unnamed: 0,id,type,created_at,repo,repo_url,action,title,labels,body

147456,19522820647.0,IssuesEvent,2021-12-29 22:26:33,swagger-api/swagger-codegen,https://api.github.com/repos/swagger-api/swagger-codegen,opened,CVE-2020-36185 (High) detected in multiple libraries,security vulnerability

## CVE-2020-36185 - High Severity Vulnerability
Vulnerable Libraries - jackson-databind-2.7.8.jar, jackson-databind-2.8.9.jar, jackson-databind-2.6.4.jar, jackson-databind-2.8.8.jar, jackson-databind-2.4.5.jar

jackson-databind-2.7.8.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.7.8.jar

Dependency Hierarchy:
- lagom-scaladsl-api_2.11-1.3.8.jar (Root Library)
  - lagom-api_2.11-1.3.8.jar
    - play_2.11-2.5.13.jar
      - :x: **jackson-databind-2.7.8.jar** (Vulnerable Library)

jackson-databind-2.8.9.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.8.9.jar

Dependency Hierarchy:
- play-guice_2.12-2.6.3.jar (Root Library)
  - play_2.12-2.6.3.jar
    - :x: **jackson-databind-2.8.9.jar** (Vulnerable Library)

jackson-databind-2.6.4.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /samples/client/petstore/java/jersey1/build.gradle

Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.6.4/f2abadd10891512268b16a1a1a6f81890f3e2976/jackson-databind-2.6.4.jar

Dependency Hierarchy:
- :x: **jackson-databind-2.6.4.jar** (Vulnerable Library)

jackson-databind-2.8.8.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.8.8.jar

Dependency Hierarchy:
- finch-circe_2.11-0.15.1.jar (Root Library)
  - circe-jackson28_2.11-0.8.0.jar
    - :x: **jackson-databind-2.8.8.jar** (Vulnerable Library)

jackson-databind-2.4.5.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /samples/client/petstore/scala/build.gradle

Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.4.5/c69c0cb613128c69d84a6a0304ddb9fce82e8242/jackson-databind-2.4.5.jar

Dependency Hierarchy:
- swagger-core-1.5.8.jar (Root Library)
  - :x: **jackson-databind-2.4.5.jar** (Vulnerable Library)
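Most of these copies arrive transitively from the sample-project builds. To confirm which build configuration resolves a vulnerable copy, Gradle's dependency-insight report can be run inside the affected sample; a small sketch (the configuration name is an assumption about how the samples are wired):

```sh
# Show every dependency path that resolves to jackson-databind
# in the given configuration of the current project.
./gradlew dependencyInsight \
  --dependency com.fasterxml.jackson.core:jackson-databind \
  --configuration runtimeClasspath
```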

Found in HEAD commit: 4b7a8d7d7384aa6a27d6309c35ade0916edae7ed

Found in base branch: master

Vulnerability Details

FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.datasources.SharedPoolDataSource.
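Like the other jackson-databind gadget CVEs, this one is only reachable when polymorphic default typing is enabled, because attacker-controlled JSON can then name an arbitrary gadget class such as `SharedPoolDataSource`. A minimal Java sketch of the dangerous configuration, for illustration only; applications that never enable default typing are generally not exposed to this gadget chain:

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class DefaultTypingExample {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Pre-2.10 API; this setting is what makes gadget CVEs like
        // CVE-2020-36185 reachable, since the incoming JSON may then
        // choose the concrete class to instantiate.
        mapper.enableDefaultTyping();
        // Deserializing untrusted JSON into a broad target type (Object,
        // Serializable, ...) lets the payload select a gadget class.
        Object value = mapper.readValue(args[0], Object.class);
        System.out.println(value);
    }
}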

Publish Date: 2021-01-06

URL: [CVE-2020-36185](https://nvd.nist.gov/vuln/detail/CVE-2020-36185)

CVSS 3 Score Details (8.1)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: High
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 scores, see https://www.first.org/cvss/.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/FasterXML/jackson-databind/issues/2998

Release Date: 2021-01-06

Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8
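Since several of the affected samples resolve jackson-databind from build.gradle files, one hedged option until the direct dependencies (swagger-core, play, etc.) are bumped is to force the patched version across all configurations; a Gradle (Groovy DSL) sketch:

```groovy
// build.gradle -- force the patched jackson-databind for every configuration.
configurations.all {
    resolutionStrategy {
        force 'com.fasterxml.jackson.core:jackson-databind:2.9.10.8'
    }
}
```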

",True,"CVE-2020-36185 (High) detected in multiple libraries - ## CVE-2020-36185 - High Severity Vulnerability
Vulnerable Libraries - jackson-databind-2.7.8.jar, jackson-databind-2.8.9.jar, jackson-databind-2.6.4.jar, jackson-databind-2.8.8.jar, jackson-databind-2.4.5.jar

jackson-databind-2.7.8.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.7.8.jar

Dependency Hierarchy: - lagom-scaladsl-api_2.11-1.3.8.jar (Root Library) - lagom-api_2.11-1.3.8.jar - play_2.11-2.5.13.jar - :x: **jackson-databind-2.7.8.jar** (Vulnerable Library)

jackson-databind-2.8.9.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.8.9.jar

Dependency Hierarchy: - play-guice_2.12-2.6.3.jar (Root Library) - play_2.12-2.6.3.jar - :x: **jackson-databind-2.8.9.jar** (Vulnerable Library)

jackson-databind-2.6.4.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /samples/client/petstore/java/jersey1/build.gradle

Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.6.4/f2abadd10891512268b16a1a1a6f81890f3e2976/jackson-databind-2.6.4.jar,/aches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.6.4/f2abadd10891512268b16a1a1a6f81890f3e2976/jackson-databind-2.6.4.jar

Dependency Hierarchy: - :x: **jackson-databind-2.6.4.jar** (Vulnerable Library)

jackson-databind-2.8.8.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.8.8.jar

Dependency Hierarchy: - finch-circe_2.11-0.15.1.jar (Root Library) - circe-jackson28_2.11-0.8.0.jar - :x: **jackson-databind-2.8.8.jar** (Vulnerable Library)

jackson-databind-2.4.5.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /samples/client/petstore/scala/build.gradle

Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.4.5/c69c0cb613128c69d84a6a0304ddb9fce82e8242/jackson-databind-2.4.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.4.5/c69c0cb613128c69d84a6a0304ddb9fce82e8242/jackson-databind-2.4.5.jar

Dependency Hierarchy: - swagger-core-1.5.8.jar (Root Library) - :x: **jackson-databind-2.4.5.jar** (Vulnerable Library)

Found in HEAD commit: 4b7a8d7d7384aa6a27d6309c35ade0916edae7ed

Found in base branch: master

Vulnerability Details

FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.datasources.SharedPoolDataSource.

Publish Date: 2021-01-06

URL: CVE-2020-36185

CVSS 3 Score Details (8.1)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/FasterXML/jackson-databind/issues/2998

Release Date: 2021-01-06

Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8

",0,cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library home wss scanner cache com fasterxml jackson core jackson databind bundles jackson databind jar dependency hierarchy lagom scaladsl api jar root library lagom api jar play jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library home wss scanner cache com fasterxml jackson core jackson databind bundles jackson databind jar dependency hierarchy play guice jar root library play jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file samples client petstore java build gradle path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar aches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library home wss scanner cache com fasterxml jackson core jackson databind bundles jackson databind jar dependency hierarchy finch circe jar root library circe jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file samples client petstore scala build gradle path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy swagger core jar root library x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp datasources sharedpooldatasource publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com lightbend lagom lagom scaladsl api com lightbend lagom lagom api com typesafe play play com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind isbinary false packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency true dependencytree com typesafe play play guice com typesafe play play com fasterxml jackson 
36534,15022780078.0,IssuesEvent,2021-02-01 17:22:14,hashicorp/terraform-provider-aws,https://api.github.com/repos/hashicorp/terraform-provider-aws,closed,Getting unknown block type advanced_backup_setting,service/backup

I am using Terraform v0.12.14 and I am getting an "unknown block type **advanced_backup_setting**" error while using it in my tf code. Can someone please help? I have verified it is using provider.aws version 3.26.

```hcl
resource "aws_backup_plan" "test_backup" {
  name = "test_backup_plan"

  rule {
    rule_name         = "test_backup_rule"
    target_vault_name = aws_backup_vault.test.name
    schedule          = "cron(0 16 * * ? *)"
  }

  advanced_backup_setting {
    backup_options = {
      WindowsVSS = "enabled"
    }
    resource_type = "EC2"
  }
}
```
99693,8708527890.0,IssuesEvent,2018-12-06 11:10:15,KratosMultiphysics/Kratos,https://api.github.com/repos/KratosMultiphysics/Kratos,closed,[Testing][Adjoint] Missing header here (with the license and the author),Kratos Core Licencing Testing

https://github.com/KratosMultiphysics/Kratos/blob/c5b413682acd83040b72e26ab64eab21139bfac6/kratos/tests/strategies/schemes/test_residual_based_adjoint_bossak_scheme.cpp#L1

Besides, wait until the cpp tests are moved to a common folder.

5880,21553443207.0,IssuesEvent,2022-04-30 02:43:07,o3de/o3de,https://api.github.com/repos/o3de/o3de,opened,AR Bug Report - Test AutomatedTesting::MultiplayerTests_Main failed,kind/bug needs-triage kind/automation

**Describe the bug**
`AutomatedTesting::MultiplayerTests_Main.main::TEST_RUN` failed while running AR.

**Failed Jenkins Job Information:**
platform linux -- Python 3.7.12, pytest-5.3.2, py-1.11.0, pluggy-0.13.1 -- /data/workspace/o3de/python/runtime/python-3.7.12-rev2-linux/python/bin/python

```
[2022-04-29T10:46:42.179Z] E Failed: Test test_Multiplayer_AutoComponent_NetworkInput:
[2022-04-29T10:46:42.179Z] E Test FAILED
[2022-04-29T10:46:42.179Z] E ------------
[2022-04-29T10:46:42.179Z] E | Output |
[2022-04-29T10:46:42.179Z] E ------------
[2022-04-29T10:46:42.179Z] E Starting test Multiplayer_AutoComponent_NetworkInput...
[2022-04-29T10:46:42.179Z] E Test Multiplayer_AutoComponent_NetworkInput finished.
[2022-04-29T10:46:42.179Z] E Report:
[2022-04-29T10:46:42.179Z] E [SUCCESS] Success: Unexpected line not found: LaunchEditorServer failed! The ServerLauncher binary is missing!
[2022-04-29T10:46:42.179Z] E [SUCCESS] Success: Found expected line: Editor has connected to the editor-server.
[2022-04-29T10:46:42.179Z] E [SUCCESS] Success: Found expected line: Editor is sending the editor-server the level data packet.
[2022-04-29T10:46:42.179Z] E [FAILED ] Failure: Failed to find expected line: Logger: Editor Server completed receiving the editor's level assets, responding to Editor...
[2022-04-29T10:46:42.179Z] E EXCEPTION raised:
[2022-04-29T10:46:42.179Z] E Traceback (most recent call last):
[2022-04-29T10:46:42.179Z] E   File "/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/utils.py", line 305, in start_test
[2022-04-29T10:46:42.179Z] E     test_function()
[2022-04-29T10:46:42.179Z] E   File "/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/Multiplayer/tests/Multiplayer_AutoComponent_NetworkInput.py", line 57, in Multiplayer_AutoComponent_NetworkInput
[2022-04-29T10:46:42.179Z] E     helper.multiplayer_enter_game_mode(Tests.enter_game_mode)
[2022-04-29T10:46:42.179Z] E   File "/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/utils.py", line 178, in multiplayer_enter_game_mode
[2022-04-29T10:46:42.179Z] E     TestHelper.succeed_if_log_line_found("EditorServer", "Logger: Editor Server completed receiving the editor's level assets, responding to Editor...", section_tracer.prints, 5.0)
[2022-04-29T10:46:42.179Z] E   File "/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/utils.py", line 138, in succeed_if_log_line_found
[2022-04-29T10:46:42.179Z] E     Report.critical_result(("Found expected line: " + line, "Failed to find expected line: " + line), TestHelper.find_line(window, line, print_infos))
[2022-04-29T10:46:42.179Z] E   File "/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/utils.py", line 410, in critical_result
[2022-04-29T10:46:42.179Z] E     TestHelper.fail_fast(fast_fail_message)
[2022-04-29T10:46:42.179Z] E   File "/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/utils.py", line 213, in fail_fast
[2022-04-29T10:46:42.179Z] E     raise FailFast()
[2022-04-29T10:46:42.179Z] E editor_python_test_tools.utils.FailFast
[2022-04-29T10:46:42.179Z] E Test result: FAILURE
```

**Attachments**
[log.txt](https://github.com/o3de/o3de/files/8595521/log.txt)
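The failure happens inside a log-polling helper with a 5-second timeout. A hedged Python sketch of the mechanism involved, polling captured log lines for an expected substring until a deadline; this is an illustration of the pattern, not O3DE's actual utils.py:

```python
import time

def wait_for_log_line(expected: str, get_lines, timeout_s: float = 5.0) -> bool:
    """Poll captured log lines until `expected` appears or the timeout passes.

    Illustrative only; `get_lines` is a hypothetical callable returning the
    list of log lines captured so far.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if any(expected in line for line in get_lines()):
            return True
        time.sleep(0.1)  # avoid busy-waiting between polls
    return False
```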
9271,27832546230.0,IssuesEvent,2023-03-20 06:46:44,pingcap/tiflow,https://api.github.com/repos/pingcap/tiflow,closed,Table sink close is stuck and CDC changefeed does not advance,type/bug severity/critical found/automation area/ticdc affects-6.5 affects-6.6

### What did you do?

1. Create a MySQL changefeed.
2. Run the TPC-C prepare workload.
3. Scale TiKV out from 3 to 7 nodes.
4. Scale CDC in from 3 to 1 node.
5. Run TPC-C and, at the same time, scale CDC out from 3 to 6 and scale TiKV in from 7 back to 3.
6. Wait for the checkpoint to advance.

### What did you expect to see?

The changefeed advances.

### What did you see instead?

The changefeed does not advance.

![image](https://user-images.githubusercontent.com/7403864/224641616-be0a4272-598c-471c-ad08-002c93e9cfcb.png)
![image](https://user-images.githubusercontent.com/7403864/224640894-8e29829e-f24b-49c0-84fe-e24dc707697a.png)

### Versions of the cluster

cdc version: ["Welcome to Change Data Capture (CDC)"] [release-version=v6.7.0-alpha] [git-hash=04f7e22aaa07dfedff853fe9b7a08675bbbf0fe1] [git-branch=heads/refs/tags/v6.7.0-alpha] [utc-build-time="2023-03-11 11:32:23"] [go-version="go version go1.20.2 linux/amd64"] [failpoint-build=false]

1269,9815410570.0,IssuesEvent,2019-06-13 12:35:58,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,closed,Support for multibranch pipeline,[zube]: In Review automation

As an effort to standardise CI/CD work across the Observability teams, alongside using the JJBB framework, we want to add support for Jenkins' multibranch pipelines.
7468,24944522212.0,IssuesEvent,2022-10-31 22:13:21,ericcornelissen/webmangler,https://api.github.com/repos/ericcornelissen/webmangler,closed,Address use of deprecated GitHub Actions command `set-output`,automation

# Task

## Description

All GitHub Actions workflows should be updated to not rely on `set-output` or `save-state` which, per the following warnings that may be seen on current workflow runs, have been deprecated:

> The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/

At the time of writing, this warning can be seen on runs of the following workflows of this project:

- [`code-analysis.yml`](https://github.com/ericcornelissen/webmangler/blob/7999a8bd1c4e653740f78715b3573d04fc39fa4e/.github/workflows/code-analysis.yml)
- [`code-checks.yml`](https://github.com/ericcornelissen/webmangler/blob/7999a8bd1c4e653740f78715b3573d04fc39fa4e/.github/workflows/code-checks.yml)

### `code-analysis.yml`

This is due to the use of `@actions/core@v1.9.1` in v2.1.27 of the CodeQL Action. This is already fixed in the CodeQL Action, but not yet released, and will be addressed automatically by the regular Renovate Pull Requests.

### `code-checks.yml`

This is due to the use of `echo "::set-output name=xxx::yyy"` in embedded scripts in the ["Determine jobs" job](https://github.com/ericcornelissen/webmangler/blob/7999a8bd1c4e653740f78715b3573d04fc39fa4e/.github/workflows/code-checks.yml#L11-L107). This must be addressed manually.

#### Suggested solution

Based on https://github.com/github/codeql-action/compare/v2.1.27...v2.1.28 it looks like changing the workflow as follows _should_ work:

```diff
- echo "::set-output name=xxx::$yyy"
+ echo "xxx=$yyy" >> $GITHUB_OUTPUT
```

## Related

- #152
- #392
- #399
- #412
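For completeness, multiline values need the heredoc form of the environment-file syntax; a short Bash sketch (the step output names are illustrative):

```sh
# Single-line value:
echo "jobs=$jobs" >> "$GITHUB_OUTPUT"

# Multiline value: use a heredoc-style delimiter.
{
  echo "report<<EOF"
  echo "$multiline_report"
  echo "EOF"
} >> "$GITHUB_OUTPUT"
```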
9936,30783065865.0,IssuesEvent,2023-07-31 11:24:36,deinstapel/eks-rolling-update,https://api.github.com/repos/deinstapel/eks-rolling-update,opened,GitHub actions from forks will currently fail,automation

Creating a PR from a fork will currently fail in GitHub Actions. This is due to the `github.actor` being unauthorized to push to the container registry.

As we manually approve PRs as safe before running GitHub Actions, we should use a dedicated bot account with a PAT instead.
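A minimal sketch of what the proposed fix could look like in a workflow, logging in to the registry with a bot PAT stored as a secret rather than with `github.actor`; the bot user name and the `BOT_GHCR_PAT` secret are hypothetical:

```yaml
# Sketch of a publish job that authenticates as a dedicated bot account.
on: push
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Log in to GHCR as the bot account instead of github.actor
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: some-bot-account           # hypothetical dedicated bot user
          password: ${{ secrets.BOT_GHCR_PAT }} # hypothetical PAT secret
```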
- Add this documentation within the `Management` section.",0,add documentation on release process add versions of acquisition and analysis packages to release notes add this documentation within the management section ,0 392353,26935956376.0,IssuesEvent,2023-02-07 20:40:33,onflow/flow-cli,https://api.github.com/repos/onflow/flow-cli,closed,Include a clarification about how to provide the arguments of a transaction like a json file,Documentation,"### Issue To Be Solved How to pass a json file for providing the arguments of a transaction using flow-cli is not documented ### Suggest A Solution Update the [docs](https://developers.flow.com/tools/flow-cli/send-signed-transactions) with clear instructions about how to do it `flow transactions send {filename} --args-json ""$(cat myfile.json)""` || `flow transactions send {filename}""$(wget -O- -q https://raw.githubusercontent.com/onflow/some-flow-repo/some-commit-hash-or-tag/path-to/arguments.json)"" `",1.0,"Include a clarification about how to provide the arguments of a transaction like a json file - ### Issue To Be Solved How to pass a json file for providing the arguments of a transaction using flow-cli is not documented ### Suggest A Solution Update the [docs](https://developers.flow.com/tools/flow-cli/send-signed-transactions) with clear instructions about how to do it `flow transactions send {filename} --args-json ""$(cat myfile.json)""` || `flow transactions send {filename}""$(wget -O- -q https://raw.githubusercontent.com/onflow/some-flow-repo/some-commit-hash-or-tag/path-to/arguments.json)"" `",0,include a clarification about how to provide the arguments of a transaction like a json file issue to be solved how to pass a json file for providing the arguments of a transaction using flow cli is not documented suggest a solution update the with clear instructions about how to do it flow transactions send filename args json cat myfile json flow transactions send filename wget o q ,0 8782,27172243566.0,IssuesEvent,2023-02-17 20:35:23,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,"Embeddable URLs from the ""driveItem: preview"" endpoint fail to load ~20% of the time.",status:investigating Needs: Attention :wave: area:Previewers automation:Closed," #### Category - [ ] Question - [ ] Documentation issue - [ x] Bug #### Expected or Desired Behavior Embeddable URL that should display the contents of the sharepoint office file in the web browser (as ready only). #### Observed Behavior Fails about 20% of the time with JS errors on the viewer (through the link returned). When this occurs, the viewer does not load, and the page needs to be refreshed. #### Steps to Reproduce Call the ""driveItem: preview"" endpoint, receive an embeddable link to display within an iframe. Fails to load ~20% of the time. Thank you. ",1.0,"Embeddable URLs from the ""driveItem: preview"" endpoint fail to load ~20% of the time. - #### Category - [ ] Question - [ ] Documentation issue - [ x] Bug #### Expected or Desired Behavior Embeddable URL that should display the contents of the sharepoint office file in the web browser (as ready only). #### Observed Behavior Fails about 20% of the time with JS errors on the viewer (through the link returned). When this occurs, the viewer does not load, and the page needs to be refreshed. #### Steps to Reproduce Call the ""driveItem: preview"" endpoint, receive an embeddable link to display within an iframe. Fails to load ~20% of the time. Thank you. 
",1,embeddable urls from the driveitem preview endpoint fail to load of the time category question documentation issue bug expected or desired behavior embeddable url that should display the contents of the sharepoint office file in the web browser as ready only observed behavior fails about of the time with js errors on the viewer through the link returned when this occurs the viewer does not load and the page needs to be refreshed steps to reproduce call the driveitem preview endpoint receive an embeddable link to display within an iframe fails to load of the time thank you ,1 5755,20981764623.0,IssuesEvent,2022-03-28 20:42:38,willowtreeapps/vocable-ios,https://api.github.com/repos/willowtreeapps/vocable-ios,closed,[Test] Re-enable CustomCategoriesTests,automation,"We accidentally disabled the CustomCategoriesTests somehow. We should re-enable them so that we can get the test results during builds. AC: All tests in CustomCategoriesTests are enabled and run during builds. ",1.0,"[Test] Re-enable CustomCategoriesTests - We accidentally disabled the CustomCategoriesTests somehow. We should re-enable them so that we can get the test results during builds. AC: All tests in CustomCategoriesTests are enabled and run during builds. ",1, re enable customcategoriestests we accidentally disabled the customcategoriestests somehow we should re enable them so that we can get the test results during builds ac all tests in customcategoriestests are enabled and run during builds ,1 350149,31857391762.0,IssuesEvent,2023-09-15 08:28:28,microsoft/AzureStorageExplorer,https://api.github.com/repos/microsoft/AzureStorageExplorer,opened,Only 1000 entities are loaded when sorting by one column if there is a query clause under query mode,🧪 testing :gear: tables,"**Storage Explorer Version:** 1.32.0-dev (93) **Build Number:** 20230915.2 **Branch:** main **Platform/OS:** Windows 10/Linux Ubuntu 20.04/MacOS Ventura 13.5.2 (Apple M1 Pro) **Architecture:** x64/x64/x64 **How Found:** Exploratory testing **Regression From:** Not a regression ## Steps to Reproduce ## 1. Open 'Settings' -> Enable the setting 'Global Sort'. 2. Expand one storage account -> Tables. 3. Open one table which contains more than 1000 entities. 4. Click 'Query' -> Make sure there is at least one query clause. 5. Sort entities by one column. 6. Check whether all entities are loaded. ## Expected Experience ## All entities are loaded. ## Actual Experience ## Only 1000 entities are loaded. ## Additional Context ## 1. This issue doesn't reproduce when there is no query clause. 2. Here is the record: ![enti](https://github.com/microsoft/AzureStorageExplorer/assets/87792676/4631c652-93d7-418b-9140-54a0e3abdc5b) ",1.0,"Only 1000 entities are loaded when sorting by one column if there is a query clause under query mode - **Storage Explorer Version:** 1.32.0-dev (93) **Build Number:** 20230915.2 **Branch:** main **Platform/OS:** Windows 10/Linux Ubuntu 20.04/MacOS Ventura 13.5.2 (Apple M1 Pro) **Architecture:** x64/x64/x64 **How Found:** Exploratory testing **Regression From:** Not a regression ## Steps to Reproduce ## 1. Open 'Settings' -> Enable the setting 'Global Sort'. 2. Expand one storage account -> Tables. 3. Open one table which contains more than 1000 entities. 4. Click 'Query' -> Make sure there is at least one query clause. 5. Sort entities by one column. 6. Check whether all entities are loaded. ## Expected Experience ## All entities are loaded. ## Actual Experience ## Only 1000 entities are loaded. ## Additional Context ## 1. 
This issue doesn't reproduce when there is no query clause. 2. Here is the record: ![enti](https://github.com/microsoft/AzureStorageExplorer/assets/87792676/4631c652-93d7-418b-9140-54a0e3abdc5b) ",0,only entities are loaded when sorting by one column if there is a query clause under query mode storage explorer version dev build number branch main platform os windows linux ubuntu macos ventura apple pro architecture how found exploratory testing regression from not a regression steps to reproduce open settings enable the setting global sort expand one storage account tables open one table which contains more than entities click query make sure there is at least one query clause sort entities by one column check whether all entities are loaded expected experience all entities are loaded actual experience only entities are loaded additional context this issue doesn t reproduce when there is no query clause here is the record ,0 272092,20732962109.0,IssuesEvent,2022-03-14 11:08:12,Arquisoft/dede_en2b,https://api.github.com/repos/Arquisoft/dede_en2b,closed,Contribution for deliverable 1 - Diego Martín,documentation v0.1,"I have developed part of point 1 and 4 of the documentation, the last one to be expanded. Moreover I was in charge of writting the topics discussed on the second reunion. And last but not least, I checked beforehand the instalation of the software and helped my teammates, since some of them had trouble with npm and ruby.",1.0,"Contribution for deliverable 1 - Diego Martín - I have developed part of point 1 and 4 of the documentation, the last one to be expanded. Moreover I was in charge of writting the topics discussed on the second reunion. And last but not least, I checked beforehand the instalation of the software and helped my teammates, since some of them had trouble with npm and ruby.",0,contribution for deliverable diego martín i have developed part of point and of the documentation the last one to be expanded moreover i was in charge of writting the topics discussed on the second reunion and last but not least i checked beforehand the instalation of the software and helped my teammates since some of them had trouble with npm and ruby ,0 42905,5545726310.0,IssuesEvent,2017-03-22 22:21:22,gustafl/youtube-blacklist-chrome-extension,https://api.github.com/repos/gustafl/youtube-blacklist-chrome-extension,opened,Design the popup,design,"It should contain the following, in order: * Logo * Title text * Version number * Disable/enable toggle button * Number of blacklisted users * Number of comments hidden on page * Link to extension in Web Store",1.0,"Design the popup - It should contain the following, in order: * Logo * Title text * Version number * Disable/enable toggle button * Number of blacklisted users * Number of comments hidden on page * Link to extension in Web Store",0,design the popup it should contain the following in order logo title text version number disable enable toggle button number of blacklisted users number of comments hidden on page link to extension in web store,0 171720,27168183667.0,IssuesEvent,2023-02-17 16:58:18,Lightning-AI/lightning,https://api.github.com/repos/Lightning-AI/lightning,closed,"Is `precision=""mixed""` redundant?",refactor design precision: amp,"## Proposed refactoring or deprecation Does `precision=""mixed""` act differently to `precision=16` in any way? I understand that ""mixed"" is more correct as 16-bit precision can still run some computations in 32-bit. 
### Motivation In https://github.com/PyTorchLightning/pytorch-lightning/pull/9763 I noticed we did not even have a `PrecisionType` for `""mixed""`. There's a single test in the codebase passing the ""mixed"" value: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/tests/plugins/test_deepspeed_plugin.py#L153 And no mention at all of this value in the docs. ### Pitch Have one value to set this, whether it is `16` or `""mixed""`. Most likely `16` since its the one widely used. Otherwise, add tests for passing `""mixed""` ### Additional context ```python $ grep -iIrn '""mixed""' pytorch_lightning pytorch_lightning/plugins/training_type/sharded.py:62: is_fp16 = precision in (""mixed"", 16) pytorch_lightning/plugins/training_type/fully_sharded.py:135: mixed_precision=precision == ""mixed"", pytorch_lightning/plugins/training_type/ipu.py:42: if self.precision in (""mixed"", 16): pytorch_lightning/plugins/training_type/deepspeed.py:405: dtype = torch.float16 if self.precision in (16, ""mixed"") else torch.float32 pytorch_lightning/plugins/training_type/deepspeed.py:473: dtype = torch.float16 if self.precision in (16, ""mixed"") else torch.float32 pytorch_lightning/plugins/training_type/deepspeed.py:602: if precision in (16, ""mixed""): pytorch_lightning/plugins/precision/mixed.py:26: precision: Union[str, int] = ""mixed"" pytorch_lightning/plugins/precision/fully_sharded_native_amp.py:26: precision = ""mixed"" ``` ```python $ grep -iIrn '""mixed""' tests tests/plugins/test_deepspeed_plugin.py:153:@pytest.mark.parametrize(""precision"", [16, ""mixed""]) ``` ```python $ grep -Irn 'mixed' docs | grep 'precision=' # no mention in the docs! ``` ______________________________________________________________________ #### If you enjoy Lightning, check out our other projects! ⚡ - [**Metrics**](https://github.com/PyTorchLightning/metrics): Machine learning metrics for distributed, scalable PyTorch applications. - [**Flash**](https://github.com/PyTorchLightning/lightning-flash): The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, finetuning and solving problems with deep learning - [**Bolts**](https://github.com/PyTorchLightning/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks and more for research and production with PyTorch Lightning and PyTorch - [**Lightning Transformers**](https://github.com/PyTorchLightning/lightning-transformers): Flexible interface for high performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra. cc @justusschock @awaelchli @akihironitta @rohitgr7 @tchaton @borda @carmocca",1.0,"Is `precision=""mixed""` redundant? - ## Proposed refactoring or deprecation Does `precision=""mixed""` act differently to `precision=16` in any way? I understand that ""mixed"" is more correct as 16-bit precision can still run some computations in 32-bit. ### Motivation In https://github.com/PyTorchLightning/pytorch-lightning/pull/9763 I noticed we did not even have a `PrecisionType` for `""mixed""`. There's a single test in the codebase passing the ""mixed"" value: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/tests/plugins/test_deepspeed_plugin.py#L153 And no mention at all of this value in the docs. ### Pitch Have one value to set this, whether it is `16` or `""mixed""`. Most likely `16` since its the one widely used. 
Otherwise, add tests for passing `""mixed""` ### Additional context ```python $ grep -iIrn '""mixed""' pytorch_lightning pytorch_lightning/plugins/training_type/sharded.py:62: is_fp16 = precision in (""mixed"", 16) pytorch_lightning/plugins/training_type/fully_sharded.py:135: mixed_precision=precision == ""mixed"", pytorch_lightning/plugins/training_type/ipu.py:42: if self.precision in (""mixed"", 16): pytorch_lightning/plugins/training_type/deepspeed.py:405: dtype = torch.float16 if self.precision in (16, ""mixed"") else torch.float32 pytorch_lightning/plugins/training_type/deepspeed.py:473: dtype = torch.float16 if self.precision in (16, ""mixed"") else torch.float32 pytorch_lightning/plugins/training_type/deepspeed.py:602: if precision in (16, ""mixed""): pytorch_lightning/plugins/precision/mixed.py:26: precision: Union[str, int] = ""mixed"" pytorch_lightning/plugins/precision/fully_sharded_native_amp.py:26: precision = ""mixed"" ``` ```python $ grep -iIrn '""mixed""' tests tests/plugins/test_deepspeed_plugin.py:153:@pytest.mark.parametrize(""precision"", [16, ""mixed""]) ``` ```python $ grep -Irn 'mixed' docs | grep 'precision=' # no mention in the docs! ``` ______________________________________________________________________ #### If you enjoy Lightning, check out our other projects! ⚡ - [**Metrics**](https://github.com/PyTorchLightning/metrics): Machine learning metrics for distributed, scalable PyTorch applications. - [**Flash**](https://github.com/PyTorchLightning/lightning-flash): The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, finetuning and solving problems with deep learning - [**Bolts**](https://github.com/PyTorchLightning/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks and more for research and production with PyTorch Lightning and PyTorch - [**Lightning Transformers**](https://github.com/PyTorchLightning/lightning-transformers): Flexible interface for high performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra. 
cc @justusschock @awaelchli @akihironitta @rohitgr7 @tchaton @borda @carmocca",0,is precision mixed redundant proposed refactoring or deprecation does precision mixed act differently to precision in any way i understand that mixed is more correct as bit precision can still run some computations in bit motivation in i noticed we did not even have a precisiontype for mixed there s a single test in the codebase passing the mixed value and no mention at all of this value in the docs pitch have one value to set this whether it is or mixed most likely since its the one widely used otherwise add tests for passing mixed additional context python grep iirn mixed pytorch lightning pytorch lightning plugins training type sharded py is precision in mixed pytorch lightning plugins training type fully sharded py mixed precision precision mixed pytorch lightning plugins training type ipu py if self precision in mixed pytorch lightning plugins training type deepspeed py dtype torch if self precision in mixed else torch pytorch lightning plugins training type deepspeed py dtype torch if self precision in mixed else torch pytorch lightning plugins training type deepspeed py if precision in mixed pytorch lightning plugins precision mixed py precision union mixed pytorch lightning plugins precision fully sharded native amp py precision mixed python grep iirn mixed tests tests plugins test deepspeed plugin py pytest mark parametrize precision python grep irn mixed docs grep precision no mention in the docs if you enjoy lightning check out our other projects ⚡ machine learning metrics for distributed scalable pytorch applications the fastest way to get a lightning baseline a collection of tasks for fast prototyping baselining finetuning and solving problems with deep learning pretrained sota deep learning models callbacks and more for research and production with pytorch lightning and pytorch flexible interface for high performance research using sota transformers leveraging pytorch lightning transformers and hydra cc justusschock awaelchli akihironitta tchaton borda carmocca,0 1031,9201757206.0,IssuesEvent,2019-03-07 20:30:06,home-assistant/home-assistant,https://api.github.com/repos/home-assistant/home-assistant,closed,Home assistant interpreting non latin aliases in automations.yaml as non unique,component: automation waiting-for-reply,"**Home Assistant release with the issue:** 0.77.3 **Component/platform:** automations.yaml **Description of problem:** automations.yaml with 2 or more automations like this: ``` - id: state_door_closed alias: ""Сообщить о закрытии двери"" trigger: - entity_id: binary_sensor.home_door platform: state to: 'off' action: - data: message: ""Дверь закрыта"" service: notify.state - id: state_door_opened alias: ""Сообщить об открытии двери"" trigger: - entity_id: binary_sensor.home_door platform: state to: 'on' action: - data: message: ""Дверь открыта"" service: notify.state ``` cause errors on Home Assistant start: ""Entity id already exists: automation._____"" because code using only latin symbols (in that case - spaces) and thinks that aliases same. And it's not only one problem - on Home Assistant start that automations not loaded at all. **But if we made manual automations reload - they appear in list - and works - very strange (so work is possible!)** So - we give id in english, alias - as we want to see in lists. And got errors. May be rename alias in friendly_name and works with it values like with friendly name, and unique name take from id? 
The Automation Editor, for example, generates unique ids, and the aliases can be the same (and fully Latin) too... so the problem is not in the non-Latin symbols.",1.0,1 102249,16550524748.0,IssuesEvent,2021-05-28 08:03:34,Vento-Nuenenen/inowo,https://api.github.com/repos/Vento-Nuenenen/inowo,opened,CVE-2021-32640 (Medium) detected in ws-7.4.5.tgz,security vulnerability,"## CVE-2021-32640 - Medium Severity Vulnerability
Vulnerable Library - ws-7.4.5.tgz

Simple to use, blazing fast and thoroughly tested websocket client and server for Node.js

Library home page: https://registry.npmjs.org/ws/-/ws-7.4.5.tgz

Path to dependency file: inowo/package.json

Path to vulnerable library: inowo/node_modules/ws/package.json

Dependency Hierarchy:

- laravel-mix-6.0.19.tgz (Root Library)
  - webpack-dev-server-4.0.0-beta.2.tgz
    - :x: **ws-7.4.5.tgz** (Vulnerable Library)

Found in HEAD commit: 22cf1dc4dcbb30afa68b88767bdb7da1e5c84a24

Found in base branch: master

Vulnerability Details

ws is an open source WebSocket client and server library for Node.js. A specially crafted value of the `Sec-Websocket-Protocol` header can be used to significantly slow down a ws server. The vulnerability has been fixed in ws@7.4.6 (https://github.com/websockets/ws/commit/00c425ec77993773d823f018f64a5c44e17023ff). In vulnerable versions of ws, the issue can be mitigated by reducing the maximum allowed length of the request headers using the [`--max-http-header-size=size`](https://nodejs.org/api/cli.html#cli_max_http_header_size_size) and/or the [`maxHeaderSize`](https://nodejs.org/api/http.html#http_http_createserver_options_requestlistener) options.
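As a concrete illustration of the second mitigation, here is a minimal sketch of capping the request header size on the HTTP server that a vulnerable `ws` server attaches to. The 16 KiB cap, the port, and the echo handler are assumptions for the sketch, not values from the advisory:

```js
// Sketch: limit request header size so an oversized Sec-Websocket-Protocol
// header is rejected by Node's HTTP parser before ws has to process it.
const http = require('http');
const WebSocket = require('ws');

// maxHeaderSize is supported by http.createServer options (Node 13.3+).
const server = http.createServer({ maxHeaderSize: 16 * 1024 });
const wss = new WebSocket.Server({ server });

wss.on('connection', (socket) => {
  // Plain echo handler, just to make the sketch runnable.
  socket.on('message', (data) => socket.send(data));
});

server.listen(8080);
```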

Publish Date: 2021-05-25

URL: CVE-2021-32640

CVSS 3 Score Details (5.3)

Base Score Metrics:

- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: Low

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693

Release Date: 2021-05-25

Fix Resolution: ws - 7.4.6
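Since the vulnerable copy here is transitive (pulled in via laravel-mix), one way to force the fixed version without waiting for the root library is a resolution override. A minimal sketch assuming Yarn is used (npm 8.3+ has a similar `overrides` field):

```json
{
  "resolutions": {
    "ws": "^7.4.6"
  }
}
```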

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,0
648240,21179996761.0,IssuesEvent,2022-04-08 06:55:25,AY2122S2-CS2103T-T13-4/tp,https://api.github.com/repos/AY2122S2-CS2103T-T13-4/tp,closed,GUI Undo/Redo bug,type.Bug priority.MEDIUM,"To replicate, here are the steps:

1. Type `add`
2. Close `AddWindow`
3. Type `undo`

No commands should be undone, and if any action should be taken, it would be to reopen `AddWindow` (but I think this one would be complicated).

Alternatively, if you were to add a new `Person`:

1. Type `add`
2. Fill in the details of the new `Person`
3. Submit
4. Type `undo`. It gives the correct behaviour of undoing the adding of a `Person`.
5. Type `undo` again. It gives the wrong behaviour and reports `Undo Success!` when it should report `No more commands to undo!`, since typing `add` is just an intermediary to open up `AddWindow` rather than a command that modifies.
",1.0,0 1099,9461136796.0,IssuesEvent,2019-04-17 12:49:03,nf-core/tools,https://api.github.com/repos/nf-core/tools,opened,Automate Releases,automation question template,"Since we already insist on having people work on `dev` branches & `master` branches only incorporating stable code, we could also produce a way of making automated releases too: https://github.com/semantic-release/semantic-release

This would only require:

- Setting the `master` branch protected for everyone, except for PRs coming from `dev`
- Forcing everyone to NEVER merge to `master` except for releases that have been tested on `dev` (which we do anyway)

We could then configure the method above to make a release whenever something coming from `master` has changed.
If people then also use the `CHANGELOG.md`, it would automatically do proper and nice releases :-) ",1.0,1 107080,16751510521.0,IssuesEvent,2021-06-12 01:03:36,Tim-sandbox/EZBuggyPrioritize,https://api.github.com/repos/Tim-sandbox/EZBuggyPrioritize,opened,CVE-2018-3258 (High) detected in mysql-connector-java-5.1.25.jar,security vulnerability,"## CVE-2018-3258 - High Severity Vulnerability
Vulnerable Library - mysql-connector-java-5.1.25.jar

MySQL JDBC Type 4 driver

Library home page: http://dev.mysql.com/doc/connector-j/en/

Path to dependency file: EZBuggyPrioritize/pom.xml

Path to vulnerable library: EZBuggyPrioritize/target/easybuggy-1-SNAPSHOT/WEB-INF/lib/mysql-connector-java-5.1.25.jar,canner/.m2/repository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar

Dependency Hierarchy:

- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)

Found in base branch: main

Vulnerability Details

Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 8.0.12 and prior. Easily exploitable vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.8 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H).

Publish Date: 2018-10-17

URL: CVE-2018-3258

CVSS 3 Score Details (8.8)

Base Score Metrics:

- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: Low
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3258

Release Date: 2018-10-17

Fix Resolution: mysql:mysql-connector-java:8.0.13
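Since the connector is a direct dependency here (see the hierarchy above) and the project uses a `pom.xml`, applying the fix amounts to bumping the coordinate. A minimal sketch of the updated dependency entry:

```xml
<!-- Pin the connector at or above the fixed 8.0.13 release. -->
<dependency>
  <groupId>mysql</groupId>
  <artifactId>mysql-connector-java</artifactId>
  <version>8.0.13</version>
</dependency>
```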

*** :rescue_worker_helmet: Automatic Remediation is available for this issue ",True,0
4683,17200692041.0,IssuesEvent,2021-07-17 06:44:10,home-assistant/core,https://api.github.com/repos/home-assistant/core,closed,Time Condition problem with both After and Before fields,integration: automation stale,"### The problem

I have an automation to switch a plug on/off, which is triggered from two time-only input_datetime helpers (one each for on/off) and a device-tracker presence boolean grouped into group.inhabitants. Then there are two choose actions, one for on and one for off, determined by various conditions.

The triggers:

```
trigger:
  - platform: state
    entity_id: group.inhabitants
  - platform: time
    at: input_datetime.plug_coffee_on
  - platform: time
    at: input_datetime.plug_coffee_off
```

The condition for on:

```
conditions:
  - condition: and
    conditions:
      - condition: time
        after: input_datetime.plug_coffee_on
        before: input_datetime.plug_coffee_off
      - condition: state
        entity_id: group.inhabitants
        state: 'on'
```

The problem is that the on choose branch is executed when the OFF time helper is triggered, despite its being after the OFF time. It seems that the time condition in this case ignores the before statement.
You can see this to be the case in the UI automation editor: there, the before condition is in fact empty. When I try to fix it up, by entering the proper helper in the UI and saving it, it is again empty when I reopen the automation editor (see the screenshot). Nothing changes in the YAML.

![image](https://user-images.githubusercontent.com/10956987/114303988-1a004600-9ad1-11eb-8fc8-06ccddf26581.png)

I've pasted the whole automation below, in case it is something strange in some other part of it that is causing this behaviour.

### What version of Home Assistant Core has the issue?

2021.4.3

### What was the last working version of Home Assistant Core?

?

### What type of installation are you running?

Home Assistant OS

### Integration causing the issue

Automation

### Link to integration documentation on our website

_No response_

### Example YAML snippet

```yaml
alias: 'Plug: coffee machine'
description: Switches on coffee machine if during day and someone home
trigger:
  - platform: state
    entity_id: group.inhabitants
  - platform: time
    at: input_datetime.plug_coffee_on
  - platform: time
    at: input_datetime.plug_coffee_off
action:
  - choose:
      - alias: Turn it on
        conditions:
          - condition: and
            conditions:
              - condition: time
                after: input_datetime.plug_coffee_on
                before: input_datetime.plug_coffee_off
              - condition: state
                entity_id: group.inhabitants
                state: 'on'
        sequence:
          - service: switch.turn_on
            target:
              entity_id: switch.plug_salon_coffee
          - service: notify.persistent_notification
            data:
              title: Coffee switched on
              message: Indeed
  - choose:
      - alias: Turn it off
        conditions:
          - condition: or
            conditions:
              - condition: not
                conditions:
                  - condition: time
                    after: input_datetime.plug_coffee_on
                    before: input_datetime.plug_coffee_off
              - condition: state
                entity_id: group.inhabitants
                state: 'off'
        sequence:
          - service: switch.turn_off
            target:
              entity_id: switch.plug_salon_coffee
          - service: notify.persistent_notification
            data:
              title: Coffee switched off
              message: Indeed
mode: single
```

### Anything in the logs that might be useful for us?

Attaching a copy of the automation's trace. Renamed to .txt so I can upload: [trace automation.plug_coffee_machine 2021-04-12T14_00_00.003971+00_00.txt](https://github.com/home-assistant/core/files/6297413/trace.automation.plug_coffee_machine.2021-04-12T14_00_00.003971%2B00_00.txt)
",1.0,1 1529,10291354253.0,IssuesEvent,2019-08-27
14:19:00,mozilla-mobile/android-components,https://api.github.com/repos/mozilla-mobile/android-components,opened,Auto-Land MickeyMoz PRs,🤖 automation,"Initially MickeyMoz only created PRs so that we could verify them and see how it works out. Nowadays a bunch of them are pretty stable, and we could consider auto-merging them (Docs, Public Suffix List, GeckoView updates). With bors-ng now (#1200) I think we could do something like:

* MickeyMoz opens a PR
* MozLando approves the PR (it would need to be a code owner unless we lift that restriction, since bors does not respect codeowners yet anyway)
* MozLando comments with "bors r+"
* bors-ng tests and lands the patch",1.0,1 33009,12157618593.0,IssuesEvent,2020-04-25 23:00:11,jmservera/node-red-azure-webapp,https://api.github.com/repos/jmservera/node-red-azure-webapp,opened,"WS-2016-0044 (Medium) detected in swagger-ui-v2.1.4, swagger-ui-2.1.4.tgz",security vulnerability,"## WS-2016-0044 - Medium Severity Vulnerability
Vulnerable Libraries - swagger-ui-2.1.4.tgz

swagger-ui-2.1.4.tgz

Swagger UI is a dependency-free collection of HTML, JavaScript, and CSS assets that dynamically generate beautiful documentation from a Swagger-compliant API

Library home page: https://registry.npmjs.org/swagger-ui/-/swagger-ui-2.1.4.tgz

Path to dependency file: /tmp/ws-scm/node-red-azure-webapp/package.json

Path to vulnerable library: /tmp/ws-scm/node-red-azure-webapp/node_modules/node-red-node-swagger/node_modules/swagger-ui/package.json

Dependency Hierarchy:

- node-red-node-swagger-0.1.9.tgz (Root Library)
  - :x: **swagger-ui-2.1.4.tgz** (Vulnerable Library)

Found in HEAD commit: a828769834f37b291096091f7d78329324424ce2

Vulnerability Details

swagger-ui response headers are not escaped when generating the curl command, allowing an XSS attack.
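To illustrate the class of fix: header names and values coming back from the server are attacker-influenced and must be escaped before being interpolated into the generated snippet. The following is a hedged sketch of the pattern, not Swagger UI's actual code; `renderCurl` and `escapeHtml` are hypothetical names:

```js
// Hypothetical sketch: escape attacker-influenced response headers before
// embedding them in generated markup.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

function renderCurl(url, headers) {
  const parts = Object.entries(headers).map(
    ([name, value]) => `-H "${escapeHtml(name)}: ${escapeHtml(value)}"`
  );
  return `curl ${parts.join(' ')} "${escapeHtml(url)}"`;
}

// Example: a malicious header value is rendered inert.
console.log(renderCurl('https://example.test/api', {
  'X-Evil': '<script>alert(1)</script>'
}));
```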

Publish Date: 2016-07-25

URL: WS-2016-0044

CVSS 3 Score Details (4.3)

Base Score Metrics:

- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: Low
  - Integrity Impact: None
  - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://nodesecurity.io/advisories/131

Release Date: 2016-07-25

Fix Resolution: Update to 2.1.5 or later.

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,0
269471,23444888846.0,IssuesEvent,2022-08-15 18:33:16,cockroachdb/cockroach,https://api.github.com/repos/cockroachdb/cockroach,closed,roachtest: kv0/enc=false/nodes=1/size=64kb/conc=4096 failed [OOM],C-test-failure O-robot O-roachtest T-storage branch-release-22.1,"roachtest.kv0/enc=false/nodes=1/size=64kb/conc=4096 [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4583064&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4583064&tab=artifacts#/kv0/enc=false/nodes=1/size=64kb/conc=4096) on release-22.1 @ [a1c1879e01ceee79a81693c67a1dba184b5fc1b1](https://github.com/cockroachdb/cockroach/commits/a1c1879e01ceee79a81693c67a1dba184b5fc1b1):

```
| 1186.0s 0 122.1 172.5 18253.6 62277.0 64424.5 64424.5 write
| 1187.0s 0 374.8 172.7 19327.4 62277.0 64424.5 64424.5 write
| 1188.0s 0 87.1 172.6 18253.6 62277.0 64424.5 64424.5 write
| 1189.0s 0 379.6 172.8 11811.2 62277.0 64424.5 66572.0 write
| 1190.0s 0 445.8 173.0 13421.8 64424.5 66572.0 68719.5 write
| 1191.0s 0 420.1 173.2 13421.8 66572.0 68719.5 68719.5 write
| 1192.0s 0 196.1 173.3 14495.5 66572.0 68719.5 73014.4 write
| 1193.0s 0 181.9 173.3 15032.4 66572.0 73014.4 73014.4 write
| 1194.0s 0 116.1 173.2 16643.0 73014.4 73014.4 73014.4 write
| 1195.0s 0 374.5 173.4 15032.4 68719.5 73014.4 73014.4 write
| 1196.0s 0 258.2 173.5 15569.3 68719.5 73014.4 73014.4 write
Wraps: (4) COMMAND_PROBLEM
Wraps: (5) Node 2.
Command with error:
| ``````
| ./workload run kv --init --histograms=perf/stats.json --concurrency=4096 --splits=1000 --duration=30m0s --read-percent=0 --min-block-bytes=65536 --max-block-bytes=65536 {pgurl:1-1}
| ``````
Wraps: (6) exit status 1
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) errors.Cmd (5) *hintdetail.withDetail (6) *exec.ExitError
monitor.go:127,kv.go:155,kv.go:270,test_runner.go:866: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
  -- stack trace:
  | main.(*monitorImpl).WaitE
  | main/pkg/cmd/roachtest/monitor.go:115
  | main.(*monitorImpl).Wait
  | main/pkg/cmd/roachtest/monitor.go:123
  | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerKV.func2
  | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/kv.go:155
  | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerKV.func3
  | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/kv.go:270
  | main.(*testRunner).runTest.func2
  | main/pkg/cmd/roachtest/test_runner.go:866
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
  -- stack trace:
  | main.(*monitorImpl).wait.func2
  | main/pkg/cmd/roachtest/monitor.go:171
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
  -- stack trace:
  | main.init
  | main/pkg/cmd/roachtest/monitor.go:80
  | runtime.doInit
  | GOROOT/src/runtime/proc.go:6498
  | runtime.main
  | GOROOT/src/runtime/proc.go:238
  | runtime.goexit
  | GOROOT/src/runtime/asm_amd64.s:1581
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError
```
Help

See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)

See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)

Same failure on other branches

- #72375 roachtest: kv0/enc=false/nodes=1/size=64kb/conc=4096 failed [admission control] [C-test-failure O-roachtest O-robot T-storage branch-master]
- #70247 roachtest: kv0/enc=false/nodes=1/size=64kb/conc=4096 failed [admission control] [C-test-failure O-roachtest O-robot T-storage branch-release-21.2]

/cc @cockroachdb/kv-triage

[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*kv0/enc=false/nodes=1/size=64kb/conc=4096.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)

Jira issue: CRDB-13816

Epic CRDB-16238",2.0,0
1421,10091903305.0,IssuesEvent,2019-07-26 15:18:16,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,"Typo for ""set stop healthservice""",Pri2 automation/svc change-inventory-management/subsvc cxp doc-enhancement triaged,"It should be "net stop healthservice".

---

#### Document Details

⚠ *Do not edit this section.
It is required for docs.microsoft.com ➟ GitHub issue linking.*

* ID: 44e919d1-bf8c-38af-29a5-63a2e679850d
* Version Independent ID: 03a4bc5a-6666-bc24-f315-4324278e50ae
* Content: [Troubleshooting issues with Azure Change Tracking](https://docs.microsoft.com/en-us/azure/automation/troubleshoot/change-tracking#feedback)
* Content Source: [articles/automation/troubleshoot/change-tracking.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/troubleshoot/change-tracking.md)
* Service: **automation**
* Sub-service: **change-inventory-management**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**",1.0,1 4419,16508307774.0,IssuesEvent,2021-05-25 22:36:50,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,closed,[BUG] Priority class not getting set for recurring jobs,area/manager backport-needed kind/bug require/automation-e2e severity/4,"**Describe the bug**

The priority class set in the Longhorn UI is not reflected for recurring jobs.

**To Reproduce**

Steps to reproduce the behavior:

1. Deploy Longhorn master on a cluster.
2. Set the priority class in the setting page of the Longhorn UI.
3. Create a volume and configure a recurring snapshot/backup job for it.
4. Check the yaml for the recurring job; there is no priority class.

**Environment:**

- Longhorn version: Longhorn master `04/06/2021`
- Installation method (e.g. Rancher Catalog App/Helm/Kubectl): Kubectl
- Kubernetes distro (e.g.
RKE/K3s/EKS/OpenShift) and version: RKE - K8s v1.20.4
- Number of management nodes in the cluster: 1
- Number of worker nodes in the cluster: 4",1.0,1 470348,13536123247.0,IssuesEvent,2020-09-16 08:35:48,DigitalExcellence/dex-backend,https://api.github.com/repos/DigitalExcellence/dex-backend,closed,System.Net.Mail.SmtpException: The SMTP server requires a secure connection or the client was not authenticated. The server resp...,priority,"Sentry Issue: [IDENTITYSERVER-E](https://sentry.io/organizations/digital-excellence/issues/1870503441/?referrer=github_integration)

```
System.Net.Mail.SmtpException: The SMTP server requires a secure connection or the client was not authenticated. The server response was: 5.7.0 Authentication Required. Learn more at
  File "/app/IdentityServer/Quickstart/Account/ExternalController.cs", line 245, in Callback
  Module "IdentityServer4.Hosting.IdentityServerMiddleware", in Invoke
  Module "IdentityServer4.Hosting.MutualTlsTokenEndpointMiddleware", in Invoke
  Module "IdentityServer4.Hosting.BaseUrlMiddleware", in Invoke
  ...
  (44 additional frame(s) were not displayed)
```",1.0,0 743,7976150936.0,IssuesEvent,2018-07-17 11:44:19,lubbertkramer/home_assistant_config,https://api.github.com/repos/lubbertkramer/home_assistant_config,opened,Auto dimming lights that are on,Automation Home Assistant Not now,"Maybe auto-dim existing lights that are on throughout the day.
https://community.home-assistant.io/t/group-add-entities-interesting-concept-how-does-it-work/54967/6?u=ccostan",1.0,1 9151,27628793415.0,IssuesEvent,2023-03-10 09:15:28,camunda/camunda-bpm-platform,https://api.github.com/repos/camunda/camunda-bpm-platform,closed,"Fire historic task update events when task property ""last updated"" is changed",version:7.19.0 type:feature component:c7-automation-platform group:support,"### User Story (Required on creation)

As a developer, I want to keep my custom Tasklist system in sync; it relies on a custom history backend. For that, I need historic task update events fired whenever the "last updated" task property is changed.

### Functional Requirements (Required before implementation)

Fire the historic task update events in case a task property changed, including "last updated".

### Technical Requirements (Required before implementation)

* fire the event for "last update" and consider future properties
* evaluate if "last update" brings value in the history for the user.
If it doesn't, we only emit the event, ensure we don't populate information in the history for it * preserve the events order * look for other implications if we bring back the #update in #triggerUpdateEvent ### Limitations of Scope ### Hints ### Links https://jira.camunda.com/browse/SUPPORT-15192 ### Breakdown - [x] https://github.com/camunda/camunda-bpm-platform/pull/3178 ",1,fire historic task update events when task property last updated is changed user story required on creation as a developer i want to keep my custom tasklist system in sync which relies on a custom history backend for that i need historic task update events fired whenever the last updated task property is changed functional requirements required before implementation fire the historic task update events in case task property changed including last updated technical requirements required before implementation fire the event for last update and consider future properties evaluate if last update brings value in the history for the user if it doesn t we only emit the event ensure we don t populate information in the history for it preserve the events order look for other implications if we bring back the update in triggerupdateevent limitations of scope hints links breakdown ,1 78891,22496235458.0,IssuesEvent,2022-06-23 07:50:32,OpenModelica/OpenModelica,https://api.github.com/repos/OpenModelica/OpenModelica,closed,Windows installers fail SmartScreen checks,enhancement COMP/Build System,"When installing OMC on Windows, the SmartScreen filter identifies the OMC installer as suspicious software from unidentified authors, and requires to give explicit consent to perform a potentially dangerous installation. This may be ok for hardened hackers that know about the OSMC, but it's not projecting an image of quality and dependability on the software, particularly for industrial and corporate use. Looking like potential malware is not a very good marketing strategy :) I would recommend that from 1.13.0 we start signing the installer with a certificate, so that we avoid this kind of problem. More information on how to do this is found [here](https://blogs.msdn.microsoft.com/ie/2011/03/22/smartscreen-application-reputation-building-reputation/). ---------- From https://trac.openmodelica.org/OpenModelica/ticket/4829",1.0,"Windows installers fail SmartScreen checks - When installing OMC on Windows, the SmartScreen filter identifies the OMC installer as suspicious software from unidentified authors, and requires to give explicit consent to perform a potentially dangerous installation. This may be ok for hardened hackers that know about the OSMC, but it's not projecting an image of quality and dependability on the software, particularly for industrial and corporate use. Looking like potential malware is not a very good marketing strategy :) I would recommend that from 1.13.0 we start signing the installer with a certificate, so that we avoid this kind of problem. More information on how to do this is found [here](https://blogs.msdn.microsoft.com/ie/2011/03/22/smartscreen-application-reputation-building-reputation/). 
---------- From https://trac.openmodelica.org/OpenModelica/ticket/4829",0,windows installers fail smartscreen checks when installing omc on windows the smartscreen filter identifies the omc installer as suspicious software from unidentified authors and requires to give explicit consent to perform a potentially dangerous installation this may be ok for hardened hackers that know about the osmc but it s not projecting an image of quality and dependability on the software particularly for industrial and corporate use looking like potential malware is not a very good marketing strategy i would recommend that from we start signing the installer with a certificate so that we avoid this kind of problem more information on how to do this is found from ,0 4033,15216571968.0,IssuesEvent,2021-02-17 15:39:15,uiowa/uiowa,https://api.github.com/repos/uiowa/uiowa,reopened,Path alias %files does not work with Drush rsync,automation bug,"``` vagrant@local:/var/www/uiowa$ drush rsync @home.test:%files @home.local:%files In BackendPathEvaluator.php line 85: Cannot evaluate path alias %files for site alias @home.test ```",1.0,"Path alias %files does not work with Drush rsync - ``` vagrant@local:/var/www/uiowa$ drush rsync @home.test:%files @home.local:%files In BackendPathEvaluator.php line 85: Cannot evaluate path alias %files for site alias @home.test ```",1,path alias files does not work with drush rsync vagrant local var www uiowa drush rsync home test files home local files in backendpathevaluator php line cannot evaluate path alias files for site alias home test ,1 1798,10789362855.0,IssuesEvent,2019-11-05 11:45:08,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,opened,Download file native dialog in Internet Explorer 11 (IE 11),FREQUENCY: level 1 SYSTEM: automations TYPE: bug support center,"### Steps to Reproduce: **index.html** ```html Download ``` **test.js**: ```js import { Selector } from 'testcafe'; // import fs from 'fs'; fixture `Fixture` .page `http://localhost:8080/index.html`; test('test', async t => { await t .click(Selector('body > a:nth-child(1)')); await t.wait(4000); // const filePath = 'c:\\Users\\name\\Downloads\\file.xlsx'; // await t // .expect(fs.existsSync(filePath)).ok(); }); ``` 1. Run `testcafe ie test.js`. 2. 
See the following result: ![Capture](https://user-images.githubusercontent.com/30019338/68204815-dcdd7d80-ffd9-11e9-9502-9462a97b401a.PNG) ### Your Environment details: * testcafe version: v1.6.0 * browser name and version: IE 11 * platform and version: Windows 10 ",1,download file native dialog in internet explorer ie steps to reproduce index html html download test js js import selector from testcafe import fs from fs fixture fixture page test test async t await t click selector body a nth child await t wait const filepath c users name downloads file xlsx await t expect fs existssync filepath ok run testcafe ie test js see the following result your environment details testcafe version browser name and version ie platform and version windows ,1 660924,22036047624.0,IssuesEvent,2022-05-28 15:48:13,ArctosDB/arctos,https://api.github.com/repos/ArctosDB/arctos,closed,degrees latitude and longitude fields missing in bulkloading,Priority-High (Needed for work) Function-DataEntry/Bulkloading Tool - Bulkload Collecting Events,"From @catherpes Andy Johnson of MSB: Any records that I have in this file are not going in if they have degrees decimal minutes. This is the most updated version of the file in Excel from which I generate the csv file to upload. I thought maybe the field name for degrees had changed so I went back to the bulkloader builder. I cleared all checks in the whole builder and then clicked on DM.m Coordinates. Scrolled down to see what fields are checked an apart from the default Coordinate Metadata, only dec_lat_min and dec_long_min are checked. There needs to be a degrees latitude and longitude field for these records [drawer bulkload6.xlsx](https://github.com/ArctosDB/arctos/files/8790374/drawer.bulkload6.xlsx) .",1.0,"degrees latitude and longitude fields missing in bulkloading - From @catherpes Andy Johnson of MSB: Any records that I have in this file are not going in if they have degrees decimal minutes. This is the most updated version of the file in Excel from which I generate the csv file to upload. I thought maybe the field name for degrees had changed so I went back to the bulkloader builder. I cleared all checks in the whole builder and then clicked on DM.m Coordinates. Scrolled down to see what fields are checked an apart from the default Coordinate Metadata, only dec_lat_min and dec_long_min are checked. There needs to be a degrees latitude and longitude field for these records [drawer bulkload6.xlsx](https://github.com/ArctosDB/arctos/files/8790374/drawer.bulkload6.xlsx) .",0,degrees latitude and longitude fields missing in bulkloading from catherpes andy johnson of msb any records that i have in this file are not going in if they have degrees decimal minutes this is the most updated version of the file in excel from which i generate the csv file to upload i thought maybe the field name for degrees had changed so i went back to the bulkloader builder i cleared all checks in the whole builder and then clicked on dm m coordinates scrolled down to see what fields are checked an apart from the default coordinate metadata only dec lat min and dec long min are checked there needs to be a degrees latitude and longitude field for these records ,0 8331,26734916501.0,IssuesEvent,2023-01-30 08:44:08,submariner-io/releases,https://api.github.com/repos/submariner-io/releases,closed,Releases shouldn’t always be marked as latest,automation,"Releases are currently marked as latest by default, regardless of the branch they come from. 
This results in get.submariner.io defaulting to the latest chronological release, not the latest version we want end-users to install by default. It also causes upgrade CI jobs to fail, since they install the “latest” release. See for instance the job failures resulting from the 0.13.2 release — this ended up being the default release, replacing 0.14.0. I fixed the release markers manually but ideally this should be taken care of by the release job (_e.g._ by checking whether there’s a release branch with a higher version and an actual release).",1.0,"Releases shouldn’t always be marked as latest - Releases are currently marked as latest by default, regardless of the branch they come from. This results in get.submariner.io defaulting to the latest chronological release, not the latest version we want end-users to install by default. It also causes upgrade CI jobs to fail, since they install the “latest” release. See for instance the job failures resulting from the 0.13.2 release — this ended up being the default release, replacing 0.14.0. I fixed the release markers manually but ideally this should be taken care of by the release job (_e.g._ by checking whether there’s a release branch with a higher version and an actual release).",1,releases shouldn’t always be marked as latest releases are currently marked as latest by default regardless of the branch they come from this results in get submariner io defaulting to the latest chronological release not the latest version we want end users to install by default it also causes upgrade ci jobs to fail since they install the “latest” release see for instance the job failures resulting from the release — this ended up being the default release replacing i fixed the release markers manually but ideally this should be taken care of by the release job e g by checking whether there’s a release branch with a higher version and an actual release ,1 2084,11360349953.0,IssuesEvent,2020-01-26 05:56:51,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,closed,Support running campaigns on > 200 repositories,automation customer,"Customers who ran into issues with this: - https://app.hubspot.com/contacts/2762526/contact/17877751 - https://app.hubspot.com/contacts/2762526/company/608245740 I couldn't find where we are tracking this functionality - perhaps it is implicit with the introduction of the CLI as the main way to create campaigns, but I'd like to ensure we're watching and addressing this.",1.0,"Support running campaigns on > 200 repositories - Customers who ran into issues with this: - https://app.hubspot.com/contacts/2762526/contact/17877751 - https://app.hubspot.com/contacts/2762526/company/608245740 I couldn't find where we are tracking this functionality - perhaps it is implicit with the introduction of the CLI as the main way to create campaigns, but I'd like to ensure we're watching and addressing this.",1,support running campaigns on repositories customers who ran into issues with this i couldn t find where we are tracking this functionality perhaps it is implicit with the introduction of the cli as the main way to create campaigns but i d like to ensure we re watching and addressing this ,1 5547,20032421501.0,IssuesEvent,2022-02-02 08:11:17,keptn/keptn,https://api.github.com/repos/keptn/keptn,closed,Integration Tests: Switch to static GKE clusters instead of freshly created ones,type:chore automation refactoring github_actions area:devops,"Right now, the integration tests set up fresh GKE clusters for all tested 
versions with every run. To have a more secure and fast setup, we should switch to having static clusters on GKE for all versions that we want to test against. Those should just be cleaned up correctly after each integration test run to be able to properly re-use them in the next run. Tasks: - [x] Set up static clusters for all GKE versions that we want to test against - [x] Ensure that clusters stay on the same k8s version - [x] Change integration test pipeline to use those clusters for testing - [ ] Remove now unneeded GH secrets, add kubeconfig(s) for clusters to secrets - [ ] Add verification steps to the pipeline to ensure that clusters are cleaned up properly after test runs - [x] Think about using https://codeberg.org/hjacobs/kube-janitor for cleanup so that test setups have a TTL for debugging",1.0,"Integration Tests: Switch to static GKE clusters instead of freshly created ones - Right now, the integration tests set up fresh GKE clusters for all tested versions with every run. To have a more secure and fast setup, we should switch to having static clusters on GKE for all versions that we want to test against. Those should just be cleaned up correctly after each integration test run to be able to properly re-use them in the next run. Tasks: - [x] Set up static clusters for all GKE versions that we want to test against - [x] Ensure that clusters stay on the same k8s version - [x] Change integration test pipeline to use those clusters for testing - [ ] Remove now unneeded GH secrets, add kubeconfig(s) for clusters to secrets - [ ] Add verification steps to the pipeline to ensure that clusters are cleaned up properly after test runs - [x] Think about using https://codeberg.org/hjacobs/kube-janitor for cleanup so that test setups have a TTL for debugging",1,integration tests switch to static gke clusters instead of freshly created ones right now the integration tests set up fresh gke clusters for all tested versions with every run to have a more secure and fast setup we should switch to having static clusters on gke for all versions that we want to test against those should just be cleaned up correctly after each integration test run to be able to properly re use them in the next run tasks set up static clusters for all gke versions that we want to test against ensure that clusters stay on the same version change integration test pipeline to use those clusters for testing remove now unneeded gh secrets add kubeconfig s for clusters to secrets add verification steps to the pipeline to ensure that clusters are cleaned up properly after test runs think about using for cleanup so that test setups have a ttl for debugging,1 1993,11222518943.0,IssuesEvent,2020-01-07 20:22:26,bank2ynab/bank2ynab,https://api.github.com/repos/bank2ynab/bank2ynab,closed,Increased project automation (Github Actions),project automation,"**Other feature request** **Is your feature request related to a problem? Please describe.** The following issues all reference using hooks to automate certain aspects of the development workflow: #239, #238, #181. After doing some research, I think the best way to approach these tasks is to actually make use of the deployment abilities of Travis rather than using hooks. **Describe the solution you'd like** Let's give Travis superpowers! Essentially, we need to add a ""deployment"" phase to Travis that occurs after a successful round of testing. This could, for example, do the following tasks. 
- Bump the version number for every commit to the `master` branch - Update `README.md` every time a new bank is added to the config - Automatically fix formatting issues from new commits - Automatically deploy new versions of the `master` branch to PyPi We'll need to somehow give Travis an authorisation key, this will be the trickiest part, I think, as it will need to be kept secure. Travis has documentation on encryption, linked below, which will probably be helpful. **Describe alternatives you've considered** Either continue to do these tasks manually or use the hooks concept. The problem with the hooks is that these are finicky to install and update. Travis provides a centralised solution. **Additional context** ***Travis Docs*** Github Releases uploading: https://docs.travis-ci.com/user/deployment/releases/ PyPi Deployment: https://docs.travis-ci.com/user/deployment/pypi/ Encryption Keys: https://docs.travis-ci.com/user/encryption-keys/ ",1.0,"Increased project automation (Github Actions) - **Other feature request** **Is your feature request related to a problem? Please describe.** The following issues all reference using hooks to automate certain aspects of the development workflow: #239, #238, #181. After doing some research, I think the best way to approach these tasks is to actually make use of the deployment abilities of Travis rather than using hooks. **Describe the solution you'd like** Let's give Travis superpowers! Essentially, we need to add a ""deployment"" phase to Travis that occurs after a successful round of testing. This could, for example, do the following tasks. - Bump the version number for every commit to the `master` branch - Update `README.md` every time a new bank is added to the config - Automatically fix formatting issues from new commits - Automatically deploy new versions of the `master` branch to PyPi We'll need to somehow give Travis an authorisation key, this will be the trickiest part, I think, as it will need to be kept secure. Travis has documentation on encryption, linked below, which will probably be helpful. **Describe alternatives you've considered** Either continue to do these tasks manually or use the hooks concept. The problem with the hooks is that these are finicky to install and update. Travis provides a centralised solution. 
**Additional context** ***Travis Docs*** Github Releases uploading: https://docs.travis-ci.com/user/deployment/releases/ PyPi Deployment: https://docs.travis-ci.com/user/deployment/pypi/ Encryption Keys: https://docs.travis-ci.com/user/encryption-keys/ ",1,increased project automation github actions other feature request is your feature request related to a problem please describe the following issues all reference using hooks to automate certain aspects of the development workflow after doing some research i think the best way to approach these tasks is to actually make use of the deployment abilities of travis rather than using hooks describe the solution you d like let s give travis superpowers essentially we need to add a deployment phase to travis that occurs after a successful round of testing this could for example do the following tasks bump the version number for every commit to the master branch update readme md every time a new bank is added to the config automatically fix formatting issues from new commits automatically deploy new versions of the master branch to pypi we ll need to somehow give travis an authorisation key this will be the trickiest part i think as it will need to be kept secure travis has documentation on encryption linked below which will probably be helpful describe alternatives you ve considered either continue to do these tasks manually or use the hooks concept the problem with the hooks is that these are finicky to install and update travis provides a centralised solution additional context travis docs github releases uploading pypi deployment encryption keys ,1 3795,14613675119.0,IssuesEvent,2020-12-22 08:39:07,Tithibots/tithiwa,https://api.github.com/repos/Tithibots/tithiwa,closed,Create delete_chats_of_all_contacts() in contacts.py,Selenium Automation enhancement good first issue python,"do as follow 1. go through all contacts chats same as in [exit_from_all_groups()](https://github.com/Tithibots/tithiwa/blob/a278e4a27af13a8469262ff28328ca74135441eb/tithiwa/group.py#L114) NOTE: by using [CONTACTS__NAME_IN_CHATS ](https://github.com/Tithibots/tithiwa/blob/0ba6306873121bd3b87e9a53cae780c474586672/tithiwa/constants.py#L29) you can get all chats of contacts. 2. open chat options by using [CHATROOM__OPTIONS](https://github.com/Tithibots/tithiwa/blob/c44c381913171a1b5528b59c598dd679e38b62b0/tithiwa/constants.py#L28) 3. press on `Delete chat` and wait for the chat to be deleted by using [self._close_info()](https://github.com/Tithibots/tithiwa/blob/c44c381913171a1b5528b59c598dd679e38b62b0/tithiwa/waobject.py#L21) ![ShareX_3wxkCm3ZmW](https://user-images.githubusercontent.com/30471072/96861958-78356000-1482-11eb-8a4f-0f0ff8e57cfc.png) ",1.0,"Create delete_chats_of_all_contacts() in contacts.py - do as follow 1. go through all contacts chats same as in [exit_from_all_groups()](https://github.com/Tithibots/tithiwa/blob/a278e4a27af13a8469262ff28328ca74135441eb/tithiwa/group.py#L114) NOTE: by using [CONTACTS__NAME_IN_CHATS ](https://github.com/Tithibots/tithiwa/blob/0ba6306873121bd3b87e9a53cae780c474586672/tithiwa/constants.py#L29) you can get all chats of contacts. 2. open chat options by using [CHATROOM__OPTIONS](https://github.com/Tithibots/tithiwa/blob/c44c381913171a1b5528b59c598dd679e38b62b0/tithiwa/constants.py#L28) 3. 
press on `Delete chat` and wait for the chat to be deleted by using [self._close_info()](https://github.com/Tithibots/tithiwa/blob/c44c381913171a1b5528b59c598dd679e38b62b0/tithiwa/waobject.py#L21) ![ShareX_3wxkCm3ZmW](https://user-images.githubusercontent.com/30471072/96861958-78356000-1482-11eb-8a4f-0f0ff8e57cfc.png) ",1.0,"Create delete_chats_of_all_contacts() in contacts.py - do as follow 1. go through all contacts chats same as in [exit_from_all_groups()](https://github.com/Tithibots/tithiwa/blob/a278e4a27af13a8469262ff28328ca74135441eb/tithiwa/group.py#L114) NOTE: by using [CONTACTS__NAME_IN_CHATS ](https://github.com/Tithibots/tithiwa/blob/0ba6306873121bd3b87e9a53cae780c474586672/tithiwa/constants.py#L29) you can get all chats of contacts. 2. open chat options by using [CHATROOM__OPTIONS](https://github.com/Tithibots/tithiwa/blob/c44c381913171a1b5528b59c598dd679e38b62b0/tithiwa/constants.py#L28) 3. press on `Delete chat` and wait for the chat to be deleted by using [self._close_info()](https://github.com/Tithibots/tithiwa/blob/c44c381913171a1b5528b59c598dd679e38b62b0/tithiwa/waobject.py#L21) ![ShareX_3wxkCm3ZmW](https://user-images.githubusercontent.com/30471072/96861958-78356000-1482-11eb-8a4f-0f0ff8e57cfc.png) ",1,create delete chats of all contacts in contacts py do as follow go through all contacts chats same as in note by using you can get all chats of contacts open chat options by using press on delete chat and wait for the chat to be deleted by using ,1 3363,13563892785.0,IssuesEvent,2020-09-18 09:13:53,SatelliteQE/airgun,https://api.github.com/repos/SatelliteQE/airgun,opened,"DiscoveryRule view uses different locator for ""search"" field",Automation failure,"there's no longer an `@id` attribute, but rather `name`.",1.0,"DiscoveryRule view uses different locator for ""search"" field - there's no longer an `@id` attribute, but rather `name`.",1,discoveryrule view uses different locator for search field there s no longer an id attribute but rather name ,1 7194,24384902165.0,IssuesEvent,2022-10-04 10:54:10,ZhengqiaoWang/blog-comment,https://api.github.com/repos/ZhengqiaoWang/blog-comment,opened,Office Automation UI Module | Zhengqiao Wang,gitalk /office_automation/自动化办公UI.html,"https://www.zhengqiao.wang/office_automation/%E8%87%AA%E5%8A%A8%E5%8C%96%E5%8A%9E%E5%85%ACUI.html Office automation series: this is a series I use to help the many people who don't know Python well but hope to automate their office work with Python. This module helps users quickly build interfaces, covering basic input, file selection, and prompts. Following the tutorial below, you can quickly build some simple processing gadgets without having to laboriously hammer away at the command line.",1.0,"Office Automation UI Module | Zhengqiao Wang - https://www.zhengqiao.wang/office_automation/%E8%87%AA%E5%8A%A8%E5%8C%96%E5%8A%9E%E5%85%ACUI.html Office automation series: this is a series I use to help the many people who don't know Python well but hope to automate their office work with Python. This module helps users quickly build interfaces, covering basic input, file selection, and prompts. Following the tutorial below, you can quickly build some simple processing gadgets without having to laboriously hammer away at the command line.",1,office automation ui module zhengqiao wang office automation series this is a series i use to help the many people who don t know python well but hope to automate their office work with python this module helps users quickly build interfaces covering basic input file selection and prompts following the tutorial below you can quickly build some simple processing gadgets without having to laboriously hammer away at the command line,1 82745,15679669325.0,IssuesEvent,2021-03-25 01:03:44,bci-oss/keycloak,https://api.github.com/repos/bci-oss/keycloak,opened,CVE-2019-12418 (High) detected in tomcat-catalina-7.0.92.jar,security vulnerability,"## CVE-2019-12418 - High Severity Vulnerability
Vulnerable Library - tomcat-catalina-7.0.92.jar

Tomcat Servlet Engine Core Classes and Standard implementations

Path to dependency file: keycloak/adapters/oidc/tomcat/tomcat7/pom.xml

Path to vulnerable library: canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar,canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar,canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar,canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar,canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar

Dependency Hierarchy: - :x: **tomcat-catalina-7.0.92.jar** (Vulnerable Library)

Found in base branch: master

Vulnerability Details

When Apache Tomcat 9.0.0.M1 to 9.0.28, 8.5.0 to 8.5.47, 7.0.0 and 7.0.97 is configured with the JMX Remote Lifecycle Listener, a local attacker without access to the Tomcat process or configuration files is able to manipulate the RMI registry to perform a man-in-the-middle attack to capture user names and passwords used to access the JMX interface. The attacker can then use these credentials to access the JMX interface and gain complete control over the Tomcat instance.

Publish Date: 2019-12-23

URL: CVE-2019-12418

CVSS 3 Score Details (7.0)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, see the CVSS v3 documentation.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12418

Release Date: 2019-12-23

Fix Resolution: org.apache.tomcat:tomcat-catalina:7.0.98;org.apache.tomcat:tomcat-catalina:8.5.48;org.apache.tomcat:tomcat-catalina:9.0.29;org.apache.tomcat.embed:tomcat-embed-core:9.0.29

",True,"CVE-2019-12418 (High) detected in tomcat-catalina-7.0.92.jar - ## CVE-2019-12418 - High Severity Vulnerability
Vulnerable Library - tomcat-catalina-7.0.92.jar

Tomcat Servlet Engine Core Classes and Standard implementations

Path to dependency file: keycloak/adapters/oidc/tomcat/tomcat7/pom.xml

Path to vulnerable library: canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar,canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar,canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar,canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar,canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar

Dependency Hierarchy: - :x: **tomcat-catalina-7.0.92.jar** (Vulnerable Library)

Found in base branch: master

Vulnerability Details

When Apache Tomcat 9.0.0.M1 to 9.0.28, 8.5.0 to 8.5.47, 7.0.0 and 7.0.97 is configured with the JMX Remote Lifecycle Listener, a local attacker without access to the Tomcat process or configuration files is able to manipulate the RMI registry to perform a man-in-the-middle attack to capture user names and passwords used to access the JMX interface. The attacker can then use these credentials to access the JMX interface and gain complete control over the Tomcat instance.

Publish Date: 2019-12-23

URL: CVE-2019-12418

CVSS 3 Score Details (7.0)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, see the CVSS v3 documentation.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12418

Release Date: 2019-12-23

Fix Resolution: org.apache.tomcat:tomcat-catalina:7.0.98;org.apache.tomcat:tomcat-catalina:8.5.48;org.apache.tomcat:tomcat-catalina:9.0.29;org.apache.tomcat.embed:tomcat-embed-core:9.0.29

",0,cve high detected in tomcat catalina jar cve high severity vulnerability vulnerable library tomcat catalina jar tomcat servlet engine core classes and standard implementations path to dependency file keycloak adapters oidc tomcat pom xml path to vulnerable library canner repository org apache tomcat tomcat catalina tomcat catalina jar canner repository org apache tomcat tomcat catalina tomcat catalina jar canner repository org apache tomcat tomcat catalina tomcat catalina jar canner repository org apache tomcat tomcat catalina tomcat catalina jar canner repository org apache tomcat tomcat catalina tomcat catalina jar dependency hierarchy x tomcat catalina jar vulnerable library found in base branch master vulnerability details when apache tomcat to to and is configured with the jmx remote lifecycle listener a local attacker without access to the tomcat process or configuration files is able to manipulate the rmi registry to perform a man in the middle attack to capture user names and passwords used to access the jmx interface the attacker can then use these credentials to access the jmx interface and gain complete control over the tomcat instance publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat tomcat catalina org apache tomcat tomcat catalina org apache tomcat tomcat catalina org apache tomcat embed tomcat embed core isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org apache tomcat tomcat catalina isminimumfixversionavailable true minimumfixversion org apache tomcat tomcat catalina org apache tomcat tomcat catalina org apache tomcat tomcat catalina org apache tomcat embed tomcat embed core basebranches vulnerabilityidentifier cve vulnerabilitydetails when apache tomcat to to and is configured with the jmx remote lifecycle listener a local attacker without access to the tomcat process or configuration files is able to manipulate the rmi registry to perform a man in the middle attack to capture user names and passwords used to access the jmx interface the attacker can then use these credentials to access the jmx interface and gain complete control over the tomcat instance vulnerabilityurl ,0 9194,27712647654.0,IssuesEvent,2023-03-14 15:06:56,githubcustomers/discovery.co.za,https://api.github.com/repos/githubcustomers/discovery.co.za,opened,Task Eight: Compare Other SAST and CodeQL Results,ghas-trial automation Important,"# Task Eight: Compare Other SAST and CodeQL Results CodeQL Defaults results are **very** precise. We advocate comparing results to other SAST tools. Still, when comparing, we recommended that a minimum threshold of `security-extended` be used for comparison, but `security-and-quality` will yield maximum results. When comparing results from other SAST tools, look at the **quality** of the responses back, not the number. Remember, if your current SAST tool returns 20 vulnerabilities, that doesn't mean that 20 need to be fixed. The higher the number of vulnerabilities, the longer it will take a developer to look through the data to understand false positives versus correct matches. 
Code Scanning is precise, which means the results from the default pack should be accurate and of high quality, meaning less time spent understanding false positives and quicker delivery time for your business whilst staying as secure! Additionally, a developer will be more likely to properly look through data of a tool that returns streamlined and high-quality results than a wide casting tool and may be wasting their time. Meaning hopefully, you are going to be more secure as you are increasing your adoption of security. ",1.0,"Task Eight: Compare Other SAST and CodeQL Results - # Task Eight: Compare Other SAST and CodeQL Results CodeQL Defaults results are **very** precise. We advocate comparing results to other SAST tools. Still, when comparing, we recommended that a minimum threshold of `security-extended` be used for comparison, but `security-and-quality` will yield maximum results. When comparing results from other SAST tools, look at the **quality** of the responses back, not the number. Remember, if your current SAST tool returns 20 vulnerabilities, that doesn't mean that 20 need to be fixed. The higher the number of vulnerabilities, the longer it will take a developer to look through the data to understand false positives versus correct matches. Code Scanning is precise, which means the results from the default pack should be accurate and of high quality, meaning less time spent understanding false positives and quicker delivery time for your business whilst staying as secure! Additionally, a developer will be more likely to properly look through data of a tool that returns streamlined and high-quality results than a wide casting tool and may be wasting their time. Meaning hopefully, you are going to be more secure as you are increasing your adoption of security. 
",1,task eight compare other sast and codeql results task eight compare other sast and codeql results codeql defaults results are very precise we advocate comparing results to other sast tools still when comparing we recommended that a minimum threshold of security extended be used for comparison but security and quality will yield maximum results when comparing results from other sast tools look at the quality of the responses back not the number remember if your current sast tool returns vulnerabilities that doesn t mean that need to be fixed the higher the number of vulnerabilities the longer it will take a developer to look through the data to understand false positives versus correct matches code scanning is precise which means the results from the default pack should be accurate and of high quality meaning less time spent understanding false positives and quicker delivery time for your business whilst staying as secure additionally a developer will be more likely to properly look through data of a tool that returns streamlined and high quality results than a wide casting tool and may be wasting their time meaning hopefully you are going to be more secure as you are increasing your adoption of security ,1 545340,15948599653.0,IssuesEvent,2021-04-15 06:08:37,openshift/odo,https://api.github.com/repos/openshift/odo,closed,broken odo link,priority/Critical release-blocker,"``` ▶ odo service list NAME AGE MariaDB/mariadb 17m10s ▶ odo link MariaDB/mariadb ✓ Successfully created link between component ""springboot"" and service ""MariaDB/mariadb"" To apply the link, please use `odo push` ▶ odo push ✓ Waiting for component to start [9ms] Validation ✓ Validating the devfile [23800ns] Creating Kubernetes resources for component springboot ✗ Failed to start component with name springboot. Error: Failed to create the component: unable to create or update component: pvc not found for mount path springboot-mariadb-mariadb ``` using SBO 0.7.0 ``` ▶ odo version odo v2.1.0 (6040118b0) ``` /priority critical ",1.0,"broken odo link - ``` ▶ odo service list NAME AGE MariaDB/mariadb 17m10s ▶ odo link MariaDB/mariadb ✓ Successfully created link between component ""springboot"" and service ""MariaDB/mariadb"" To apply the link, please use `odo push` ▶ odo push ✓ Waiting for component to start [9ms] Validation ✓ Validating the devfile [23800ns] Creating Kubernetes resources for component springboot ✗ Failed to start component with name springboot. Error: Failed to create the component: unable to create or update component: pvc not found for mount path springboot-mariadb-mariadb ``` using SBO 0.7.0 ``` ▶ odo version odo v2.1.0 (6040118b0) ``` /priority critical ",0,broken odo link ▶ odo service list name age mariadb mariadb ▶ odo link mariadb mariadb ✓ successfully created link between component springboot and service mariadb mariadb to apply the link please use odo push ▶ odo push ✓ waiting for component to start validation ✓ validating the devfile creating kubernetes resources for component springboot ✗ failed to start component with name springboot error failed to create the component unable to create or update component pvc not found for mount path springboot mariadb mariadb using sbo ▶ odo version odo priority critical ,0 682,7785589596.0,IssuesEvent,2018-06-06 16:15:38,pypa/pip,https://api.github.com/repos/pypa/pip,closed,Skipping CI when code doesn't change,C: automation T: DevOps,"I think it would be useful if pip's CI skipped running tests when a change doesn't really modify any code. 
This would mean that documentation changes and news-file updates would not result in a 40 minute complete CI run, just a short sweet one. ^.^ If a changeset does not touch any file within `pip/` or `tests/`, test run would be skipped but the linting and likes would still run. FWIW, [cpython does it](https://github.com/python/cpython/blob/master/.travis.yml#L53) on Travis CI. --- Should I investigate further into this - seeing if it can be done for pip? ",1.0,"Skipping CI when code doesn't change - I think it would be useful if pip's CI skipped running tests when a change doesn't really modify any code. 
// cookie-consent.js // ``` import { RequestMock } from 'testcafe'; // Mock Evidon cookie consent to avoid interacting with the module on each new session export function mockEvidonCookieConsent() { return RequestMock() .onRequestTo(/evidon\.com\//) .respond('', 200); } export function mockSourcepointCookieConsent() { return RequestMock() .onRequestTo(/cmp-cdn\.p\.aws\.economist\.com\/latest\/cmp\.min\.js/) .respond('', 200); } ``` // example.js // ``` import { fixture , Selector } from 'testcafe'; import { xpathSelector} from './helpers'; import { mockEvidonCookieConsent, mockSourcepointCookieConsent, } from './cookie-consent'; fixture `example fixture` .page('economist.com') .requestHooks([mockEvidonCookieConsent(), mockSourcepointCookieConsent()]) const emailField = xpathSelector('//*[@type=""text""]'); const passwordField = xpathSelector('//*[@type=""password""]'); const loginLink = Selector('.ds-masthead').find('a').withText('Log in').filterVisible(); test('example', async t => { await t.click(loginLink) await t.typeText(emailField, 'email@example.com') console.log('email entered') await t.typeText(passwordField, 'password') console.log('password entered') }); test('example 2', async t => { await t.click(loginLink) await t.typeText(emailField, 'email@example.com') console.log('email entered') await t.typeText(passwordField, 'password') console.log('password entered') }); test('example 3', async t => { await t.click(loginLink) await t.typeText(emailField, 'email@example.com') console.log('email entered') await t.typeText(passwordField, 'password') console.log('password entered') }); ``` // helpers.js // ``` import { Selector } from 'testcafe'; /** * Retrieves all elements that match a given xpath. * @param {string} xpath - xpath to search with * @return {Object} found elements */ const getElementsByXPath = Selector(xpath => { const iterator = document.evaluate( xpath, document, null, XPathResult.UNORDERED_NODE_ITERATOR_TYPE, null, ); const items = []; let item = iterator.iterateNext(); while (item) { items.push(item); item = iterator.iterateNext(); } return items; }); /** * Create a selector based on a xpath. Testcafe does not natively support xpath * selectors, hence this function. * @param {string} xpath - xpath to search with * @returns {Selector} returns a selector */ export const xpathSelector = xpath => { return Selector(getElementsByXPath(xpath)); }; ``` ### Your complete configuration file _No response_ ### Your complete test report None, testcafe hangs and never finishes. ### Screenshots ### Steps to Reproduce 1. Run example.js 2. Notice that TestCafe will hang ### TestCafe version 2.5.1-rc.1 ### Node.js version 16.20 ### Command-line arguments testcafe chrome example.js --skip-js-errors ### Browser name(s) and version(s) Chrome ### Platform(s) and version(s) macOS ### Other _No response_",1.0,"Testcafe hangs when interacting with login page - ### What is your Scenario? We are facing an issue within our TestCafe tests where they hang when interacting with the login page. TestCafe does not reach its assertion or selector timeout, instead it just completely stops. Please see example. ### What is the Current behavior? TestCafe hangs after performing an action and does not move onto next action. ### What is the Expected behavior? TestCafe to either move onto next action or timeout. ### What is your public website URL? (or attach your complete example) economist.com ### What is your TestCafe test code? 
// cookie-consent.js // ``` import { RequestMock } from 'testcafe'; // Mock Evidon cookie consent to avoid interacting with the module on each new session export function mockEvidonCookieConsent() { return RequestMock() .onRequestTo(/evidon\.com\//) .respond('', 200); } export function mockSourcepointCookieConsent() { return RequestMock() .onRequestTo(/cmp-cdn\.p\.aws\.economist\.com\/latest\/cmp\.min\.js/) .respond('', 200); } ``` // example.js // ``` import { fixture , Selector } from 'testcafe'; import { xpathSelector} from './helpers'; import { mockEvidonCookieConsent, mockSourcepointCookieConsent, } from './cookie-consent'; fixture `example fixture` .page('economist.com') .requestHooks([mockEvidonCookieConsent(), mockSourcepointCookieConsent()]) const emailField = xpathSelector('//*[@type=""text""]'); const passwordField = xpathSelector('//*[@type=""password""]'); const loginLink = Selector('.ds-masthead').find('a').withText('Log in').filterVisible(); test('example', async t => { await t.click(loginLink) await t.typeText(emailField, 'email@example.com') console.log('email entered') await t.typeText(passwordField, 'password') console.log('password entered') }); test('example 2', async t => { await t.click(loginLink) await t.typeText(emailField, 'email@example.com') console.log('email entered') await t.typeText(passwordField, 'password') console.log('password entered') }); test('example 3', async t => { await t.click(loginLink) await t.typeText(emailField, 'email@example.com') console.log('email entered') await t.typeText(passwordField, 'password') console.log('password entered') }); ``` // helpers.js // ``` import { Selector } from 'testcafe'; /** * Retrieves all elements that match a given xpath. * @param {string} xpath - xpath to search with * @return {Object} found elements */ const getElementsByXPath = Selector(xpath => { const iterator = document.evaluate( xpath, document, null, XPathResult.UNORDERED_NODE_ITERATOR_TYPE, null, ); const items = []; let item = iterator.iterateNext(); while (item) { items.push(item); item = iterator.iterateNext(); } return items; }); /** * Create a selector based on a xpath. Testcafe does not natively support xpath * selectors, hence this function. * @param {string} xpath - xpath to search with * @returns {Selector} returns a selector */ export const xpathSelector = xpath => { return Selector(getElementsByXPath(xpath)); }; ``` ### Your complete configuration file _No response_ ### Your complete test report None, testcafe hangs and never finishes. ### Screenshots ### Steps to Reproduce 1. Run example.js 2. 
Notice that TestCafe will hang ### TestCafe version 2.5.1-rc.1 ### Node.js version 16.20 ### Command-line arguments testcafe chrome example.js --skip-js-errors ### Browser name(s) and version(s) Chrome ### Platform(s) and version(s) macOS ### Other _No response_",1,testcafe hangs when interacting with login page what is your scenario we are facing an issue within our testcafe tests where they hang when interacting with the login page testcafe does not reach its assertion or selector timeout instead it just completely stops please see example what is the current behavior testcafe hangs after performing an action and does not move onto next action what is the expected behavior testcafe to either move onto next action or timeout what is your public website url or attach your complete example economist com what is your testcafe test code cookie consent js import requestmock from testcafe mock evidon cookie consent to avoid interacting with the module on each new session export function mockevidoncookieconsent return requestmock onrequestto evidon com respond export function mocksourcepointcookieconsent return requestmock onrequestto cmp cdn p aws economist com latest cmp min js respond example js import fixture selector from testcafe import xpathselector from helpers import mockevidoncookieconsent mocksourcepointcookieconsent from cookie consent fixture example fixture page economist com requesthooks const emailfield xpathselector const passwordfield xpathselector const loginlink selector ds masthead find a withtext log in filtervisible test example async t await t click loginlink await t typetext emailfield email example com console log email entered await t typetext passwordfield password console log password entered test example async t await t click loginlink await t typetext emailfield email example com console log email entered await t typetext passwordfield password console log password entered test example async t await t click loginlink await t typetext emailfield email example com console log email entered await t typetext passwordfield password console log password entered helpers js import selector from testcafe retrieves all elements that match a given xpath param string xpath xpath to search with return object found elements const getelementsbyxpath selector xpath const iterator document evaluate xpath document null xpathresult unordered node iterator type null const items let item iterator iteratenext while item items push item item iterator iteratenext return items create a selector based on a xpath testcafe does not natively support xpath selectors hence this function param string xpath xpath to search with returns selector returns a selector export const xpathselector xpath return selector getelementsbyxpath xpath your complete configuration file no response your complete test report none testcafe hangs and never finishes img width alt screenshot at src screenshots img width alt screenshot at src img width alt screenshot at src steps to reproduce run example js notice that testcafe will hang testcafe version rc node js version command line arguments testcafe chrome example js skip js errors browser name s and version s chrome platform s and version s macos other no response ,1 324923,24024680923.0,IssuesEvent,2022-09-15 10:28:23,ita-social-projects/dokazovi-requirements,https://api.github.com/repos/ita-social-projects/dokazovi-requirements,opened,[Test for Story #604] Verify that admin can't schedule the material with an invalid date/time or without confirmation,documentation 
test case,"**Story link** [#604 Story](https://github.com/ita-social-projects/dokazovi-requirements/issues/604#issue-1344414724) ### Status: Pass/Fail/Not executed ### Title: Verify that admin can't schedule the material with an invalid date/time or without confirmation ### Description: Verify that admin is not able to schedule the material with an invalid entered date and time entered or when the admin doen't confirm chosen date and time and the material’s status doesn't change ### Pre-conditions: The admin is logged in Адміністрування → Керування матеріалами → material with <На модерації> status or <В архіві> status→Дії → Запланувати публікацію Step № | Test Steps | Test data | Expected result | Status (Pass/Fail/Not executed) | Notes ------------ | ------------ | ------------ | ------------ | ------------ | ------------ 1 | Click on the Date&Time picker component ![image](https://user-images.githubusercontent.com/99169057/190351744-b1fe4110-405c-4cb1-b1f4-76a56c7bb896.png) and select the date and time | | The date and time are selected and shown. The selected date and time is validated automatically by the system. The date - in dd.mm.yyyy format. The time - in hh:mm format |Not executed| Mockup 2 | Click on the 'Ні' button| | Date&Time picker component is closed and the changes are not saved | Not executed| 3 | Entered a valid date and time in the Date Time picker component and repeat the step | | The date and time are entered and shown. The entered date and time is validated automatically by the system. The date - in dd.mm.yyyy format. The time - in hh:mm format. |Not executed| Mockup 4 | Click on the 'Ні' button| | Date&Time picker component is closed and the changes are not saved | Not executed| 5 | Entered an invalid date/time in the Date Time picker component and click on the 'Так' button | | The system validates the data and shows the error message: 'Введіть коректну дату та час' below the 'Обрати дату та час' field |Not executed| ![D_T picker](https://user-images.githubusercontent.com/99800949/185623971-36d8d9cb-0c79-42da-bfa4-3f3719cfc4dc.png) ![D_T picker 2](https://user-images.githubusercontent.com/99800949/185623970-d8246eee-b5d0-454f-8e78-b5471f09c922.png) ### Dependencies: [#604](https://github.com/ita-social-projects/dokazovi-requirements/issues/604#issue-1344414724) ### [Gantt Chart](https://docs.google.com/spreadsheets/d/1bgaEJDOf3OhfNRfP-WWPKmmZFW5C3blOUxamE3wSCbM/edit#gid=775577959) ",1.0,"[Test for Story #604] Verify that admin can't schedule the material with an invalid date/time or without confirmation - **Story link** [#604 Story](https://github.com/ita-social-projects/dokazovi-requirements/issues/604#issue-1344414724) ### Status: Pass/Fail/Not executed ### Title: Verify that admin can't schedule the material with an invalid date/time or without confirmation ### Description: Verify that admin is not able to schedule the material with an invalid entered date and time entered or when the admin doen't confirm chosen date and time and the material’s status doesn't change ### Pre-conditions: The admin is logged in Адміністрування → Керування матеріалами → material with <На модерації> status or <В архіві> status→Дії → Запланувати публікацію Step № | Test Steps | Test data | Expected result | Status (Pass/Fail/Not executed) | Notes ------------ | ------------ | ------------ | ------------ | ------------ | ------------ 1 | Click on the Date&Time picker component ![image](https://user-images.githubusercontent.com/99169057/190351744-b1fe4110-405c-4cb1-b1f4-76a56c7bb896.png) 
and select the date and time | | The date and time are selected and shown. The selected date and time is validated automatically by the system. The date - in dd.mm.yyyy format. The time - in hh:mm format |Not executed| Mockup 2 | Click on the 'Ні' button| | Date&Time picker component is closed and the changes are not saved | Not executed| 3 | Entered a valid date and time in the Date Time picker component and repeat the step | | The date and time are entered and shown. The entered date and time is validated automatically by the system. The date - in dd.mm.yyyy format. The time - in hh:mm format. |Not executed| Mockup 4 | Click on the 'Ні' button| | Date&Time picker component is closed and the changes are not saved | Not executed| 5 | Entered an invalid date/time in the Date Time picker component and click on the 'Так' button | | The system validates the data and shows the error message: 'Введіть коректну дату та час' below the 'Обрати дату та час' field |Not executed| ![D_T picker](https://user-images.githubusercontent.com/99800949/185623971-36d8d9cb-0c79-42da-bfa4-3f3719cfc4dc.png) ![D_T picker 2](https://user-images.githubusercontent.com/99800949/185623970-d8246eee-b5d0-454f-8e78-b5471f09c922.png) ### Dependencies: [#604](https://github.com/ita-social-projects/dokazovi-requirements/issues/604#issue-1344414724) ### [Gantt Chart](https://docs.google.com/spreadsheets/d/1bgaEJDOf3OhfNRfP-WWPKmmZFW5C3blOUxamE3wSCbM/edit#gid=775577959) ",0, verify that admin can t schedule the material with an invalid date time or without confirmation story link status pass fail not executed title verify that admin can t schedule the material with an invalid date time or without confirmation description verify that admin is not able to schedule the material with an invalid entered date and time entered or when the admin doen t confirm chosen date and time and the material’s status doesn t change pre conditions the admin is logged in адміністрування → керування матеріалами → material with status or status→дії → запланувати публікацію step № test steps test data expected result status pass fail not executed notes click on the date time picker component and select the date and time the date and time are selected and shown the selected date and time is validated automatically by the system the date in dd mm yyyy format the time in hh mm format not executed mockup click on the ні button date time picker component is closed and the changes are not saved not executed entered a valid date and time in the date time picker component and repeat the step the date and time are entered and shown the entered date and time is validated automatically by the system the date in dd mm yyyy format the time in hh mm format not executed mockup click on the ні button date time picker component is closed and the changes are not saved not executed entered an invalid date time in the date time picker component and click on the так button the system validates the data and shows the error message введіть коректну дату та час below the обрати дату та час field not executed dependencies ,0 584259,17409860674.0,IssuesEvent,2021-08-03 10:52:55,AtlasOfLivingAustralia/collectory,https://api.github.com/repos/AtlasOfLivingAustralia/collectory,closed,Dynamic representation of partner profile links,enhancement priority-medium status-new type-task,"_migrated from:_ https://code.google.com/p/ala/issues/detail?id=711 _date:_ Wed Jun 18 21:20:27 2014 _author:_ alau...@gmail.com --- Currently the links to partner profiles are static. 
A more dynamic representation to draw attention to profiles from the home page is preferred. ",1.0,"Dynamic representation of partner profile links - _migrated from:_ https://code.google.com/p/ala/issues/detail?id=711 _date:_ Wed Jun 18 21:20:27 2014 _author:_ alau...@gmail.com --- Currently the links to partner profiles are static. A more dynamic representation to draw attention to profiles from the home page is preferred. ",0,dynamic representation of partner profile links migrated from date wed jun author alau gmail com currently the links to partner profiles are static a more dynamic representation to draw attention to profiles from the home page is preferred ,0 168328,13079801608.0,IssuesEvent,2020-08-01 04:53:13,cockroachdb/cockroach,https://api.github.com/repos/cockroachdb/cockroach,closed,roachtest: hibernate failed,C-test-failure O-roachtest O-robot branch-provisional_202007220233_v20.2.0-alpha.2 release-blocker,"[(roachtest).hibernate failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2107811&tab=buildLog) on [provisional_202007220233_v20.2.0-alpha.2@d3119926d33d808c6384cf3e99a7f7435f395489](https://github.com/cockroachdb/cockroach/commits/d3119926d33d808c6384cf3e99a7f7435f395489): ``` The test failed on branch=provisional_202007220233_v20.2.0-alpha.2, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/hibernate/run_1 orm_helpers.go:214,orm_helpers.go:144,java_helpers.go:216,hibernate.go:173,hibernate.go:185,test_runner.go:757: Tests run on Cockroach v20.2.0-alpha.1-933-gd3119926d3 Tests run against hibernate HHH-13724-cockroachdb-dialects 8322 Total Tests Run 8321 tests passed 1 test failed 1951 tests skipped 0 tests ignored 0 tests passed unexpectedly 1 test failed unexpectedly 0 tests expected failed but skipped 0 tests expected failed but not run --- --- FAIL: org.hibernate.jpa.test.graphs.EntityGraphTest.attributeNodeInheritanceTest - unknown (unexpected) For a full summary look at the hibernate artifacts An updated blocklist (hibernateBlockList20_2) is available in the artifacts' hibernate log ```

Artifacts: [/hibernate](https://teamcity.cockroachdb.com/viewLog.html?buildId=2107811&tab=artifacts#/hibernate) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Ahibernate.%2A&sort=title&restgroup=false&display=lastcommented+project) powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)

",2.0,"roachtest: hibernate failed - [(roachtest).hibernate failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2107811&tab=buildLog) on [provisional_202007220233_v20.2.0-alpha.2@d3119926d33d808c6384cf3e99a7f7435f395489](https://github.com/cockroachdb/cockroach/commits/d3119926d33d808c6384cf3e99a7f7435f395489): ``` The test failed on branch=provisional_202007220233_v20.2.0-alpha.2, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/hibernate/run_1 orm_helpers.go:214,orm_helpers.go:144,java_helpers.go:216,hibernate.go:173,hibernate.go:185,test_runner.go:757: Tests run on Cockroach v20.2.0-alpha.1-933-gd3119926d3 Tests run against hibernate HHH-13724-cockroachdb-dialects 8322 Total Tests Run 8321 tests passed 1 test failed 1951 tests skipped 0 tests ignored 0 tests passed unexpectedly 1 test failed unexpectedly 0 tests expected failed but skipped 0 tests expected failed but not run --- --- FAIL: org.hibernate.jpa.test.graphs.EntityGraphTest.attributeNodeInheritanceTest - unknown (unexpected) For a full summary look at the hibernate artifacts An updated blocklist (hibernateBlockList20_2) is available in the artifacts' hibernate log ```

Artifacts: [/hibernate](https://teamcity.cockroachdb.com/viewLog.html?buildId=2107811&tab=artifacts#/hibernate) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Ahibernate.%2A&sort=title&restgroup=false&display=lastcommented+project) powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)

",0,roachtest hibernate failed on the test failed on branch provisional alpha cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts hibernate run orm helpers go orm helpers go java helpers go hibernate go hibernate go test runner go tests run on cockroach alpha tests run against hibernate hhh cockroachdb dialects total tests run tests passed test failed tests skipped tests ignored tests passed unexpectedly test failed unexpectedly tests expected failed but skipped tests expected failed but not run fail org hibernate jpa test graphs entitygraphtest attributenodeinheritancetest unknown unexpected for a full summary look at the hibernate artifacts an updated blocklist is available in the artifacts hibernate log more artifacts powered by ,0 89499,15829643748.0,IssuesEvent,2021-04-06 11:28:15,VivekBuzruk/UI,https://api.github.com/repos/VivekBuzruk/UI,closed,CVE-2021-23337 (High) detected in lodash-4.17.20.tgz - autoclosed,security vulnerability,"## CVE-2021-23337 - High Severity Vulnerability
Vulnerable Library - lodash-4.17.20.tgz

Lodash modular utilities.

Library home page: https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz

Path to dependency file: UI/package.json

Path to vulnerable library: UI/node_modules/lodash/package.json

Dependency Hierarchy: - :x: **lodash-4.17.20.tgz** (Vulnerable Library)

Found in HEAD commit: edeb2a2fd15349abe4886893f9325323672726f3

Found in base branch: master

Vulnerability Details

Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
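For context, the injection goes through the `variable` option of `_.template`. A minimal sketch of the widely reported proof-of-concept shape (the payload string below is illustrative, not taken from this report):

```js
// Sketch only: in lodash < 4.17.21 the `variable` option is interpolated
// unsanitized into the compiled template's function source, so a crafted
// value escapes the template and runs arbitrary code.
const _ = require('lodash');

const untrusted = '){console.log(1)}; with(obj'; // attacker-controlled value
_.template('', { variable: untrusted })();

// From 4.17.21 on, lodash throws an Invalid `variable` option error here
// instead of compiling the payload.
```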

Publish Date: 2021-02-15

URL: CVE-2021-23337

CVSS 3 Score Details (7.2)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: High
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c
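For reference, the core of that commit (paraphrased from the lodash 4.17.21 source; the helper wrapper here is mine, not part of the diff) is a guard that rejects unsafe `variable` values before the template source is compiled:

```js
// Paraphrased from lodash 4.17.21: `variable` may no longer contain
// characters that could break out of the generated function source.
var reForbiddenIdentifierChars = /[()=,{}\[\]\/\s]/;

function assertSafeTemplateVariable(variable) {
  if (reForbiddenIdentifierChars.test(variable)) {
    throw new Error('Invalid `variable` option passed into `_.template`');
  }
}

assertSafeTemplateVariable('obj');                             // passes
// assertSafeTemplateVariable('){console.log(1)}; with(obj'); // throws
```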

Release Date: 2021-02-15

Fix Resolution: lodash - 4.17.21

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2021-23337 (High) detected in lodash-4.17.20.tgz - autoclosed - ## CVE-2021-23337 - High Severity Vulnerability
Vulnerable Library - lodash-4.17.20.tgz

Lodash modular utilities.

Library home page: https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz

Path to dependency file: UI/package.json

Path to vulnerable library: UI/node_modules/lodash/package.json

Dependency Hierarchy: - :x: **lodash-4.17.20.tgz** (Vulnerable Library)

Found in HEAD commit: edeb2a2fd15349abe4886893f9325323672726f3

Found in base branch: master

Vulnerability Details

Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.

Publish Date: 2021-02-15

URL: CVE-2021-23337

CVSS 3 Score Details (7.2)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: High
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c

Release Date: 2021-02-15

Fix Resolution: lodash - 4.17.21

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in lodash tgz autoclosed cve high severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file ui package json path to vulnerable library ui node modules lodash package json dependency hierarchy x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details lodash versions prior to are vulnerable to command injection via the template function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource ,0 108894,9335204581.0,IssuesEvent,2019-03-28 18:00:00,istio/istio,https://api.github.com/repos/istio/istio,closed,Helm repo for 1.1.1 uses wrong hub,area/test and release,"`hub: gcr.io/istio-release` should be `hub: docker.io/istio`. 1.1.0 is correct, 1.1.1 is wrong. https://storage.googleapis.com/istio-release/releases/1.1.1/charts/istio-1.1.1.tgz @mandarjog @utka ",1.0,"Helm repo for 1.1.1 uses wrong hub - `hub: gcr.io/istio-release` should be `hub: docker.io/istio`. 1.1.0 is correct, 1.1.1 is wrong. https://storage.googleapis.com/istio-release/releases/1.1.1/charts/istio-1.1.1.tgz @mandarjog @utka ",0,helm repo for uses wrong hub hub gcr io istio release should be hub docker io istio is correct is wrong mandarjog utka ,0 51105,12678391853.0,IssuesEvent,2020-06-19 09:41:23,opencv/opencv,https://api.github.com/repos/opencv/opencv,closed,[imgproc] Compilation error caused by color_yuv.simd.hpp,category: build/install platform: win32,"##### System information (version) - OpenCV => 4.4.0 - Operating System / Platform => Windows 64 Bit - Compiler => mingw32 ##### Detailed description The symbol ""scr1"" is already defined in the mingw-w64 runtime package in ""mingw-w64\x86_64-8.1.0-posix-seh-rt_v6-rev0\mingw64\x86_64-w64-mingw32\include\dlgs.h"" (included through ""windows.h"" for example). ##### Steps to reproduce The compilation fails. ##### Several ways to solve this issue 1. The ugliest one would be to add at the beginning of this file: ``` #if defined scr1 # undef scr1 #endif // scr1 ``` 2. A better one should be to avoid including the ""windows.h"" file before ""color_yuv.simd.hpp"" 3. An alternative could be the renaming of variables. The guilty mingw32 ""dlgs.h"" file: [dlgs_h.txt](https://github.com/opencv/opencv/files/4776326/dlgs_h.txt) ",1.0,"[imgproc] Compilation error caused by color_yuv.simd.hpp - ##### System information (version) - OpenCV => 4.4.0 - Operating System / Platform => Windows 64 Bit - Compiler => mingw32 ##### Detailed description The symbol ""scr1"" is already defined in the mingw-w64 runtime package in ""mingw-w64\x86_64-8.1.0-posix-seh-rt_v6-rev0\mingw64\x86_64-w64-mingw32\include\dlgs.h"" (included through ""windows.h"" for example). ##### Steps to reproduce The compilation fails. ##### Several ways to solve this issue 1. The ugliest one would be to add at the beginning of this file: ``` #if defined scr1 # undef scr1 #endif // scr1 ``` 2. 
A better one should be to avoid including the ""windows.h"" file before ""color_yuv.simd.hpp"" 3. An alternative could be the renaming of variables. The guilty mingw32 ""dlgs.h"" file: [dlgs_h.txt](https://github.com/opencv/opencv/files/4776326/dlgs_h.txt) ",0, compilation error caused by color yuv simd hpp system information version opencv operating system platform windows bit compiler detailed description the symbol is already defined in the mingw runtime package in mingw posix seh rt include dlgs h included through windows h for example steps to reproduce the compilation fails several ways to solve this issue the ugliest one would be to add at the beginning of this file if defined undef endif a better one should be to avoid including the windows h file before color yuv simd hpp an alternative could be the renaming of variables the guilty dlgs h file ,0 222673,7435048440.0,IssuesEvent,2018-03-26 13:08:06,CS2103JAN2018-T15-B4/main,https://api.github.com/repos/CS2103JAN2018-T15-B4/main,opened,"As an eventful business, I would like a calendar with a list of events to keep track of.",priority.high type.story,This is a build-up on the calendar view feature.,1.0,"As an eventful business, I would like a calendar with a list of events to keep track of. - This is a build-up on the calendar view feature.",0,as an eventful business i would like a calendar with a list of events to keep track of this is a build up on the calendar view feature ,0 6688,23739785764.0,IssuesEvent,2022-08-31 11:23:42,nf-core/tools,https://api.github.com/repos/nf-core/tools,closed,Update PyPI Deployment GHA workflow,automation,"https://github.com/nf-core/tools/actions/runs/2956647970 [build-n-publish: # >> PyPA publish to PyPI GHA: UNSUPPORTED GITHUB ACTION VERSION <<#L1](https://github.com/nf-core/tools/commit/cd0ac0b5ee826304a0fd77792593735dd8fc2e58#annotation_4454551978) You are using ""pypa/gh-action-pypi-publish@master"". The ""master"" branch of this project has been sunset and will not receive any updates, not even security bug fixes. Please, make sure to use a supported version. If you want to pin to v1 major version, use ""pypa/gh-action-pypi-publish@release/v1"". If you feel adventurous, you may opt to use use ""pypa/gh-action-pypi-publish@unstable/v1"" instead. A more general recommendation is to pin to exact tags or commit shas.",1.0,"Update PyPI Deployment GHA workflow - https://github.com/nf-core/tools/actions/runs/2956647970 [build-n-publish: # >> PyPA publish to PyPI GHA: UNSUPPORTED GITHUB ACTION VERSION <<#L1](https://github.com/nf-core/tools/commit/cd0ac0b5ee826304a0fd77792593735dd8fc2e58#annotation_4454551978) You are using ""pypa/gh-action-pypi-publish@master"". The ""master"" branch of this project has been sunset and will not receive any updates, not even security bug fixes. Please, make sure to use a supported version. If you want to pin to v1 major version, use ""pypa/gh-action-pypi-publish@release/v1"". If you feel adventurous, you may opt to use use ""pypa/gh-action-pypi-publish@unstable/v1"" instead. 
A more general recommendation is to pin to exact tags or commit shas.",1,update pypi deployment gha workflow you are using pypa gh action pypi publish master the master branch of this project has been sunset and will not receive any updates not even security bug fixes please make sure to use a supported version if you want to pin to major version use pypa gh action pypi publish release if you feel adventurous you may opt to use use pypa gh action pypi publish unstable instead a more general recommendation is to pin to exact tags or commit shas ,1 814343,30503671063.0,IssuesEvent,2023-07-18 15:24:05,arkedge/c2a-core,https://api.github.com/repos/arkedge/c2a-core,opened,"minimum_user, 2nd_obc_user の rename",priority::high,"## 詳細 - わかりにくい - 2nd のように,先頭数字は使いにくい ## close条件 - [ ] ディレクトリやドキュメントのりネーム - [ ] 各所接頭語などのりネーム ",1.0,"minimum_user, 2nd_obc_user の rename - ## 詳細 - わかりにくい - 2nd のように,先頭数字は使いにくい ## close条件 - [ ] ディレクトリやドキュメントのりネーム - [ ] 各所接頭語などのりネーム ",0,minimum user obc user の rename 詳細 わかりにくい のように,先頭数字は使いにくい close条件 ディレクトリやドキュメントのりネーム 各所接頭語などのりネーム ,0 751729,26255098851.0,IssuesEvent,2023-01-05 23:30:12,kubernetes/kubernetes,https://api.github.com/repos/kubernetes/kubernetes,closed,"CEL: Invalid value: ""object"": internal error: runtime error: index out of range [3] with length 3 evaluating rule: ",kind/bug priority/important-soon sig/api-machinery triage/accepted,"### What happened? I'm seeing this error when posting an update to the kubernetes API: `Invalid value: ""object"": internal error: runtime error: index out of range [3] with length 3 evaluating rule: ` ### What did you expect to happen? No error ### How can we reproduce it (as minimally and precisely as possible)? https://github.com/inteon/CEL_bug ### Anything else we need to know? _No response_ ### Kubernetes version
```console $ ./kube-apiserver --version Kubernetes v1.25.0 ```
### Cloud provider
### OS version
```console # On Linux: $ cat /etc/os-release # paste output here $ uname -a # paste output here # On Windows: C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture # paste output here ```
### Install tools
### Container runtime (CRI) and version (if applicable)
### Related plugins (CNI, CSI, ...) and versions (if applicable)
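For readers new to this feature: the `rule` named in the error message is a CEL expression attached to a CRD schema node via `x-kubernetes-validations`. A generic illustration of the mechanism (this is not the failing rule, which lives only in the linked inteon/CEL_bug repro):

```yaml
# Illustrative only; not the rule that triggers the panic reported above.
openAPIV3Schema:
  type: object
  properties:
    replicas:
      type: integer
  x-kubernetes-validations:
    - rule: '!has(self.replicas) || self.replicas <= 5'
      message: replicas must be at most 5
```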
",1.0,"CEL: Invalid value: ""object"": internal error: runtime error: index out of range [3] with length 3 evaluating rule: - ### What happened? I'm seeing this error when posting an update to the kubernetes API: `Invalid value: ""object"": internal error: runtime error: index out of range [3] with length 3 evaluating rule: ` ### What did you expect to happen? No error ### How can we reproduce it (as minimally and precisely as possible)? https://github.com/inteon/CEL_bug ### Anything else we need to know? _No response_ ### Kubernetes version
```console $ ./kube-apiserver --version Kubernetes v1.25.0 ```
### Cloud provider
### OS version
```console # On Linux: $ cat /etc/os-release # paste output here $ uname -a # paste output here # On Windows: C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture # paste output here ```
### Install tools
### Container runtime (CRI) and version (if applicable)
### Related plugins (CNI, CSI, ...) and versions (if applicable)
",0,cel invalid value object internal error runtime error index out of range with length evaluating rule what happened i m seeing this error when posting an update to the kubernetes api invalid value object internal error runtime error index out of range with length evaluating rule what did you expect to happen no error how can we reproduce it as minimally and precisely as possible anything else we need to know no response kubernetes version console kube apiserver version kubernetes cloud provider os version console on linux cat etc os release paste output here uname a paste output here on windows c wmic os get caption version buildnumber osarchitecture paste output here install tools container runtime cri and version if applicable related plugins cni csi and versions if applicable ,0 4389,16445491124.0,IssuesEvent,2021-05-20 19:02:51,operate-first/SRE,https://api.github.com/repos/operate-first/SRE,closed,[Automation] Should we settle on a single language for automation scripts? Which one?,automation,"From the discussion @HumairAK started in operate-first/apps#635: > one issue I'm seeing is right now we have a few scripts in bash, one in python (that @larsks has worked on) we will need to narrow down to one language if we are to wrap these, lest we start calling bash / python scripts from golang or something (yuck).",1.0,"[Automation] Should we settle on a single language for automation scripts? Which one? - From the discussion @HumairAK started in operate-first/apps#635: > one issue I'm seeing is right now we have a few scripts in bash, one in python (that @larsks has worked on) we will need to narrow down to one language if we are to wrap these, lest we start calling bash / python scripts from golang or something (yuck).",1, should we settle on a single language for automation scripts which one from the discussion humairak started in operate first apps one issue i m seeing is right now we have a few scripts in bash one in python that larsks has worked on we will need to narrow down to one language if we are to wrap these lest we start calling bash python scripts from golang or something yuck ,1 2873,12740933520.0,IssuesEvent,2020-06-26 04:23:29,home-assistant/frontend,https://api.github.com/repos/home-assistant/frontend,closed,"When an entity selection list in automations appears at the bottom of the screen, it doesn't work in Firefox and Chrome",editor: automation stale," ## The problem When I create an automation, and I go to select the entity, by start typing it, it will shorten the list momentarily, and then jump back to the full list. It only happens when the list is at the bottom of the screen, making it open above the line, if it's at the top, opening down, it seems to work correctly. ## Environment - Home Assistant release with the issue: 0.105.4 - Last working Home Assistant release (if known): pre-0.100 - Operating environment (Hass.io/Docker/Windows/etc.): hassio - Browser: Firefox 72.0.2 (and also developer edition), Chrome 80.0.3987.116 - Link to integration documentation on our website: ## Problem-relevant `configuration.yaml` ```yaml ``` ## Traceback/Error logs ```txt ``` ## Additional information I did a small 'video' of the problem: https://www.screencast.com/t/wGrgSIYIt ",1.0,"When an entity selection list in automations appears at the bottom of the screen, it doesn't work in Firefox and Chrome - ## The problem When I create an automation, and I go to select the entity, by start typing it, it will shorten the list momentarily, and then jump back to the full list. 
It only happens when the list is at the bottom of the screen, making it open above the line, if it's at the top, opening down, it seems to work correctly. ## Environment - Home Assistant release with the issue: 0.105.4 - Last working Home Assistant release (if known): pre-0.100 - Operating environment (Hass.io/Docker/Windows/etc.): hassio - Browser: Firefox 72.0.2 (and also developer edition), Chrome 80.0.3987.116 - Link to integration documentation on our website: ## Problem-relevant `configuration.yaml` ```yaml ``` ## Traceback/Error logs ```txt ``` ## Additional information I did a small 'video' of the problem: https://www.screencast.com/t/wGrgSIYIt ",1,when an entity selection list in automations appears at the bottom of the screen it doesn t work in firefox and chrome read this first if you need additional help with this template please refer to make sure you are running the latest version of home assistant before reporting an issue do not report issues for integrations if you are using custom components or integrations provide as many details as possible paste logs configuration samples and code into the backticks do not delete any text from this template otherwise your issue may be closed without comment the problem describe the issue you are experiencing here to communicate to the maintainers tell us what you were trying to do and what happened instead when i create an automation and i go to select the entity by start typing it it will shorten the list momentarily and then jump back to the full list it only happens when the list is at the bottom of the screen making it open above the line if it s at the top opening down it seems to work correctly environment provide details about the versions you are using which helps us to reproduce and find the issue quicker version information is found in the home assistant frontend developer tools info home assistant release with the issue last working home assistant release if known pre operating environment hass io docker windows etc hassio browser firefox and also developer edition chrome link to integration documentation on our website problem relevant configuration yaml an example configuration that caused the problem for you fill this out even if it seems unimportant to you please be sure to remove personal information like passwords private urls and other credentials yaml traceback error logs if you come across any trace or error logs please provide them txt additional information i did a small video of the problem ,1 6900,24022655640.0,IssuesEvent,2022-09-15 08:56:18,querqy/querqy-opensearch,https://api.github.com/repos/querqy/querqy-opensearch,closed,CI workflows,enhancement automation,"Issue: Create CI workflows for test automation. This should include building the plugin and running unit/integration tests. More on Plugin standards [here](https://github.com/opensearch-project/opensearch-plugins/blob/793f21c111a322d3800dcc66fa1c61bdc026c271/STANDARDS.md#ci-workflows )",1.0,"CI workflows - Issue: Create CI workflows for test automation. This should include building the plugin and running unit/integration tests. 
More on Plugin standards [here](https://github.com/opensearch-project/opensearch-plugins/blob/793f21c111a322d3800dcc66fa1c61bdc026c271/STANDARDS.md#ci-workflows )",1,ci workflows issue create ci workflows for test automation this should include building the plugin and running unit integration tests more on plugin standards ,1 5103,18674127173.0,IssuesEvent,2021-10-31 08:54:04,Tithibots/tithiwa,https://api.github.com/repos/Tithibots/tithiwa,closed,Create track_online_status() In Chatroom class to track that someone is online or not,enhancement help wanted good first issue python Selenium Automation hacktoberfest,"Just open the chatroom of the given contact and just save online status in some file every second. maybe use True or False represents online or offline ",1.0,"Create track_online_status() In Chatroom class to track that someone is online or not - Just open the chatroom of the given contact and just save online status in some file every second. maybe use True or False represents online or offline ",1,create track online status in chatroom class to track that someone is online or not just open the chatroom of the given contact and just save online status in some file every second maybe use true or false represents online or offline ,1 609109,18854230219.0,IssuesEvent,2021-11-12 02:39:38,crypto-com/chain-desktop-wallet,https://api.github.com/repos/crypto-com/chain-desktop-wallet,opened,Problem: Ledger approval in Staking is longer than expected,low priority need-investigation,"## Problem To confirm a staking transaction, it takes between 20-30 seconds for the transaction to appear on my Ledger for approval, which is longer than normal.",1.0,"Problem: Ledger approval in Staking is longer than expected - ## Problem To confirm a staking transaction, it takes between 20-30 seconds for the transaction to appear on my Ledger for approval, which is longer than normal.",0,problem ledger approval in staking is longer than expected problem to confirm a staking transaction it takes between seconds for the transaction to appear on my ledger for approval which is longer than normal ,0 318341,27297708865.0,IssuesEvent,2023-02-23 21:58:16,nucleus-security/Test-repo,https://api.github.com/repos/nucleus-security/Test-repo,opened,Nucleus - [High] - 440037,Test,"Source: QUALYS Finding Description: CentOS has released security update for kernel security update to fix the vulnerabilities. Affected Products: centos 6 Impact: This vulnerability can also be used to cause a complete denial of service and could render the resource completely unavailable. Target(s): Asset name: 192.168.56.127 IP: 192.168.56.127 Solution: To resolve this issue, upgrade to the latest packages which contain a patch. Refer to CentOS advisory centos 6 (https://lists.centos.org/pipermail/centos-announce/2018-August/022983.html) for updates and patch information. 
Patch: Following are links for downloading patches to fix the vulnerabilities: CESA-2018:2390: centos 6 (https://lists.centos.org/pipermail/centos-announce/2018-August/022983.html) References: QID:440037 CVE:CVE-2018-5390, CVE-2018-3620, CVE-2018-3646, CVE-2018-3693, CVE-2018-10901, CVE-2017-0861, CVE-2017-15265, CVE-2018-7566, CVE-2018-1000004 Category:CentOS PCI Flagged:yes Vendor References:CESA-2018:2390 centos 6 Bugtraq IDs:104976, 104905, 105080, 101288, 103605, 104606, 102329 Severity: High Date Discovered: 2022-11-12 08:04:44 Nucleus Notification Rules Triggered: Rule GitHub Project Name: 6716 Please see Nucleus for more information on these vulnerabilities:https://192.168.56.101/nucleus/public/app/index.html#vuln/201000007/NDQwMDM3/UVVBTFlT/VnVsbg--/false/MjAxMDAwMDA3/c3VtbWFyeQ--/false",1.0,"Nucleus - [High] - 440037 - Source: QUALYS Finding Description: CentOS has released security update for kernel security update to fix the vulnerabilities. Affected Products: centos 6 Impact: This vulnerability can also be used to cause a complete denial of service and could render the resource completely unavailable. Target(s): Asset name: 192.168.56.127 IP: 192.168.56.127 Solution: To resolve this issue, upgrade to the latest packages which contain a patch. Refer to CentOS advisory centos 6 (https://lists.centos.org/pipermail/centos-announce/2018-August/022983.html) for updates and patch information. Patch: Following are links for downloading patches to fix the vulnerabilities: CESA-2018:2390: centos 6 (https://lists.centos.org/pipermail/centos-announce/2018-August/022983.html) References: QID:440037 CVE:CVE-2018-5390, CVE-2018-3620, CVE-2018-3646, CVE-2018-3693, CVE-2018-10901, CVE-2017-0861, CVE-2017-15265, CVE-2018-7566, CVE-2018-1000004 Category:CentOS PCI Flagged:yes Vendor References:CESA-2018:2390 centos 6 Bugtraq IDs:104976, 104905, 105080, 101288, 103605, 104606, 102329 Severity: High Date Discovered: 2022-11-12 08:04:44 Nucleus Notification Rules Triggered: Rule GitHub Project Name: 6716 Please see Nucleus for more information on these vulnerabilities:https://192.168.56.101/nucleus/public/app/index.html#vuln/201000007/NDQwMDM3/UVVBTFlT/VnVsbg--/false/MjAxMDAwMDA3/c3VtbWFyeQ--/false",0,nucleus source qualys finding description centos has released security update for kernel security update to fix the vulnerabilities affected products centos impact this vulnerability can also be used to cause a complete denial of service and could render the resource completely unavailable target s asset name ip solution to resolve this issue upgrade to the latest packages which contain a patch refer to centos advisory centos for updates and patch information patch following are links for downloading patches to fix the vulnerabilities cesa centos references qid cve cve cve cve cve cve cve cve cve cve category centos pci flagged yes vendor references cesa centos bugtraq ids severity high date discovered nucleus notification rules triggered rule github project name please see nucleus for more information on these vulnerabilities ,0 3932,14993640657.0,IssuesEvent,2021-01-29 11:39:02,wazuh/wazuh-qa,https://api.github.com/repos/wazuh/wazuh-qa,closed,Vuln detector: Add new tests to check the downloaded feed format,automation core/vuln detector,"Hello team, The purpose of this issue is to add a new test to check that the downloaded feeds have the expected format. 
For that, the Redhat, Canonical, Debian, and NVD feeds will be downloaded, and the test will then check that they have the following formats: |Feed | Format | |--------|-----------| | Redhat | JSON| | Canonical | Bzip --> XML| |Debian | XML| |NVD| JSON| Best regards",1.0,"Vuln detector: Add new tests to check the downloaded feed format - Hello team, The purpose of this issue is to add a new test to check that the downloaded feeds have the expected format. For that, the Redhat, Canonical, Debian, and NVD feeds will be downloaded, and the test will then check that they have the following formats: |Feed | Format | |--------|-----------| | Redhat | JSON| | Canonical | Bzip --> XML| |Debian | XML| |NVD| JSON| Best regards",1,vuln detector add new tests to check the downloaded feed format hello team the purpose of this issue is to add a new test to check that the downloaded feeds have the expected format for that the redhat canonical debian and nvd feeds will be downloaded and the test will then check that they have the following formats feed format redhat json canonical bzip xml debian xml nvd json best regards,1 7003,24110925778.0,IssuesEvent,2022-09-20 11:16:51,mozilla-mobile/firefox-ios,https://api.github.com/repos/mozilla-mobile/firefox-ios,opened,[XCTests] Tabs Perf tests: Pre-loaded tabs are not shown,eng:automation,"There is the [TabsPerformanceTest](https://github.com/mozilla-mobile/firefox-ios/blob/main/Tests/XCUITests/TabsPerformanceTests.swift) suite where we are launching the app and the tab tray with different numbers of tabs, from 1 to 1200. To run these tests we created some `.archive`[ files containing the tabs](https://github.com/mozilla-mobile/firefox-ios/blob/2887a4eede37591f5448c0030462ffbdc3f4c513/Tests/XCUITests/TabsPerformanceTests.swift#L5). Unfortunately, after commit 6a0b5048121eb1690d82fa2b13235af817857e67 the tabs are not shown in the simulator. I see changes in the `TabManagerStore` which are likely the cause of this. 
@lmarceau (as author of that commit ;)) I'm afraid we need your help to fix this :( ",1.0,"[XCTests] Tabs Perf tests: Pre-loaded tabs are not shown - There is the [TabsPerformanceTest](https://github.com/mozilla-mobile/firefox-ios/blob/main/Tests/XCUITests/TabsPerformanceTests.swift) suite where we are launching the app and the tab tray with different number of tabs, from 1 to 1200. To run these tests we created some `.archive`[ files containing the tabs](https://github.com/mozilla-mobile/firefox-ios/blob/2887a4eede37591f5448c0030462ffbdc3f4c513/Tests/XCUITests/TabsPerformanceTests.swift#L5). Unfortunately after this commit 6a0b5048121eb1690d82fa2b13235af817857e67 the tabs are not shown in the simulator. I see changes in the `TabManagerStore` which will be likely the cause of this. @lmarceau (as author of that commit ;)) I'm afraid we need your help to fix this :( ",1, tabs perf tests pre loaded tabs are not shown there is the suite where we are launching the app and the tab tray with different number of tabs from to to run these tests we created some archive unfortunately after this commit the tabs are not shown in the simulator i see changes in the tabmanagerstore which will be likely the cause of this lmarceau as author of that commit i m afraid we need your help to fix this ,1 17880,6522804624.0,IssuesEvent,2017-08-29 05:21:12,drashti4/localisationofschool,https://api.github.com/repos/drashti4/localisationofschool,closed,Website Designing Changes,help wanted website building," To-do list for this week - Where to accommodate new changes (Resource section) - Design changes in website - Auditing/Changing current CSS Suggest your change by creating issue [How to create issue](https://help.github.com/articles/creating-an-issue/) Related Issue [#1](https://github.com/drashti4/localisationofschool/issues/1) PS - Take a look at current status [website](https://drashti4.github.io/local-web/)",1.0,"Website Designing Changes - To-do list for this week - Where to accommodate new changes (Resource section) - Design changes in website - Auditing/Changing current CSS Suggest your change by creating issue [How to create issue](https://help.github.com/articles/creating-an-issue/) Related Issue [#1](https://github.com/drashti4/localisationofschool/issues/1) PS - Take a look at current status [website](https://drashti4.github.io/local-web/)",0,website designing changes to do list for this week where to accommodate new changes resource section design changes in website auditing changing current css suggest your change by creating issue related issue ps take a look at current status ,0 104018,13020507789.0,IssuesEvent,2020-07-27 03:19:43,lazerwalker/azure-mud,https://api.github.com/repos/lazerwalker/azure-mud,opened,Should we require real names?,design discussion,"In favor of real names: * Helps tie things back to being a real conference with real people * Makes it easier for us to tie CoC violations back to real people * Feels perhaps more ""grown-up"" than just handles/usernames Against real names: * Some people, who are not trolls, might prefer to be pseudonymous, and that's totally valid. * Needs to be worded correctly to make it clear we're not asking for a legal name. * Yet another thing to ask people. Less info is better! I'm currently mildly against.",1.0,"Should we require real names? 
- In favor of real names: * Helps tie things back to being a real conference with real people * Makes it easier for us to tie CoC violations back to real people * Feels perhaps more ""grown-up"" than just handles/usernames Against real names: * Some people, who are not trolls, might prefer to be pseudonymous, and that's totally valid. * Needs to be worded correctly to make it clear we're not asking for a legal name. * Yet another thing to ask people. Less info is better! I'm currently mildly against.",0,should we require real names in favor of real names helps tie things back to being a real conference with real people makes it easier for us to tie coc violations back to real people feels perhaps more grown up than just handles usernames against real names some people who are not trolls might prefer to be pseudonymous and that s totally valid needs to be worded correctly to make it clear we re not asking for a legal name yet another thing to ask people less info is better i m currently mildly against ,0 69782,9333306775.0,IssuesEvent,2019-03-28 14:10:07,conosco/conosco-core,https://api.github.com/repos/conosco/conosco-core,opened,Documento de EAP,0 - Development Team 0 - Product Owner 0 - Scrum Master 1 - Product 1 - Techinical Viability 2 - Documentation 5 - Advanced,"# TN° - Documento de EAP --- ### Descrição: Elaboração e produção do documento de Estrutura analítica do Projeto ### Tarefas Seção para tarefas (tasks) para issues mais complexas. - [ ] Reunião de idealização do documento - [ ] Escolher ferramenta para elaboração da eap - [ ] Produção do documento ### Comentários ",1.0,"Documento de EAP - # TN° - Documento de EAP --- ### Descrição: Elaboração e produção do documento de Estrutura analítica do Projeto ### Tarefas Seção para tarefas (tasks) para issues mais complexas. - [ ] Reunião de idealização do documento - [ ] Escolher ferramenta para elaboração da eap - [ ] Produção do documento ### Comentários ",0,documento de eap tn° documento de eap descrição elaboração e produção do documento de estrutura analítica do projeto tarefas seção para tarefas tasks para issues mais complexas reunião de idealização do documento escolher ferramenta para elaboração da eap produção do documento comentários ,0 51353,12705539333.0,IssuesEvent,2020-06-23 04:57:57,xamarin/xamarin-android,https://api.github.com/repos/xamarin/xamarin-android,opened,_error XA0030: Building with JDK version `11.0.7` is not supported_ when attempting to use a JDK 11 version that is a bit too new,Area: App+Library Build,"### Steps to reproduce 1. Download the current version of the _jbrsdk_ JetBrains Runtime that's available on . Today, that's , OpenJDK version 11.0.7. 2. Extract the files to a directory. 3. Set Xamarin.Android builds to use that JDK 11 directory. For example, open **Tools > Options** in Visual Studio, select the **Xamarin > Android Settings** node, and set **Java Development Kit Location** to that directory. 4. Attempt to build a Xamarin.Android app project. ### Expected behavior Maybe the build should succeed? That is, maybe the explicit version check for the JDK version number can now use just the major.minor part of the version number instead of the full major.minor.build? It seems like the third place of JDK 11 version numbers (the `Build` number in `System.Version` terminology) changes more frequently than it did for the old 1.8.0 version number scheme of JDK 8. 
### Actual behavior The build fails because the third place of the JDK version number is higher than the current `$(LatestSupportedJavaVersion)`: ``` C:\Program Files (x86)\Microsoft Visual Studio\2019\Preview\MSBuild\Xamarin\Android\Xamarin.Android.Legacy.targets(163,5): error XA0030: Building with JDK version `11.0.7` is not supported. Please install JDK version `11.0.4`. See https://aka.ms/xamarin/jdk9-errors ``` ### Version information Microsoft Visual Studio Enterprise 2019 Int Preview Version 16.7.0 Preview 4.0 [30222.8.master] Xamarin.Android SDK 10.4.0.0 (d16-7/de70286)",1.0,"_error XA0030: Building with JDK version `11.0.7` is not supported_ when attempting to use a JDK 11 version that is a bit too new - ### Steps to reproduce 1. Download the current version of the _jbrsdk_ JetBrains Runtime that's available on . Today, that's , OpenJDK version 11.0.7. 2. Extract the files to a directory. 3. Set Xamarin.Android builds to use that JDK 11 directory. For example, open **Tools > Options** in Visual Studio, select the **Xamarin > Android Settings** node, and set **Java Development Kit Location** to that directory. 4. Attempt to build a Xamarin.Android app project. ### Expected behavior Maybe the build should succeed? That is, maybe the explicit version check for the JDK version number can now use just the major.minor part of the version number instead of the full major.minor.build? It seems like the third place of JDK 11 version numbers (the `Build` number in `System.Version` terminology) changes more frequently than it did for the old 1.8.0 version number scheme of JDK 8. ### Actual behavior The build fails because the third place of the JDK version number is higher than the current `$(LatestSupportedJavaVersion)`: ``` C:\Program Files (x86)\Microsoft Visual Studio\2019\Preview\MSBuild\Xamarin\Android\Xamarin.Android.Legacy.targets(163,5): error XA0030: Building with JDK version `11.0.7` is not supported. Please install JDK version `11.0.4`. 
See https://aka.ms/xamarin/jdk9-errors ``` ### Version information Microsoft Visual Studio Enterprise 2019 Int Preview Version 16.7.0 Preview 4.0 [30222.8.master] Xamarin.Android SDK 10.4.0.0 (d16-7/de70286)",0, error building with jdk version is not supported when attempting to use a jdk version that is a bit too new steps to reproduce download the current version of the jbrsdk jetbrains runtime that s available on today that s openjdk version extract the files to a directory set xamarin android builds to use that jdk directory for example open tools options in visual studio select the xamarin android settings node and set java development kit location to that directory attempt to build a xamarin android app project expected behavior maybe the build should succeed that is maybe the explicit version check for the jdk version number can now use just the major minor part of the version number instead of the full major minor build it seems like the third place of jdk version numbers the build number in system version terminology changes more frequently than it did for the old version number scheme of jdk actual behavior the build fails because the third place of the jdk version number is higher than the current latestsupportedjavaversion c program files microsoft visual studio preview msbuild xamarin android xamarin android legacy targets error building with jdk version is not supported please install jdk version see version information microsoft visual studio enterprise int preview version preview xamarin android sdk ,0 2103,11394427392.0,IssuesEvent,2020-01-30 09:20:16,elastic/package-registry,https://api.github.com/repos/elastic/package-registry,opened,Add Docker image build stage,automation ci,"We have a new namespace in place to publish the Docker images, thus we have to add a new stage to the pipeline job to build and publish the docker images. We will tag the image with the latest version plus the word SNAPSHOT, something like `docker.elastic.co/package-registry/package-registry:0.2.0-SNAPSHOT` also we store the same image with a tag referenced the commit, something like `docker.elastic.co/package-registry/package-registry:f999b7a84d977cd19a379f0cec802aa1ef7ca379`, finally, for GitHub tags we will use the GitHub tag and the commit to publish the Docker image, something like `docker.elastic.co/package-registry/package-registry:0.2.0` and `docker.elastic.co/package-registry/package-registry: 928f750f7dace1934dc5a67bfe24eb848ca44be1 ` ``` docker build . docker tag 87e1feeff7c8 push.docker.elastic.co/package-registry/package-registry:0.2.0 docker push push.docker.elastic.co/package-registry/package-registry:0.2.0 ```",1.0,"Add Docker image build stage - We have a new namespace in place to publish the Docker images, thus we have to add a new stage to the pipeline job to build and publish the docker images. We will tag the image with the latest version plus the word SNAPSHOT, something like `docker.elastic.co/package-registry/package-registry:0.2.0-SNAPSHOT` also we store the same image with a tag referenced the commit, something like `docker.elastic.co/package-registry/package-registry:f999b7a84d977cd19a379f0cec802aa1ef7ca379`, finally, for GitHub tags we will use the GitHub tag and the commit to publish the Docker image, something like `docker.elastic.co/package-registry/package-registry:0.2.0` and `docker.elastic.co/package-registry/package-registry: 928f750f7dace1934dc5a67bfe24eb848ca44be1 ` ``` docker build . 
docker tag 87e1feeff7c8 push.docker.elastic.co/package-registry/package-registry:0.2.0 docker push push.docker.elastic.co/package-registry/package-registry:0.2.0 ```",1,add docker image build stage we have a new namespace in place to publish the docker images thus we have to add a new stage to the pipeline job to build and publish the docker images we will tag the image with the latest version plus the word snapshot something like docker elastic co package registry package registry snapshot also we store the same image with a tag referenced the commit something like docker elastic co package registry package registry finally for github tags we will use the github tag and the commit to publish the docker image something like docker elastic co package registry package registry and docker elastic co package registry package registry docker build docker tag push docker elastic co package registry package registry docker push push docker elastic co package registry package registry ,1 4000,15113860147.0,IssuesEvent,2021-02-09 00:33:56,BCDevOps/OpenShift4-RollOut,https://api.github.com/repos/BCDevOps/OpenShift4-RollOut,closed,Create pruning tools,team/DXC tech/automation,"**Describe the issue** An OpenShift cluster needs regular pruning to keep healthy. This includes the image registry soft and hard prune, as well as old builds and deployments. https://docs.openshift.com/container-platform/4.4/applications/pruning-objects.html Develop the needed CronJobs or other tooling to effect the pruning of the cluster **Definition of done Checklist (where applicable)** - [x] Image Registry Soft Prune - [ ] Image Registry Hard Prune - [x] Deployments Prune - [x] Builds Prune",1.0,"Create pruning tools - **Describe the issue** An OpenShift cluster needs regular pruning to keep healthy. This includes the image registry soft and hard prune, as well as old builds and deployments. https://docs.openshift.com/container-platform/4.4/applications/pruning-objects.html Develop the needed CronJobs or other tooling to effect the pruning of the cluster **Definition of done Checklist (where applicable)** - [x] Image Registry Soft Prune - [ ] Image Registry Hard Prune - [x] Deployments Prune - [x] Builds Prune",1,create pruning tools describe the issue an openshift cluster needs regular pruning to keep healthy this includes the image registry soft and hard prune as well as old builds and deployments develop the needed cronjobs or other tooling to effect the pruning of the cluster definition of done checklist where applicable image registry soft prune image registry hard prune deployments prune builds prune,1 234692,7725005220.0,IssuesEvent,2018-05-24 16:35:32,InFact-coop/BlueCross,https://api.github.com/repos/InFact-coop/BlueCross,closed,Postcode validation,T2h T4h bug enhancement priority-2,"**Problem**: Small letters and whitespace padding in the postcode field will prevent you from submitting the form **Cause**: The regex we use might be a bit overzealous **Possible solution 1**: Change the regex to allow for both uppercase and lowercase letters. Use elm's controlled input to strip whitespace from the beginning and end of the postcode using [String.trim](http://package.elm-lang.org/packages/elm-lang/core/latest/String#trim). 
**Possible solution 2**: Implement on-the-fly postcode checking using [postcode.io's open source API](https://postcodes.io/docs) as vouched for by @lucymk.",1.0,"Postcode validation - **Problem**: Small letters and whitespace padding in the postcode field will prevent you from submitting the form **Cause**: The regex we use might be a bit overzealous **Possible solution 1**: Change the regex to allow for both uppercase and lowercase letters. Use elm's controlled input to strip whitespace from the beginning and end of the postcode using [String.trim](http://package.elm-lang.org/packages/elm-lang/core/latest/String#trim). **Possible solution 2**: Implement on-the-fly postcode checking using [postcode.io's open source API](https://postcodes.io/docs) as vouched for by @lucymk.",0,postcode validation problem small letters and whitespace padding in the postcode field will prevent you from submitting the form cause the regex we use might be a bit overzealous possible solution change the regex to allow for both uppercase and lowercase letters use elm s controlled input to strip whitespace from the beginning and end of the postcode using possible solution implement on the fly postcode checking using as vouched for by lucymk ,0 9409,28240883789.0,IssuesEvent,2023-04-06 07:03:04,camunda/issues,https://api.github.com/repos/camunda/issues,opened,Support for Spring Boot 3.x,public kind:epic component:c7-automation-platform riskAssessment:pending,"### Value Proposition Statement Allow user to use maintained environment and benefit from new features ### User Problem - Spring Boot 2.7 is out of maintenance by 11/2023 ### User Stories - As Software Developer I want to use Spring Boot 3.0 for my Camunda Engine and External Task Clients. ### Implementation Notes - https://github.com/camunda/camunda-bpm-platform/issues/2755 - Team issue: https://github.com/orgs/camunda/projects/44/views/1?pane=issue&itemId=14493705 :robot: This issue is automatically synced from: [source](https://github.com/camunda/product-hub/issues/1100)",1.0,"Support for Spring Boot 3.x - ### Value Proposition Statement Allow user to use maintained environment and benefit from new features ### User Problem - Spring Boot 2.7 is out of maintenance by 11/2023 ### User Stories - As Software Developer I want to use Spring Boot 3.0 for my Camunda Engine and External Task Clients. ### Implementation Notes - https://github.com/camunda/camunda-bpm-platform/issues/2755 - Team issue: https://github.com/orgs/camunda/projects/44/views/1?pane=issue&itemId=14493705 :robot: This issue is automatically synced from: [source](https://github.com/camunda/product-hub/issues/1100)",1,support for spring boot x value proposition statement allow user to use maintained environment and benefit from new features user problem spring boot is out of maintenance by user stories as software developer i want to use spring boot for my camunda engine and external task clients implementation notes team issue robot this issue is automatically synced from ,1 219,4768840865.0,IssuesEvent,2016-10-26 10:27:18,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,Don't scroll to parent element while focusing on non-focusable element during click automation,AREA: client SYSTEM: automations TYPE: bug,"### Are you requesting a feature or reporting a bug? bug ### What is the current behavior? If click automation is performing on element, which isn't focusable but it parent does, it scrolls to the parent element ### What is the expected behavior? 
It must not scroll to the focusable parent element, because this does not happen when the click is performed natively (without TestCafe). ### How would you reproduce the current behavior (if this is a bug)? Run the test #### Provide the test code and the tested page URL (if applicable) Tested page URL: HTML markup: ```html Title
``` Test code ```js import { expect } from 'chai'; import { ClientFunction } from 'testcafe'; fixture `bug` .page `index.html`; const getWindowTopScroll = ClientFunction(() => window.pageYOffset); test(""Shouldn't scroll to the parent"", async t => { const oldWindowScrollValue = await getWindowTopScroll(); await t.click('#child'); const newWindowScrollValue = await getWindowTopScroll(); expect(newWindowScrollValue).eql(oldWindowScrollValue); }); ``` ### Specify your * operating system: WIN 10 x64 * testcafe version: 0.10.0-alpha * node.js version: v5.7.0",1.0,"Don't scroll to parent element while focusing on non-focusable element during click automation - ### Are you requesting a feature or reporting a bug? bug ### What is the current behavior? If click automation is performing on element, which isn't focusable but it parent does, it scrolls to the parent element ### What is the expected behavior? It must not scroll to focusable parent element, cause it isn't happening when we performs click natively (without TestCafe). ### How would you reproduce the current behavior (if this is a bug)? Run the test #### Provide the test code and the tested page URL (if applicable) Tested page URL: HTML markup: ```html Title
``` Test code ```js import { expect } from 'chai'; import { ClientFunction } from 'testcafe'; fixture `bug` .page `index.html`; const getWindowTopScroll = ClientFunction(() => window.pageYOffset); test(""Shouldn't scroll to the parent"", async t => { const oldWindowScrollValue = await getWindowTopScroll(); await t.click('#child'); const newWindowScrollValue = await getWindowTopScroll(); expect(newWindowScrollValue).eql(oldWindowScrollValue); }); ``` ### Specify your * operating system: WIN 10 x64 * testcafe version: 0.10.0-alpha * node.js version: v5.7.0",1,don t scroll to parent element while focusing on non focusable element during click automation are you requesting a feature or reporting a bug bug what is the current behavior if click automation is performing on element which isn t focusable but it parent does it scrolls to the parent element what is the expected behavior it must not scroll to focusable parent element cause it isn t happening when we performs click natively without testcafe how would you reproduce the current behavior if this is a bug run the test provide the test code and the tested page url if applicable tested page url html markup html title test code js import expect from chai import clientfunction from testcafe fixture bug page index html const getwindowtopscroll clientfunction window pageyoffset test shouldn t scroll to the parent async t const oldwindowscrollvalue await getwindowtopscroll await t click child const newwindowscrollvalue await getwindowtopscroll expect newwindowscrollvalue eql oldwindowscrollvalue specify your operating system win testcafe version alpha node js version ,1 271244,23592906182.0,IssuesEvent,2022-08-23 16:38:07,cockroachdb/cockroach,https://api.github.com/repos/cockroachdb/cockroach,closed,sql/tests: TestRandomSyntaxSQLSmith failed,C-test-failure O-robot branch-master T-sql-experience,"sql/tests.TestRandomSyntaxSQLSmith [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RandomSyntaxTestsBazel/5633782?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RandomSyntaxTestsBazel/5633782?buildTab=artifacts#/) on master @ [3c9b17113488d2ee6929936aa6ec48396f3ed71c](https://github.com/cockroachdb/cockroach/commits/3c9b17113488d2ee6929936aa6ec48396f3ed71c): Random syntax error: ``` rsg_test.go:784: Crash detected: server panic: pq: internal error: lookup for ComparisonExpr ((col_56906)[void] IS DISTINCT FROM (NULL)[unknown])[bool]'s CmpOp failed ``` Query: ``` WITH with_9404 (col_56900) AS ( SELECT * FROM ( VALUES (NULL), ((-0.9320566654205322):::FLOAT8), ((SELECT (-1.7138859033584595):::FLOAT8 AS col_56899 LIMIT 1:::INT8)) ) AS tab_23172 (col_56900) ), with_9407 (col_56906) AS ( SELECT * FROM ( VALUES ('':::VOID), ( ( WITH with_9405 (col_56901) AS ( SELECT * FROM ( VALUES ('85007e11-427f-4a36-8d6e-eb4af6af4db5':::UUID), ('33c4f6f8-2600-4c85-a675-341407509889':::UUID), ('076d8edb-4ee3-4db4-82ca-225c6c455386':::UUID), ('fec11024-a8cf-4100-bdd1-deda3ec3de80':::UUID) ) AS tab_23173 (col_56901) ), with_9406 (col_56902, col_56903, col_56904) AS ( SELECT tab_23175.col1_0 AS col_56902, 'morning':::greeting AS col_56903, B'1000110000101110011001011100001000010100' AS col_56904 FROM defaultdb.public.table1@table1_col1_14_idx AS tab_23174 JOIN defaultdb.public.table1@[0] AS tab_23175 ON (tab_23174.col1_14) = (tab_23175.col1_14) AND (tab_23174.col1_16) = (tab_23175.col1_16) AND (tab_23174.col1_9) = (tab_23175.col1_9) WHERE false GROUP BY tab_23175.col1_0 HAVING 
every(tab_23174.col1_1::BOOL)::BOOL ) SELECT '':::VOID AS col_56905 FROM defaultdb.public.table1@[0] AS tab_23176 LIMIT 1:::INT8 ) ), (NULL), ('':::VOID), ('':::VOID) ) AS tab_23177 (col_56906) EXCEPT ALL SELECT * FROM (VALUES ('':::VOID)) AS tab_23178 (col_56907) ) SELECT COALESCE(cte_ref_2751.col_56906, '':::VOID) AS col_56908 FROM with_9407 AS cte_ref_2751 ORDER BY cte_ref_2751.col_56906 DESC, cte_ref_2751.col_56906; ```
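The panic points at a comparison synthesized over a VOID column (`col_56906 IS DISTINCT FROM NULL`) rather than anything written literally in the query, so a reduction would start from that shape (a guess on my part, not a verified reproduction):

```sql
-- Hypothetical reduction (unverified): a VOID value compared to NULL with
-- IS DISTINCT FROM, the CmpOp the panic says it could not look up.
SELECT '':::VOID IS DISTINCT FROM NULL;
```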

See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)

Same failure on other branches

- #74272 sql/tests: TestRandomSyntaxSQLSmith failed [C-test-failure O-robot branch-release-21.1]
- #70855 sql/tests: TestRandomSyntaxSQLSmith failed [C-test-failure O-robot T-sql-queries branch-release-21.2 sync-me-8]

/cc @cockroachdb/sql-experience [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestRandomSyntaxSQLSmith.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) Jira issue: CRDB-17238",1.0,"sql/tests: TestRandomSyntaxSQLSmith failed - sql/tests.TestRandomSyntaxSQLSmith [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RandomSyntaxTestsBazel/5633782?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RandomSyntaxTestsBazel/5633782?buildTab=artifacts#/) on master @ [3c9b17113488d2ee6929936aa6ec48396f3ed71c](https://github.com/cockroachdb/cockroach/commits/3c9b17113488d2ee6929936aa6ec48396f3ed71c): Random syntax error: ``` rsg_test.go:784: Crash detected: server panic: pq: internal error: lookup for ComparisonExpr ((col_56906)[void] IS DISTINCT FROM (NULL)[unknown])[bool]'s CmpOp failed ``` Query: ``` WITH with_9404 (col_56900) AS ( SELECT * FROM ( VALUES (NULL), ((-0.9320566654205322):::FLOAT8), ((SELECT (-1.7138859033584595):::FLOAT8 AS col_56899 LIMIT 1:::INT8)) ) AS tab_23172 (col_56900) ), with_9407 (col_56906) AS ( SELECT * FROM ( VALUES ('':::VOID), ( ( WITH with_9405 (col_56901) AS ( SELECT * FROM ( VALUES ('85007e11-427f-4a36-8d6e-eb4af6af4db5':::UUID), ('33c4f6f8-2600-4c85-a675-341407509889':::UUID), ('076d8edb-4ee3-4db4-82ca-225c6c455386':::UUID), ('fec11024-a8cf-4100-bdd1-deda3ec3de80':::UUID) ) AS tab_23173 (col_56901) ), with_9406 (col_56902, col_56903, col_56904) AS ( SELECT tab_23175.col1_0 AS col_56902, 'morning':::greeting AS col_56903, B'1000110000101110011001011100001000010100' AS col_56904 FROM defaultdb.public.table1@table1_col1_14_idx AS tab_23174 JOIN defaultdb.public.table1@[0] AS tab_23175 ON (tab_23174.col1_14) = (tab_23175.col1_14) AND (tab_23174.col1_16) = (tab_23175.col1_16) AND (tab_23174.col1_9) = (tab_23175.col1_9) WHERE false GROUP BY tab_23175.col1_0 HAVING every(tab_23174.col1_1::BOOL)::BOOL ) SELECT '':::VOID AS col_56905 FROM defaultdb.public.table1@[0] AS tab_23176 LIMIT 1:::INT8 ) ), (NULL), ('':::VOID), ('':::VOID) ) AS tab_23177 (col_56906) EXCEPT ALL SELECT * FROM (VALUES ('':::VOID)) AS tab_23178 (col_56907) ) SELECT COALESCE(cte_ref_2751.col_56906, '':::VOID) AS col_56908 FROM with_9407 AS cte_ref_2751 ORDER BY cte_ref_2751.col_56906 DESC, cte_ref_2751.col_56906; ```
Help

See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)

Same failure on other branches

- #74272 sql/tests: TestRandomSyntaxSQLSmith failed [C-test-failure O-robot branch-release-21.1]
- #70855 sql/tests: TestRandomSyntaxSQLSmith failed [C-test-failure O-robot T-sql-queries branch-release-21.2 sync-me-8]

/cc @cockroachdb/sql-experience [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestRandomSyntaxSQLSmith.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) Jira issue: CRDB-17238",0,sql tests testrandomsyntaxsqlsmith failed sql tests testrandomsyntaxsqlsmith with on master random syntax error rsg test go crash detected server panic pq internal error lookup for comparisonexpr col is distinct from null s cmpop failed query with with col as select from values null select as col limit as tab col with col as select from values void with with col as select from values uuid uuid uuid uuid as tab col with col col col as select tab as col morning greeting as col b as col from defaultdb public idx as tab join defaultdb public as tab on tab tab and tab tab and tab tab where false group by tab having every tab bool bool select void as col from defaultdb public as tab limit null void void as tab col except all select from values void as tab col select coalesce cte ref col void as col from with as cte ref order by cte ref col desc cte ref col help see also same failure on other branches sql tests testrandomsyntaxsqlsmith failed sql tests testrandomsyntaxsqlsmith failed cc cockroachdb sql experience jira issue crdb ,0 622,7582804739.0,IssuesEvent,2018-04-25 06:33:49,vmware/harbor,https://api.github.com/repos/vmware/harbor,closed,Can not push signed image for notary signer is restarting,kind/automation-found kind/bug priority/high,"Version v1.5.0-a342c31a Can not push signed image for notary signer is restarting {""level"":""fatal"",""msg"":""Could not read config at :/etc/notary/signer-config.json, viper error: open /etc/notary/signer-config.json: permission denied"",""time"":""2018-04-23T06:50:20Z""} ",1.0,"Can not push signed image for notary signer is restarting - Version v1.5.0-a342c31a Can not push signed image for notary signer is restarting {""level"":""fatal"",""msg"":""Could not read config at :/etc/notary/signer-config.json, viper error: open /etc/notary/signer-config.json: permission denied"",""time"":""2018-04-23T06:50:20Z""} ",1,can not push signed image for notary signer is restarting version can not push signed image for notary signer is restarting level fatal msg could not read config at etc notary signer config json viper error open etc notary signer config json permission denied time ,1 55993,8038640409.0,IssuesEvent,2018-07-30 15:55:08,choojs/choo,https://api.github.com/repos/choojs/choo,closed,Redirection to another during page load causes the view to not render.,Type: Documentation,"## Expected behavior I am brand new to Choo so I may be thinking about this all wrong. But the use case is, the User comes to a route that requires authentication. They are not authenticated so they are routed to a login view at a different route. I have written up a small Choo to make it easier to demostrate the problem. The expected result from the code below is: > Should ONLY EVER get here. 
And the route should be: > https://localhost:8080/login ### package.json ```js { ""name"": ""test-choo"", ""version"": ""1.0.0"", ""description"": """", ""main"": ""index.js"", ""scripts"": { ""test"": ""echo \""Error: no test specified\"" && exit 1"" }, ""author"": """", ""license"": ""ISC"", ""dependencies"": { ""bankai"": ""^9.14.0"", ""choo"": ""^6.13.0"" } } ``` ### index.js ```js const choo = require('choo') const html = require('choo/html') const app = choo() app.use((state, emitter) => { state.session = null }) app.route('/', (state, emit) => { if (!state.session) emit(state.events.REPLACESTATE, '/login') return html`

Should NEVER get here.

` }) app.route('/login', (state, emit) => { return html`

Should ONLY EVER get here.

` }) app.mount('body') ``` ### Actual behavior The actual result is: > Should NEVER get here. And the route should be: > https://localhost:8080/login ### Steps to reproduce behavior Write here. 1. Copy the above files into a folder 2. `npm i` 3. `./node_modules/.bin/bankai start index.js` 4. In a browser open, `https://localhost:8080/` ### Notes What I noticed when tracing the function calls was that NAVIGATE event emitted and its callback executed. In the `start` function (which was called after `documentReady` in `mount`), the RENDER event was never emitted because `self._loaded` was false (https://github.com/choojs/choo/blob/master/index.js#L101). After this code is executed, the `documentReady` callback in `start` is called, setting `self._loaded` to true. So it appears that there is an order of operations issue maybe caused by the `setTimeout` in the `documentReady` call. I would like to fix this problem (assuming its a problem), but I need a little bit of hand holding since I am only 1 day into Choo. Any guidance would be greatly appreciated. ",1.0,"Redirection to another during page load causes the view to not render. - ## Expected behavior I am brand new to Choo so I may be thinking about this all wrong. But the use case is, the User comes to a route that requires authentication. They are not authenticated so they are routed to a login view at a different route. I have written up a small Choo to make it easier to demostrate the problem. The expected result from the code below is: > Should ONLY EVER get here. And the route should be: > https://localhost:8080/login ### package.json ```js { ""name"": ""test-choo"", ""version"": ""1.0.0"", ""description"": """", ""main"": ""index.js"", ""scripts"": { ""test"": ""echo \""Error: no test specified\"" && exit 1"" }, ""author"": """", ""license"": ""ISC"", ""dependencies"": { ""bankai"": ""^9.14.0"", ""choo"": ""^6.13.0"" } } ``` ### index.js ```js const choo = require('choo') const html = require('choo/html') const app = choo() app.use((state, emitter) => { state.session = null }) app.route('/', (state, emit) => { if (!state.session) emit(state.events.REPLACESTATE, '/login') return html`

Should NEVER get here.

` }) app.route('/login', (state, emit) => { return html`

Should ONLY EVER get here.

` }) app.mount('body') ``` ### Actual behavior The actual result is: > Should NEVER get here. And the route should be: > https://localhost:8080/login ### Steps to reproduce behavior Write here. 1. Copy the above files into a folder 2. `npm i` 3. `./node_modules/.bin/bankai start index.js` 4. In a browser open, `https://localhost:8080/` ### Notes What I noticed when tracing the function calls was that NAVIGATE event emitted and its callback executed. In the `start` function (which was called after `documentReady` in `mount`), the RENDER event was never emitted because `self._loaded` was false (https://github.com/choojs/choo/blob/master/index.js#L101). After this code is executed, the `documentReady` callback in `start` is called, setting `self._loaded` to true. So it appears that there is an order of operations issue maybe caused by the `setTimeout` in the `documentReady` call. I would like to fix this problem (assuming its a problem), but I need a little bit of hand holding since I am only 1 day into Choo. Any guidance would be greatly appreciated. ",0,redirection to another during page load causes the view to not render expected behavior i am brand new to choo so i may be thinking about this all wrong but the use case is the user comes to a route that requires authentication they are not authenticated so they are routed to a login view at a different route i have written up a small choo to make it easier to demostrate the problem the expected result from the code below is should only ever get here and the route should be package json js name test choo version description main index js scripts test echo error no test specified exit author license isc dependencies bankai choo index js js const choo require choo const html require choo html const app choo app use state emitter state session null app route state emit if state session emit state events replacestate login return html should never get here app route login state emit return html should only ever get here app mount body actual behavior the actual result is should never get here and the route should be steps to reproduce behavior write here copy the above files into a folder npm i node modules bin bankai start index js in a browser open notes what i noticed when tracing the function calls was that navigate event emitted and its callback executed in the start function which was called after documentready in mount the render event was never emitted because self loaded was false after this code is executed the documentready callback in start is called setting self loaded to true so it appears that there is an order of operations issue maybe caused by the settimeout in the documentready call i would like to fix this problem assuming its a problem but i need a little bit of hand holding since i am only day into choo any guidance would be greatly appreciated ,0 322491,27611620939.0,IssuesEvent,2023-03-09 16:21:11,delph-in/srg,https://api.github.com/repos/delph-in/srg,reopened,Passive voice,mrs testsuite,"Judging by the item 331 in the MRS test suite, passive voice doesn't quite work yet (the subject is not linked to the event in the MRS): ",1.0,"Passive voice - Judging by the item 331 in the MRS test suite, passive voice doesn't quite work yet (the subject is not linked to the event in the MRS): ",0,passive voice judging by the item in the mrs test suite passive voice doesn t quite work yet the subject is not linked to the event in the mrs img width alt screen shot at pm src img width alt screen shot at pm src ,0 
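
One workaround consistent with the ordering the Choo report above describes (the redirect being emitted before `self._loaded` flips to true) is to defer the auth check until choo signals that the initial load has finished. A minimal sketch, assuming choo's documented `DOMCONTENTLOADED` and `REPLACESTATE` events; the `state.href` guard and route names are illustrative, not taken from the report:

```js
const choo = require('choo')
const html = require('choo/html')

const app = choo()

app.use((state, emitter) => {
  state.session = null
  // Run the auth check only after the initial render has happened, so the
  // REPLACESTATE emitted here is not dropped while the app is still loading.
  emitter.on(state.events.DOMCONTENTLOADED, () => {
    if (!state.session && state.href !== '/login') {
      emitter.emit(state.events.REPLACESTATE, '/login')
    }
  })
})

app.route('/', (state, emit) => html`<body>Should NEVER get here.</body>`)
app.route('/login', (state, emit) => html`<body>Should ONLY EVER get here.</body>`)
app.mount('body')
```

This also keeps the view functions pure: the navigation side effect lives in a store instead of being emitted during render.
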
4276,15931247478.0,IssuesEvent,2021-04-14 02:54:09,Azure/azure-powershell,https://api.github.com/repos/Azure/azure-powershell,closed,Runbook fails to call cmdlets within Start-Job scriptblock,Accounts Automation customer-reported question,"## Description I have a runbook which has a Start-Job scriptblock for authenticating and connecting to ADLS so that I can implement a timeout on the connection attempt. I have to implement this as occasionally a runbook will timeout after 3 hours of just trying to run Connect-AzAccounts. Prior to the Start-Job scripblock approach this runbook was working fine, with the scriptblock it works locally on my PC, however now running as the runbook results in may errors which appear related to not being able to import modules. Since Start-Job creates a new session I called Import-Module on the 3 required modules but this still isn't working. ## Steps to reproduce `$TenantId = ""XXXXX"" $ResourceGroupName = ""XXXXX"" $StorageAccountName = ""XXXXX"" $Credential = Get-AutomationPSCredential -Name ""XXXXXXX"" $connectAzTimeout = 30 $connectAzTimer = [system.diagnostics.stopwatch]::StartNew() $connectAzJob = Start-Job -ScriptBlock { Import-Module Microsoft.PowerShell.Core -Force Import-Module Az.Accounts -Force Import-Module Az.Storage -Force $connectAz = Connect-AzAccount -Credential $using:Credential -Tenant $using:TenantId -ServicePrincipal $storageAccount = Get-AzStorageAccount -ResourceGroupName $using:ResourceGroupName -AccountName $using:StorageAccountName $storageContext = $storageAccount.Context $storageContext } Register-ObjectEvent $connectAzJob -EventName StateChanged -SourceIdentifier ConnectAzJobEnd -Action { if ($sender.State -eq 'Completed') { $global:storageContext = Receive-Job $connectAzJob } } while (!$storageContext -And $connectAzTimer.Elapsed.TotalSeconds -le $connectAzTimeout) { sleep -Seconds 1 } ` ## Environment data The following AutomationAccount modules are installed in addition to all of the standard Automation Account modules: Az.Accounts Az.Storage ## Error output `Cannot invoke method. Method invocation is supported only on core types in this language mode. + CategoryInfo : InvalidOperation: (:) [], RuntimeException + FullyQualifiedErrorId : MethodInvocationNotSupportedInConstrainedLanguage + PSComputerName : localhost` `Could not load file or assembly 'Newtonsoft.Json, Version=10.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed' or one of its dependencies. The system cannot find the file specified. + CategoryInfo : NotSpecified: (:) [Import-Module], FileNotFoundException + FullyQualifiedErrorId : System.IO.FileNotFoundException,Microsoft.PowerShell.Commands.ImportModuleCommand + PSComputerName : localhost` `Could not load file or assembly 'Azure.Core, Version=1.9.0.0, Culture=neutral, PublicKeyToken=92742159e12e44c8' or one of its dependencies. The system cannot find the file specified. + CategoryInfo : NotSpecified: (:) [Import-Module], FileNotFoundException + FullyQualifiedErrorId : System.IO.FileNotFoundException,Microsoft.PowerShell.Commands.ImportModuleCommand + PSComputerName : localhost` `The term 'Connect-AzAccount' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. 
+ CategoryInfo : ObjectNotFound: (Connect-AzAccount:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException + PSComputerName : localhost` ",1.0,"Runbook fails to call cmdlets within Start-Job scriptblock - ## Description I have a runbook which has a Start-Job scriptblock for authenticating and connecting to ADLS so that I can implement a timeout on the connection attempt. I have to implement this as occasionally a runbook will timeout after 3 hours of just trying to run Connect-AzAccounts. Prior to the Start-Job scripblock approach this runbook was working fine, with the scriptblock it works locally on my PC, however now running as the runbook results in may errors which appear related to not being able to import modules. Since Start-Job creates a new session I called Import-Module on the 3 required modules but this still isn't working. ## Steps to reproduce `$TenantId = ""XXXXX"" $ResourceGroupName = ""XXXXX"" $StorageAccountName = ""XXXXX"" $Credential = Get-AutomationPSCredential -Name ""XXXXXXX"" $connectAzTimeout = 30 $connectAzTimer = [system.diagnostics.stopwatch]::StartNew() $connectAzJob = Start-Job -ScriptBlock { Import-Module Microsoft.PowerShell.Core -Force Import-Module Az.Accounts -Force Import-Module Az.Storage -Force $connectAz = Connect-AzAccount -Credential $using:Credential -Tenant $using:TenantId -ServicePrincipal $storageAccount = Get-AzStorageAccount -ResourceGroupName $using:ResourceGroupName -AccountName $using:StorageAccountName $storageContext = $storageAccount.Context $storageContext } Register-ObjectEvent $connectAzJob -EventName StateChanged -SourceIdentifier ConnectAzJobEnd -Action { if ($sender.State -eq 'Completed') { $global:storageContext = Receive-Job $connectAzJob } } while (!$storageContext -And $connectAzTimer.Elapsed.TotalSeconds -le $connectAzTimeout) { sleep -Seconds 1 } ` ## Environment data The following AutomationAccount modules are installed in addition to all of the standard Automation Account modules: Az.Accounts Az.Storage ## Error output `Cannot invoke method. Method invocation is supported only on core types in this language mode. + CategoryInfo : InvalidOperation: (:) [], RuntimeException + FullyQualifiedErrorId : MethodInvocationNotSupportedInConstrainedLanguage + PSComputerName : localhost` `Could not load file or assembly 'Newtonsoft.Json, Version=10.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed' or one of its dependencies. The system cannot find the file specified. + CategoryInfo : NotSpecified: (:) [Import-Module], FileNotFoundException + FullyQualifiedErrorId : System.IO.FileNotFoundException,Microsoft.PowerShell.Commands.ImportModuleCommand + PSComputerName : localhost` `Could not load file or assembly 'Azure.Core, Version=1.9.0.0, Culture=neutral, PublicKeyToken=92742159e12e44c8' or one of its dependencies. The system cannot find the file specified. + CategoryInfo : NotSpecified: (:) [Import-Module], FileNotFoundException + FullyQualifiedErrorId : System.IO.FileNotFoundException,Microsoft.PowerShell.Commands.ImportModuleCommand + PSComputerName : localhost` `The term 'Connect-AzAccount' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. 
+ CategoryInfo : ObjectNotFound: (Connect-AzAccount:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException + PSComputerName : localhost` ",1,runbook fails to call cmdlets within start job scriptblock description i have a runbook which has a start job scriptblock for authenticating and connecting to adls so that i can implement a timeout on the connection attempt i have to implement this as occasionally a runbook will timeout after hours of just trying to run connect azaccounts prior to the start job scripblock approach this runbook was working fine with the scriptblock it works locally on my pc however now running as the runbook results in may errors which appear related to not being able to import modules since start job creates a new session i called import module on the required modules but this still isn t working steps to reproduce tenantid xxxxx resourcegroupname xxxxx storageaccountname xxxxx credential get automationpscredential name xxxxxxx connectaztimeout connectaztimer startnew connectazjob start job scriptblock import module microsoft powershell core force import module az accounts force import module az storage force connectaz connect azaccount credential using credential tenant using tenantid serviceprincipal storageaccount get azstorageaccount resourcegroupname using resourcegroupname accountname using storageaccountname storagecontext storageaccount context storagecontext register objectevent connectazjob eventname statechanged sourceidentifier connectazjobend action if sender state eq completed global storagecontext receive job connectazjob while storagecontext and connectaztimer elapsed totalseconds le connectaztimeout sleep seconds environment data the following automationaccount modules are installed in addition to all of the standard automation account modules az accounts az storage error output cannot invoke method method invocation is supported only on core types in this language mode categoryinfo invalidoperation runtimeexception fullyqualifiederrorid methodinvocationnotsupportedinconstrainedlanguage pscomputername localhost could not load file or assembly newtonsoft json version culture neutral publickeytoken or one of its dependencies the system cannot find the file specified categoryinfo notspecified filenotfoundexception fullyqualifiederrorid system io filenotfoundexception microsoft powershell commands importmodulecommand pscomputername localhost could not load file or assembly azure core version culture neutral publickeytoken or one of its dependencies the system cannot find the file specified categoryinfo notspecified filenotfoundexception fullyqualifiederrorid system io filenotfoundexception microsoft powershell commands importmodulecommand pscomputername localhost the term connect azaccount is not recognized as the name of a cmdlet function script file or operable program check the spelling of the name or if a path was included verify that the path is correct and try again categoryinfo objectnotfound connect azaccount string commandnotfoundexception fullyqualifiederrorid commandnotfoundexception pscomputername localhost ,1 143293,19177912119.0,IssuesEvent,2021-12-04 00:04:36,samq-ghdemo/js-monorepo,https://api.github.com/repos/samq-ghdemo/js-monorepo,opened,"CVE-2017-1000228 (High) detected in ejs-0.8.8.tgz, ejs-1.0.0.tgz",security vulnerability,"## CVE-2017-1000228 - High Severity Vulnerability
Vulnerable Libraries - ejs-0.8.8.tgz, ejs-1.0.0.tgz

ejs-0.8.8.tgz

Embedded JavaScript templates

Library home page: https://registry.npmjs.org/ejs/-/ejs-0.8.8.tgz

Path to dependency file: js-monorepo/vulnerable-node/package.json

Path to vulnerable library: js-monorepo/vulnerable-node/node_modules/ejs-locals/node_modules/ejs/package.json,js-monorepo/nodejs-goof/node_modules/ejs-locals/node_modules/ejs/package.json

Dependency Hierarchy:
- ejs-locals-1.0.2.tgz (Root Library)
  - :x: **ejs-0.8.8.tgz** (Vulnerable Library)

ejs-1.0.0.tgz

Embedded JavaScript templates

Library home page: https://registry.npmjs.org/ejs/-/ejs-1.0.0.tgz

Path to dependency file: js-monorepo/nodejs-goof/package.json

Path to vulnerable library: /nodejs-goof/node_modules/ejs/package.json

Dependency Hierarchy:
- :x: **ejs-1.0.0.tgz** (Vulnerable Library)

Found in HEAD commit: f3701923c18333c1e4e49bf595dd36b3f186812f

Found in base branch: main

Vulnerability Details

nodejs ejs versions older than 2.5.3 are vulnerable to remote code execution due to weak input validation in the ejs.renderFile() function
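
The pattern this CVE punishes is letting request-controlled objects reach `ejs.renderFile()` directly. A minimal sketch of the risky call and a safer variant; the Express-style handlers, route data, and file names are hypothetical, not taken from the repositories listed above:

```js
const ejs = require('ejs')

// Unsafe: req.query flows into renderFile() unvalidated, so a request can
// smuggle engine options into the call (the weakness behind this CVE).
function renderUnsafe(req, res) {
  ejs.renderFile('views/page.ejs', req.query, (err, out) => {
    if (err) return res.status(500).end()
    res.send(out)
  })
}

// Safer: pass only the fields the template actually needs, and upgrade to
// ejs >= 2.5.3 so the engine validates its own options.
function renderSafer(req, res) {
  const data = { title: String(req.query.title || '') }
  ejs.renderFile('views/page.ejs', data, (err, out) => {
    if (err) return res.status(500).end()
    res.send(out)
  })
}
```
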

Publish Date: 2017-11-17

URL: CVE-2017-1000228

CVSS 3 Score Details (9.8)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-1000228

Release Date: 2017-11-17

Fix Resolution: 2.5.3
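
In a consuming project, the fix resolution above amounts to pinning the dependency at or past the patched release; a hypothetical `package.json` fragment:

```json
{
  "dependencies": {
    "ejs": "^2.5.3"
  }
}
```
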

",True,"CVE-2017-1000228 (High) detected in ejs-0.8.8.tgz, ejs-1.0.0.tgz - ## CVE-2017-1000228 - High Severity Vulnerability
Vulnerable Libraries - ejs-0.8.8.tgz, ejs-1.0.0.tgz

ejs-0.8.8.tgz

Embedded JavaScript templates

Library home page: https://registry.npmjs.org/ejs/-/ejs-0.8.8.tgz

Path to dependency file: js-monorepo/vulnerable-node/package.json

Path to vulnerable library: js-monorepo/vulnerable-node/node_modules/ejs-locals/node_modules/ejs/package.json,js-monorepo/nodejs-goof/node_modules/ejs-locals/node_modules/ejs/package.json

Dependency Hierarchy:
- ejs-locals-1.0.2.tgz (Root Library)
  - :x: **ejs-0.8.8.tgz** (Vulnerable Library)

ejs-1.0.0.tgz

Embedded JavaScript templates

Library home page: https://registry.npmjs.org/ejs/-/ejs-1.0.0.tgz

Path to dependency file: js-monorepo/nodejs-goof/package.json

Path to vulnerable library: /nodejs-goof/node_modules/ejs/package.json

Dependency Hierarchy:
- :x: **ejs-1.0.0.tgz** (Vulnerable Library)

Found in HEAD commit: f3701923c18333c1e4e49bf595dd36b3f186812f

Found in base branch: main

Vulnerability Details

nodejs ejs versions older than 2.5.3 are vulnerable to remote code execution due to weak input validation in the ejs.renderFile() function

Publish Date: 2017-11-17

URL: CVE-2017-1000228

CVSS 3 Score Details (9.8)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-1000228

Release Date: 2017-11-17

Fix Resolution: 2.5.3

",0,cve high detected in ejs tgz ejs tgz cve high severity vulnerability vulnerable libraries ejs tgz ejs tgz ejs tgz embedded javascript templates library home page a href path to dependency file js monorepo vulnerable node package json path to vulnerable library js monorepo vulnerable node node modules ejs locals node modules ejs package json js monorepo nodejs goof node modules ejs locals node modules ejs package json dependency hierarchy ejs locals tgz root library x ejs tgz vulnerable library ejs tgz embedded javascript templates library home page a href path to dependency file js monorepo nodejs goof package json path to vulnerable library nodejs goof node modules ejs package json dependency hierarchy x ejs tgz vulnerable library found in head commit a href found in base branch main vulnerability details nodejs ejs versions older than is vulnerable to remote code execution due to weak input validation in ejs renderfile function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree ejs locals ejs isminimumfixversionavailable true minimumfixversion isbinary false packagetype javascript node js packagename ejs packageversion packagefilepaths istransitivedependency false dependencytree ejs isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails nodejs ejs versions older than is vulnerable to remote code execution due to weak input validation in ejs renderfile function vulnerabilityurl ,0 9025,27394707000.0,IssuesEvent,2023-02-28 18:45:14,awslabs/aws-lambda-powertools-typescript,https://api.github.com/repos/awslabs/aws-lambda-powertools-typescript,closed,Maintenance: fix typo in doc publishing workflow,area/automation type/internal status/completed,"### Summary In a recent PR (#1124) we have made some changes to the way docs are published and mistakenly introduced a bug in the workflow that wasn't spotted during the review. This caused the workflow [to fail during its first run](https://github.com/awslabs/aws-lambda-powertools-typescript/actions/runs/4296070251) after another unrelated PR was merged. Based on the error message it appears that one of the GitHub global variables names has a typo: `input` vs `inputs` - with the latter being the correct one. ### Why is this needed? To allow the workflow to run properly. ### Which area does this relate to? Automation ### Solution See linked PR. ### Acknowledgment - [X] This request meets [Lambda Powertools Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets) - [ ] Should this be considered in other Lambda Powertools languages? i.e. 
[Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/), and [.NET](https://github.com/awslabs/aws-lambda-powertools-dotnet/) ### Future readers Please react with 👍 and your use case to help us understand customer demand.",1.0,"Maintenance: fix typo in doc publishing workflow - ### Summary In a recent PR (#1124) we have made some changes to the way docs are published and mistakenly introduced a bug in the workflow that wasn't spotted during the review. This caused the workflow [to fail during its first run](https://github.com/awslabs/aws-lambda-powertools-typescript/actions/runs/4296070251) after another unrelated PR was merged. Based on the error message it appears that one of the GitHub global variables names has a typo: `input` vs `inputs` - with the latter being the correct one. ### Why is this needed? To allow the workflow to run properly. ### Which area does this relate to? Automation ### Solution See linked PR. ### Acknowledgment - [X] This request meets [Lambda Powertools Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets) - [ ] Should this be considered in other Lambda Powertools languages? i.e. [Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/), and [.NET](https://github.com/awslabs/aws-lambda-powertools-dotnet/) ### Future readers Please react with 👍 and your use case to help us understand customer demand.",1,maintenance fix typo in doc publishing workflow summary in a recent pr we have made some changes to the way docs are published and mistakenly introduced a bug in the workflow that wasn t spotted during the review this caused the workflow after another unrelated pr was merged based on the error message it appears that one of the github global variables names has a typo input vs inputs with the latter being the correct one why is this needed to allow the workflow to run properly which area does this relate to automation solution see linked pr acknowledgment this request meets should this be considered in other lambda powertools languages i e and future readers please react with 👍 and your use case to help us understand customer demand ,1 63395,26378595483.0,IssuesEvent,2023-01-12 06:15:38,thkl/hap-homematic,https://api.github.com/repos/thkl/hap-homematic,closed,keinen zugriff von extern,enhancement DeviceService,"Hallo ich versuche von extern auf den hap zuzugreifen doch dies funktioniert nicht. Solange ich im Wlan bin funktioniert alles sobald ich über LTE gehe geht nichts mehr",1.0,"keinen zugriff von extern - Hallo ich versuche von extern auf den hap zuzugreifen doch dies funktioniert nicht. Solange ich im Wlan bin funktioniert alles sobald ich über LTE gehe geht nichts mehr",0,keinen zugriff von extern hallo ich versuche von extern auf den hap zuzugreifen doch dies funktioniert nicht solange ich im wlan bin funktioniert alles sobald ich über lte gehe geht nichts mehr,0 166849,26416761762.0,IssuesEvent,2023-01-13 16:32:45,openfoodfacts/openfoodfacts-server,https://api.github.com/repos/openfoodfacts/openfoodfacts-server,closed,"In the home page, the text ""Lastest products added"" is completely shifted to the left of the page ",bug new design,"### Describe the bug In the home page, the text ""Lastest products added"" is completely shifted to the left of the page (see image). 
This behaviour is visible on: https://nl.openfoodfacts.org/ https://de.openfoodfacts.org/ https://es.openfoodfacts.org/ https://it.openfoodfacts.org/ (and maybe more) ### To Reproduce Go to : https://nl.openfoodfacts.org/ ### Expected behavior It should be aligned with the other text sections ### Screenshots _No response_ ### Additional context _No response_ ### Type of device Browser ### Browser version _No response_ ### Number of products impacted _No response_ ### Time per product _No response_",1.0,"In the home page, the text ""Lastest products added"" is completely shifted to the left of the page - ### Describe the bug In the home page, the text ""Lastest products added"" is completely shifted to the left of the page (see image). This behaviour is visible on: https://nl.openfoodfacts.org/ https://de.openfoodfacts.org/ https://es.openfoodfacts.org/ https://it.openfoodfacts.org/ (and maybe more) ### To Reproduce Go to : https://nl.openfoodfacts.org/ ### Expected behavior It should be aligned with the other text sections ### Screenshots _No response_ ### Additional context _No response_ ### Type of device Browser ### Browser version _No response_ ### Number of products impacted _No response_ ### Time per product _No response_",0,in the home page the text lastest products added is completely shifted to the left of the page describe the bug in the home page the text lastest products added is completely shifted to the left of the page see image this behaviour is visible on and maybe more img width alt pasted graphic src to reproduce go to expected behavior it should be aligned with the other text sections screenshots no response additional context no response type of device browser browser version no response number of products impacted no response time per product no response ,0 300569,25977012528.0,IssuesEvent,2022-12-19 15:36:55,mehah/otclient,https://api.github.com/repos/mehah/otclient,closed,"No items to browse on market,TFS 1.4.2 (1098)",Priority: Medium Status: Pending Test Type: Bug,"### Priority Medium ### Area - [X] Data - [X] Source - [ ] Docker - [ ] Other ### What happened? When i open market it show with not items to browse. Image: https://ibb.co/KbbjHrq ### What OS are you seeing the problem on? Windows ### Code of Conduct - [X] I agree to follow this project's Code of Conduct",1.0,"No items to browse on market,TFS 1.4.2 (1098) - ### Priority Medium ### Area - [X] Data - [X] Source - [ ] Docker - [ ] Other ### What happened? When i open market it show with not items to browse. Image: https://ibb.co/KbbjHrq ### What OS are you seeing the problem on? Windows ### Code of Conduct - [X] I agree to follow this project's Code of Conduct",0,no items to browse on market tfs priority medium area data source docker other what happened when i open market it show with not items to browse image what os are you seeing the problem on windows code of conduct i agree to follow this project s code of conduct,0 666718,22365012732.0,IssuesEvent,2022-06-16 02:25:57,Railcraft/Railcraft,https://api.github.com/repos/Railcraft/Railcraft,closed,Possible Resource Leak: Iron Tanks,bug high priority,"**Description of the Bug** Iron and Steel tanks may be leaking memory instead of liquids. **To Reproduce** Use an infinite fluid generator from whichever mod. Pipe fluid into tank. Wait 90-180 minutes. Lag spikes will ensue. **Negative Test** Replace the RC tanks with tanks from another mod. No lag. 
**Expected behavior** No lag **Additional context** * railcraft-12.0.0.jar * forge-14.23.5.2835 * infitech 3 modpack (infitechs gotta have RC by tradition. duh.) * We spent several days trying to narrow down and isolate this to a specific mod before wasting anyone's time; that said, our evidence is only empirical. The common factor we've observed is Railcraft multiblock tanks, but it could also be that the various mods we've used for piping (Thermal Dynamics, Gregtech CE) are having difficulties working with the tanks for reasons specific to those mods.",1.0,"Possible Resource Leak: Iron Tanks - **Description of the Bug** Iron and Steel tanks may be leaking memory instead of liquids. **To Reproduce** Use an infinite fluid generator from whichever mod. Pipe fluid into tank. Wait 90-180 minutes. Lag spikes will ensue. **Negative Test** Replace the RC tanks with tanks from another mod. No lag. **Expected behavior** No lag **Additional context** * railcraft-12.0.0.jar * forge-14.23.5.2835 * infitech 3 modpack (infitechs gotta have RC by tradition. duh.) * We spent several days trying to narrow down and isolate this to a specific mod before wasting anyone's time; that said, our evidence is only empirical. The common factor we've observed is Railcraft multiblock tanks, but it could also be that the various mods we've used for piping (Thermal Dynamics, Gregtech CE) are having difficulties working with the tanks for reasons specific to those mods.",0,possible resource leak iron tanks description of the bug iron and steel tanks may be leaking memory instead of liquids to reproduce use an infinite fluid generator from whichever mod pipe fluid into tank wait minutes lag spikes will ensue negative test replace the rc tanks with tanks from another mod no lag expected behavior no lag additional context railcraft jar forge infitech modpack infitechs gotta have rc by tradition duh we spent several days trying to narrow down and isolate this to a specific mod before wasting anyone s time that said our evidence is only empirical the common factor we ve observed is railcraft multiblock tanks but it could also be that the various mods we ve used for piping thermal dynamics gregtech ce are having difficulties working with the tanks for reasons specific to those mods ,0 2270,11684987908.0,IssuesEvent,2020-03-05 08:09:11,baloise-incubator/gitopscli,https://api.github.com/repos/baloise-incubator/gitopscli,closed,Ensure commits are signed off,automation,"I took the contributing guidelines from our ""official"" template: https://github.com/baloise/repository-template-java/blob/master/CONTRIBUTING.md Those guidelines require all contributors to sign off their commit: https://baloise-incubator.github.io/gitopscli/contributing/#sign-your-work-developer-certificate-of-origin We should automatically check that.",1.0,"Ensure commits are signed off - I took the contributing guidelines from our ""official"" template: https://github.com/baloise/repository-template-java/blob/master/CONTRIBUTING.md Those guidelines require all contributors to sign off their commit: https://baloise-incubator.github.io/gitopscli/contributing/#sign-your-work-developer-certificate-of-origin We should automatically check that.",1,ensure commits are signed off i took the contributing guidelines from our official template those guidelines require all contributors to sign off their commit we should automatically check that ,1 218700,17016483457.0,IssuesEvent,2021-07-02 12:47:51,ubtue/tuefind,https://api.github.com/repos/ubtue/tuefind,closed,Probleme 
mit responsive design,System: IxTheo System: KrimDok System: RelBib ready for testing,"In einer bestimmten Skalierung (bei mir etwas mehr als der halbe Bildschirm) der Portalwebsite kann man den Pfeil der Dropdownliste der ""Sortieren""-Funktion nicht anklicken, vmtl. weil er in die Facette ""Erscheinungsjahr"" hineinragt: ![grafik](https://user-images.githubusercontent.com/25769591/117817423-df243600-b267-11eb-9e87-35fd25e83d7c.png) Mit Bitte um Anpassung.",1.0,"Probleme mit responsive design - In einer bestimmten Skalierung (bei mir etwas mehr als der halbe Bildschirm) der Portalwebsite kann man den Pfeil der Dropdownliste der ""Sortieren""-Funktion nicht anklicken, vmtl. weil er in die Facette ""Erscheinungsjahr"" hineinragt: ![grafik](https://user-images.githubusercontent.com/25769591/117817423-df243600-b267-11eb-9e87-35fd25e83d7c.png) Mit Bitte um Anpassung.",0,probleme mit responsive design in einer bestimmten skalierung bei mir etwas mehr als der halbe bildschirm der portalwebsite kann man den pfeil der dropdownliste der sortieren funktion nicht anklicken vmtl weil er in die facette erscheinungsjahr hineinragt mit bitte um anpassung ,0 329847,24237090000.0,IssuesEvent,2022-09-27 01:02:18,DataLinkDC/dlink,https://api.github.com/repos/DataLinkDC/dlink,closed,[Document][doc] Update website ,documentation,"### Search before asking - [X] I had searched in the [issues](https://github.com/DataLinkDC/dlink/issues?q=is%3Aissue) and found no similar document requirement. ### Description _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct) ",1.0,"[Document][doc] Update website - ### Search before asking - [X] I had searched in the [issues](https://github.com/DataLinkDC/dlink/issues?q=is%3Aissue) and found no similar document requirement. ### Description _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct) ",0, update website search before asking i had searched in the and found no similar document requirement description no response are you willing to submit a pr yes i am willing to submit a pr code of conduct i agree to follow this project s ,0 782908,27511107112.0,IssuesEvent,2023-03-06 08:53:43,pdx-blurp/blurp-frontend,https://api.github.com/repos/pdx-blurp/blurp-frontend,closed,Remove modal pop-up from node creation,high priority enhancement,"Currently the user has to fill out a modal form with node information when creating a node - this is a slow process. AC: When a user creates a node using the node tool, the node should just be placed instead of the modal popping up All node data should be changeable from the data sidebar This also requires that creating a new node does not require any data from the user",1.0,"Remove modal pop-up from node creation - Currently the user has to fill out a modal form with node information when creating a node - this is a slow process. 
AC: When a user creates a node using the node tool, the node should just be placed instead of the modal popping up All node data should be changeable from the data sidebar This also requires that creating a new node does not require any data from the user",0,remove modal pop up from node creation currently the user has to fill out a modal form with node information when creating a node this is a slow process ac when a user creates a node using the node tool the node should just be placed instead of the modal popping up all node data should be changeable from the data sidebar this also requires that creating a new node does not require any data from the user,0 6355,22909753810.0,IssuesEvent,2022-07-16 04:56:46,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Could use clarification on Hybrid Runbook Workers with Private Link,automation/svc triaged cxp doc-enhancement /subsvc Pri2,"Please provide clarification on the following point: With the current implementation of Private Links for Azure Automation, it only supports running jobs on the Hybrid Runbook Worker connected to an Azure virtual network and does not support cloud jobs. There's 2 places we can run Hybrid Runbook Workers. The first is on an Azure VM and the second is on an on-premises VM outside of Azure. Can the point be clarified to explicitly state whether on-premises VMs are supported to run Hybrid Runbook Workers against an Automation Account with Private Link enabled? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 29645401-45f3-a1d9-2dd0-e561d49b0eb3 * Version Independent ID: 27677330-fa36-53df-e83d-44f3e5e7a93c * Content: [Use Azure Private Link to securely connect networks to Azure Automation](https://docs.microsoft.com/en-us/azure/automation/how-to/private-link-security) * Content Source: [articles/automation/how-to/private-link-security.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/how-to/private-link-security.md) * Service: **automation** * Sub-service: **** * GitHub Login: @SGSneha * Microsoft Alias: **sudhirsneha**",1.0,"Could use clarification on Hybrid Runbook Workers with Private Link - Please provide clarification on the following point: With the current implementation of Private Links for Azure Automation, it only supports running jobs on the Hybrid Runbook Worker connected to an Azure virtual network and does not support cloud jobs. There's 2 places we can run Hybrid Runbook Workers. The first is on an Azure VM and the second is on an on-premises VM outside of Azure. Can the point be clarified to explicitly state whether on-premises VMs are supported to run Hybrid Runbook Workers against an Automation Account with Private Link enabled? --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 29645401-45f3-a1d9-2dd0-e561d49b0eb3 * Version Independent ID: 27677330-fa36-53df-e83d-44f3e5e7a93c * Content: [Use Azure Private Link to securely connect networks to Azure Automation](https://docs.microsoft.com/en-us/azure/automation/how-to/private-link-security) * Content Source: [articles/automation/how-to/private-link-security.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/how-to/private-link-security.md) * Service: **automation** * Sub-service: **** * GitHub Login: @SGSneha * Microsoft Alias: **sudhirsneha**",1,could use clarification on hybrid runbook workers with private link please provide clarification on the following point with the current implementation of private links for azure automation it only supports running jobs on the hybrid runbook worker connected to an azure virtual network and does not support cloud jobs there s places we can run hybrid runbook workers the first is on an azure vm and the second is on an on premises vm outside of azure can the point be clarified to explicitly state whether on premises vms are supported to run hybrid runbook workers against an automation account with private link enabled document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service github login sgsneha microsoft alias sudhirsneha ,1 104592,22701980386.0,IssuesEvent,2022-07-05 11:32:43,VirtusLab/git-machete,https://api.github.com/repos/VirtusLab/git-machete,opened,Use keyword-only arguments in more places across the codebase,good first issue code quality,"Like with `git_machete.client.MacheteClient.create_github_pr`. Mostly to prevent confusing order of params of the same type (such that mypy wouldn't capture if params are swapped). Generally, `create_github_pr` isn't in fact the best candidate (albeit there are still 2 arguments of `LocalBranchShortName` type).",1.0,"Use keyword-only arguments in more places across the codebase - Like with `git_machete.client.MacheteClient.create_github_pr`. Mostly to prevent confusing order of params of the same type (such that mypy wouldn't capture if params are swapped). 
Generally, `create_github_pr` isn't in fact the best candidate (albeit there are still 2 arguments of `LocalBranchShortName` type).",0,use keyword only arguments in more places across the codebase like with git machete client macheteclient create github pr mostly to prevent confusing order of params of the same type such that mypy wouldn t capture if params are swapped generally create github pr isn t in fact the best candidate albeit there are still arguments of localbranchshortname type ,0 305144,9359840570.0,IssuesEvent,2019-04-02 08:00:44,geosolutions-it/tdipisa,https://api.github.com/repos/geosolutions-it/tdipisa,reopened,SCIADRO - Plan Work,Priority: High task,"- [ ] Planning of the UI work in MS2 - [ ] Planning of the backend part and start checking data ingestion to review model etc",1.0,"SCIADRO - Plan Work - - [ ] Planning of the UI work in MS2 - [ ] Planning of the backend part and start checking data ingestion to review model etc",0,sciadro plan work planning of the ui work in planning of the backend part and start checking data ingestion to review model etc,0 8719,27172165187.0,IssuesEvent,2023-02-17 20:30:55,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Search on group not working without Sites.Read.All permission,type:bug area:Docs automation:Closed,"Searching in a Group Drive with a **Client Credentials** (Application permission) token doesn't work with the `Files.ReadWrite.All` permission. For example: `https://graph.microsoft.com/v1.0/groups/{GROUP_ID}/drive/root/search(q='newFileTest.docx')` ... results in a 403 Forbidden error: ```json { ""error"": { ""code"": ""accessDenied"", ""message"": ""The caller does not have permission to perform the action."", ""innerError"": { ""request-id"": ""**redacted**"", ""date"": ""2019-04-17T12:47:10"" } } } ``` Only when the **Client Credentials** (Application permission) token has the `Sites.ReadWrite.All` permission (probably `Sites.Read.All` is enough already), search works. Please update the docs to clarify this behavior. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 6a094aa0-adc5-9f08-d75e-bff6c5c42b4b * Version Independent ID: c674446f-0d55-5c4b-0d0a-bbddf184dd1b * Content: [Search for files - OneDrive API - OneDrive dev center](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/api/driveitem_search?view=odsp-graph-online#feedback) * Content Source: [docs/rest-api/api/driveitem_search.md](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/rest-api/api/driveitem_search.md) * Product: **onedrive** * GitHub Login: @rgregg * Microsoft Alias: **rgregg**",1.0,"Search on group not working without Sites.Read.All permission - Searching in a Group Drive with a **Client Credentials** (Application permission) token doesn't work with the `Files.ReadWrite.All` permission. For example: `https://graph.microsoft.com/v1.0/groups/{GROUP_ID}/drive/root/search(q='newFileTest.docx')` ... results in a 403 Forbidden error: ```json { ""error"": { ""code"": ""accessDenied"", ""message"": ""The caller does not have permission to perform the action."", ""innerError"": { ""request-id"": ""**redacted**"", ""date"": ""2019-04-17T12:47:10"" } } } ``` Only when the **Client Credentials** (Application permission) token has the `Sites.ReadWrite.All` permission (probably `Sites.Read.All` is enough already), search works. Please update the docs to clarify this behavior. --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 6a094aa0-adc5-9f08-d75e-bff6c5c42b4b * Version Independent ID: c674446f-0d55-5c4b-0d0a-bbddf184dd1b * Content: [Search for files - OneDrive API - OneDrive dev center](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/api/driveitem_search?view=odsp-graph-online#feedback) * Content Source: [docs/rest-api/api/driveitem_search.md](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/rest-api/api/driveitem_search.md) * Product: **onedrive** * GitHub Login: @rgregg * Microsoft Alias: **rgregg**",1,search on group not working without sites read all permission searching in a group drive with a client credentials application permission token doesn t work with the files readwrite all permission for example results in a forbidden error json error code accessdenied message the caller does not have permission to perform the action innererror request id redacted date only when the client credentials application permission token has the sites readwrite all permission probably sites read all is enough already search works please update the docs to clarify this behavior document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product onedrive github login rgregg microsoft alias rgregg ,1 254997,27484692871.0,IssuesEvent,2023-03-04 01:08:50,panasalap/linux-4.1.15,https://api.github.com/repos/panasalap/linux-4.1.15,opened,CVE-2018-14615 (Medium) detected in linux-yocto-devv4.2.8,security vulnerability,"## CVE-2018-14615 - Medium Severity Vulnerability
Vulnerable Library - linux-yocto-devv4.2.8

Linux Embedded Kernel - tracks the next mainline release

Library home page: https://git.yoctoproject.org/git/linux-yocto-dev

Found in base branch: master

Vulnerable Source Files (2)

/fs/f2fs/inline.c /fs/f2fs/inline.c

Vulnerability Details

An issue was discovered in the Linux kernel through 4.17.10. There is a buffer overflow in truncate_inline_inode() in fs/f2fs/inline.c when umounting an f2fs image, because a length value may be negative.
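
To make the failure mode concrete (a signed length going negative and then being treated as an enormous unsigned size), here is a small illustration in JavaScript rather than kernel C; the offsets are hypothetical values, not taken from f2fs:

```js
const from = 4096   // hypothetical truncation offset
const limit = 3488  // hypothetical inline-data bound

const len = limit - from  // -608: the unchecked, signed result
const asSize = len >>> 0  // 4294966688: the same bits read as an unsigned size

console.log(len, asSize)

// The guard the fix amounts to: reject offsets outside [0, limit]
// before deriving a length from them.
function safeLen(from, limit) {
  if (from < 0 || from > limit) throw new RangeError('offset out of range')
  return limit - from
}
```
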

Publish Date: 2018-07-27

URL: CVE-2018-14615

CVSS 3 Score Details (5.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-14615

Release Date: 2018-07-27

Fix Resolution: v4.19-rc1

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2018-14615 (Medium) detected in linux-yocto-devv4.2.8 - ## CVE-2018-14615 - Medium Severity Vulnerability
Vulnerable Library - linux-yocto-devv4.2.8

Linux Embedded Kernel - tracks the next mainline release

Library home page: https://git.yoctoproject.org/git/linux-yocto-dev

Found in base branch: master

Vulnerable Source Files (2)

/fs/f2fs/inline.c /fs/f2fs/inline.c

Vulnerability Details

An issue was discovered in the Linux kernel through 4.17.10. There is a buffer overflow in truncate_inline_inode() in fs/f2fs/inline.c when umounting an f2fs image, because a length value may be negative.

Publish Date: 2018-07-27

URL: CVE-2018-14615

CVSS 3 Score Details (5.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-14615

Release Date: 2018-07-27

Fix Resolution: v4.19-rc1

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in linux yocto cve medium severity vulnerability vulnerable library linux yocto linux embedded kernel tracks the next mainline release library home page a href found in base branch master vulnerable source files fs inline c fs inline c vulnerability details an issue was discovered in the linux kernel through there is a buffer overflow in truncate inline inode in fs inline c when umounting an image because a length value may be negative publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0 3124,13133250489.0,IssuesEvent,2020-08-06 20:32:06,MLH-Fellowship/nodemaker,https://api.github.com/repos/MLH-Fellowship/nodemaker,closed,Handle errors based on error object,backend mid priority revisit automation,"Revisit the possibility to automate error handling based on the error object: ```ts } catch (error) { // TODO: Replace TODO_ERROR_STATUS_CODE and TODO_ERROR_MESSAGE based on the error object returned by API. if (TODO_ERROR_STATUS_CODE === 401) { // Return a clear error throw new Error('The Hacker News credentials are invalid!'); } if (TODO_ERROR_MESSAGE) { // Try to return the error prettier throw new Error(`Hacker News error response [${TODO_ERROR_STATUS_CODE}]: ${TODO_ERROR_MESSAGE}`); } // If that data does not exist for some reason, return the actual error. throw error; } ```",1.0,"Handle errors based on error object - Revisit the possibility to automate error handling based on the error object: ```ts } catch (error) { // TODO: Replace TODO_ERROR_STATUS_CODE and TODO_ERROR_MESSAGE based on the error object returned by API. if (TODO_ERROR_STATUS_CODE === 401) { // Return a clear error throw new Error('The Hacker News credentials are invalid!'); } if (TODO_ERROR_MESSAGE) { // Try to return the error prettier throw new Error(`Hacker News error response [${TODO_ERROR_STATUS_CODE}]: ${TODO_ERROR_MESSAGE}`); } // If that data does not exist for some reason, return the actual error. throw error; } ```",1,handle errors based on error object revisit the possibility to automate error handling based on the error object ts catch error todo replace todo error status code and todo error message based on the error object returned by api if todo error status code return a clear error throw new error the hacker news credentials are invalid if todo error message try to return the error prettier throw new error hacker news error response todo error message if that data does not exist for some reason return the actual error throw error ,1 84093,16451570695.0,IssuesEvent,2021-05-21 06:43:53,jz-feng/shiba-cafe,https://api.github.com/repos/jz-feng/shiba-cafe,closed,testing/debugging framework to trigger game states,code dev efficiency future,"eg. trigger a certain recipe, change timer durations, etc",1.0,"testing/debugging framework to trigger game states - eg. 
trigger a certain recipe, change timer durations, etc",0,testing debugging framework to trigger game states eg trigger a certain recipe change timer durations etc,0 1382,10014108863.0,IssuesEvent,2019-07-15 16:40:59,OpenZeppelin/openzeppelin-solidity,https://api.github.com/repos/OpenZeppelin/openzeppelin-solidity,closed,Migrate to Truffle's test node,automation,"Things to look out for: - Custom account balances - Gas limit?",1.0,"Migrate to Truffle's test node - Things to look out for: - Custom account balances - Gas limit?",1,migrate to truffle s test node things to look out for custom account balances gas limit ,1 144256,19286094642.0,IssuesEvent,2021-12-11 01:36:07,Tim-sandbox/WebGoat-8.1,https://api.github.com/repos/Tim-sandbox/WebGoat-8.1,opened,CVE-2021-22096 (Medium) detected in spring-web-5.3.9.jar,security vulnerability,"## CVE-2021-22096 - Medium Severity Vulnerability
Vulnerable Library - spring-web-5.3.9.jar

Spring Web

Library home page: https://github.com/spring-projects/spring-framework

Path to dependency file: WebGoat-8.1/webgoat-container/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-web/5.3.9/spring-web-5.3.9.jar

Dependency Hierarchy:
- spring-boot-starter-web-2.5.4.jar (Root Library)
  - :x: **spring-web-5.3.9.jar** (Vulnerable Library)

Found in base branch: develop

Vulnerability Details

In Spring Framework versions 5.3.0 - 5.3.10, 5.2.0 - 5.2.17, and older unsupported versions, it is possible for a user to provide malicious input to cause the insertion of additional log entries.
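For context, this CVE is a log-injection issue: when attacker-controlled values are written to the log verbatim, embedded line breaks let the attacker forge additional entries. A minimal sketch of the failure mode and one mitigation, in plain TypeScript rather than Spring (all names below are hypothetical, not the affected API):

```ts
// Illustrative only (plain TypeScript, not Spring): how unsanitized input
// containing CR/LF can forge an extra log entry, and one mitigation.
function logLine(level: string, message: string): string {
  return `${new Date().toISOString()} [${level}] ${message}`;
}

// Malicious value smuggling a fake second entry via an embedded newline:
const userAgent = "curl/7.79\n2021-10-28T00:00:00.000Z [INFO] admin login ok";
console.log(logLine("INFO", `request from ${userAgent}`)); // prints two "entries"

// Mitigation: collapse line breaks in untrusted values before logging.
const sanitizeForLog = (v: string): string => v.replace(/[\r\n]+/g, " ");
console.log(logLine("INFO", `request from ${sanitizeForLog(userAgent)}`)); // one entry
```

Upgrading per the suggested fix below is still the actual remedy; sanitizing untrusted values before logging is defense in depth.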

Publish Date: 2021-10-28

URL: CVE-2021-22096

CVSS 3 Score Details (4.3)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: Low
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: Low
  - Availability Impact: None
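These metrics reproduce the 4.3 base score when plugged into the CVSS 3.1 formula. A minimal sketch, assuming the spec's standard weights and one-decimal round-up rule:

```ts
// Minimal CVSS 3.1 base-score sketch for the metrics above
// (AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:L/A:N), weights taken from the spec.
const AV = 0.85, AC = 0.77, PR = 0.62, UI = 0.85; // Network / Low / Low (S:U) / None
const C = 0.0, I = 0.22, A = 0.0;                 // None / Low / None

const exploitability = 8.22 * AV * AC * PR * UI;  // ~2.84
const iscBase = 1 - (1 - C) * (1 - I) * (1 - A);  // 0.22
const impact = 6.42 * iscBase;                    // ~1.41 (scope unchanged)

const roundUp = (x: number) => Math.ceil(x * 10 - 1e-9) / 10; // 1-decimal round-up
const baseScore = impact <= 0 ? 0 : roundUp(Math.min(impact + exploitability, 10));
console.log(baseScore); // 4.3, matching the reported score
```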

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://tanzu.vmware.com/security/cve-2021-22096

Release Date: 2021-10-28

Fix Resolution: org.springframework:spring:5.2.18.RELEASE,5.3.12
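The resolution is branch-specific: 5.2.x installations need at least 5.2.18.RELEASE, and 5.3.x installations at least 5.3.12. A hedged sketch of how such a check could be encoded (the helper and table below are illustrative assumptions, not Mend's implementation):

```ts
// Hypothetical helper: decide whether an installed spring-web version already
// contains the fix, per the branch-specific resolutions listed above.
const minimumFixed: Record<string, number[]> = {
  "5.2": [5, 2, 18], // org.springframework:spring:5.2.18.RELEASE
  "5.3": [5, 3, 12], // org.springframework:spring:5.3.12
};

function isPatched(version: string): boolean {
  const parts = version.split(".").map((p) => parseInt(p, 10));
  const branch = `${parts[0]}.${parts[1]}`;
  const min = minimumFixed[branch];
  if (!min) return false; // unknown branch: treat as unpatched
  for (let i = 0; i < min.length; i++) {
    if (parts[i] !== min[i]) return (parts[i] ?? 0) > min[i];
  }
  return true; // exactly the minimum fixed version
}

console.log(isPatched("5.3.9"));  // false, the version detected above
console.log(isPatched("5.3.12")); // true
```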

",True,"CVE-2021-22096 (Medium) detected in spring-web-5.3.9.jar - ## CVE-2021-22096 - Medium Severity Vulnerability
Vulnerable Library - spring-web-5.3.9.jar

Spring Web

Library home page: https://github.com/spring-projects/spring-framework

Path to dependency file: WebGoat-8.1/webgoat-container/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-web/5.3.9/spring-web-5.3.9.jar

Dependency Hierarchy:
- spring-boot-starter-web-2.5.4.jar (Root Library)
  - :x: **spring-web-5.3.9.jar** (Vulnerable Library)

Found in base branch: develop

Vulnerability Details

In Spring Framework versions 5.3.0 - 5.3.10, 5.2.0 - 5.2.17, and older unsupported versions, it is possible for a user to provide malicious input to cause the insertion of additional log entries.

Publish Date: 2021-10-28

URL: CVE-2021-22096

CVSS 3 Score Details (4.3)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: Low
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: Low
  - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://tanzu.vmware.com/security/cve-2021-22096

Release Date: 2021-10-28

Fix Resolution: org.springframework:spring:5.2.18.RELEASE,5.3.12

",0,cve medium detected in spring web jar cve medium severity vulnerability vulnerable library spring web jar spring web library home page a href path to dependency file webgoat webgoat container pom xml path to vulnerable library home wss scanner repository org springframework spring web spring web jar home wss scanner repository org springframework spring web spring web jar home wss scanner repository org springframework spring web spring web jar dependency hierarchy spring boot starter web jar root library x spring web jar vulnerable library found in base branch develop vulnerability details in spring framework versions and older unsupported versions it is possible for a user to provide malicious input to cause the insertion of additional log entries publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring release isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org springframework boot spring boot starter web org springframework spring web isminimumfixversionavailable true minimumfixversion org springframework spring release isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails in spring framework versions and older unsupported versions it is possible for a user to provide malicious input to cause the insertion of additional log entries vulnerabilityurl ,0 5096,18670237300.0,IssuesEvent,2021-10-30 15:15:38,aws/aws-cli,https://api.github.com/repos/aws/aws-cli,reopened,Question: aws ec2 wait instance-status-ok and termination,feature-request configuration waiter automation-exempt,"Given I launch a new EC2 instance and want to wait for its status to become _ok_. Now, before that happens, the instance terminates. There is no way for the waiter to ever satisfy in that situation. However, it waits until it times out after 10 minutes. ``` aws ec2 wait instance-status-ok --instance-ids i-0123456789abcdef # waits 10 min… during that time the EC2 instance gets terminated… Waiter InstanceStatusOk failed: Max attempts exceeded ``` Wouldn't it be possible for a waiter to fail more quickly when it can detect that a condition can no longer be met? e.g. fail with a specific error code, indicating that the condition can not be met. ",1.0,"Question: aws ec2 wait instance-status-ok and termination - Given I launch a new EC2 instance and want to wait for its status to become _ok_. Now, before that happens, the instance terminates. There is no way for the waiter to ever satisfy in that situation. However, it waits until it times out after 10 minutes. ``` aws ec2 wait instance-status-ok --instance-ids i-0123456789abcdef # waits 10 min… during that time the EC2 instance gets terminated… Waiter InstanceStatusOk failed: Max attempts exceeded ``` Wouldn't it be possible for a waiter to fail more quickly when it can detect that a condition can no longer be met? e.g. fail with a specific error code, indicating that the condition can not be met. 
",1,question aws wait instance status ok and termination given i launch a new instance and want to wait for its status to become ok now before that happens the instance terminates there is no way for the waiter to ever satisfy in that situation however it waits until it times out after minutes aws wait instance status ok instance ids i waits min… during that time the instance gets terminated… waiter instancestatusok failed max attempts exceeded wouldn t it be possible for a waiter to fail more quickly when it can detect that a condition can no longer be met e g fail with a specific error code indicating that the condition can not be met ,1 4440,16546701370.0,IssuesEvent,2021-05-28 01:30:48,SAP/fundamental-ngx,https://api.github.com/repos/SAP/fundamental-ngx,closed,bug: (platform) menu - focus is missing for avatar menu,E2E automation Medium ariba bug platform,"#### Is this a bug, enhancement, or feature request? bug #### Briefly describe your proposal. One of the avatar menu buttons is skipped on tab navigation and focus state is not shown #### If this is a bug, please provide steps for reproducing it. 1. go to https://fundamental-ngx.netlify.app/#/platform/menu 2. look at the `Menu with Horizontal Positioning` examples 3. use tab key to navigate through all options ![Peek 2020-11-03 16-17](https://user-images.githubusercontent.com/47522152/97996608-a7d45880-1df0-11eb-8778-4a0862c5e48b.gif) ",1.0,"bug: (platform) menu - focus is missing for avatar menu - #### Is this a bug, enhancement, or feature request? bug #### Briefly describe your proposal. One of the avatar menu buttons is skipped on tab navigation and focus state is not shown #### If this is a bug, please provide steps for reproducing it. 1. go to https://fundamental-ngx.netlify.app/#/platform/menu 2. look at the `Menu with Horizontal Positioning` examples 3. use tab key to navigate through all options ![Peek 2020-11-03 16-17](https://user-images.githubusercontent.com/47522152/97996608-a7d45880-1df0-11eb-8778-4a0862c5e48b.gif) ",1,bug platform menu focus is missing for avatar menu is this a bug enhancement or feature request bug briefly describe your proposal one of the avatar menu buttons is skipped on tab navigation and focus state is not shown if this is a bug please provide steps for reproducing it go to look at the menu with horizontal positioning examples use tab key to navigate through all options ,1 236144,25971501893.0,IssuesEvent,2022-12-19 11:37:02,nk7598/linux-4.19.72,https://api.github.com/repos/nk7598/linux-4.19.72,closed,WS-2021-0522 (Medium) detected in linuxlinux-4.19.269 - autoclosed,security vulnerability,"## WS-2021-0522 - Medium Severity Vulnerability
Vulnerable Library - linuxlinux-4.19.269

The Linux Kernel

Library home page: https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux

Found in HEAD commit: 8d6de636016872da224f31e7d9d0fe96d373b46c

Vulnerable Source Files (1)

/fs/aio.c

Vulnerability Details

The Linux kernel is vulnerable to a use-after-free due to missing POLLFREE handling in fs/aio.c

Publish Date: 2021-12-01

URL: WS-2021-0522

CVSS 3 Score Details (6.2)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://osv.dev/vulnerability/GSD-2021-1002601

Release Date: 2021-12-01

Fix Resolution: Linux/Kernel - v4.19.221, v5.4.165, v5.10.85, v5.15.8, v5.16-rc5
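Note that on the 4.19 LTS branch the fix landed in v4.19.221, while the scanned kernel is 4.19.269, so the finding appears to be already patched, which is consistent with the 'autoclosed' suffix in the title. A quick sketch of that comparison:

```ts
// The fix lands in v4.19.221 on the 4.19 LTS branch; the scanned kernel is 4.19.269.
// A patch-level check suggests the finding no longer applies (hence "autoclosed"):
const fixedPatch = 221; // from "Fix Resolution: ... v4.19.221 ..."
const [maj, min, patch] = "4.19.269".split(".").map(Number);
console.log(maj === 4 && min === 19 && patch >= fixedPatch); // true: already patched
```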

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"WS-2021-0522 (Medium) detected in linuxlinux-4.19.269 - autoclosed - ## WS-2021-0522 - Medium Severity Vulnerability
Vulnerable Library - linuxlinux-4.19.269

The Linux Kernel

Library home page: https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux

Found in HEAD commit: 8d6de636016872da224f31e7d9d0fe96d373b46c

Vulnerable Source Files (1)

/fs/aio.c

Vulnerability Details

The Linux kernel is vulnerable to a use-after-free due to missing POLLFREE handling in fs/aio.c

Publish Date: 2021-12-01

URL: WS-2021-0522

CVSS 3 Score Details (6.2)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://osv.dev/vulnerability/GSD-2021-1002601

Release Date: 2021-12-01

Fix Resolution: Linux/Kernel - v4.19.221, v5.4.165, v5.10.85, v5.15.8, v5.16-rc5

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,ws medium detected in linuxlinux autoclosed ws medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href vulnerable source files fs aio c vulnerability details in linux kernel is vulnerable to use after free due to missing pollfree handling in fs aio c publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux kernel step up your open source security game with mend ,0 6516,23309787217.0,IssuesEvent,2022-08-08 07:06:34,pingcap/tidb,https://api.github.com/repos/pingcap/tidb,closed,[lightning] no retries when pd timeout,type/bug severity/moderate component/br component/lightning found/automation,"## Bug Report Please answer these questions before submitting your issue. Thanks! ### 1. Minimal reproduce step (Required) It is observed that there is no retry when pd timeout ### 2. What did you expect to see? (Required) There should be retry mechanism when pd connection timeout during lightning import ### 3. What did you see instead (Required) Lightning import fails immediately when PD server timeout ``` 2022/01/05 03:12:47.110 +00:00] [INFO] [restore.go:444] [""the whole procedure start""] [2022/01/05 03:12:47.111 +00:00] [INFO] [restore.go:748] [""restore all schema start""] [2022/01/05 03:14:30.165 +00:00] [INFO] [restore.go:767] [""restore all schema completed""] [takeTime=1m43.053950663s] [] [2022/01/05 03:14:42.712 +00:00] [ERROR] [restore.go:462] [""run failed""] [step=2] [error=""Error 9001: PD server timeout""] [2022/01/05 03:14:42.712 +00:00] [ERROR] [restore.go:473] [""the whole procedure failed""] [takeTime=1m55.602514675s] [error=""Error 9001: PD server timeout""] [2022/01/05 03:14:42.713 +00:00] [WARN] [local.go:501] [""remove local db file failed""] [error=""unlinkat /tmp/sorted-kv-dir: device or resource busy""] ``` ### 4. What is your TiDB version? (Required) / # /tidb-lightning -V Release Version: v5.4.0 Git Commit Hash: 974b5784adbbd47d14659916d47dd986effa7b4e Git Branch: heads/refs/tags/v5.4.0 Go Version: go1.16.4 UTC Build Time: 2022-01-03 10:01:05 Race Enabled: false ",1.0,"[lightning] no retries when pd timeout - ## Bug Report Please answer these questions before submitting your issue. Thanks! ### 1. Minimal reproduce step (Required) It is observed that there is no retry when pd timeout ### 2. What did you expect to see? (Required) There should be retry mechanism when pd connection timeout during lightning import ### 3. 
What did you see instead (Required) Lightning import fails immediately when PD server timeout ``` 2022/01/05 03:12:47.110 +00:00] [INFO] [restore.go:444] [""the whole procedure start""] [2022/01/05 03:12:47.111 +00:00] [INFO] [restore.go:748] [""restore all schema start""] [2022/01/05 03:14:30.165 +00:00] [INFO] [restore.go:767] [""restore all schema completed""] [takeTime=1m43.053950663s] [] [2022/01/05 03:14:42.712 +00:00] [ERROR] [restore.go:462] [""run failed""] [step=2] [error=""Error 9001: PD server timeout""] [2022/01/05 03:14:42.712 +00:00] [ERROR] [restore.go:473] [""the whole procedure failed""] [takeTime=1m55.602514675s] [error=""Error 9001: PD server timeout""] [2022/01/05 03:14:42.713 +00:00] [WARN] [local.go:501] [""remove local db file failed""] [error=""unlinkat /tmp/sorted-kv-dir: device or resource busy""] ``` ### 4. What is your TiDB version? (Required) / # /tidb-lightning -V Release Version: v5.4.0 Git Commit Hash: 974b5784adbbd47d14659916d47dd986effa7b4e Git Branch: heads/refs/tags/v5.4.0 Go Version: go1.16.4 UTC Build Time: 2022-01-03 10:01:05 Race Enabled: false ",1, no retries when pd timeout bug report please answer these questions before submitting your issue thanks minimal reproduce step required it is observed that there is no retry when pd timeout what did you expect to see required there should be retry mechanism when pd connection timeout during lightning import what did you see instead required lightning import fails immediately when pd server timeout what is your tidb version required tidb lightning v release version git commit hash git branch heads refs tags go version utc build time race enabled false ,1 1865,10987498009.0,IssuesEvent,2019-12-02 09:20:42,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,reopened,a8n: Implement retryCampaign mutation,automation,"This is a follow-up to [RFC 42](https://docs.google.com/document/d/1j85PoL6NOzLX_PHFzBQogZcnttYK0BXj9XnrxF3DYmA/edit). Right now, when the `createCampaign` mutation with a given `CampaignPlan` ID fails due to various reasons (GitHub not reachable, token invalid, gitserver down, ...) the conversion of `ChangesetJobs` into `Changesets` in the `(&a8n.Service).runChangesetJob` ends with the `ChangesetJobs` having its `Error` field populated. [See the code here](https://sourcegraph.com/github.com/sourcegraph/sourcegraph@034cee47c83a7734cf04a4b5720717665b5a69db/-/blob/enterprise/pkg/a8n/service.go#L114) What we want is a `retryCampaign` mutation that * takes in a `Campaign` ID * loads all the failed `ChangesetJobs` (definition: `finished_at` is null, or `error` is non-blank, or no `Changeset` with its `changeset_job_id` exists) * uses `(&a8n.Service).runChangesetJob` to try again to create a commit from the given diff in the connected CampaignJob, push the commit, open a pull request on the codehost, save the pull request as an external service **Important**: for that to work, the `runChangesetJob` method must be idempotent! That means: if it runs twice with the same `ChangesetJob` is **cannot create duplicate pull requests!**. That means it needs to check that new commits are not added to same branch, check for `ErrAlreadyExists` response from code hosts, early-exit if a `Changset` with the given `changeset_job_id` exists, etc. ",1.0,"a8n: Implement retryCampaign mutation - This is a follow-up to [RFC 42](https://docs.google.com/document/d/1j85PoL6NOzLX_PHFzBQogZcnttYK0BXj9XnrxF3DYmA/edit). 
Right now, when the `createCampaign` mutation with a given `CampaignPlan` ID fails due to various reasons (GitHub not reachable, token invalid, gitserver down, ...) the conversion of `ChangesetJobs` into `Changesets` in the `(&a8n.Service).runChangesetJob` ends with the `ChangesetJobs` having its `Error` field populated. [See the code here](https://sourcegraph.com/github.com/sourcegraph/sourcegraph@034cee47c83a7734cf04a4b5720717665b5a69db/-/blob/enterprise/pkg/a8n/service.go#L114) What we want is a `retryCampaign` mutation that * takes in a `Campaign` ID * loads all the failed `ChangesetJobs` (definition: `finished_at` is null, or `error` is non-blank, or no `Changeset` with its `changeset_job_id` exists) * uses `(&a8n.Service).runChangesetJob` to try again to create a commit from the given diff in the connected CampaignJob, push the commit, open a pull request on the codehost, save the pull request as an external service **Important**: for that to work, the `runChangesetJob` method must be idempotent! That means: if it runs twice with the same `ChangesetJob` is **cannot create duplicate pull requests!**. That means it needs to check that new commits are not added to same branch, check for `ErrAlreadyExists` response from code hosts, early-exit if a `Changset` with the given `changeset_job_id` exists, etc. ",1, implement retrycampaign mutation this is a follow up to right now when the createcampaign mutation with a given campaignplan id fails due to various reasons github not reachable token invalid gitserver down the conversion of changesetjobs into changesets in the service runchangesetjob ends with the changesetjobs having its error field populated what we want is a retrycampaign mutation that takes in a campaign id loads all the failed changesetjobs definition finished at is null or error is non blank or no changeset with its changeset job id exists uses service runchangesetjob to try again to create a commit from the given diff in the connected campaignjob push the commit open a pull request on the codehost save the pull request as an external service important for that to work the runchangesetjob method must be idempotent that means if it runs twice with the same changesetjob is cannot create duplicate pull requests that means it needs to check that new commits are not added to same branch check for erralreadyexists response from code hosts early exit if a changset with the given changeset job id exists etc ,1 6,2632908674.0,IssuesEvent,2015-03-08 16:51:29,houeland/kolproxy,https://api.github.com/repos/houeland/kolproxy,closed,QuestLog prereq for finding black market is too greedy,Component: Automation Priority: High Type: Bug,"I was doing a standard run and proxy burned all my adventures in the black forest after the black market was already unlocked without buying the forged identification documents, I believe because of this change: https://github.com/houeland/kolproxy/commit/428ecc23d7b8df140aa1aaacc70da08d667421e0#diff-6ded12ca806b9f315deec9ee1b5296d9L4686 ",1.0,"QuestLog prereq for finding black market is too greedy - I was doing a standard run and proxy burned all my adventures in the black forest after the black market was already unlocked without buying the forged identification documents, I believe because of this change: https://github.com/houeland/kolproxy/commit/428ecc23d7b8df140aa1aaacc70da08d667421e0#diff-6ded12ca806b9f315deec9ee1b5296d9L4686 ",1,questlog prereq for finding black market is too greedy i was doing a standard run and proxy burned all my adventures in the 
black forest after the black market was already unlocked without buying the forged identification documents i believe because of this change ,1 248348,7929517475.0,IssuesEvent,2018-07-06 15:19:29,nco/nco,https://api.github.com/repos/nco/nco,closed,Add optional scale_factor/add_offset arguments to ncap2 packing routine,medium priority,"Please make scale_factor and add_offset optional arguments to the ncap2 pack() method, so users can specify these parameters as per discussion in: https://sourceforge.net/p/nco/discussion/9829/thread/2bd55075/?limit=25#f65b",1.0,"Add optional scale_factor/add_offset arguments to ncap2 packing routine - Please make scale_factor and add_offset optional arguments to the ncap2 pack() method, so users can specify these parameters as per discussion in: https://sourceforge.net/p/nco/discussion/9829/thread/2bd55075/?limit=25#f65b",0,add optional scale factor add offset arguments to packing routine please make scale factor and add offset optional arguments to the pack method so users can specify these parameters as per discussion in ,0 6760,23865152993.0,IssuesEvent,2022-09-07 10:21:46,mozilla-mobile/firefox-ios,https://api.github.com/repos/mozilla-mobile/firefox-ios,closed,[Bitrise] Smoketest for iPad is wrong running the Full Functional test plan instead,eng:automation,"For the workflow RunSmokeXCUITestsiPad https://github.com/mozilla-mobile/firefox-ios/blob/c6e718220467ff3b55505e993b8f88151c3b4747/bitrise.yml#L725 we are running the Full Functional test suite instead of just the Smoketest plan. See[ bitrise logs](https://app.bitrise.io/build/87a9400b-4f76-4ba0-a358-ec3c4e21b684?tab=log). The reason is that there is a - test_plan: option that we need to use instead of the - xcodebuild_test_options: ""-testPlan SmokeXCUITests"". The latter one is unkown and the test plan appears unset, that's why bitrise is running all the tests in the Fennec_Enterprise_XCUITests instead of just the SmokeTest cc @clarmso I will take this one and ask you for review so that you start being familiar with the bitrise.yml file. ┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FXIOS-4865) ",1.0,"[Bitrise] Smoketest for iPad is wrong running the Full Functional test plan instead - For the workflow RunSmokeXCUITestsiPad https://github.com/mozilla-mobile/firefox-ios/blob/c6e718220467ff3b55505e993b8f88151c3b4747/bitrise.yml#L725 we are running the Full Functional test suite instead of just the Smoketest plan. See[ bitrise logs](https://app.bitrise.io/build/87a9400b-4f76-4ba0-a358-ec3c4e21b684?tab=log). The reason is that there is a - test_plan: option that we need to use instead of the - xcodebuild_test_options: ""-testPlan SmokeXCUITests"". The latter one is unkown and the test plan appears unset, that's why bitrise is running all the tests in the Fennec_Enterprise_XCUITests instead of just the SmokeTest cc @clarmso I will take this one and ask you for review so that you start being familiar with the bitrise.yml file. 
┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FXIOS-4865) ",1, smoketest for ipad is wrong running the full functional test plan instead for the workflow runsmokexcuitestsipad we are running the full functional test suite instead of just the smoketest plan see the reason is that there is a test plan option that we need to use instead of the xcodebuild test options testplan smokexcuitests the latter one is unkown and the test plan appears unset that s why bitrise is running all the tests in the fennec enterprise xcuitests instead of just the smoketest cc clarmso i will take this one and ask you for review so that you start being familiar with the bitrise yml file ┆issue is synchronized with this ,1 337481,30248327302.0,IssuesEvent,2023-07-06 18:18:47,unifyai/ivy,https://api.github.com/repos/unifyai/ivy,opened,Fix jax_lax_operators.test_jax_conj,JAX Frontend Sub Task Failing Test,"| | | |---|---| |paddle| |torch| ",1.0,"Fix jax_lax_operators.test_jax_conj - | | | |---|---| |paddle| |torch| ",0,fix jax lax operators test jax conj paddle a href src torch a href src ,0 377880,26274064057.0,IssuesEvent,2023-01-06 20:00:50,profusion/.github,https://api.github.com/repos/profusion/.github,opened,feat: create `README.md` for `ProFUSION`'s GitHub page,documentation help wanted,"## Description This repository is special, as GitHub shares its properties (like PR templates) across different repos in `ProFUSION`'s ownership. This also happens with the `README.md` file, which will appear in the main page of PF's profile here on GitHub. With that, we need to write a good portfolio-like file that will be visible to anyone on GitHub. ## Implementation details Since the file will be public, we can't link to any private repos. Instead, we should detail things like: - Our work ethic - Mention some projects we've worked on - Link our developer's profiles - Mention technologies we excel in - Use attractive graphs/images/gifs/animations in the file, like the best GitHub profiles do These aren't hard points, all of this should be discussed with other employees (or employers) to make this a shared effort ## Potential caveats Any little point that you're not sure can be shared in public needs to be addressed first. Get confirmation with @barbieri or @bdilly BEFORE pushing your changes to the remote (even if it's a wip). Needless to say, Both Barbieri and Dilly should approve the file for it to be merged ## Additional context and visual reference Here are some cool profiles from some of our developers that can serve as inspiration: - [Felipe Bergamin](https://github.com/felipebergamin) - [Daniel Céspedes](https://github.com/devDanielCespedes) - [Ricardo Dalarme](https://github.com/ricardodalarme)",1.0,"feat: create `README.md` for `ProFUSION`'s GitHub page - ## Description This repository is special, as GitHub shares its properties (like PR templates) across different repos in `ProFUSION`'s ownership. This also happens with the `README.md` file, which will appear in the main page of PF's profile here on GitHub. With that, we need to write a good portfolio-like file that will be visible to anyone on GitHub. ## Implementation details Since the file will be public, we can't link to any private repos. 
Instead, we should detail things like: - Our work ethic - Mention some projects we've worked on - Link our developer's profiles - Mention technologies we excel in - Use attractive graphs/images/gifs/animations in the file, like the best GitHub profiles do These aren't hard points, all of this should be discussed with other employees (or employers) to make this a shared effort ## Potential caveats Any little point that you're not sure can be shared in public needs to be addressed first. Get confirmation with @barbieri or @bdilly BEFORE pushing your changes to the remote (even if it's a wip). Needless to say, Both Barbieri and Dilly should approve the file for it to be merged ## Additional context and visual reference Here are some cool profiles from some of our developers that can serve as inspiration: - [Felipe Bergamin](https://github.com/felipebergamin) - [Daniel Céspedes](https://github.com/devDanielCespedes) - [Ricardo Dalarme](https://github.com/ricardodalarme)",0,feat create readme md for profusion s github page description this repository is special as github shares its properties like pr templates across different repos in profusion s ownership this also happens with the readme md file which will appear in the main page of pf s profile here on github with that we need to write a good portfolio like file that will be visible to anyone on github implementation details since the file will be public we can t link to any private repos instead we should detail things like our work ethic mention some projects we ve worked on link our developer s profiles mention technologies we excel in use attractive graphs images gifs animations in the file like the best github profiles do these aren t hard points all of this should be discussed with other employees or employers to make this a shared effort potential caveats any little point that you re not sure can be shared in public needs to be addressed first get confirmation with barbieri or bdilly before pushing your changes to the remote even if it s a wip needless to say both barbieri and dilly should approve the file for it to be merged additional context and visual reference here are some cool profiles from some of our developers that can serve as inspiration ,0 7437,24880133668.0,IssuesEvent,2022-10-27 23:41:26,AzureAD/microsoft-authentication-library-for-objc,https://api.github.com/repos/AzureAD/microsoft-authentication-library-for-objc,closed,Automation tests failure,automation failure,"@AzureAD/appleidentity Automation failed for [AzureAD/microsoft-authentication-library-for-objc](https://github.com/AzureAD/microsoft-authentication-library-for-objc) ran against commit : Fixed MSAL build [21f3ada6821bd2839827c49ba1c8c47594e6be81] Pipeline URL : [https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1004872&view=logs](https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1004872&view=logs)",1.0,"Automation tests failure - @AzureAD/appleidentity Automation failed for [AzureAD/microsoft-authentication-library-for-objc](https://github.com/AzureAD/microsoft-authentication-library-for-objc) ran against commit : Fixed MSAL build [21f3ada6821bd2839827c49ba1c8c47594e6be81] Pipeline URL : [https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1004872&view=logs](https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1004872&view=logs)",1,automation tests failure azuread appleidentity automation failed for ran against commit fixed msal build pipeline url ,1 
2872,12740762016.0,IssuesEvent,2020-06-26 03:45:14,pacorain/home,https://api.github.com/repos/pacorain/home,opened,"Notification: It's raining, you should close the windows!",new automation,Home Assistant should send everyone a notification when the weather changes to rainy and there are open windows.,1.0,"Notification: It's raining, you should close the windows! - Home Assistant should send everyone a notification when the weather changes to rainy and there are open windows.",1,notification it s raining you should close the windows home assistant should send everyone a notification when the weather changes to rainy and there are open windows ,1 5523,19910617943.0,IssuesEvent,2022-01-25 16:48:23,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,Test Rate limiting - Global level,automation aps-demo,"Apply Global Rate limiting at Route level(Completed) 1. set api rate limit to global service level 1.1 Post the API request to set rate limiting at route level to Kong 1.2 Verify the request status should be 201 created 2. Verify that Rate limiting is set at global route level 2.1 Make the request call to Kong gateway and verify that service returns 200 status code and verify 'x-ratelimit-remaining-hour' in response header 3. set api rate limit to 1 request per min, local Redis and Scope as Route 3.1 Mark(access-manager) login to the application 3.2 Set the namespace 3.3 Navigate to Consumers Page under Namespaces 3.4 Click on the consumer 3.5 Click on Rate Limiting option 3.6 Enter 1 request in hour input field, Select 'Redis' in Policy drop down and select scope as Route in Rate limiting popup 3.7 Click on Apply button 4. verify rate limit error when the API calls beyond the limit 4.1 Make the request call to Kong gateway and verify that service returns 200 status code 4.2 Make another request call to Kong gateway and verify that service returns 429 status code and 'API rate limit exceeded' message in the response when user the calls the API beyond the limit Apply Global Rate limiting at Service level(Completed) 1. set api rate limit to global service level 1.1 Post the API request to set rate limiting at Service level to Kong 1.2 Verify the request status should be 201 created 2. Verify that Rate limiting is set at global service level 2.1 Make the request call to Kong gateway and verify that service returns 200 status code and verify 'x-ratelimit-remaining-hour' in response header 3. set api rate limit to 1 request per min, local Redis and Scope as Service 3.1 Mark(access-manager) login to the application 3.2 Set the namespace 3.3 Navigate to Consumers Page under Namespaces 3.4 Click on the consumer 3.5 Click on Rate Limiting option 3.6 Enter 1 request in hour input field, Select 'Redis' in Policy drop down and select scope as Service in Rate limiting popup 3.7 Click on Apply button 4. verify rate limit error when the API calls beyond the limit 4.1 Make the request call to Kong gateway and verify that service returns 200 status code 4.2 Make another request call to Kong gateway and verify that service returns 429 status code and 'API rate limit exceeded' message in the response when user the calls the API beyond the limit",1.0,"Test Rate limiting - Global level - Apply Global Rate limiting at Route level(Completed) 1. set api rate limit to global service level 1.1 Post the API request to set rate limiting at route level to Kong 1.2 Verify the request status should be 201 created 2. 
Verify that Rate limiting is set at global route level 2.1 Make the request call to Kong gateway and verify that service returns 200 status code and verify 'x-ratelimit-remaining-hour' in response header 3. set api rate limit to 1 request per min, local Redis and Scope as Route 3.1 Mark(access-manager) login to the application 3.2 Set the namespace 3.3 Navigate to Consumers Page under Namespaces 3.4 Click on the consumer 3.5 Click on Rate Limiting option 3.6 Enter 1 request in hour input field, Select 'Redis' in Policy drop down and select scope as Route in Rate limiting popup 3.7 Click on Apply button 4. verify rate limit error when the API calls beyond the limit 4.1 Make the request call to Kong gateway and verify that service returns 200 status code 4.2 Make another request call to Kong gateway and verify that service returns 429 status code and 'API rate limit exceeded' message in the response when user the calls the API beyond the limit Apply Global Rate limiting at Service level(Completed) 1. set api rate limit to global service level 1.1 Post the API request to set rate limiting at Service level to Kong 1.2 Verify the request status should be 201 created 2. Verify that Rate limiting is set at global service level 2.1 Make the request call to Kong gateway and verify that service returns 200 status code and verify 'x-ratelimit-remaining-hour' in response header 3. set api rate limit to 1 request per min, local Redis and Scope as Service 3.1 Mark(access-manager) login to the application 3.2 Set the namespace 3.3 Navigate to Consumers Page under Namespaces 3.4 Click on the consumer 3.5 Click on Rate Limiting option 3.6 Enter 1 request in hour input field, Select 'Redis' in Policy drop down and select scope as Service in Rate limiting popup 3.7 Click on Apply button 4. 
verify rate limit error when the API calls beyond the limit 4.1 Make the request call to Kong gateway and verify that service returns 200 status code 4.2 Make another request call to Kong gateway and verify that service returns 429 status code and 'API rate limit exceeded' message in the response when user the calls the API beyond the limit",1,test rate limiting global level apply global rate limiting at route level completed set api rate limit to global service level post the api request to set rate limiting at route level to kong verify the request status should be created verify that rate limiting is set at global route level make the request call to kong gateway and verify that service returns status code and verify x ratelimit remaining hour in response header set api rate limit to request per min local redis and scope as route mark access manager login to the application set the namespace navigate to consumers page under namespaces click on the consumer click on rate limiting option enter request in hour input field select redis in policy drop down and select scope as route in rate limiting popup click on apply button verify rate limit error when the api calls beyond the limit make the request call to kong gateway and verify that service returns status code make another request call to kong gateway and verify that service returns status code and api rate limit exceeded message in the response when user the calls the api beyond the limit apply global rate limiting at service level completed set api rate limit to global service level post the api request to set rate limiting at service level to kong verify the request status should be created verify that rate limiting is set at global service level make the request call to kong gateway and verify that service returns status code and verify x ratelimit remaining hour in response header set api rate limit to request per min local redis and scope as service mark access manager login to the application set the namespace navigate to consumers page under namespaces click on the consumer click on rate limiting option enter request in hour input field select redis in policy drop down and select scope as service in rate limiting popup click on apply button verify rate limit error when the api calls beyond the limit make the request call to kong gateway and verify that service returns status code make another request call to kong gateway and verify that service returns status code and api rate limit exceeded message in the response when user the calls the api beyond the limit,1 1007,9122064856.0,IssuesEvent,2019-02-23 03:52:56,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Monitoring agent does not install Automation dependencies,automation/svc,"PATH C:\Program Files\Microsoft Monitoring Agent\Agent\AzureAutomation\ does not exist after installing monitoring agent, manual installation cannot be completed --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 7b29372c-7bd9-7da2-4cff-9afbb432bccf * Version Independent ID: 66ce101d-d21b-3fdf-be70-7f9cadc1570e * Content: [Azure Automation Windows Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-windows-hrw-install#manual-deployment) * Content Source: [articles/automation/automation-windows-hrw-install.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-windows-hrw-install.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1.0,"Monitoring agent does not install Automation dependencies - PATH C:\Program Files\Microsoft Monitoring Agent\Agent\AzureAutomation\ does not exist after installing monitoring agent, manual installation cannot be completed --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 7b29372c-7bd9-7da2-4cff-9afbb432bccf * Version Independent ID: 66ce101d-d21b-3fdf-be70-7f9cadc1570e * Content: [Azure Automation Windows Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-windows-hrw-install#manual-deployment) * Content Source: [articles/automation/automation-windows-hrw-install.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-windows-hrw-install.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1,monitoring agent does not install automation dependencies path c program files microsoft monitoring agent agent azureautomation does not exist after installing monitoring agent manual installation cannot be completed document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login georgewallace microsoft alias gwallace ,1 498547,14409879942.0,IssuesEvent,2020-12-04 03:22:18,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,closed,`pulumi plugin install --reinstall` doesn't work for Python projects,kind/bug language/python priority/P1,"#### Problem `pulumi plugin install --reinstall` doesn't work for Python projects. Running with verbose output seems to indicate zero required plugins were found. Looking through the code I suspect this `TODO` is the issue. https://github.com/pulumi/pulumi/blob/89c956d18942c1fcbf687da3052dd26089d8f486/sdk/python/cmd/pulumi-language-python/main.go#L144-L149 #### Verbose Output ``` % pulumi plugin install --reinstall -v9 --logtostderr I1019 16:15:34.901943 11777 pulumi.go:129] skipping update check I1019 16:15:34.906450 11777 plugins.go:470] GetPluginPath(language, python, ): found on $PATH /usr/local/bin/pulumi-language-python I1019 16:15:34.906923 11777 plugin.go:83] Launching plugin 'python' from '/usr/local/bin/pulumi-language-python' with args: 127.0.0.1:61623 I1019 16:15:34.932973 11777 langruntime_plugin.go:178] langhost[python].GetPluginInfo() executing I1019 16:15:34.934407 11777 langruntime_plugin.go:91] langhost[python].GetRequiredPlugins(proj=aws-py-fargate,pwd=/Users/clstokes/go/src/github.com/pulumi/examples/aws-py-fargate,program=.) executing I1019 16:15:34.935436 11777 langruntime_plugin.go:133] langhost[python].GetRequiredPlugins(proj=aws-py-fargate,pwd=/Users/clstokes/go/src/github.com/pulumi/examples/aws-py-fargate,program=.) 
success: #versions=0 I1019 16:15:34.935522 11777 plugins.go:470] GetPluginPath(language, python, ): found on $PATH /usr/local/bin/pulumi-language-python ``` ",1.0,"`pulumi plugin install --reinstall` doesn't work for Python projects - #### Problem `pulumi plugin install --reinstall` doesn't work for Python projects. Running with verbose output seems to indicate zero required plugins were found. Looking through the code I suspect this `TODO` is the issue. https://github.com/pulumi/pulumi/blob/89c956d18942c1fcbf687da3052dd26089d8f486/sdk/python/cmd/pulumi-language-python/main.go#L144-L149 #### Verbose Output ``` % pulumi plugin install --reinstall -v9 --logtostderr I1019 16:15:34.901943 11777 pulumi.go:129] skipping update check I1019 16:15:34.906450 11777 plugins.go:470] GetPluginPath(language, python, ): found on $PATH /usr/local/bin/pulumi-language-python I1019 16:15:34.906923 11777 plugin.go:83] Launching plugin 'python' from '/usr/local/bin/pulumi-language-python' with args: 127.0.0.1:61623 I1019 16:15:34.932973 11777 langruntime_plugin.go:178] langhost[python].GetPluginInfo() executing I1019 16:15:34.934407 11777 langruntime_plugin.go:91] langhost[python].GetRequiredPlugins(proj=aws-py-fargate,pwd=/Users/clstokes/go/src/github.com/pulumi/examples/aws-py-fargate,program=.) executing I1019 16:15:34.935436 11777 langruntime_plugin.go:133] langhost[python].GetRequiredPlugins(proj=aws-py-fargate,pwd=/Users/clstokes/go/src/github.com/pulumi/examples/aws-py-fargate,program=.) success: #versions=0 I1019 16:15:34.935522 11777 plugins.go:470] GetPluginPath(language, python, ): found on $PATH /usr/local/bin/pulumi-language-python ``` ",0, pulumi plugin install reinstall doesn t work for python projects problem pulumi plugin install reinstall doesn t work for python projects running with verbose output seems to indicate zero required plugins were found looking through the code i suspect this todo is the issue verbose output pulumi plugin install reinstall logtostderr pulumi go skipping update check plugins go getpluginpath language python found on path usr local bin pulumi language python plugin go launching plugin python from usr local bin pulumi language python with args langruntime plugin go langhost getplugininfo executing langruntime plugin go langhost getrequiredplugins proj aws py fargate pwd users clstokes go src github com pulumi examples aws py fargate program executing langruntime plugin go langhost getrequiredplugins proj aws py fargate pwd users clstokes go src github com pulumi examples aws py fargate program success versions plugins go getpluginpath language python found on path usr local bin pulumi language python ,0 307361,9416130636.0,IssuesEvent,2019-04-10 14:05:23,robotframework/robotframework,https://api.github.com/repos/robotframework/robotframework,closed,Deprecate omitting lines with only `...`,deprecation enhancement priority: medium,"At the moment lines with only the continuation marker `...` are handled somewhat inconsistently. 1. When used with documentation in the settings section or with the `[Documentation]` setting such a line creates an empty documentation line needed to form paragraphs: ```robotframework *** Settings *** Documentation First row. ... Second row. ... ... First row of the second paragraph. ``` 2. When used as an argument to a keyword, such rows are ignored. For example, here `Keyword` ends up called with two arguments and the line with only `...` is totally ignored: ```robotframework *** Test Cases *** Example Keyword argument ... ... 
another argument ``` This different handling of lines with only `...` causes problems for the new parsed in RF 3.2 (#3076) similarly as #3105 and #3106. It seems that the best way to handle this problem is making lines with only `...` equivalent to lines with only a single empty value. That won't affect the usage with documentation where this syntax currently is needed and thus actually used. The change obviously affects the usage with keywords as in the second example above, but because currently the syntax has no effect nobody should have any reasons to use it. We are going to deprecate using `...` without a meaning in RF 3.1.2 and then change the behavior in RF 3.2.",1.0,"Deprecate omitting lines with only `...` - At the moment lines with only the continuation marker `...` are handled somewhat inconsistently. 1. When used with documentation in the settings section or with the `[Documentation]` setting such a line creates an empty documentation line needed to form paragraphs: ```robotframework *** Settings *** Documentation First row. ... Second row. ... ... First row of the second paragraph. ``` 2. When used as an argument to a keyword, such rows are ignored. For example, here `Keyword` ends up called with two arguments and the line with only `...` is totally ignored: ```robotframework *** Test Cases *** Example Keyword argument ... ... another argument ``` This different handling of lines with only `...` causes problems for the new parsed in RF 3.2 (#3076) similarly as #3105 and #3106. It seems that the best way to handle this problem is making lines with only `...` equivalent to lines with only a single empty value. That won't affect the usage with documentation where this syntax currently is needed and thus actually used. The change obviously affects the usage with keywords as in the second example above, but because currently the syntax has no effect nobody should have any reasons to use it. We are going to deprecate using `...` without a meaning in RF 3.1.2 and then change the behavior in RF 3.2.",0,deprecate omitting lines with only at the moment lines with only the continuation marker are handled somewhat inconsistently when used with documentation in the settings section or with the setting such a line creates an empty documentation line needed to form paragraphs robotframework settings documentation first row second row first row of the second paragraph when used as an argument to a keyword such rows are ignored for example here keyword ends up called with two arguments and the line with only is totally ignored robotframework test cases example keyword argument another argument this different handling of lines with only causes problems for the new parsed in rf similarly as and it seems that the best way to handle this problem is making lines with only equivalent to lines with only a single empty value that won t affect the usage with documentation where this syntax currently is needed and thus actually used the change obviously affects the usage with keywords as in the second example above but because currently the syntax has no effect nobody should have any reasons to use it we are going to deprecate using without a meaning in rf and then change the behavior in rf ,0 155141,13612621601.0,IssuesEvent,2020-09-23 10:34:43,DigitalExcellence/dex-backend,https://api.github.com/repos/DigitalExcellence/dex-backend,opened,Research: Do we need Audit Logging for our applications?,documentation,"**Is your feature request related to a problem? 
Please describe.** Audit Logging can be important in an application. Both for security and debugging. **Describe the solution you'd like** Investigate what the benefits are of audit logging, what the drawbacks are. Investigate what kind of audit logging there is. Investigate what we could and what we should log for our use case. Investigate what the implications are in terms of privacy/gdpr. **Additional context** I'm especially interested in knowing how we should handle GDPR with this. I also think this will be most useful for the Identity Server to log when a user logs in, when they log out, when a role is changed. But also when a project is deleted or other distructive actions like that. ",1.0,"Research: Do we need Audit Logging for our applications? - **Is your feature request related to a problem? Please describe.** Audit Logging can be important in an application. Both for security and debugging. **Describe the solution you'd like** Investigate what the benefits are of audit logging, what the drawbacks are. Investigate what kind of audit logging there is. Investigate what we could and what we should log for our use case. Investigate what the implications are in terms of privacy/gdpr. **Additional context** I'm especially interested in knowing how we should handle GDPR with this. I also think this will be most useful for the Identity Server to log when a user logs in, when they log out, when a role is changed. But also when a project is deleted or other distructive actions like that. ",0,research do we need audit logging for our applications is your feature request related to a problem please describe audit logging can be important in an application both for security and debugging describe the solution you d like investigate what the benefits are of audit logging what the drawbacks are investigate what kind of audit logging there is investigate what we could and what we should log for our use case investigate what the implications are in terms of privacy gdpr additional context i m especially interested in knowing how we should handle gdpr with this i also think this will be most useful for the identity server to log when a user logs in when they log out when a role is changed but also when a project is deleted or other distructive actions like that ,0 5611,20196525016.0,IssuesEvent,2022-02-11 11:10:14,Music-Bot-for-Jitsi/Jimmi,https://api.github.com/repos/Music-Bot-for-Jitsi/Jimmi,opened,Setup CodeCov or Codeclimate,automation," > **As a** developer > **I want** my code to be checked automatically for test coverage on each push on main > **so that** I have clear metrics showing my code quality. ## Description: Setup CodeCov or Codeclimate for this repo using GitHub Actions. ### 🟢 In scope: ### 🔴 Not in scope: ## What should be the result? ",1.0,"Setup CodeCov or Codeclimate - > **As a** developer > **I want** my code to be checked automatically for test coverage on each push on main > **so that** I have clear metrics showing my code quality. ## Description: Setup CodeCov or Codeclimate for this repo using GitHub Actions. ### 🟢 In scope: ### 🔴 Not in scope: ## What should be the result? 
",1,setup codecov or codeclimate as a developer i want my code to be checked automatically for test coverage on each push on main so that i have clear metrics showing my code quality description setup codecov or codeclimate for this repo using github actions 🟢 in scope 🔴 not in scope what should be the result ,1 74573,25184650576.0,IssuesEvent,2022-11-11 16:43:40,idaholab/moose,https://api.github.com/repos/idaholab/moose,opened,Sibling transfer issues warning if multiapps are on different execute_ons,C: Framework T: defect P: normal,"## Bug Description When performing a sub-app to sub-app transfer, i.e. sibling transfer, where the multiapps have different `execute_on`s, it is impossible to avoid a warning stating that the transfer does not have the same `execute_on` flags as either multiapp. It is my opinion that the transfer should execute during the `to_multi_app` execution since sibling transfers are before multiapp execution. And there should only be a warning if it doesn't match the `to_multi_app` ## Steps to Reproduce Here is the main and sub app inputs recreating the issue: main.i: ``` [MultiApps] [sub1] type = FullSolveMultiApp input_files = sibling_sub.i execute_on = TIMESTEP_BEGIN cli_args = 'Variables/u/initial_condition=2' [] [sub2] type = FullSolveMultiApp input_files = sibling_sub.i execute_on = TIMESTEP_END [] [] [Transfers] [sibling_transfer] type = MultiAppCopyTransfer from_multi_app = sub1 to_multi_app = sub2 source_variable = u variable = u [] [] [Mesh] [min] type = GeneratedMeshGenerator dim = 1 nx = 1 [] [] [Problem] solve = false kernel_coverage_check = false skip_nl_system_check = true verbose_multiapps = true [] [Executioner] type = Steady [] ``` sibling_sub.i: ``` [Variables/u] [] [Postprocessors/avg_u] type = ElementAverageValue variable = u [] [Mesh] [min] type = GeneratedMeshGenerator dim = 1 nx = 1 [] [] [Problem] solve = false kernel_coverage_check = false skip_nl_system_check = true verbose_multiapps = true [] [Executioner] type = Steady [] ``` Without specifying the transfer's `execute_on` it seems to want to execute on every possible exec flag (`INITIAL`, `TIMESTEP_BEGIN`, `TIMESTEP_END`, `FINAL`) and it issues the warning: ``` *** Warning *** The following warning occurred in the object ""sibling_transfer"", of type ""MultiAppCopyTransfer"". MultiAppTransfer execute_on flags do not match associated to_multi_app execute_on flags ``` With `Transfers/sibling_transfer/execute_on=TIMESTEP_END` it again executes on every flag and issues the warning: ``` *** Warning *** The following warning occurred in the object ""sibling_transfer"", of type ""MultiAppCopyTransfer"". MultiAppTransfer execute_on flags do not match associated from_multi_app execute_on flags ``` Finally, with `Transfers/sibling_transfer/execute_on='TIMESTEP_BEGIN TIMESTEP_END'` it executes on every flag and issues the warning: ``` *** Warning *** The following warning occurred in the object ""sibling_transfer"", of type ""MultiAppCopyTransfer"". MultiAppTransfer execute_on flags do not match associated from_multi_app execute_on flags *** Warning *** The following warning occurred in the object ""sibling_transfer"", of type ""MultiAppCopyTransfer"". MultiAppTransfer execute_on flags do not match associated to_multi_app execute_on flags ``` To summarize: - Transfer wants to execute on every flag, no matter what. 
- No `execute_on`: warning for `to_multi_app` - `execute_on=TIMESTEP_END`: warning for `from_multi_app` - `execute_on='TIMESTEP_BEGIN TIMESTEP_END'`: warning for both `from_multi_app` and `to_multi_app` ## Impact Two major impacts: 1. Although it does not affect the answer, performing transfers on every exec flag could be costly for some transfers. 2. The unavoidable warning message is erroneous (IMO) and a test with this type of transfer will always fail because of the warning.",1.0,"Sibling transfer issues warning if multiapps are on different execute_ons - ## Bug Description When performing a sub-app to sub-app transfer, i.e. sibling transfer, where the multiapps have different `execute_on`s, it is impossible to avoid a warning stating that the transfer does not have the same `execute_on` flags as either multiapp. It is my opinion that the transfer should execute during the `to_multi_app` execution since sibling transfers are before multiapp execution. And there should only be a warning if it doesn't match the `to_multi_app` ## Steps to Reproduce Here is the main and sub app inputs recreating the issue: main.i: ``` [MultiApps] [sub1] type = FullSolveMultiApp input_files = sibling_sub.i execute_on = TIMESTEP_BEGIN cli_args = 'Variables/u/initial_condition=2' [] [sub2] type = FullSolveMultiApp input_files = sibling_sub.i execute_on = TIMESTEP_END [] [] [Transfers] [sibling_transfer] type = MultiAppCopyTransfer from_multi_app = sub1 to_multi_app = sub2 source_variable = u variable = u [] [] [Mesh] [min] type = GeneratedMeshGenerator dim = 1 nx = 1 [] [] [Problem] solve = false kernel_coverage_check = false skip_nl_system_check = true verbose_multiapps = true [] [Executioner] type = Steady [] ``` sibling_sub.i: ``` [Variables/u] [] [Postprocessors/avg_u] type = ElementAverageValue variable = u [] [Mesh] [min] type = GeneratedMeshGenerator dim = 1 nx = 1 [] [] [Problem] solve = false kernel_coverage_check = false skip_nl_system_check = true verbose_multiapps = true [] [Executioner] type = Steady [] ``` Without specifying the transfer's `execute_on` it seems to want to execute on every possible exec flag (`INITIAL`, `TIMESTEP_BEGIN`, `TIMESTEP_END`, `FINAL`) and it issues the warning: ``` *** Warning *** The following warning occurred in the object ""sibling_transfer"", of type ""MultiAppCopyTransfer"". MultiAppTransfer execute_on flags do not match associated to_multi_app execute_on flags ``` With `Transfers/sibling_transfer/execute_on=TIMESTEP_END` it again executes on every flag and issues the warning: ``` *** Warning *** The following warning occurred in the object ""sibling_transfer"", of type ""MultiAppCopyTransfer"". MultiAppTransfer execute_on flags do not match associated from_multi_app execute_on flags ``` Finally, with `Transfers/sibling_transfer/execute_on='TIMESTEP_BEGIN TIMESTEP_END'` it executes on every flag and issues the warning: ``` *** Warning *** The following warning occurred in the object ""sibling_transfer"", of type ""MultiAppCopyTransfer"". MultiAppTransfer execute_on flags do not match associated from_multi_app execute_on flags *** Warning *** The following warning occurred in the object ""sibling_transfer"", of type ""MultiAppCopyTransfer"". MultiAppTransfer execute_on flags do not match associated to_multi_app execute_on flags ``` To summarize: - Transfer wants to execute on every flag, no matter what. 
- No `execute_on`: warning for `to_multi_app` - `execute_on=TIMESTEP_END`: warning for `from_multi_app` - `execute_on='TIMESTEP_BEGIN TIMESTEP_END'`: warning for both `from_multi_app` and `to_multi_app` ## Impact Two major impacts: 1. Although it does not affect the answer, performing transfers on every exec flag could be costly for some transfers. 2. The unavoidable warning message is erroneous (IMO) and a test with this type of transfer will always fail because of the warning.",0,sibling transfer issues warning if multiapps are on different execute ons bug description when performing a sub app to sub app transfer i e sibling transfer where the multiapps have different execute on s it is impossible to avoid a warning stating that the transfer does not have the same execute on flags as either multiapp it is my opinion that the transfer should execute during the to multi app execution since sibling transfers are before multiapp execution and there should only be a warning if it doesn t match the to multi app steps to reproduce here is the main and sub app inputs recreating the issue main i type fullsolvemultiapp input files sibling sub i execute on timestep begin cli args variables u initial condition type fullsolvemultiapp input files sibling sub i execute on timestep end type multiappcopytransfer from multi app to multi app source variable u variable u type generatedmeshgenerator dim nx solve false kernel coverage check false skip nl system check true verbose multiapps true type steady sibling sub i type elementaveragevalue variable u type generatedmeshgenerator dim nx solve false kernel coverage check false skip nl system check true verbose multiapps true type steady without specifying the transfer s execute on it seems to want to execute on every possible exec flag initial timestep begin timestep end final and it issues the warning warning the following warning occurred in the object sibling transfer of type multiappcopytransfer multiapptransfer execute on flags do not match associated to multi app execute on flags with transfers sibling transfer execute on timestep end it again executes on every flag and issues the warning warning the following warning occurred in the object sibling transfer of type multiappcopytransfer multiapptransfer execute on flags do not match associated from multi app execute on flags finally with transfers sibling transfer execute on timestep begin timestep end it executes on every flag and issues the warning warning the following warning occurred in the object sibling transfer of type multiappcopytransfer multiapptransfer execute on flags do not match associated from multi app execute on flags warning the following warning occurred in the object sibling transfer of type multiappcopytransfer multiapptransfer execute on flags do not match associated to multi app execute on flags to summarize transfer wants to execute on every flag no matter what no execute on warning for to multi app execute on timestep end warning for from multi app execute on timestep begin timestep end warning for both from multi app and to multi app impact two major impacts although it does not affect the answer performing transfers on every exec flag could be costly for some transfers the unavoidable warning message is erroneous imo and a test with this type of transfer will always fail because of the warning ,0 445901,31334924305.0,IssuesEvent,2023-08-24 04:50:42,bghira/SimpleTuner,https://api.github.com/repos/bghira/SimpleTuner,closed,Optimizer: Adafactor support for consumer 
GPUs,documentation enhancement help wanted good first issue,"Khoya's recommended settings for training SDXL on a 24G GPU (eg. a 3090, 4090) rely on the use of the Adafactor optimizer. This is not currently supported by SimpleTuner. AC: * Add an option for the use of Adafactor instead of AdamW, AdamW8Bit, Dadapt",1.0,"Optimizer: Adafactor support for consumer GPUs - Khoya's recommended settings for training SDXL on a 24G GPU (eg. a 3090, 4090) rely on the use of the Adafactor optimizer. This is not currently supported by SimpleTuner. AC: * Add an option for the use of Adafactor instead of AdamW, AdamW8Bit, Dadapt",0,optimizer adafactor support for consumer gpus khoya s recommended settings for training sdxl on a gpu eg a rely on the use of the adafactor optimizer this is not currently supported by simpletuner ac add an option for the use of adafactor instead of adamw dadapt,0 4836,17693701785.0,IssuesEvent,2021-08-24 13:10:33,CDCgov/prime-field-teams,https://api.github.com/repos/CDCgov/prime-field-teams,reopened,Solution Testing - CSV File Cleanup VBScript,sender-automation,"**Description:** Maybe an additional parameter to indicate what to do with a file in the Output folder/Arg#1 that already exists in /Processed Folder. -Scenario #1 - The program exporting the CSV file always names the file the same thing, so if the file exists in /Processed, we must append (1), (2), etc to the fileBase and then process the file. The MoveFile function's target must be modified to include the new fileBase + "".csv"" Scenario #2 - If the CSV file always has a unique file name, and if it already exists in /Processed, we do not want to process it. We must skip it. ",1.0,"Solution Testing - CSV File Cleanup VBScript - **Description:** Maybe an additional parameter to indicate what to do with a file in the Output folder/Arg#1 that already exists in /Processed Folder. -Scenario #1 - The program exporting the CSV file always names the file the same thing, so if the file exists in /Processed, we must append (1), (2), etc to the fileBase and then process the file. The MoveFile function's target must be modified to include the new fileBase + "".csv"" Scenario #2 - If the CSV file always has a unique file name, and if it already exists in /Processed, we do not want to process it. We must skip it. 
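The CDCgov row above describes two collision policies for the /Processed folder but includes no code. As a hedged illustration only, here is that decision logic sketched in Python (the original solution is VBScript; the function name and `unique_names` flag are hypothetical):

```python
import os
import shutil

def move_to_processed(src_path: str, processed_dir: str, unique_names: bool) -> str | None:
    """Move a CSV from the output folder (Arg#1) into /Processed.

    unique_names=False -> scenario #1: append (1), (2), ... to the file base.
    unique_names=True  -> scenario #2: skip files that were already processed.
    """
    base = os.path.basename(src_path)
    target = os.path.join(processed_dir, base)
    if os.path.exists(target):
        if unique_names:
            return None  # already processed once, so skip it
        stem, ext = os.path.splitext(base)  # ext keeps ".csv"
        n = 1
        while os.path.exists(target):
            target = os.path.join(processed_dir, f"{stem} ({n}){ext}")
            n += 1
    shutil.move(src_path, target)
    return target
```

A caller processing the Output folder would invoke this once per exported file and only parse the file when a non-None target comes back.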
",1,solution testing csv file cleanup vbscript description maybe an additional parameter to indicate what to do with a file in the output folder arg that already exists in processed folder scenario the program exporting the csv file always names the file the same thing so if the file exists in processed we must append etc to the filebase and then process the file the movefile function s target must be modified to include the new filebase csv scenario if the csv file always has a unique file name and if it already exists in processed we do not want to process it we must skip it ,1 8364,26799174301.0,IssuesEvent,2023-02-01 14:02:26,quarkusio/quarkus,https://api.github.com/repos/quarkusio/quarkus,closed,`cancel-previous-runs` GH action is still using deprecated Node.js 12 (only in forked repos),area/housekeeping area/infra-automation,"### Description ![screenshot](https://user-images.githubusercontent.com/22860528/215613294-f868d148-7bbc-4f7c-8e22-f40b4374e8fd.png) https://github.com/famod/quarkus/actions/runs/4047994738 We only use this action in foked repos: https://github.com/quarkusio/quarkus/blob/2.16.0.Final/.github/workflows/ci-actions-incremental.yml#L123-L125 ### Implementation ideas - update the action (hi @n1hility 😉) - or find a replacement - or drop it without a replacement (devs need to cancel the run themselves...meh!)",1.0,"`cancel-previous-runs` GH action is still using deprecated Node.js 12 (only in forked repos) - ### Description ![screenshot](https://user-images.githubusercontent.com/22860528/215613294-f868d148-7bbc-4f7c-8e22-f40b4374e8fd.png) https://github.com/famod/quarkus/actions/runs/4047994738 We only use this action in foked repos: https://github.com/quarkusio/quarkus/blob/2.16.0.Final/.github/workflows/ci-actions-incremental.yml#L123-L125 ### Implementation ideas - update the action (hi @n1hility 😉) - or find a replacement - or drop it without a replacement (devs need to cancel the run themselves...meh!)",1, cancel previous runs gh action is still using deprecated node js only in forked repos description we only use this action in foked repos implementation ideas update the action hi 😉 or find a replacement or drop it without a replacement devs need to cancel the run themselves meh ,1 5998,21866716926.0,IssuesEvent,2022-05-19 00:13:23,Studio-Ops-Org/Studio-2022-S1-Repo,https://api.github.com/repos/Studio-Ops-Org/Studio-2022-S1-Repo,opened,ARM Template w CSV,Automation,Rob would like a version of the ARM template for OE1/OSC that creates student users based on a CSV file. The CSV file will hold their username and password. Talk to Rob for any further specifications. ,1.0,ARM Template w CSV - Rob would like a version of the ARM template for OE1/OSC that creates student users based on a CSV file. The CSV file will hold their username and password. Talk to Rob for any further specifications. ,1,arm template w csv rob would like a version of the arm template for osc that creates student users based on a csv file the csv file will hold their username and password talk to rob for any further specifications ,1 4024,15185279747.0,IssuesEvent,2021-02-15 10:43:13,elastic/apm-server,https://api.github.com/repos/elastic/apm-server,closed,[CI] Package stage fails,automation bug ci team:automation,"after https://github.com/elastic/apm-server/pull/4695 the stage package is failing, we have to investigate why. ``` ./.ci/scripts/package-docker-snapshot.sh e2326063d084e3a019e77734a7f17c32616e4b32 docker.elastic.co/observability-ci/apm-server ... 
[2021-02-09T02:47:36.777Z] 2021/02/09 02:47:36 exec: go list -m [2021-02-09T02:51:43.550Z] >> package: Building apm-server type=docker for platform=linux/amd64 [2021-02-09T02:51:43.551Z] >> package: Building apm-server-oss type=deb for platform=linux/amd64 [2021-02-09T02:51:43.551Z] >> package: Building apm-server-oss type=tar.gz for platform=linux/amd64 [2021-02-09T02:51:43.551Z] >> package: Building apm-server-oss type=rpm for platform=linux/amd64 [2021-02-09T02:51:46.122Z] >> package: Building apm-server type=tar.gz for platform=linux/amd64 [2021-02-09T02:51:46.701Z] Doing `require 'backports'` is deprecated and will not load any backport in the next major release. [2021-02-09T02:51:46.701Z] Require just the needed backports instead, or 'backports/latest'. [2021-02-09T02:51:46.966Z] Doing `require 'backports'` is deprecated and will not load any backport in the next major release. [2021-02-09T02:51:46.967Z] Require just the needed backports instead, or 'backports/latest'. [2021-02-09T02:51:57.029Z] >> package: Building apm-server-oss type=docker for platform=linux/amd64 [2021-02-09T02:51:57.029Z] >> package: Building apm-server type=deb for platform=linux/amd64 [2021-02-09T02:52:09.373Z] Doing `require 'backports'` is deprecated and will not load any backport in the next major release. [2021-02-09T02:52:09.373Z] Require just the needed backports instead, or 'backports/latest'. [2021-02-09T02:52:11.308Z] >> package: Building apm-server type=rpm for platform=linux/amd64 [2021-02-09T02:52:23.607Z] Doing `require 'backports'` is deprecated and will not load any backport in the next major release. [2021-02-09T02:52:23.607Z] Require just the needed backports instead, or 'backports/latest'. [2021-02-09T02:52:50.305Z] >> Testing package contents [2021-02-09T02:53:12.310Z] package ran for 5m44.920545464s [2021-02-09T02:53:12.310Z] INFO: Get the just built docker image [2021-02-09T02:53:12.310Z] INFO: Retag docker image (docker.elastic.co/apm/apm-server:) [2021-02-09T02:53:12.310Z] Error parsing reference: ""docker.elastic.co/apm/apm-server:"" is not a valid repository/tag: invalid reference format ```",2.0,"[CI] Package stage fails - after https://github.com/elastic/apm-server/pull/4695 the stage package is failing, we have to investigate why. ``` ./.ci/scripts/package-docker-snapshot.sh e2326063d084e3a019e77734a7f17c32616e4b32 docker.elastic.co/observability-ci/apm-server ... [2021-02-09T02:47:36.777Z] 2021/02/09 02:47:36 exec: go list -m [2021-02-09T02:51:43.550Z] >> package: Building apm-server type=docker for platform=linux/amd64 [2021-02-09T02:51:43.551Z] >> package: Building apm-server-oss type=deb for platform=linux/amd64 [2021-02-09T02:51:43.551Z] >> package: Building apm-server-oss type=tar.gz for platform=linux/amd64 [2021-02-09T02:51:43.551Z] >> package: Building apm-server-oss type=rpm for platform=linux/amd64 [2021-02-09T02:51:46.122Z] >> package: Building apm-server type=tar.gz for platform=linux/amd64 [2021-02-09T02:51:46.701Z] Doing `require 'backports'` is deprecated and will not load any backport in the next major release. [2021-02-09T02:51:46.701Z] Require just the needed backports instead, or 'backports/latest'. [2021-02-09T02:51:46.966Z] Doing `require 'backports'` is deprecated and will not load any backport in the next major release. [2021-02-09T02:51:46.967Z] Require just the needed backports instead, or 'backports/latest'. 
[2021-02-09T02:51:57.029Z] >> package: Building apm-server-oss type=docker for platform=linux/amd64 [2021-02-09T02:51:57.029Z] >> package: Building apm-server type=deb for platform=linux/amd64 [2021-02-09T02:52:09.373Z] Doing `require 'backports'` is deprecated and will not load any backport in the next major release. [2021-02-09T02:52:09.373Z] Require just the needed backports instead, or 'backports/latest'. [2021-02-09T02:52:11.308Z] >> package: Building apm-server type=rpm for platform=linux/amd64 [2021-02-09T02:52:23.607Z] Doing `require 'backports'` is deprecated and will not load any backport in the next major release. [2021-02-09T02:52:23.607Z] Require just the needed backports instead, or 'backports/latest'. [2021-02-09T02:52:50.305Z] >> Testing package contents [2021-02-09T02:53:12.310Z] package ran for 5m44.920545464s [2021-02-09T02:53:12.310Z] INFO: Get the just built docker image [2021-02-09T02:53:12.310Z] INFO: Retag docker image (docker.elastic.co/apm/apm-server:) [2021-02-09T02:53:12.310Z] Error parsing reference: ""docker.elastic.co/apm/apm-server:"" is not a valid repository/tag: invalid reference format ```",1, package stage fails after the stage package is failing we have to investigate why ci scripts package docker snapshot sh docker elastic co observability ci apm server exec go list m package building apm server type docker for platform linux package building apm server oss type deb for platform linux package building apm server oss type tar gz for platform linux package building apm server oss type rpm for platform linux package building apm server type tar gz for platform linux doing require backports is deprecated and will not load any backport in the next major release require just the needed backports instead or backports latest doing require backports is deprecated and will not load any backport in the next major release require just the needed backports instead or backports latest package building apm server oss type docker for platform linux package building apm server type deb for platform linux doing require backports is deprecated and will not load any backport in the next major release require just the needed backports instead or backports latest package building apm server type rpm for platform linux doing require backports is deprecated and will not load any backport in the next major release require just the needed backports instead or backports latest testing package contents package ran for info get the just built docker image info retag docker image docker elastic co apm apm server error parsing reference docker elastic co apm apm server is not a valid repository tag invalid reference format ,1 51852,12819860741.0,IssuesEvent,2020-07-06 03:46:45,ballerina-platform/ballerina-lang,https://api.github.com/repos/ballerina-platform/ballerina-lang,closed,ballerina: unknown command 'dist',Area/BuildTools,"os:win10 why??? ``` ballerina -v jBallerina 1.2.2 Language specification 2020R1 ``` ``` ballerina dist update ballerina: unknown command 'dist' Run 'ballerina help' for usage. ```",1.0,"ballerina: unknown command 'dist' - os:win10 why??? ``` ballerina -v jBallerina 1.2.2 Language specification 2020R1 ``` ``` ballerina dist update ballerina: unknown command 'dist' Run 'ballerina help' for usage. 
 ```",0,ballerina unknown command dist os why ballerina v jballerina language specification ballerina dist update ballerina unknown command dist run ballerina help for usage ,0 382565,26504553097.0,IssuesEvent,2023-01-18 12:55:31,activepieces/activepieces,https://api.github.com/repos/activepieces/activepieces,closed,Pieces Framework Reference,documentation,"## Table of Contents - Property Types - Action Reference - Trigger Reference - How to Create Triggers and Types - Basic explanation of OAuth2 ",1.0,"Pieces Framework Reference - ## Table of Contents - Property Types - Action Reference - Trigger Reference - How to Create Triggers and Types - Basic explanation of OAuth2 ",0,pieces framework reference table of contents property types action reference trigger reference how to create triggers and types basic explanation of ,0 325783,24061352116.0,IssuesEvent,2022-09-16 23:23:39,dafny-lang/compiler-bootstrap,https://api.github.com/repos/dafny-lang/compiler-bootstrap,closed,Add documentation for auditor,documentation,"To document the use of the auditor tool, let's add a `README.rst` in the `src/Tools/Auditor` directory with a link from the top-level `README.rst`.",1.0,"Add documentation for auditor - To document the use of the auditor tool, let's add a `README.rst` in the `src/Tools/Auditor` directory with a link from the top-level `README.rst`.",0,add documentation for auditor to document the use of the auditor tool let s add a readme rst in the src tools auditor directory with a link from the top level readme rst ,0 6709,23770460387.0,IssuesEvent,2022-09-01 15:52:53,kedacore/keda,https://api.github.com/repos/kedacore/keda,closed,Use re-usable workflows for GitHub Actions,enhancement help wanted cant-touch-this automation,"Use re-usable workflows for GitHub Actions to remove the duplication that we have, for example for [doing e2e-tests](https://github.com/kedacore/keda/pull/2568).",1.0,"Use re-usable workflows for GitHub Actions - Use re-usable workflows for GitHub Actions to remove the duplication that we have, for example for [doing e2e-tests](https://github.com/kedacore/keda/pull/2568).",1,use re usable workflows for github actions use re usable workflows for github actions to remove the duplication that we have for example for ,1 2323,11770750916.0,IssuesEvent,2020-03-15 20:38:04,matchID-project/deces-ui,https://api.github.com/repos/matchID-project/deces-ui,closed,Change method for versioning docker image using pertinent code only and tagging versions with branch,Automation,"APP_VERSION should only consider a hash of - js code - docker environment - maybe Makefile But must exclude travis and branch merges for example - an already built and published docker image should not be rebuilt nor republished. Explore it using something like: git describe $(git log -n 1 --format=%H -- path/to/subfolder). Moreover a docker tag should be put after successful tests only, as a proof of good deployability. 
A docker version would then just be built once, reducing each merging process by about 1""30. PRs could test locally and publish versions so that devs focus on remote deployment. Maybe 1 min less for local testing in dev and master branches. Finally - a rebuild of a version in travis for dev and master branches should consider caching on a successful build, like the docker image, and skip directly to deployment for acceleration, then publish tags. A dev merge could drop to 4 minutes, and a master to 7 minutes. All this may include most changes in matchid/tools ",1.0,"Change method for versioning docker image using pertinent code only and tagging versions with branch - APP_VERSION should only consider a hash of - js code - docker environment - maybe Makefile But must exclude travis and branch merges for example - an already built and published docker image should not be rebuilt nor republished. Explore it using something like: git describe $(git log -n 1 --format=%H -- path/to/subfolder). Moreover a docker tag should be put after successful tests only, as a proof of good deployability. A docker version would then just be built once, reducing each merging process by about 1""30. PRs could test locally and publish versions so that devs focus on remote deployment. Maybe 1 min less for local testing in dev and master branches. Finally - a rebuild of a version in travis for dev and master branches should consider caching on a successful build, like the docker image, and skip directly to deployment for acceleration, then publish tags. A dev merge could drop to 4 minutes, and a master to 7 minutes. All this may include most changes in matchid/tools ",1,change method for versioning docker image using pertinent code only and tagging versions with branch app version should only consider a hash of js code docker environment maybe makefile but must exclude travis and branch merges for example an already built and published docker image should not be rebuilt nor republished explore it using something like git describe git log n format h path to subfolder moreover a docker tag should be put after successful tests only as a proof of good deployability a docker version would then just be built once reducing each merging process by about prs could test locally and publish versions so that devs focus on remote deployment maybe min less for local testing in dev and master branches finally a rebuild of a version in travis for dev and master branches should consider caching on a successful build like the docker image and skip directly to deployment for acceleration then publish tags a dev merge could drop to minutes and a master to minutes all this may include most changes in matchid tools ,1 4727,17359791902.0,IssuesEvent,2021-07-29 18:52:26,BCDevOps/developer-experience,https://api.github.com/repos/BCDevOps/developer-experience,opened,Create a RocketChat channel for Advanced Solution's incident ticket. - RC- ServiceNow integration - ,enhancement env/prod rocketchat team/DXC tech/automation,"**Describe the issue** Currently `# testing-for-integration` is set up for RC-SNOW integration testing. This will move to a prod channel once the information below is provided. 
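Returning to the matchID-project row above: the request is that APP_VERSION hash only the pertinent code, so an already-published image is never rebuilt or republished. A minimal sketch of that idea, assuming a `TRACKED` path list and sha256 (both assumptions, not the project's actual implementation):

```python
import hashlib
from pathlib import Path

# Only these paths should influence the version; CI config and merge
# commits are deliberately excluded. The list itself is an assumption.
TRACKED = ["src", "Dockerfile", "Makefile"]

def app_version(repo_root: str = ".") -> str:
    """Hash only the tracked files, so the tag changes iff they do."""
    h = hashlib.sha256()
    for name in TRACKED:
        p = Path(repo_root) / name
        files = sorted(p.rglob("*")) if p.is_dir() else [p]
        for f in files:
            if f.is_file():
                h.update(f.as_posix().encode())  # the path matters, not just the bytes
                h.update(f.read_bytes())
    return h.hexdigest()[:12]
```

Before building, a CI job could check whether `registry/image:<app_version()>` already exists and skip both the build and the push when it does, which is the behavior the issue asks for.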
**Definition of done** - [ ] new channel name - [ ] list of people who can access the channel - [ ] set up the channel using integration scripts **Additional context** - Test integration channel: https://chat.developer.gov.bc.ca/group/testing-for-integration - Document for [Integrate RocketChat to AdvSol ServiceNow](https://github.com/bcgov-c/platform-ops/tree/ocp4-base/tools/rocketchat-servicenow#integrate-rocketchat-to-advsol-servicenow) ",1.0,"Create a RocketChat channel for Advanced Solution's incident ticket. - RC- ServiceNow integration - - **Describe the issue** Currently `# testing-for-integration` is set up for RC-SNOW integration testing. This will move to a prod channel once the information below is provided. **Definition of done** - [ ] new channel name - [ ] list of people who can access the channel - [ ] set up the channel using integration scripts **Additional context** - Test integration channel: https://chat.developer.gov.bc.ca/group/testing-for-integration - Document for [Integrate RocketChat to AdvSol ServiceNow](https://github.com/bcgov-c/platform-ops/tree/ocp4-base/tools/rocketchat-servicenow#integrate-rocketchat-to-advsol-servicenow) ",1,create a rocketchat channel for advanced solution s incident ticket rc servicenow integration describe the issue currently testing for integration is set up for rc snow integration testing this will move to a prod channel once the information below is provided definition of done new channel name list of people who can access the channel set up the channel using integration scripts additional context test integration channel document for ,1 179853,6630553067.0,IssuesEvent,2017-09-25 00:07:19,FACG2/wrap,https://api.github.com/repos/FACG2/wrap,closed,database: change state name in state table to unique,bug priority-3 technical,database: change state name in state table to unique,1.0,database: change state name in state table to unique - database: change state name in state table to unique,0,database change state name in state table to unique database change state name in state table to unique,0 721946,24844385618.0,IssuesEvent,2022-10-26 14:52:07,AY2223S1-CS2103T-T12-1/tp,https://api.github.com/repos/AY2223S1-CS2103T-T12-1/tp,closed,"As a new TA and new user to the system, I can look up the user guide to better understand what I can do with the system",type.Story priority.HIGH,...so that I can make the most out of the system to ease my TA experience.,1.0,"As a new TA and new user to the system, I can look up the user guide to better understand what I can do with the system - ...so that I can make the most out of the system to ease my TA experience.",0,as a new ta and new user to the system i can look up the user guide to better understand what i can do with the system so that i can make the most out of the system to ease my ta experience ,0 25123,4146856586.0,IssuesEvent,2016-06-15 02:47:31,steedos/apps,https://api.github.com/repos/steedos/apps,closed,Create a new form; after editing the form content and evaluating the condition the next step name changes but the handler does not. The approval node's position handler is displayed automatically; but the fill-in node's handler attribute is a person designated at approval time.,fix:Done test:OK type:bug,"Next step: approval ![image](https://cloud.githubusercontent.com/assets/15027092/16005948/b6d38120-319c-11e6-8900-e7b52d26ca01.png) Next step: fill-in ![image](https://cloud.githubusercontent.com/assets/15027092/16005967/c829bd72-319c-11e6-964a-e0774198858b.png) ",1.0,"Create a new form; after editing the form content and evaluating the condition the next step name changes but the handler does not. The approval node's position handler is displayed automatically; but the fill-in node's handler attribute is a person designated at approval time. - Next step: approval ![image](https://cloud.githubusercontent.com/assets/15027092/16005948/b6d38120-319c-11e6-8900-e7b52d26ca01.png) Next step: fill-in 
![image](https://cloud.githubusercontent.com/assets/15027092/16005967/c829bd72-319c-11e6-964a-e0774198858b.png) ",0,create a new form after editing the form content and evaluating the condition the next step name changes but the handler does not the approval node position handler is displayed automatically but the fill in node handler attribute is a person designated at approval time next step approval next step fill in ,0 7783,25599037493.0,IssuesEvent,2022-12-01 18:28:50,shellebusch2/DSAllo,https://api.github.com/repos/shellebusch2/DSAllo,closed,Refactor automation solution in Make,enhancement feature-automation,"There needs to be a more efficient way to automate form fields — one that does not use manually-entered if statements and one that can dynamically scale without any manual adjustments when new entries are created on config databases (i.e. ALLO opens a new market) Brian mentioned a solution that involved - pulling entries from config database - putting them in an array - calling that array when mapping/automating make field in the complaint database Although further discussion needs to be had with Brian. ",1.0,"Refactor automation solution in Make - There needs to be a more efficient way to automate form fields — one that does not use manually-entered if statements and one that can dynamically scale without any manual adjustments when new entries are created on config databases (i.e. ALLO opens a new market) Brian mentioned a solution that involved - pulling entries from config database - putting them in an array - calling that array when mapping/automating make field in the complaint database Although further discussion needs to be had with Brian. ",1,refactor automation solution in make there needs to be a more efficient way to automate form fields — one that does not use manually entered if statements and one that can dynamically scale without any manual adjustments when new entries are created on config databases i e allo opens a new market brian mentioned a solution that involved pulling entries from config database putting them in an array calling that array when mapping automating make field in the complaint database although further discussion needs to be had with brian ,1 8024,26125205743.0,IssuesEvent,2022-12-28 17:30:37,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,Cypress Test -Keycloak Migration,automation,"- [ ] Update existing test as per changes to identify user (email id instead of user name) - [ ] Create scenario for user migration 1. Assign Access to existing user Spec 1.1 authenticates Janis (api owner) 1.2 Navigate to Namespace Access Page 1.3 Grant namespace access to Old User 2. Authenticate with old user to initiate migration 2.1 authenticates with old user 3. Verify that permission of old user is migrated to new user 3.1 authenticates with new user 3.2 Get the permission of the user 3.3 Verify that new user scopes are same as permissions given to old users 4. Verify that old user is no longer able to sign in 4.1 authenticates with old user 4.2 Verify that user account is disabled ",1.0,"Cypress Test -Keycloak Migration - - [ ] Update existing test as per changes to identify user (email id instead of user name) - [ ] Create scenario for user migration 1. Assign Access to existing user Spec 1.1 authenticates Janis (api owner) 1.2 Navigate to Namespace Access Page 1.3 Grant namespace access to Old User 2. Authenticate with old user to initiate migration 2.1 authenticates with old user 3. Verify that permission of old user is migrated to new user 3.1 authenticates with new user 3.2 Get the permission of the user 3.3 Verify that new user scopes are same as permissions given to old users 4. 
Verify that old user is no longer able to sign in 4.1 authenticates with old user 4.2 Verify that user account is disabled ",1,cypress test keycloak migration update existing test as per changes to identify user email id instead of user name create scenario for user migration assign access to existing user spec authenticates janis api owner navigate to namespace access page grant namespace access to old user authernticate with old user to initiate migration authenticates with old user verify that permission of old user is migrated to new user authenticates with new user get the permission of the user verify that new user scopes are same as permissions given to old users verify that old user is no longer able to sign in authenticates with old user verify that user account is disabled ,1 163452,6198589631.0,IssuesEvent,2017-07-05 19:31:51,GoogleCloudPlatform/google-cloud-node,https://api.github.com/repos/GoogleCloudPlatform/google-cloud-node,closed,spanner: Unable to insert multiple rows in a single table.insert/transaction.insert call when some rows lack a value for nullable columns,api: spanner priority: p0 type: bug,"In [transaction-request.js](https://github.com/GoogleCloudPlatform/google-cloud-node/blob/ca0d96cb94f7ef4e176edf221f9ee154dcf8c850/packages/spanner/src/transaction-request.js#L687-L695), there is the following (L687-695): ```javascript mutation[method] = { table: table, columns: Object.keys(keyVals[0]), values: keyVals.map(function(keyVal) { return Object.keys(keyVal).map(function(key) { return codec.encode(keyVal[key]); }); }) }; ``` There are at least two bugs here: 1. When an array is provided to table.insert(), if some objects do not have a given key (which is perfectly valid if the column said key corresponds to is nullable), the array within `values` for this object will not be the same length as the columns array, and indices in `columns` will no longer correspond to the correct value in the `values` array for this row/object. This results in (best case) misleading error messages from Spanner about incorrect data types. Worse still, if it so happens that the value in the `values` array for this row and incorrect column index is the same type as the value for the correct column index, the insert request will succeed, and end up storing data in the wrong column entirely. 2. If a consumer of this library works around the issue above by populating missing nullable keys with `null`, as the code above relies on the order of values returned by Object.keys, the newly added values will be at the end of the array returned by Object.keys (which returns keys in insertion order), and again, the indices between the `columns` array, and the `values` array for affected rows will no longer align, with the same end result as what occurs with issue 1 above. This could be fixed by iterating over all rows (rather than naively assuming all rows have identical keys to the first row) and collecting all unique keys into a single array which is subsequently sorted lexically, to build the value for `columns`. 
Likewise, the value for `values` would be constructed by mapping over `columns` (rather than `Object.keys(keyVal)`) and pulling out the corresponding value from `keyVal` to pass to `codec.encode`.",1.0,"spanner: Unable to insert multiple rows in a single table.insert/transaction.insert call when some rows lack a value for nullable columns - In [transaction-request.js](https://github.com/GoogleCloudPlatform/google-cloud-node/blob/ca0d96cb94f7ef4e176edf221f9ee154dcf8c850/packages/spanner/src/transaction-request.js#L687-L695), there is the following (L687-695): ```javascript mutation[method] = { table: table, columns: Object.keys(keyVals[0]), values: keyVals.map(function(keyVal) { return Object.keys(keyVal).map(function(key) { return codec.encode(keyVal[key]); }); }) }; ``` There are at least two bugs here: 1. When an array is provided to table.insert(), if some objects do not have a given key (which is perfectly valid if the column said key corresponds to is nullable), the array within `values` for this object will not be the same length as the columns array, and indices in `columns` will no longer correspond to the correct value in the `values` array for this row/object. This results in (best case) misleading error messages from Spanner about incorrect data types. Worse still, if it so happens that the value in the `values` array for this row and incorrect column index is the same type as the value for the correct column index, the insert request will succeed, and end up storing data in the wrong column entirely. 2. If a consumer of this library works around the issue above by populating missing nullable keys with `null`, as the code above relies on the order of values returned by Object.keys, the newly added values will be at the end of the array returned by Object.keys (which returns keys in insertion order), and again, the indices between the `columns` array, and the `values` array for affected rows will no longer align, with the same end result as what occurs with issue 1 above. This could be fixed by iterating over all rows (rather than naively assuming all rows have identical keys to the first row) and collecting all unique keys into a single array which is subsequently sorted lexically, to build the value for `columns`. 
Likewise, the value for `values` would be constructed by mapping over `columns` (rather than `Object.keys(keyVal)`) and pulling out the corresponding value from `keyVal` to pass to `codec.encode`.",0,spanner unable to insert multiple rows in a single table insert transaction insert call when some rows lack a value for nullable columns in there is the following javascript mutation table table columns object keys keyvals values keyvals map function keyval return object keys keyval map function key return codec encode keyval there are at least two bugs here when an array is provided to table insert if some objects do not have a given key which is perfectly valid if the column said key corresponds to is nullable the array within values for this object will not be the same length as the columns array and indices in columns will no longer correspond to the correct value in the values array for this row object this results in best case misleading error messages from spanner about incorrect data types worse still if it so happens that the value in the values array for this row and incorrect column index is the same type as the value for the correct column index the insert request will succeed and end up storing data in the wrong column entirely if a consumer of this library works around the issue above by populating missing nullable keys with null as the code above relies on the order of values returned by object keys the newly added values will be at the end of the array returned by object keys which returns keys in insertion order and again the indices between the columns array and the values array for affected rows will no longer align with the same end result as what occurs with issue above this could be fixed by iterating over all rows rather than naively assuming all rows have identical keys to the first row and collecting all unique keys into a single array which is subsequently sorted lexically to build the value for columns likewise the value for values would be constructed by mapping over columns rather than object keys keyval and pulling out the corresponding value from keyval to pass to codec encode ,0 8068,26149082932.0,IssuesEvent,2022-12-30 10:38:02,dyne/frei0r,https://api.github.com/repos/dyne/frei0r,opened,Plugin fuzzing test,automation,"Include a test for fuzzing the input of plugins and loading them one-by-one with a test video source and changing every parameter with all possible values (perhaps also beyond the defined scope) to test their stability. ## Links - [Info on fuzzing](https://www.code-intelligence.com/blog/secure-coding-cpp-using-fuzzing) - Best fuzzing tool for our use case may be [cifuzz](https://github.com/CodeIntelligenceTesting/cifuzz)",1.0,"Plugin fuzzing test - Include a test for fuzzing the input of plugins and loading them one-by-one with a test video source and changing every parameter with all possible values (perhaps also beyond the defined scope) to test their stability. 
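Stepping back to the google-cloud-node spanner report above: the fix it describes (collect the union of keys across all rows, sort lexically, then map every row over that shared column list) can be sketched directly. The original code is JavaScript; this is an illustrative Python rendering in which `encode` merely stands in for `codec.encode`:

```python
def build_mutation(method: str, table: str, rows: list[dict], encode=lambda v: v) -> dict:
    """Build one mutation whose value rows all align with a shared column list."""
    # Union of keys across *all* rows (not just rows[0]), sorted lexically
    # so the column order is deterministic.
    columns = sorted({key for row in rows for key in row})
    return {
        method: {
            "table": table,
            "columns": columns,
            # Map each row over `columns` rather than over its own keys, so
            # indices always line up; absent nullable fields become None.
            "values": [[encode(row.get(col)) for col in columns] for row in rows],
        }
    }
```

With this shape, a row that omits a nullable column still produces a value list of the same length as `columns`, which removes both failure modes described in the report.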
## Links - [Info on fuzzing](https://www.code-intelligence.com/blog/secure-coding-cpp-using-fuzzing) - Best fuzzing tool for our use case may be [cifuzz](https://github.com/CodeIntelligenceTesting/cifuzz)",1,plugin fuzzing test include a test for fuzzing the input of plugins and loading them one by one with a test video source and changing every parameter with all possible values perhaps also beyond the defined scope to test their stability links best fuzzing tool for our use case may be ,1 7686,25451838388.0,IssuesEvent,2022-11-24 11:03:49,Budibase/budibase,https://api.github.com/repos/Budibase/budibase,closed,Query Rows does not work with bindings when using a MySQL datasource,bug binding automations sev2 - severe filtering,"**Hosting** - Self - Method: BudiCLI - Budibase Version: 1.0.178 - App Version: 1.0.178 **Describe the bug** Query Rows does not work with bindings when using a MySQL datasource. Instead it returns all rows until the row limit is hit. **To Reproduce** Steps to reproduce the behavior: 1. Create a new app with a MySQL data source 2. Create two tables Record and RecordType 3. Create two columns in RecordType: Type/Text , Value/Number 4. Create at least two rows in the with both columns filled with different values in the RecordType table 5. Create the following relationsip between the tables: One RecordType row > many Record rows 6. Create a column in the Record table called Value and make it a number type 7. Create an automation to launch when creating a row 8. Create a Query Rows step in the automation to query the RecordType table and filter it by the foreign key from the trigger row 9. Create an Update Rows step in the automation to update the Record Table 10. Set the RecordType field to `return $(""trigger.row.RecordType"");` 11. Set the Fk_RecordType_Record field to `return $(""trigger.row.fk_RecordType_Record"");` 12. Set the Value field to `return $(""steps.1.rows.0.Value"");` 13. Set the RowID field to `return $(""trigger.id"");` 14. Press Run test at the top of the automation page and select anything but the first record in the RecordType table created earlier 15. The first value will have been pulled through instead of the correct value. 
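The dyne/frei0r row above asks for a fuzzing pass that loads each plugin and sweeps every parameter, including values beyond the defined range. A rough sketch of that sweep, assuming a hypothetical `plugin.set_param`/`plugin.process` interface (cifuzz, which the issue links, would replace this in practice):

```python
import random

def fuzz_plugin(plugin, params: dict, frames, runs: int = 1000) -> None:
    """Drive one plugin with randomized parameter values.

    `params` maps parameter names to (lo, hi) ranges; values are drawn
    slightly outside the range on purpose to probe stability.
    """
    for _ in range(runs):
        for name, (lo, hi) in params.items():
            span = hi - lo
            # 20% headroom beyond the documented range
            value = random.uniform(lo - 0.2 * span, hi + 0.2 * span)
            plugin.set_param(name, value)  # hypothetical interface
        for frame in frames:
            plugin.process(frame)  # should never crash or hang
```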
Query Row Input: ` { ""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__RecordType"", ""filters"": { ""string"": {}, ""fuzzy"": {}, ""range"": {}, ""equal"": { ""id"": null }, ""notEqual"": {}, ""empty"": {}, ""notEmpty"": {}, ""contains"": {}, ""notContains"": {} }, ""filters-def"": [ { ""id"": ""Aa5fR_PiF"", ""field"": ""id"", ""operator"": ""equal"", ""value"": ""{{ js \""cmV0dXJuICQoInRyaWdnZXIucm93LmZrX1JlY29yZFR5cGVfUmVjb3JkIik7\"" }}"", ""valueType"": ""Binding"", ""type"": ""number"" } ], ""sortColumn"": ""id"", ""sortOrder"": ""ascending"", ""limit"": ""50"" } ` Query Row Output: ` { ""rows"": [ { ""id"": 1, ""Type"": ""Type1"", ""Value"": 1, ""_id"": ""%5B1%5D"", ""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__RecordType"", ""_rev"": ""rev"" }, { ""id"": 2, ""Type"": ""Type2"", ""Value"": 2, ""_id"": ""%5B2%5D"", ""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__RecordType"", ""_rev"": ""rev"" }, { ""id"": 3, ""Type"": ""Type3"", ""Value"": 3, ""_id"": ""%5B3%5D"", ""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__RecordType"", ""_rev"": ""rev"", ""Record"": [ { ""_id"": ""%5B18%5D"" }, { ""_id"": ""%5B19%5D"" }, { ""_id"": ""%5B20%5D"" }, { ""_id"": ""%5B21%5D"" } ] } ], ""success"": true } ` Update Row Output: ` { ""row"": { ""id"": 22, ""fk_RecordType_Record"": 3, ""Value"": 1, ""_id"": ""%5B22%5D"", ""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__Record"", ""_rev"": ""rev"", ""RecordType"": [ { ""_id"": ""%5B22%5D"" } ] }, ""response"": ""Record saved successfully"", ""id"": ""%5B22%5D"", ""revision"": ""rev"", ""success"": true } ` **Expected behavior** The correct values have been filtered to the expected position. e.g. in the outputs above the expected value is 3 instead of 1 **Desktop (please complete the following information):** - OS: Windows 10 - Browser: Chrome - Version: 101.0.4951.67 **Additional context** Both handlebars and javascript bindings have this issue ",1.0,"Query Rows does not work with bindings when using a MySQL datasource - **Hosting** - Self - Method: BudiCLI - Budibase Version: 1.0.178 - App Version: 1.0.178 **Describe the bug** Query Rows does not work with bindings when using a MySQL datasource. Instead it returns all rows until the row limit is hit. **To Reproduce** Steps to reproduce the behavior: 1. Create a new app with a MySQL data source 2. Create two tables Record and RecordType 3. Create two columns in RecordType: Type/Text , Value/Number 4. Create at least two rows in the with both columns filled with different values in the RecordType table 5. Create the following relationsip between the tables: One RecordType row > many Record rows 6. Create a column in the Record table called Value and make it a number type 7. Create an automation to launch when creating a row 8. Create a Query Rows step in the automation to query the RecordType table and filter it by the foreign key from the trigger row 9. Create an Update Rows step in the automation to update the Record Table 10. Set the RecordType field to `return $(""trigger.row.RecordType"");` 11. Set the Fk_RecordType_Record field to `return $(""trigger.row.fk_RecordType_Record"");` 12. Set the Value field to `return $(""steps.1.rows.0.Value"");` 13. Set the RowID field to `return $(""trigger.id"");` 14. Press Run test at the top of the automation page and select anything but the first record in the RecordType table created earlier 15. The first value will have been pulled through instead of the correct value. 
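As a companion to the Budibase report, here is a minimal sketch (assuming plain dict rows, not Budibase's real internals) of what applying the `equal` filter block should do once the binding has resolved to a concrete value such as 3:

```python
def apply_equal_filters(rows: list[dict], equal: dict) -> list[dict]:
    """Keep only rows whose fields match every resolved 'equal' filter."""
    return [
        row
        for row in rows
        if all(row.get(field) == wanted for field, wanted in equal.items())
    ]

# With the data from the report, apply_equal_filters(rows, {"id": 3})
# should return just the RecordType row with id 3, rather than every
# row up to the 50-row limit.
```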
Query Row Input: ` { ""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__RecordType"", ""filters"": { ""string"": {}, ""fuzzy"": {}, ""range"": {}, ""equal"": { ""id"": null }, ""notEqual"": {}, ""empty"": {}, ""notEmpty"": {}, ""contains"": {}, ""notContains"": {} }, ""filters-def"": [ { ""id"": ""Aa5fR_PiF"", ""field"": ""id"", ""operator"": ""equal"", ""value"": ""{{ js \""cmV0dXJuICQoInRyaWdnZXIucm93LmZrX1JlY29yZFR5cGVfUmVjb3JkIik7\"" }}"", ""valueType"": ""Binding"", ""type"": ""number"" } ], ""sortColumn"": ""id"", ""sortOrder"": ""ascending"", ""limit"": ""50"" } ` Query Row Output: ` { ""rows"": [ { ""id"": 1, ""Type"": ""Type1"", ""Value"": 1, ""_id"": ""%5B1%5D"", ""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__RecordType"", ""_rev"": ""rev"" }, { ""id"": 2, ""Type"": ""Type2"", ""Value"": 2, ""_id"": ""%5B2%5D"", ""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__RecordType"", ""_rev"": ""rev"" }, { ""id"": 3, ""Type"": ""Type3"", ""Value"": 3, ""_id"": ""%5B3%5D"", ""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__RecordType"", ""_rev"": ""rev"", ""Record"": [ { ""_id"": ""%5B18%5D"" }, { ""_id"": ""%5B19%5D"" }, { ""_id"": ""%5B20%5D"" }, { ""_id"": ""%5B21%5D"" } ] } ], ""success"": true } ` Update Row Output: ` { ""row"": { ""id"": 22, ""fk_RecordType_Record"": 3, ""Value"": 1, ""_id"": ""%5B22%5D"", ""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__Record"", ""_rev"": ""rev"", ""RecordType"": [ { ""_id"": ""%5B22%5D"" } ] }, ""response"": ""Record saved successfully"", ""id"": ""%5B22%5D"", ""revision"": ""rev"", ""success"": true } ` **Expected behavior** The correct values have been filtered to the expected position. e.g. in the outputs above the expected value is 3 instead of 1 **Desktop (please complete the following information):** - OS: Windows 10 - Browser: Chrome - Version: 101.0.4951.67 **Additional context** Both handlebars and javascript bindings have this issue ",1,query rows does not work with bindings when using a mysql datasource hosting self method budicli budibase version app version describe the bug query rows does not work with bindings when using a mysql datasource instead it returns all rows until the row limit is hit to reproduce steps to reproduce the behavior create a new app with a mysql data source create two tables record and recordtype create two columns in recordtype type text value number create at least two rows in the with both columns filled with different values in the recordtype table create the following relationsip between the tables one recordtype row many record rows create a column in the record table called value and make it a number type create an automation to launch when creating a row create a query rows step in the automation to query the recordtype table and filter it by the foreign key from the trigger row create an update rows step in the automation to update the record table set the recordtype field to return trigger row recordtype set the fk recordtype record field to return trigger row fk recordtype record set the value field to return steps rows value set the rowid field to return trigger id press run test at the top of the automation page and select anything but the first record in the recordtype table created earlier the first value will have been pulled through instead of the correct value query row input tableid datasource plus recordtype filters string fuzzy range equal id null notequal empty notempty contains notcontains filters def id pif 
field id operator equal value js valuetype binding type number sortcolumn id sortorder ascending limit query row output rows id type value id tableid datasource plus recordtype rev rev id type value id tableid datasource plus recordtype rev rev id type value id tableid datasource plus recordtype rev rev record id id id id success true update row output row id fk recordtype record value id tableid datasource plus record rev rev recordtype id response record saved successfully id revision rev success true expected behavior the correct values have been filtered to the expected position e g in the outputs above the expected value is instead of desktop please complete the following information os windows browser chrome version additional context both handlebars and javascript bindings have this issue ,1 391004,11567200810.0,IssuesEvent,2020-02-20 13:55:16,bisq-network/bisq,https://api.github.com/repos/bisq-network/bisq,closed,Message state not updated properly during arbitration,a:bug in:arbitration is:critical bug is:priority,"When the arbitrator is offline during opening of a dispute the initial message state will stay ""Message saved in receiver's mailbox"" forever. Should be ""Message arrival confirmed by receiver"" after arbitrator is online again. ![Bildschirmfoto 2019-04-01 um 12.18.21.png](https://images.zenhubusercontent.com/5a0bffe08a75884b90877eda/1c653483-f064-4018-9413-cdb8cf9e443a)",1.0,"Message state not updated properly during arbitration - When the arbitrator is offline during opening of a dispute the initial message state will stay ""Message saved in receiver's mailbox"" forever. Should be ""Message arrival confirmed by receiver"" after arbitrator is online again. ![Bildschirmfoto 2019-04-01 um 12.18.21.png](https://images.zenhubusercontent.com/5a0bffe08a75884b90877eda/1c653483-f064-4018-9413-cdb8cf9e443a)",0,message state not updated properly during arbitration when the arbitrator is offline during opening of a dispute the initial message state will stay message saved in receiver s mailbox forever should be message arrival confirmed by receiver after arbitrator is online again ,0 414102,12099035620.0,IssuesEvent,2020-04-20 11:25:03,hotosm/tasking-manager,https://api.github.com/repos/hotosm/tasking-manager,closed,Show loading progress for login delay,Component: Frontend Priority: Low,"While logging in, there is a slight delay for the authentication, during which I can move around and click on the page for other actions. Ideally, I expect the page preventing further action till the login shows some result. ![login-progress](https://user-images.githubusercontent.com/12103383/72183932-35375e00-3415-11ea-9e12-3cedd709e990.gif) ",1.0,"Show loading progress for login delay - While logging in, there is a slight delay for the authentication, during which I can move around and click on the page for other actions. Ideally, I expect the page preventing further action till the login shows some result. 
![login-progress](https://user-images.githubusercontent.com/12103383/72183932-35375e00-3415-11ea-9e12-3cedd709e990.gif) ",1.0,"Show loading progress for login delay - While logging in, there is a slight delay for the authentication, during which I can move around and click on the page for other actions. Ideally, I expect the page preventing further action till the login shows some result. ![login-progress](https://user-images.githubusercontent.com/12103383/72183932-35375e00-3415-11ea-9e12-3cedd709e990.gif) ",0,show loading progress for login delay while logging in there is a slight delay for the authentication during which i can move around and click on the page for other actions ideally i expect the page preventing further action till the login shows some result ,0 3270,13305817996.0,IssuesEvent,2020-08-25 19:10:30,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,opened,Add missing methods to Stack/Workspace,area/automation-api,"The Automation API is missing a few methods, `pulumi cancel` and import/export just to name a few. We should do a sweep to find missing methods and add them where appropriate.",1.0,"Add missing methods to Stack/Workspace - The Automation API is missing a few methods, `pulumi cancel` and import/export just to name a few. We should do a sweep to find missing methods and add them where appropriate.",1,add missing methods to stack workspace the automation api is missing a few methods pulumi cancel and import export just to name a few we should do a sweep to find missing methods and add them where appropriate ,1 10268,32061687617.0,IssuesEvent,2023-09-24 18:28:14,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,opened,"[YSQL][xCluster] Alter table with add column, Default value NOW() shows discrepancy in data in xcluster setup",kind/bug area/ysql QA xCluster status/awaiting-triage qa_automation,"### Description Version: observed on master Observed a discrepancy in the values of the generated_at column between the source and target databases. The value at the source side is different from the value at the target side. ### Steps: 1. Create table at both source `CREATE TABLE tab(id int, name text)` 2. Restore database at target 3. Setup replication 4. ALTER table with default value at source `ALTER TABLE tab ADD COLUMN generated_at timestamp DEFAULT NOW()` 5. Load data to table tab at source (REPLICATION is paused) 6. ALTER table with default value at Target `ALTER TABLE tab ADD COLUMN generated_at timestamp DEFAULT NOW()` 7. Wait a few minutes for the data to replicate **Issue**: Value of generated_at column at source is different from value at Target ### Warning: Please confirm that this issue does not contain any sensitive information - [X] I confirm this issue does not contain any sensitive information.",1.0,"[YSQL][xCluster] Alter table with add column, Default value NOW() shows discrepancy in data in xcluster setup - ### Description Version: observed on master Observed a discrepancy in the values of the generated_at column between the source and target databases. The value at the source side is different from the value at the target side. ### Steps: 1. Create table at both source `CREATE TABLE tab(id int, name text)` 2. Restore database at target 3. Setup replication 4. ALTER table with default value at source `ALTER TABLE tab ADD COLUMN generated_at timestamp DEFAULT NOW()` 5. Load data to table tab at source (REPLICATION is paused) 6. ALTER table with default value at Target `ALTER TABLE tab ADD COLUMN generated_at timestamp DEFAULT NOW()` 7. 
Wait a few minutes for the data to replicate **Issue**: Value of generated_at column at source is different from value at Target ### Warning: Please confirm that this issue does not contain any sensitive information - [X] I confirm this issue does not contain any sensitive information.",1, alter table with add column default value now shows discrepancy in data in xcluster setup description version observed on master observed a discrepancy in the values of the generated at column between the source and target databases the value at the source side is different from the value at the target side steps create table at both source create table tab id int name text restore database at target setup replication alter table with default value at source alter table tab add column generated at timestamp default now load data to table tab at source replication is paused alter table with default value at target alter table tab add column generated at timestamp default now wait a few minutes for the data to replicate issue value of generated at column at source is different from value at target warning please confirm that this issue does not contain any sensitive information i confirm this issue does not contain any sensitive information ,1 909,8678743841.0,IssuesEvent,2018-11-30 20:59:25,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Improvements in Exception list validation,assigned-to-author automation/svc product-feedback triaged,"Hello, I wanted to point out a few improvements that can help with the solution. 1) On the Start or Stop action exclude the VMs that are already in a Running or Deallocated power state 2) Don't send an email if there is no action to be taken 3) Why validate the exclusion list twice, once at the start to check if there are any issues with the list and then after processing the resource groups to create the final exclusion list, as for larger environments it can significantly increase the processing time. 4) Why not create a parallel execution of the VMs validation and/or the RGs. 
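The first two suggestions in the azure-docs feedback above are mechanical to express. A pure-logic sketch, with `get_power_state` and `send_email` as stand-ins (assumptions) for the runbook's actual calls:

```python
def plan_action(vms: list, action: str, get_power_state) -> list:
    """Suggestion 1: act only on VMs not already in the target state."""
    skip = "PowerState/running" if action == "Start" else "PowerState/deallocated"
    return [vm for vm in vms if get_power_state(vm) != skip]

def notify_if_needed(changed: list, send_email) -> None:
    """Suggestion 2: send an email only when something was actually done."""
    if changed:
        send_email(f"{len(changed)} VM(s) processed")
```

The PowerShell fragment that follows in the original feedback expresses the same pre-filter for the Stop action.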
$PowerState = (Get-AzureRmVM -ResourceGroupName $vmResource.ResourceGroupName -name $vmResource.Name -Status).statuses[1].code
If ($Action -eq 'Stop' -and $PowerState -ne ""PowerState/deallocated"") {} - --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 225c9d05-83dd-b006-0025-3753f5ab25bf * Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096 * Content: [Start/Stop VMs during off-hours solution](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management#feedback) * Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1,improvements in exception list validation hello i wanted to point out a few improvements that can help with the solution on the start or stop action exclude the vms that are already in a running or deallocated power state don t send an email if there is no action to be taken why validate the exclusion list twice once at the start to check if there are any issues with the list and then after processing the resource groups to create the final exclusion list as for larger environments it can significantly increase the processing time why not create a parallel execution of the vm validation and or the rgs powerstate get azurermvm resourcegroupname vmresource resourcegroupname name vmresource name status statuses code if action eq stop and powerstate ne powerstate deallocated document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login georgewallace microsoft alias gwallace ,1 5701,20778536304.0,IssuesEvent,2022-03-16 12:51:06,pingcap/tidb,https://api.github.com/repos/pingcap/tidb,reopened,using MySQL 5.5 and 5.6 clients connecting with a passwordless account to tidb fail,type/bug sig/sql-infra severity/minor found/automation,"## Bug Report Please answer these questions before submitting your issue. Thanks! ### 1. Minimal reproduce step (Required) 1. download 5.5 and 5.6 version MySQL client dbdeployer downloads get-unpack mysql-5.5.62.tar.xz dbdeployer downloads get-unpack mysql-5.6.44.tar.xz 2. create user nopw with no password, CREATE USER 'nopw'@'%' IDENTIFIED WITH mysql_native_password 3. use the 5.5 and 5.6 MySQL clients to connect to the tidb nightly version (v5.5.0-nightly-20220208) with this ""nopw"" user ### 2. What did you expect to see? (Required) connect successfully ### 3. What did you see instead (Required) root@wkload-0:/upgrade-test# /root/opt/mysql/5.5.62/bin/mysql -u nopw -h tiup-peer -P3390 ERROR 2012 (HY000): Error in server handshake ### 4. What is your TiDB version? (Required) v5.5.0-nightly-20220208 ",1.0,"using MySQL 5.5 and 5.6 clients connecting with a passwordless account to tidb fail - ## Bug Report Please answer these questions before submitting your issue. Thanks! ### 1. Minimal reproduce step (Required) 1. download 5.5 and 5.6 version MySQL client dbdeployer downloads get-unpack mysql-5.5.62.tar.xz dbdeployer downloads get-unpack mysql-5.6.44.tar.xz 2. create user nopw with no password, CREATE USER 'nopw'@'%' IDENTIFIED WITH mysql_native_password 3. use the 5.5 and 5.6 MySQL clients to connect to the tidb nightly version (v5.5.0-nightly-20220208) with this ""nopw"" user ### 2. What did you expect to see? 
(Required) connect successfully ### 3. What did you see instead (Required) root@wkload-0:/upgrade-test# /root/opt/mysql/5.5.62/bin/mysql -u nopw -h tiup-peer -P3390 ERROR 2012 (HY000): Error in server handshake ### 4. What is your TiDB version? (Required) v5.5.0-nightly-20220208 ",1,using mysql and clients connecting with a passwordless account to tidb fail bug report please answer these questions before submitting your issue thanks minimal reproduce step required download and version mysql client dbdeployer downloads get unpack mysql tar xz dbdeployer downloads get unpack mysql tar xz create user nopw with no password create user nopw identified with mysql native password use and mysql client connect to tidb nightly version nightly with this nopw user what did you expect to see required connect successfully what did you see instead required root wkload upgrade test root opt mysql bin mysql u nopw h tiup peer error error in server handshake what is your tidb version required nightly ,1 33896,6266116819.0,IssuesEvent,2017-07-16 23:13:36,MartinLoeper/KAMP-DSL,https://api.github.com/repos/MartinLoeper/KAMP-DSL,closed,Create an easy installer,documentation enhancement,"As the name says: create an installer using software configurations and project sets. Update the wiki accordingly.",1.0,"Create an easy installer - As the name says: create an installer using software configurations and project sets. Update the wiki accordingly.",0,create an easy installer as the name says create an installer using software configurations and project sets update the wiki accordingly ,0 161137,20120415002.0,IssuesEvent,2022-02-08 01:16:43,arohablue/BlockDockServer,https://api.github.com/repos/arohablue/BlockDockServer,closed,CVE-2020-10693 (Medium) detected in hibernate-validator-5.3.5.Final.jar - autoclosed,security vulnerability,"## CVE-2020-10693 - Medium Severity Vulnerability
Vulnerable Library - hibernate-validator-5.3.5.Final.jar

Hibernate's Bean Validation (JSR-303) reference implementation.

Library home page: http://hibernate.org/validator

Path to dependency file: /BlockDockServer/build.gradle

Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.hibernate/hibernate-validator/5.3.5.Final/622a9bcef2eed6d41b5b8e0662c36212009e375/hibernate-validator-5.3.5.Final.jar

Dependency Hierarchy:
- hibernate5-6.1.6.jar (Root Library)
  - grails-datastore-gorm-hibernate5-6.1.6.RELEASE.jar
    - :x: **hibernate-validator-5.3.5.Final.jar** (Vulnerable Library)

Vulnerability Details

A flaw was found in Hibernate Validator version 6.1.2.Final. A bug in the message interpolation processor enables invalid EL expressions to be evaluated as if they were valid. This flaw allows attackers to bypass input sanitation (escaping, stripping) controls that developers may have put in place when handling user-controlled data in error messages.
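
The underlying pattern is expression injection through error-message interpolation: the rejected user value lands in a message that is then run through an expression evaluator, so sanitation applied before validation never sees the expression. A minimal Python sketch of the same class of bug (the `interpolate` and `render_violation` helpers are hypothetical, and `eval` is only a stand-in for the Java EL engine):

```python
import re

# Stand-in for an EL interpolator: evaluates every ${...} found in a
# message. hibernate-validator uses a real EL engine; eval() is only an
# analogy for "the message gets evaluated, not just formatted".
def interpolate(message: str) -> str:
    return re.sub(r"\$\{([^}]*)\}", lambda m: str(eval(m.group(1))), message)

# Hypothetical validator step: the rejected user value is embedded in
# the message *before* interpolation, so user input reaches eval().
def render_violation(user_value: str) -> str:
    return interpolate(f"{user_value} is not a valid entry")

print(render_violation("bob"))       # bob is not a valid entry
print(render_violation("${7 * 6}"))  # 42 is not a valid entry (injected)
```

Escaping or stripping the value up front does not help, because the evaluation happens afterwards, inside the message-rendering step.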

Publish Date: 2020-05-06

URL: CVE-2020-10693

CVSS 3 Score Details (5.3)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: Low
  - Availability Impact: None
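
As a cross-check, the 5.3 base score follows directly from those metrics under the CVSS v3 base-score formula; a small sketch, with metric weights taken from the public CVSS v3.1 specification:

```python
import math

# CVSS v3.1 base-score check for AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:N,
# the metric vector listed above. Weights are from the CVSS v3.1 spec.
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.85   # Network / Low / None / None
C, I, A = 0.0, 0.22, 0.0                  # None / Low / None

iss = 1 - (1 - C) * (1 - I) * (1 - A)     # impact sub-score
impact = 6.42 * iss                       # Scope: Unchanged
exploitability = 8.22 * AV * AC * PR * UI

# The spec's roundup(): round up to one decimal place.
base = math.ceil(min(impact + exploitability, 10) * 10) / 10 if impact > 0 else 0.0
print(base)  # 5.3
```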

For more information on CVSS3 Scores, see the CVSS v3 specification at https://www.first.org/cvss/.

Suggested Fix

Type: Upgrade version

Origin: https://hibernate.atlassian.net/projects/HV/issues/HV-1774

Release Date: 2020-05-06

Fix Resolution: org.hibernate:hibernate-validator:6.0.20.Final,6.1.5.Final

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2020-10693 (Medium) detected in hibernate-validator-5.3.5.Final.jar - autoclosed - ## CVE-2020-10693 - Medium Severity Vulnerability
Vulnerable Library - hibernate-validator-5.3.5.Final.jar

Hibernate's Bean Validation (JSR-303) reference implementation.

Library home page: http://hibernate.org/validator

Path to dependency file: /BlockDockServer/build.gradle

Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.hibernate/hibernate-validator/5.3.5.Final/622a9bcef2eed6d41b5b8e0662c36212009e375/hibernate-validator-5.3.5.Final.jar

Dependency Hierarchy:
- hibernate5-6.1.6.jar (Root Library)
  - grails-datastore-gorm-hibernate5-6.1.6.RELEASE.jar
    - :x: **hibernate-validator-5.3.5.Final.jar** (Vulnerable Library)

Vulnerability Details

A flaw was found in Hibernate Validator version 6.1.2.Final. A bug in the message interpolation processor enables invalid EL expressions to be evaluated as if they were valid. This flaw allows attackers to bypass input sanitation (escaping, stripping) controls that developers may have put in place when handling user-controlled data in error messages.

Publish Date: 2020-05-06

URL: CVE-2020-10693

CVSS 3 Score Details (5.3)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: Low
  - Availability Impact: None

For more information on CVSS3 Scores, see the CVSS v3 specification at https://www.first.org/cvss/.

Suggested Fix

Type: Upgrade version

Origin: https://hibernate.atlassian.net/projects/HV/issues/HV-1774

Release Date: 2020-05-06

Fix Resolution: org.hibernate:hibernate-validator:6.0.20.Final,6.1.5.Final

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in hibernate validator final jar autoclosed cve medium severity vulnerability vulnerable library hibernate validator final jar hibernate s bean validation jsr reference implementation library home page a href path to dependency file blockdockserver build gradle path to vulnerable library root gradle caches modules files org hibernate hibernate validator final hibernate validator final jar dependency hierarchy jar root library grails datastore gorm release jar x hibernate validator final jar vulnerable library vulnerability details a flaw was found in hibernate validator version final a bug in the message interpolation processor enables invalid el expressions to be evaluated as if they were valid this flaw allows attackers to bypass input sanitation escaping stripping controls that developers may have put in place when handling user controlled data in error messages publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org hibernate hibernate validator final final step up your open source security game with whitesource ,0 14384,17402348882.0,IssuesEvent,2021-08-02 21:44:58,googleapis/python-firestore,https://api.github.com/repos/googleapis/python-firestore,opened,Split out system tests into separate Kokoro job,type: process,"Working to reduce CI latency. Here are timings on my local machine (note the pre-run with `--install-only` to avoid measuring virtualenv creation time): ```bash $ for job in $(nox --list | grep ""^\*"" | cut -d "" "" -f 2); do echo $job; nox -e $job --install-only; time nox -re $job; done lint nox > Running session lint nox > Creating virtual environment (virtualenv) using python3.8 in .nox/lint nox > python -m pip install flake8 black==19.10b0 nox > Skipping black run, as --install-only is set. nox > Skipping flake8 run, as --install-only is set. nox > Session lint was successful. nox > Running session lint nox > Re-using existing virtual environment at .nox/lint. nox > python -m pip install flake8 black==19.10b0 nox > black --check docs google tests noxfile.py setup.py All done! ✨ 🍰 ✨ 109 files would be left unchanged. nox > flake8 google tests nox > Session lint was successful. real 0m3.902s user 0m16.218s sys 0m0.277s blacken nox > Running session blacken nox > Creating virtual environment (virtualenv) using python3.8 in .nox/blacken nox > python -m pip install black==19.10b0 nox > Skipping black run, as --install-only is set. nox > Session blacken was successful. nox > Running session blacken nox > Re-using existing virtual environment at .nox/blacken. nox > python -m pip install black==19.10b0 nox > black docs google tests noxfile.py setup.py All done! ✨ 🍰 ✨ 109 files left unchanged. nox > Session blacken was successful. real 0m1.007s user 0m0.884s sys 0m0.127s lint_setup_py nox > Running session lint_setup_py nox > Creating virtual environment (virtualenv) using python3.8 in .nox/lint_setup_py nox > python -m pip install docutils pygments nox > Skipping python run, as --install-only is set. nox > Session lint_setup_py was successful. 
nox > Running session lint_setup_py nox > Re-using existing virtual environment at .nox/lint_setup_py. nox > python -m pip install docutils pygments nox > python setup.py check --restructuredtext --strict running check nox > Session lint_setup_py was successful. real 0m1.067s user 0m0.946s sys 0m0.123s unit-3.6 nox > Running session unit-3.6 nox > Creating virtual environment (virtualenv) using python3.6 in .nox/unit-3-6 nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.6.txt nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.6.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.6.txt nox > Skipping py.test run, as --install-only is set. nox > Session unit-3.6 was successful. nox > Running session unit-3.6 nox > Re-using existing virtual environment at .nox/unit-3-6. nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.6.txt nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.6.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.6.txt nox > py.test --quiet --junitxml=unit_3.6_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit ........................................................................ [ 5%] ...............................................................s..s.ss.. [ 10%] ........................................................................ [ 15%] ........................................................................ [ 20%] ...................................................s..s.ss.............. [ 25%] ........................................................................ [ 30%] ........................................................................ [ 35%] ........................................................................ [ 40%] ........................................................................ [ 45%] ........................................................................ [ 50%] ........................................................................ [ 55%] ........................................................................ [ 60%] ........................................................................ [ 65%] ........................................................................ [ 70%] ............................................................ssssssssssss [ 75%] ssssssssssssssssssssssssssssssss........................................ [ 80%] ........................................................................ [ 85%] ........................................................................ [ 90%] ........................................................................ [ 95%] ........................................................... [100%] - generated xml file: /home/tseaver/projects/agendaless/Google/src/python-firestore/unit_3.6_sponge_log.xml - 1375 passed, 52 skipped in 14.10s nox > Session unit-3.6 was successful. 
real 0m18.388s user 0m17.654s sys 0m0.675s unit-3.7 nox > Running session unit-3.7 nox > Creating virtual environment (virtualenv) using python3.7 in .nox/unit-3-7 nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > Skipping py.test run, as --install-only is set. nox > Session unit-3.7 was successful. nox > Running session unit-3.7 nox > Re-using existing virtual environment at .nox/unit-3-7. nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > py.test --quiet --junitxml=unit_3.7_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit ........................................................................ [ 5%] ................................................................s..s..ss [ 10%] ........................................................................ [ 15%] ........................................................................ [ 20%] ....................................................s..s..ss............ [ 25%] ........................................................................ [ 30%] ........................................................................ [ 35%] ........................................................................ [ 40%] ........................................................................ [ 45%] ........................................................................ [ 50%] ........................................................................ [ 55%] ........................................................................ [ 60%] ........................................................................ [ 65%] ........................................................................ [ 70%] ............................................................ssssssssssss [ 75%] ssssssssssssssssssssssssssssssss........................................ [ 80%] ........................................................................ [ 85%] ........................................................................ [ 90%] ........................................................................ [ 95%] ........................................................... [100%] - generated xml file: /home/tseaver/projects/agendaless/Google/src/python-firestore/unit_3.7_sponge_log.xml - 1375 passed, 52 skipped in 14.09s nox > Session unit-3.7 was successful. 
real 0m17.930s user 0m17.185s sys 0m0.732s unit-3.8 nox > Running session unit-3.8 nox > Creating virtual environment (virtualenv) using python3.8 in .nox/unit-3-8 nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.8.txt nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.8.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.8.txt nox > Skipping py.test run, as --install-only is set. nox > Session unit-3.8 was successful. nox > Running session unit-3.8 nox > Re-using existing virtual environment at .nox/unit-3-8. nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.8.txt nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.8.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.8.txt nox > py.test --quiet --junitxml=unit_3.8_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit ........................................................................ [ 5%] ................................................................s..s..ss [ 10%] ........................................................................ [ 15%] ........................................................................ [ 20%] ....................................................s..s..ss............ [ 25%] ........................................................................ [ 30%] ........................................................................ [ 35%] ........................................................................ [ 40%] ........................................................................ [ 45%] ........................................................................ [ 50%] ........................................................................ [ 55%] ........................................................................ [ 60%] ........................................................................ [ 65%] ........................................................................ [ 70%] ............................................................ssssssssssss [ 75%] ssssssssssssssssssssssssssssssss........................................ [ 80%] ........................................................................ [ 85%] ........................................................................ [ 90%] ........................................................................ [ 95%] ........................................................... [100%] - generated xml file: /home/tseaver/projects/agendaless/Google/src/python-firestore/unit_3.8_sponge_log.xml - 1375 passed, 52 skipped in 13.40s nox > Session unit-3.8 was successful. 
real 0m17.162s user 0m16.517s sys 0m0.638s unit-3.9 nox > Running session unit-3.9 nox > Creating virtual environment (virtualenv) using python3.9 in .nox/unit-3-9 nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.9.txt nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.9.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.9.txt nox > Skipping py.test run, as --install-only is set. nox > Session unit-3.9 was successful. nox > Running session unit-3.9 nox > Re-using existing virtual environment at .nox/unit-3-9. nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.9.txt nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.9.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.9.txt nox > py.test --quiet --junitxml=unit_3.9_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit ........................................................................ [ 5%] ................................................................s..s..ss [ 10%] ........................................................................ [ 15%] ........................................................................ [ 20%] ....................................................s..s..ss............ [ 25%] ........................................................................ [ 30%] ........................................................................ [ 35%] ........................................................................ [ 40%] ........................................................................ [ 45%] ........................................................................ [ 50%] ........................................................................ [ 55%] ........................................................................ [ 60%] ........................................................................ [ 65%] ........................................................................ [ 70%] ............................................................ssssssssssss [ 75%] ssssssssssssssssssssssssssssssss........................................ [ 80%] ........................................................................ [ 85%] ........................................................................ [ 90%] ........................................................................ [ 95%] ........................................................... [100%] - generated xml file: /home/tseaver/projects/agendaless/Google/src/python-firestore/unit_3.9_sponge_log.xml - 1375 passed, 52 skipped in 15.70s nox > Session unit-3.9 was successful. 
real 0m19.250s user 0m18.510s sys 0m0.715s system-3.7 nox > Running session system-3.7 nox > Creating virtual environment (virtualenv) using python3.7 in .nox/system-3-7 nox > python -m pip install --pre grpcio nox > python -m pip install mock pytest google-cloud-testutils pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > Skipping py.test run, as --install-only is set. nox > Session system-3.7 was successful. nox > Running session system-3.7 nox > Re-using existing virtual environment at .nox/system-3-7. nox > python -m pip install --pre grpcio nox > python -m pip install mock pytest google-cloud-testutils pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > py.test --verbose --junitxml=system_3.7_sponge_log.xml tests/system ============================= test session starts ============================== platform linux -- Python 3.7.6, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /home/tseaver/projects/agendaless/Google/src/python-firestore/.nox/system-3-7/bin/python cachedir: .pytest_cache rootdir: /home/tseaver/projects/agendaless/Google/src/python-firestore plugins: asyncio-0.15.1 collected 77 items tests/system/test_system.py::test_collections PASSED [ 1%] tests/system/test_system.py::test_collections_w_import PASSED [ 2%] tests/system/test_system.py::test_create_document PASSED [ 3%] tests/system/test_system.py::test_create_document_w_subcollection PASSED [ 5%] tests/system/test_system.py::test_cannot_use_foreign_key PASSED [ 6%] tests/system/test_system.py::test_no_document PASSED [ 7%] tests/system/test_system.py::test_document_set PASSED [ 9%] tests/system/test_system.py::test_document_integer_field PASSED [ 10%] tests/system/test_system.py::test_document_set_merge PASSED [ 11%] tests/system/test_system.py::test_document_set_w_int_field PASSED [ 12%] tests/system/test_system.py::test_document_update_w_int_field PASSED [ 14%] tests/system/test_system.py::test_update_document PASSED [ 15%] tests/system/test_system.py::test_document_get PASSED [ 16%] tests/system/test_system.py::test_document_delete PASSED [ 18%] tests/system/test_system.py::test_collection_add PASSED [ 19%] tests/system/test_system.py::test_query_stream_w_simple_field_eq_op PASSED [ 20%] tests/system/test_system.py::test_query_stream_w_simple_field_array_contains_op PASSED [ 22%] tests/system/test_system.py::test_query_stream_w_simple_field_in_op PASSED [ 23%] tests/system/test_system.py::test_query_stream_w_not_eq_op PASSED [ 24%] tests/system/test_system.py::test_query_stream_w_simple_not_in_op PASSED [ 25%] tests/system/test_system.py::test_query_stream_w_simple_field_array_contains_any_op PASSED [ 27%] tests/system/test_system.py::test_query_stream_w_order_by PASSED [ 28%] tests/system/test_system.py::test_query_stream_w_field_path PASSED [ 29%] tests/system/test_system.py::test_query_stream_w_start_end_cursor PASSED [ 31%] tests/system/test_system.py::test_query_stream_wo_results PASSED [ 32%] tests/system/test_system.py::test_query_stream_w_projection PASSED [ 33%] tests/system/test_system.py::test_query_stream_w_multiple_filters PASSED [ 35%] tests/system/test_system.py::test_query_stream_w_offset PASSED [ 36%] 
tests/system/test_system.py::test_query_with_order_dot_key PASSED [ 37%] tests/system/test_system.py::test_query_unary PASSED [ 38%] tests/system/test_system.py::test_collection_group_queries PASSED [ 40%] tests/system/test_system.py::test_collection_group_queries_startat_endat PASSED [ 41%] tests/system/test_system.py::test_collection_group_queries_filters PASSED [ 42%] tests/system/test_system.py::test_partition_query_no_partitions PASSED [ 44%] tests/system/test_system.py::test_partition_query PASSED [ 45%] tests/system/test_system.py::test_get_all PASSED [ 46%] tests/system/test_system.py::test_batch PASSED [ 48%] tests/system/test_system.py::test_watch_document PASSED [ 49%] tests/system/test_system.py::test_watch_collection PASSED [ 50%] tests/system/test_system.py::test_watch_query PASSED [ 51%] tests/system/test_system.py::test_array_union PASSED [ 53%] tests/system/test_system.py::test_watch_query_order PASSED [ 54%] tests/system/test_system_async.py::test_collections PASSED [ 55%] tests/system/test_system_async.py::test_collections_w_import PASSED [ 57%] tests/system/test_system_async.py::test_create_document PASSED [ 58%] tests/system/test_system_async.py::test_create_document_w_subcollection PASSED [ 59%] tests/system/test_system_async.py::test_cannot_use_foreign_key PASSED [ 61%] tests/system/test_system_async.py::test_no_document PASSED [ 62%] tests/system/test_system_async.py::test_document_set PASSED [ 63%] tests/system/test_system_async.py::test_document_integer_field PASSED [ 64%] tests/system/test_system_async.py::test_document_set_merge PASSED [ 66%] tests/system/test_system_async.py::test_document_set_w_int_field PASSED [ 67%] tests/system/test_system_async.py::test_document_update_w_int_field PASSED [ 68%] tests/system/test_system_async.py::test_update_document PASSED [ 70%] tests/system/test_system_async.py::test_document_get PASSED [ 71%] tests/system/test_system_async.py::test_document_delete PASSED [ 72%] tests/system/test_system_async.py::test_collection_add PASSED [ 74%] tests/system/test_system_async.py::test_query_stream_w_simple_field_eq_op PASSED [ 75%] tests/system/test_system_async.py::test_query_stream_w_simple_field_array_contains_op PASSED [ 76%] tests/system/test_system_async.py::test_query_stream_w_simple_field_in_op PASSED [ 77%] tests/system/test_system_async.py::test_query_stream_w_simple_field_array_contains_any_op PASSED [ 79%] tests/system/test_system_async.py::test_query_stream_w_order_by PASSED [ 80%] tests/system/test_system_async.py::test_query_stream_w_field_path PASSED [ 81%] tests/system/test_system_async.py::test_query_stream_w_start_end_cursor PASSED [ 83%] tests/system/test_system_async.py::test_query_stream_wo_results PASSED [ 84%] tests/system/test_system_async.py::test_query_stream_w_projection PASSED [ 85%] tests/system/test_system_async.py::test_query_stream_w_multiple_filters PASSED [ 87%] tests/system/test_system_async.py::test_query_stream_w_offset PASSED [ 88%] tests/system/test_system_async.py::test_query_with_order_dot_key PASSED [ 89%] tests/system/test_system_async.py::test_query_unary PASSED [ 90%] tests/system/test_system_async.py::test_collection_group_queries PASSED [ 92%] tests/system/test_system_async.py::test_collection_group_queries_startat_endat PASSED [ 93%] tests/system/test_system_async.py::test_collection_group_queries_filters PASSED [ 94%] tests/system/test_system_async.py::test_partition_query_no_partitions PASSED [ 96%] tests/system/test_system_async.py::test_partition_query PASSED [ 97%] 
tests/system/test_system_async.py::test_get_all PASSED [ 98%] tests/system/test_system_async.py::test_batch PASSED [100%] =================== 77 passed in 211.00s (0:03:31) =================== nox > Command py.test --verbose --junitxml=system_3.7_sponge_log.xml tests/system passed nox > Session system-3.7 was successful. real 3m34.561s user 0m11.371s sys 0m1.881s cover nox > Running session cover nox > Creating virtual environment (virtualenv) using python3.8 in .nox/cover nox > python -m pip install coverage pytest-cov nox > Skipping coverage run, as --install-only is set. nox > Skipping coverage run, as --install-only is set. nox > Session cover was successful. nox > Running session cover nox > Re-using existing virtual environment at .nox/cover. nox > python -m pip install coverage pytest-cov nox > coverage report --show-missing --fail-under=100 Name Stmts Miss Branch BrPart Cover Missing --------------------------------------------------------------------------------------------------------------------------------- google/cloud/firestore.py 35 0 0 0 100% google/cloud/firestore_admin_v1/__init__.py 23 0 0 0 100% google/cloud/firestore_admin_v1/services/__init__.py 0 0 0 0 100% google/cloud/firestore_admin_v1/services/firestore_admin/__init__.py 3 0 0 0 100% google/cloud/firestore_admin_v1/services/firestore_admin/async_client.py 168 0 38 0 100% google/cloud/firestore_admin_v1/services/firestore_admin/client.py 282 0 90 0 100% google/cloud/firestore_admin_v1/services/firestore_admin/pagers.py 82 0 20 0 100% google/cloud/firestore_admin_v1/services/firestore_admin/transports/__init__.py 9 0 0 0 100% google/cloud/firestore_admin_v1/services/firestore_admin/transports/base.py 72 0 12 0 100% google/cloud/firestore_admin_v1/services/firestore_admin/transports/grpc.py 100 0 34 0 100% google/cloud/firestore_admin_v1/services/firestore_admin/transports/grpc_asyncio.py 103 0 34 0 100% google/cloud/firestore_admin_v1/types/__init__.py 6 0 0 0 100% google/cloud/firestore_admin_v1/types/field.py 12 0 0 0 100% google/cloud/firestore_admin_v1/types/firestore_admin.py 48 0 0 0 100% google/cloud/firestore_admin_v1/types/index.py 28 0 0 0 100% google/cloud/firestore_admin_v1/types/location.py 4 0 0 0 100% google/cloud/firestore_admin_v1/types/operation.py 57 0 0 0 100% google/cloud/firestore_bundle/__init__.py 7 0 0 0 100% google/cloud/firestore_bundle/_helpers.py 4 0 0 0 100% google/cloud/firestore_bundle/bundle.py 111 0 32 0 100% google/cloud/firestore_bundle/services/__init__.py 0 0 0 0 100% google/cloud/firestore_bundle/types/__init__.py 2 0 0 0 100% google/cloud/firestore_bundle/types/bundle.py 33 0 0 0 100% google/cloud/firestore_v1/__init__.py 37 0 0 0 100% google/cloud/firestore_v1/_helpers.py 478 0 240 0 100% google/cloud/firestore_v1/async_batch.py 19 0 2 0 100% google/cloud/firestore_v1/async_client.py 41 0 4 0 100% google/cloud/firestore_v1/async_collection.py 32 0 4 0 100% google/cloud/firestore_v1/async_document.py 44 0 4 0 100% google/cloud/firestore_v1/async_query.py 45 0 16 0 100% google/cloud/firestore_v1/async_transaction.py 98 0 22 0 100% google/cloud/firestore_v1/base_batch.py 32 0 4 0 100% google/cloud/firestore_v1/base_client.py 151 0 42 0 100% google/cloud/firestore_v1/base_collection.py 101 0 16 0 100% google/cloud/firestore_v1/base_document.py 145 0 24 0 100% google/cloud/firestore_v1/base_query.py 331 0 130 0 100% google/cloud/firestore_v1/base_transaction.py 65 0 6 0 100% google/cloud/firestore_v1/batch.py 19 0 2 0 100% google/cloud/firestore_v1/client.py 42 0 4 0 100% 
google/cloud/firestore_v1/collection.py 30 0 2 0 100% google/cloud/firestore_v1/document.py 48 0 4 0 100% google/cloud/firestore_v1/field_path.py 135 0 56 0 100% google/cloud/firestore_v1/order.py 130 0 54 0 100% google/cloud/firestore_v1/query.py 47 0 14 0 100% google/cloud/firestore_v1/services/__init__.py 0 0 0 0 100% google/cloud/firestore_v1/services/firestore/__init__.py 3 0 0 0 100% google/cloud/firestore_v1/services/firestore/async_client.py 178 0 30 0 100% google/cloud/firestore_v1/services/firestore/client.py 276 0 90 0 100% google/cloud/firestore_v1/services/firestore/pagers.py 121 0 30 0 100% google/cloud/firestore_v1/services/firestore/transports/__init__.py 9 0 0 0 100% google/cloud/firestore_v1/services/firestore/transports/base.py 80 0 12 0 100% google/cloud/firestore_v1/services/firestore/transports/grpc.py 122 0 44 0 100% google/cloud/firestore_v1/services/firestore/transports/grpc_asyncio.py 125 0 44 0 100% google/cloud/firestore_v1/transaction.py 97 0 22 0 100% google/cloud/firestore_v1/transforms.py 39 0 10 0 100% google/cloud/firestore_v1/types/__init__.py 6 0 0 0 100% google/cloud/firestore_v1/types/common.py 16 0 0 0 100% google/cloud/firestore_v1/types/document.py 27 0 0 0 100% google/cloud/firestore_v1/types/firestore.py 157 0 0 0 100% google/cloud/firestore_v1/types/query.py 66 0 0 0 100% google/cloud/firestore_v1/types/write.py 45 0 0 0 100% google/cloud/firestore_v1/watch.py 325 0 78 0 100% tests/unit/__init__.py 0 0 0 0 100% tests/unit/test_firestore_shim.py 10 0 2 0 100% tests/unit/v1/__init__.py 0 0 0 0 100% tests/unit/v1/_test_helpers.py 22 0 0 0 100% tests/unit/v1/conformance_tests.py 106 0 0 0 100% tests/unit/v1/test__helpers.py 1653 0 36 0 100% tests/unit/v1/test_async_batch.py 98 0 0 0 100% tests/unit/v1/test_async_client.py 267 0 18 0 100% tests/unit/v1/test_async_collection.py 223 0 20 0 100% tests/unit/v1/test_async_document.py 334 0 32 0 100% tests/unit/v1/test_async_query.py 327 0 26 0 100% tests/unit/v1/test_async_transaction.py 584 0 0 0 100% tests/unit/v1/test_base_batch.py 98 0 0 0 100% tests/unit/v1/test_base_client.py 238 0 0 0 100% tests/unit/v1/test_base_collection.py 239 0 0 0 100% tests/unit/v1/test_base_document.py 293 0 2 0 100% tests/unit/v1/test_base_query.py 1006 0 20 0 100% tests/unit/v1/test_base_transaction.py 75 0 0 0 100% tests/unit/v1/test_batch.py 92 0 0 0 100% tests/unit/v1/test_bundle.py 268 0 4 0 100% tests/unit/v1/test_client.py 256 0 12 0 100% tests/unit/v1/test_collection.py 197 0 10 0 100% tests/unit/v1/test_cross_language.py 207 0 82 0 100% tests/unit/v1/test_document.py 307 0 26 0 100% tests/unit/v1/test_field_path.py 355 0 8 0 100% tests/unit/v1/test_order.py 138 0 8 0 100% tests/unit/v1/test_query.py 318 0 0 0 100% tests/unit/v1/test_transaction.py 560 0 0 0 100% tests/unit/v1/test_transforms.py 78 0 8 0 100% tests/unit/v1/test_watch.py 667 0 4 0 100% --------------------------------------------------------------------------------------------------------------------------------- TOTAL 13967 0 1588 0 100% nox > coverage erase nox > Session cover was successful. real 0m3.581s user 0m3.419s sys 0m0.163s docs nox > Running session docs nox > Creating virtual environment (virtualenv) using python3.8 in .nox/docs nox > python -m pip install -e . nox > python -m pip install sphinx==4.0.1 alabaster recommonmark nox > Skipping sphinx-build run, as --install-only is set. nox > Session docs was successful. nox > Running session docs nox > Re-using existing virtual environment at .nox/docs. nox > python -m pip install -e . 
nox > python -m pip install sphinx==4.0.1 alabaster recommonmark nox > sphinx-build -W -T -N -b html -d docs/_build/doctrees/ docs/ docs/_build/html/ Running Sphinx v4.0.1 making output directory... done [autosummary] generating autosummary for: README.rst, UPGRADING.md, admin_client.rst, batch.rst, changelog.md, client.rst, collection.rst, document.rst, field_path.rst, index.rst, multiprocessing.rst, query.rst, transaction.rst, transforms.rst, types.rst loading intersphinx inventory from https://python.readthedocs.org/en/latest/objects.inv... loading intersphinx inventory from https://googleapis.dev/python/google-auth/latest/objects.inv... loading intersphinx inventory from https://googleapis.dev/python/google-api-core/latest/objects.inv... loading intersphinx inventory from https://grpc.github.io/grpc/python/objects.inv... loading intersphinx inventory from https://proto-plus-python.readthedocs.io/en/latest/objects.inv... loading intersphinx inventory from https://googleapis.dev/python/protobuf/latest/objects.inv... intersphinx inventory has moved: https://python.readthedocs.org/en/latest/objects.inv -> https://python.readthedocs.io/en/latest/objects.inv building [mo]: targets for 0 po files that are out of date building [html]: targets for 15 source files that are out of date updating environment: [new config] 15 added, 0 changed, 0 removed reading sources... [ 6%] README reading sources... [ 13%] UPGRADING /home/tseaver/projects/agendaless/Google/src/python-firestore/.nox/docs/lib/python3.8/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document warn(""Container node skipped: type={0}"".format(mdnode.t)) reading sources... [ 20%] admin_client reading sources... [ 26%] batch reading sources... [ 33%] changelog /home/tseaver/projects/agendaless/Google/src/python-firestore/.nox/docs/lib/python3.8/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document warn(""Container node skipped: type={0}"".format(mdnode.t)) reading sources... [ 40%] client reading sources... [ 46%] collection reading sources... [ 53%] document reading sources... [ 60%] field_path reading sources... [ 66%] index reading sources... [ 73%] multiprocessing reading sources... [ 80%] query reading sources... [ 86%] transaction reading sources... [ 93%] transforms reading sources... [100%] types looking for now-outdated files... none found pickling environment... done checking consistency... done preparing documents... done writing output... [ 6%] README writing output... [ 13%] UPGRADING writing output... [ 20%] admin_client writing output... [ 26%] batch writing output... [ 33%] changelog writing output... [ 40%] client writing output... [ 46%] collection writing output... [ 53%] document writing output... [ 60%] field_path writing output... [ 66%] index writing output... [ 73%] multiprocessing writing output... [ 80%] query writing output... [ 86%] transaction writing output... [ 93%] transforms writing output... [100%] types generating indices... genindex py-modindex done highlighting module code... [ 3%] google.cloud.firestore_admin_v1.services.firestore_admin.client highlighting module code... [ 7%] google.cloud.firestore_v1.async_batch highlighting module code... [ 11%] google.cloud.firestore_v1.async_client highlighting module code... [ 15%] google.cloud.firestore_v1.async_collection highlighting module code... [ 19%] google.cloud.firestore_v1.async_document highlighting module code... 
[ 23%] google.cloud.firestore_v1.async_query highlighting module code... [ 26%] google.cloud.firestore_v1.async_transaction highlighting module code... [ 30%] google.cloud.firestore_v1.base_batch highlighting module code... [ 34%] google.cloud.firestore_v1.base_client highlighting module code... [ 38%] google.cloud.firestore_v1.base_collection highlighting module code... [ 42%] google.cloud.firestore_v1.base_document highlighting module code... [ 46%] google.cloud.firestore_v1.base_query highlighting module code... [ 50%] google.cloud.firestore_v1.base_transaction highlighting module code... [ 53%] google.cloud.firestore_v1.batch highlighting module code... [ 57%] google.cloud.firestore_v1.client highlighting module code... [ 61%] google.cloud.firestore_v1.collection highlighting module code... [ 65%] google.cloud.firestore_v1.document highlighting module code... [ 69%] google.cloud.firestore_v1.field_path highlighting module code... [ 73%] google.cloud.firestore_v1.query highlighting module code... [ 76%] google.cloud.firestore_v1.transaction highlighting module code... [ 80%] google.cloud.firestore_v1.transforms highlighting module code... [ 84%] google.cloud.firestore_v1.types.common highlighting module code... [ 88%] google.cloud.firestore_v1.types.document highlighting module code... [ 92%] google.cloud.firestore_v1.types.firestore highlighting module code... [ 96%] google.cloud.firestore_v1.types.query highlighting module code... [100%] google.cloud.firestore_v1.types.write writing additional pages... search done copying static files... done copying extra files... done dumping search index in English (code: en)... done dumping object inventory... done build succeeded. The HTML pages are in docs/_build/html. nox > Session docs was successful. real 0m12.548s user 0m12.024s sys 0m0.354s ``` Given that the system tests take 3 - 4 minutes to run, ISTM it would be good to break them out into a separate Kokoro job, running in parallel with the other tests. This change will require updates to the google3 internal configuration for Kokoro, similar to those @tswast made to enable them for googleapis/python-bigtable#390.",1.0,"Split out system tests into separate Kokoro job - Working to reduce CI latency. Here are timings on my local machine (note the pre-run with `--install-only` to avoid measuring virtualenv creation time): ```bash $ for job in $(nox --list | grep ""^\*"" | cut -d "" "" -f 2); do echo $job; nox -e $job --install-only; time nox -re $job; done lint nox > Running session lint nox > Creating virtual environment (virtualenv) using python3.8 in .nox/lint nox > python -m pip install flake8 black==19.10b0 nox > Skipping black run, as --install-only is set. nox > Skipping flake8 run, as --install-only is set. nox > Session lint was successful. nox > Running session lint nox > Re-using existing virtual environment at .nox/lint. nox > python -m pip install flake8 black==19.10b0 nox > black --check docs google tests noxfile.py setup.py All done! ✨ 🍰 ✨ 109 files would be left unchanged. nox > flake8 google tests nox > Session lint was successful. real 0m3.902s user 0m16.218s sys 0m0.277s blacken nox > Running session blacken nox > Creating virtual environment (virtualenv) using python3.8 in .nox/blacken nox > python -m pip install black==19.10b0 nox > Skipping black run, as --install-only is set. nox > Session blacken was successful. nox > Running session blacken nox > Re-using existing virtual environment at .nox/blacken. 
nox > python -m pip install black==19.10b0 nox > black docs google tests noxfile.py setup.py All done! ✨ 🍰 ✨ 109 files left unchanged. nox > Session blacken was successful. real 0m1.007s user 0m0.884s sys 0m0.127s lint_setup_py nox > Running session lint_setup_py nox > Creating virtual environment (virtualenv) using python3.8 in .nox/lint_setup_py nox > python -m pip install docutils pygments nox > Skipping python run, as --install-only is set. nox > Session lint_setup_py was successful. nox > Running session lint_setup_py nox > Re-using existing virtual environment at .nox/lint_setup_py. nox > python -m pip install docutils pygments nox > python setup.py check --restructuredtext --strict running check nox > Session lint_setup_py was successful. real 0m1.067s user 0m0.946s sys 0m0.123s unit-3.6 nox > Running session unit-3.6 nox > Creating virtual environment (virtualenv) using python3.6 in .nox/unit-3-6 nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.6.txt nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.6.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.6.txt nox > Skipping py.test run, as --install-only is set. nox > Session unit-3.6 was successful. nox > Running session unit-3.6 nox > Re-using existing virtual environment at .nox/unit-3-6. nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.6.txt nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.6.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.6.txt nox > py.test --quiet --junitxml=unit_3.6_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit ........................................................................ [ 5%] ...............................................................s..s.ss.. [ 10%] ........................................................................ [ 15%] ........................................................................ [ 20%] ...................................................s..s.ss.............. [ 25%] ........................................................................ [ 30%] ........................................................................ [ 35%] ........................................................................ [ 40%] ........................................................................ [ 45%] ........................................................................ [ 50%] ........................................................................ [ 55%] ........................................................................ [ 60%] ........................................................................ [ 65%] ........................................................................ [ 70%] ............................................................ssssssssssss [ 75%] ssssssssssssssssssssssssssssssss........................................ [ 80%] ........................................................................ 
[ 85%] ........................................................................ [ 90%] ........................................................................ [ 95%] ........................................................... [100%] - generated xml file: /home/tseaver/projects/agendaless/Google/src/python-firestore/unit_3.6_sponge_log.xml - 1375 passed, 52 skipped in 14.10s nox > Session unit-3.6 was successful. real 0m18.388s user 0m17.654s sys 0m0.675s unit-3.7 nox > Running session unit-3.7 nox > Creating virtual environment (virtualenv) using python3.7 in .nox/unit-3-7 nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > Skipping py.test run, as --install-only is set. nox > Session unit-3.7 was successful. nox > Running session unit-3.7 nox > Re-using existing virtual environment at .nox/unit-3-7. nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > py.test --quiet --junitxml=unit_3.7_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit ........................................................................ [ 5%] ................................................................s..s..ss [ 10%] ........................................................................ [ 15%] ........................................................................ [ 20%] ....................................................s..s..ss............ [ 25%] ........................................................................ [ 30%] ........................................................................ [ 35%] ........................................................................ [ 40%] ........................................................................ [ 45%] ........................................................................ [ 50%] ........................................................................ [ 55%] ........................................................................ [ 60%] ........................................................................ [ 65%] ........................................................................ [ 70%] ............................................................ssssssssssss [ 75%] ssssssssssssssssssssssssssssssss........................................ [ 80%] ........................................................................ [ 85%] ........................................................................ [ 90%] ........................................................................ [ 95%] ........................................................... 
[100%] - generated xml file: /home/tseaver/projects/agendaless/Google/src/python-firestore/unit_3.7_sponge_log.xml - 1375 passed, 52 skipped in 14.09s nox > Session unit-3.7 was successful. real 0m17.930s user 0m17.185s sys 0m0.732s unit-3.8 nox > Running session unit-3.8 nox > Creating virtual environment (virtualenv) using python3.8 in .nox/unit-3-8 nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.8.txt nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.8.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.8.txt nox > Skipping py.test run, as --install-only is set. nox > Session unit-3.8 was successful. nox > Running session unit-3.8 nox > Re-using existing virtual environment at .nox/unit-3-8. nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.8.txt nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.8.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.8.txt nox > py.test --quiet --junitxml=unit_3.8_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit ........................................................................ [ 5%] ................................................................s..s..ss [ 10%] ........................................................................ [ 15%] ........................................................................ [ 20%] ....................................................s..s..ss............ [ 25%] ........................................................................ [ 30%] ........................................................................ [ 35%] ........................................................................ [ 40%] ........................................................................ [ 45%] ........................................................................ [ 50%] ........................................................................ [ 55%] ........................................................................ [ 60%] ........................................................................ [ 65%] ........................................................................ [ 70%] ............................................................ssssssssssss [ 75%] ssssssssssssssssssssssssssssssss........................................ [ 80%] ........................................................................ [ 85%] ........................................................................ [ 90%] ........................................................................ [ 95%] ........................................................... [100%] - generated xml file: /home/tseaver/projects/agendaless/Google/src/python-firestore/unit_3.8_sponge_log.xml - 1375 passed, 52 skipped in 13.40s nox > Session unit-3.8 was successful. 
real 0m17.162s user 0m16.517s sys 0m0.638s unit-3.9 nox > Running session unit-3.9 nox > Creating virtual environment (virtualenv) using python3.9 in .nox/unit-3-9 nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.9.txt nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.9.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.9.txt nox > Skipping py.test run, as --install-only is set. nox > Session unit-3.9 was successful. nox > Running session unit-3.9 nox > Re-using existing virtual environment at .nox/unit-3-9. nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.9.txt nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.9.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.9.txt nox > py.test --quiet --junitxml=unit_3.9_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit ........................................................................ [ 5%] ................................................................s..s..ss [ 10%] ........................................................................ [ 15%] ........................................................................ [ 20%] ....................................................s..s..ss............ [ 25%] ........................................................................ [ 30%] ........................................................................ [ 35%] ........................................................................ [ 40%] ........................................................................ [ 45%] ........................................................................ [ 50%] ........................................................................ [ 55%] ........................................................................ [ 60%] ........................................................................ [ 65%] ........................................................................ [ 70%] ............................................................ssssssssssss [ 75%] ssssssssssssssssssssssssssssssss........................................ [ 80%] ........................................................................ [ 85%] ........................................................................ [ 90%] ........................................................................ [ 95%] ........................................................... [100%] - generated xml file: /home/tseaver/projects/agendaless/Google/src/python-firestore/unit_3.9_sponge_log.xml - 1375 passed, 52 skipped in 15.70s nox > Session unit-3.9 was successful. 
real 0m19.250s user 0m18.510s sys 0m0.715s system-3.7 nox > Running session system-3.7 nox > Creating virtual environment (virtualenv) using python3.7 in .nox/system-3-7 nox > python -m pip install --pre grpcio nox > python -m pip install mock pytest google-cloud-testutils pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > Skipping py.test run, as --install-only is set. nox > Session system-3.7 was successful. nox > Running session system-3.7 nox > Re-using existing virtual environment at .nox/system-3-7. nox > python -m pip install --pre grpcio nox > python -m pip install mock pytest google-cloud-testutils pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt nox > py.test --verbose --junitxml=system_3.7_sponge_log.xml tests/system ============================= test session starts ============================== platform linux -- Python 3.7.6, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /home/tseaver/projects/agendaless/Google/src/python-firestore/.nox/system-3-7/bin/python cachedir: .pytest_cache rootdir: /home/tseaver/projects/agendaless/Google/src/python-firestore plugins: asyncio-0.15.1 collected 77 items tests/system/test_system.py::test_collections PASSED [ 1%] tests/system/test_system.py::test_collections_w_import PASSED [ 2%] tests/system/test_system.py::test_create_document PASSED [ 3%] tests/system/test_system.py::test_create_document_w_subcollection PASSED [ 5%] tests/system/test_system.py::test_cannot_use_foreign_key PASSED [ 6%] tests/system/test_system.py::test_no_document PASSED [ 7%] tests/system/test_system.py::test_document_set PASSED [ 9%] tests/system/test_system.py::test_document_integer_field PASSED [ 10%] tests/system/test_system.py::test_document_set_merge PASSED [ 11%] tests/system/test_system.py::test_document_set_w_int_field PASSED [ 12%] tests/system/test_system.py::test_document_update_w_int_field PASSED [ 14%] tests/system/test_system.py::test_update_document PASSED [ 15%] tests/system/test_system.py::test_document_get PASSED [ 16%] tests/system/test_system.py::test_document_delete PASSED [ 18%] tests/system/test_system.py::test_collection_add PASSED [ 19%] tests/system/test_system.py::test_query_stream_w_simple_field_eq_op PASSED [ 20%] tests/system/test_system.py::test_query_stream_w_simple_field_array_contains_op PASSED [ 22%] tests/system/test_system.py::test_query_stream_w_simple_field_in_op PASSED [ 23%] tests/system/test_system.py::test_query_stream_w_not_eq_op PASSED [ 24%] tests/system/test_system.py::test_query_stream_w_simple_not_in_op PASSED [ 25%] tests/system/test_system.py::test_query_stream_w_simple_field_array_contains_any_op PASSED [ 27%] tests/system/test_system.py::test_query_stream_w_order_by PASSED [ 28%] tests/system/test_system.py::test_query_stream_w_field_path PASSED [ 29%] tests/system/test_system.py::test_query_stream_w_start_end_cursor PASSED [ 31%] tests/system/test_system.py::test_query_stream_wo_results PASSED [ 32%] tests/system/test_system.py::test_query_stream_w_projection PASSED [ 33%] tests/system/test_system.py::test_query_stream_w_multiple_filters PASSED [ 35%] tests/system/test_system.py::test_query_stream_w_offset PASSED [ 36%] 
tests/system/test_system.py::test_query_with_order_dot_key PASSED [ 37%] tests/system/test_system.py::test_query_unary PASSED [ 38%] tests/system/test_system.py::test_collection_group_queries PASSED [ 40%] tests/system/test_system.py::test_collection_group_queries_startat_endat PASSED [ 41%] tests/system/test_system.py::test_collection_group_queries_filters PASSED [ 42%] tests/system/test_system.py::test_partition_query_no_partitions PASSED [ 44%] tests/system/test_system.py::test_partition_query PASSED [ 45%] tests/system/test_system.py::test_get_all PASSED [ 46%] tests/system/test_system.py::test_batch PASSED [ 48%] tests/system/test_system.py::test_watch_document PASSED [ 49%] tests/system/test_system.py::test_watch_collection PASSED [ 50%] tests/system/test_system.py::test_watch_query PASSED [ 51%] tests/system/test_system.py::test_array_union PASSED [ 53%] tests/system/test_system.py::test_watch_query_order PASSED [ 54%] tests/system/test_system_async.py::test_collections PASSED [ 55%] tests/system/test_system_async.py::test_collections_w_import PASSED [ 57%] tests/system/test_system_async.py::test_create_document PASSED [ 58%] tests/system/test_system_async.py::test_create_document_w_subcollection PASSED [ 59%] tests/system/test_system_async.py::test_cannot_use_foreign_key PASSED [ 61%] tests/system/test_system_async.py::test_no_document PASSED [ 62%] tests/system/test_system_async.py::test_document_set PASSED [ 63%] tests/system/test_system_async.py::test_document_integer_field PASSED [ 64%] tests/system/test_system_async.py::test_document_set_merge PASSED [ 66%] tests/system/test_system_async.py::test_document_set_w_int_field PASSED [ 67%] tests/system/test_system_async.py::test_document_update_w_int_field PASSED [ 68%] tests/system/test_system_async.py::test_update_document PASSED [ 70%] tests/system/test_system_async.py::test_document_get PASSED [ 71%] tests/system/test_system_async.py::test_document_delete PASSED [ 72%] tests/system/test_system_async.py::test_collection_add PASSED [ 74%] tests/system/test_system_async.py::test_query_stream_w_simple_field_eq_op PASSED [ 75%] tests/system/test_system_async.py::test_query_stream_w_simple_field_array_contains_op PASSED [ 76%] tests/system/test_system_async.py::test_query_stream_w_simple_field_in_op PASSED [ 77%] tests/system/test_system_async.py::test_query_stream_w_simple_field_array_contains_any_op PASSED [ 79%] tests/system/test_system_async.py::test_query_stream_w_order_by PASSED [ 80%] tests/system/test_system_async.py::test_query_stream_w_field_path PASSED [ 81%] tests/system/test_system_async.py::test_query_stream_w_start_end_cursor PASSED [ 83%] tests/system/test_system_async.py::test_query_stream_wo_results PASSED [ 84%] tests/system/test_system_async.py::test_query_stream_w_projection PASSED [ 85%] tests/system/test_system_async.py::test_query_stream_w_multiple_filters PASSED [ 87%] tests/system/test_system_async.py::test_query_stream_w_offset PASSED [ 88%] tests/system/test_system_async.py::test_query_with_order_dot_key PASSED [ 89%] tests/system/test_system_async.py::test_query_unary PASSED [ 90%] tests/system/test_system_async.py::test_collection_group_queries PASSED [ 92%] tests/system/test_system_async.py::test_collection_group_queries_startat_endat PASSED [ 93%] tests/system/test_system_async.py::test_collection_group_queries_filters PASSED [ 94%] tests/system/test_system_async.py::test_partition_query_no_partitions PASSED [ 96%] tests/system/test_system_async.py::test_partition_query PASSED [ 97%] 
tests/system/test_system_async.py::test_get_all PASSED [ 98%] tests/system/test_system_async.py::test_batch PASSED [100%] =================== 77 passed in 211.00s (0:03:31) =================== nox > Command py.test --verbose --junitxml=system_3.7_sponge_log.xml tests/system passed nox > Session system-3.7 was successful. real 3m34.561s user 0m11.371s sys 0m1.881s cover nox > Running session cover nox > Creating virtual environment (virtualenv) using python3.8 in .nox/cover nox > python -m pip install coverage pytest-cov nox > Skipping coverage run, as --install-only is set. nox > Skipping coverage run, as --install-only is set. nox > Session cover was successful. nox > Running session cover nox > Re-using existing virtual environment at .nox/cover. nox > python -m pip install coverage pytest-cov nox > coverage report --show-missing --fail-under=100 Name Stmts Miss Branch BrPart Cover Missing --------------------------------------------------------------------------------------------------------------------------------- google/cloud/firestore.py 35 0 0 0 100% google/cloud/firestore_admin_v1/__init__.py 23 0 0 0 100% google/cloud/firestore_admin_v1/services/__init__.py 0 0 0 0 100% google/cloud/firestore_admin_v1/services/firestore_admin/__init__.py 3 0 0 0 100% google/cloud/firestore_admin_v1/services/firestore_admin/async_client.py 168 0 38 0 100% google/cloud/firestore_admin_v1/services/firestore_admin/client.py 282 0 90 0 100% google/cloud/firestore_admin_v1/services/firestore_admin/pagers.py 82 0 20 0 100% google/cloud/firestore_admin_v1/services/firestore_admin/transports/__init__.py 9 0 0 0 100% google/cloud/firestore_admin_v1/services/firestore_admin/transports/base.py 72 0 12 0 100% google/cloud/firestore_admin_v1/services/firestore_admin/transports/grpc.py 100 0 34 0 100% google/cloud/firestore_admin_v1/services/firestore_admin/transports/grpc_asyncio.py 103 0 34 0 100% google/cloud/firestore_admin_v1/types/__init__.py 6 0 0 0 100% google/cloud/firestore_admin_v1/types/field.py 12 0 0 0 100% google/cloud/firestore_admin_v1/types/firestore_admin.py 48 0 0 0 100% google/cloud/firestore_admin_v1/types/index.py 28 0 0 0 100% google/cloud/firestore_admin_v1/types/location.py 4 0 0 0 100% google/cloud/firestore_admin_v1/types/operation.py 57 0 0 0 100% google/cloud/firestore_bundle/__init__.py 7 0 0 0 100% google/cloud/firestore_bundle/_helpers.py 4 0 0 0 100% google/cloud/firestore_bundle/bundle.py 111 0 32 0 100% google/cloud/firestore_bundle/services/__init__.py 0 0 0 0 100% google/cloud/firestore_bundle/types/__init__.py 2 0 0 0 100% google/cloud/firestore_bundle/types/bundle.py 33 0 0 0 100% google/cloud/firestore_v1/__init__.py 37 0 0 0 100% google/cloud/firestore_v1/_helpers.py 478 0 240 0 100% google/cloud/firestore_v1/async_batch.py 19 0 2 0 100% google/cloud/firestore_v1/async_client.py 41 0 4 0 100% google/cloud/firestore_v1/async_collection.py 32 0 4 0 100% google/cloud/firestore_v1/async_document.py 44 0 4 0 100% google/cloud/firestore_v1/async_query.py 45 0 16 0 100% google/cloud/firestore_v1/async_transaction.py 98 0 22 0 100% google/cloud/firestore_v1/base_batch.py 32 0 4 0 100% google/cloud/firestore_v1/base_client.py 151 0 42 0 100% google/cloud/firestore_v1/base_collection.py 101 0 16 0 100% google/cloud/firestore_v1/base_document.py 145 0 24 0 100% google/cloud/firestore_v1/base_query.py 331 0 130 0 100% google/cloud/firestore_v1/base_transaction.py 65 0 6 0 100% google/cloud/firestore_v1/batch.py 19 0 2 0 100% google/cloud/firestore_v1/client.py 42 0 4 0 100% 
google/cloud/firestore_v1/collection.py 30 0 2 0 100% google/cloud/firestore_v1/document.py 48 0 4 0 100% google/cloud/firestore_v1/field_path.py 135 0 56 0 100% google/cloud/firestore_v1/order.py 130 0 54 0 100% google/cloud/firestore_v1/query.py 47 0 14 0 100% google/cloud/firestore_v1/services/__init__.py 0 0 0 0 100% google/cloud/firestore_v1/services/firestore/__init__.py 3 0 0 0 100% google/cloud/firestore_v1/services/firestore/async_client.py 178 0 30 0 100% google/cloud/firestore_v1/services/firestore/client.py 276 0 90 0 100% google/cloud/firestore_v1/services/firestore/pagers.py 121 0 30 0 100% google/cloud/firestore_v1/services/firestore/transports/__init__.py 9 0 0 0 100% google/cloud/firestore_v1/services/firestore/transports/base.py 80 0 12 0 100% google/cloud/firestore_v1/services/firestore/transports/grpc.py 122 0 44 0 100% google/cloud/firestore_v1/services/firestore/transports/grpc_asyncio.py 125 0 44 0 100% google/cloud/firestore_v1/transaction.py 97 0 22 0 100% google/cloud/firestore_v1/transforms.py 39 0 10 0 100% google/cloud/firestore_v1/types/__init__.py 6 0 0 0 100% google/cloud/firestore_v1/types/common.py 16 0 0 0 100% google/cloud/firestore_v1/types/document.py 27 0 0 0 100% google/cloud/firestore_v1/types/firestore.py 157 0 0 0 100% google/cloud/firestore_v1/types/query.py 66 0 0 0 100% google/cloud/firestore_v1/types/write.py 45 0 0 0 100% google/cloud/firestore_v1/watch.py 325 0 78 0 100% tests/unit/__init__.py 0 0 0 0 100% tests/unit/test_firestore_shim.py 10 0 2 0 100% tests/unit/v1/__init__.py 0 0 0 0 100% tests/unit/v1/_test_helpers.py 22 0 0 0 100% tests/unit/v1/conformance_tests.py 106 0 0 0 100% tests/unit/v1/test__helpers.py 1653 0 36 0 100% tests/unit/v1/test_async_batch.py 98 0 0 0 100% tests/unit/v1/test_async_client.py 267 0 18 0 100% tests/unit/v1/test_async_collection.py 223 0 20 0 100% tests/unit/v1/test_async_document.py 334 0 32 0 100% tests/unit/v1/test_async_query.py 327 0 26 0 100% tests/unit/v1/test_async_transaction.py 584 0 0 0 100% tests/unit/v1/test_base_batch.py 98 0 0 0 100% tests/unit/v1/test_base_client.py 238 0 0 0 100% tests/unit/v1/test_base_collection.py 239 0 0 0 100% tests/unit/v1/test_base_document.py 293 0 2 0 100% tests/unit/v1/test_base_query.py 1006 0 20 0 100% tests/unit/v1/test_base_transaction.py 75 0 0 0 100% tests/unit/v1/test_batch.py 92 0 0 0 100% tests/unit/v1/test_bundle.py 268 0 4 0 100% tests/unit/v1/test_client.py 256 0 12 0 100% tests/unit/v1/test_collection.py 197 0 10 0 100% tests/unit/v1/test_cross_language.py 207 0 82 0 100% tests/unit/v1/test_document.py 307 0 26 0 100% tests/unit/v1/test_field_path.py 355 0 8 0 100% tests/unit/v1/test_order.py 138 0 8 0 100% tests/unit/v1/test_query.py 318 0 0 0 100% tests/unit/v1/test_transaction.py 560 0 0 0 100% tests/unit/v1/test_transforms.py 78 0 8 0 100% tests/unit/v1/test_watch.py 667 0 4 0 100% --------------------------------------------------------------------------------------------------------------------------------- TOTAL 13967 0 1588 0 100% nox > coverage erase nox > Session cover was successful. real 0m3.581s user 0m3.419s sys 0m0.163s docs nox > Running session docs nox > Creating virtual environment (virtualenv) using python3.8 in .nox/docs nox > python -m pip install -e . nox > python -m pip install sphinx==4.0.1 alabaster recommonmark nox > Skipping sphinx-build run, as --install-only is set. nox > Session docs was successful. nox > Running session docs nox > Re-using existing virtual environment at .nox/docs. nox > python -m pip install -e . 
nox > python -m pip install sphinx==4.0.1 alabaster recommonmark nox > sphinx-build -W -T -N -b html -d docs/_build/doctrees/ docs/ docs/_build/html/ Running Sphinx v4.0.1 making output directory... done [autosummary] generating autosummary for: README.rst, UPGRADING.md, admin_client.rst, batch.rst, changelog.md, client.rst, collection.rst, document.rst, field_path.rst, index.rst, multiprocessing.rst, query.rst, transaction.rst, transforms.rst, types.rst loading intersphinx inventory from https://python.readthedocs.org/en/latest/objects.inv... loading intersphinx inventory from https://googleapis.dev/python/google-auth/latest/objects.inv... loading intersphinx inventory from https://googleapis.dev/python/google-api-core/latest/objects.inv... loading intersphinx inventory from https://grpc.github.io/grpc/python/objects.inv... loading intersphinx inventory from https://proto-plus-python.readthedocs.io/en/latest/objects.inv... loading intersphinx inventory from https://googleapis.dev/python/protobuf/latest/objects.inv... intersphinx inventory has moved: https://python.readthedocs.org/en/latest/objects.inv -> https://python.readthedocs.io/en/latest/objects.inv building [mo]: targets for 0 po files that are out of date building [html]: targets for 15 source files that are out of date updating environment: [new config] 15 added, 0 changed, 0 removed reading sources... [ 6%] README reading sources... [ 13%] UPGRADING /home/tseaver/projects/agendaless/Google/src/python-firestore/.nox/docs/lib/python3.8/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document warn(""Container node skipped: type={0}"".format(mdnode.t)) reading sources... [ 20%] admin_client reading sources... [ 26%] batch reading sources... [ 33%] changelog /home/tseaver/projects/agendaless/Google/src/python-firestore/.nox/docs/lib/python3.8/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document warn(""Container node skipped: type={0}"".format(mdnode.t)) reading sources... [ 40%] client reading sources... [ 46%] collection reading sources... [ 53%] document reading sources... [ 60%] field_path reading sources... [ 66%] index reading sources... [ 73%] multiprocessing reading sources... [ 80%] query reading sources... [ 86%] transaction reading sources... [ 93%] transforms reading sources... [100%] types looking for now-outdated files... none found pickling environment... done checking consistency... done preparing documents... done writing output... [ 6%] README writing output... [ 13%] UPGRADING writing output... [ 20%] admin_client writing output... [ 26%] batch writing output... [ 33%] changelog writing output... [ 40%] client writing output... [ 46%] collection writing output... [ 53%] document writing output... [ 60%] field_path writing output... [ 66%] index writing output... [ 73%] multiprocessing writing output... [ 80%] query writing output... [ 86%] transaction writing output... [ 93%] transforms writing output... [100%] types generating indices... genindex py-modindex done highlighting module code... [ 3%] google.cloud.firestore_admin_v1.services.firestore_admin.client highlighting module code... [ 7%] google.cloud.firestore_v1.async_batch highlighting module code... [ 11%] google.cloud.firestore_v1.async_client highlighting module code... [ 15%] google.cloud.firestore_v1.async_collection highlighting module code... [ 19%] google.cloud.firestore_v1.async_document highlighting module code... 
[ 23%] google.cloud.firestore_v1.async_query highlighting module code... [ 26%] google.cloud.firestore_v1.async_transaction highlighting module code... [ 30%] google.cloud.firestore_v1.base_batch highlighting module code... [ 34%] google.cloud.firestore_v1.base_client highlighting module code... [ 38%] google.cloud.firestore_v1.base_collection highlighting module code... [ 42%] google.cloud.firestore_v1.base_document highlighting module code... [ 46%] google.cloud.firestore_v1.base_query highlighting module code... [ 50%] google.cloud.firestore_v1.base_transaction highlighting module code... [ 53%] google.cloud.firestore_v1.batch highlighting module code... [ 57%] google.cloud.firestore_v1.client highlighting module code... [ 61%] google.cloud.firestore_v1.collection highlighting module code... [ 65%] google.cloud.firestore_v1.document highlighting module code... [ 69%] google.cloud.firestore_v1.field_path highlighting module code... [ 73%] google.cloud.firestore_v1.query highlighting module code... [ 76%] google.cloud.firestore_v1.transaction highlighting module code... [ 80%] google.cloud.firestore_v1.transforms highlighting module code... [ 84%] google.cloud.firestore_v1.types.common highlighting module code... [ 88%] google.cloud.firestore_v1.types.document highlighting module code... [ 92%] google.cloud.firestore_v1.types.firestore highlighting module code... [ 96%] google.cloud.firestore_v1.types.query highlighting module code... [100%] google.cloud.firestore_v1.types.write writing additional pages... search done copying static files... done copying extra files... done dumping search index in English (code: en)... done dumping object inventory... done build succeeded. The HTML pages are in docs/_build/html. nox > Session docs was successful. real 0m12.548s user 0m12.024s sys 0m0.354s
```

Given that the system tests take 3 - 4 minutes to run, ISTM it would be good to break them out into a separate Kokoro job, running in parallel with the other tests.
This change will require updates to the google3 internal configuration for Kokoro, similar to those @tswast made to enable them for googleapis/python-bigtable#390.",0
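
In nox terms, the split proposed in this report could look roughly like the sketch below. This is a minimal illustration only: the session names and dependency lists are assumptions inferred from the log output above, not the repository's actual noxfile.py. A CI system such as Kokoro can then schedule the two sessions as independent, parallel jobs.

```python
# noxfile.py (sketch; session contents are assumptions inferred from the logs above)
import nox

@nox.session(python=["3.7", "3.8", "3.9"])
def unit(session):
    """Fast unit tests (under 20s per interpreter): keep in the default presubmit job."""
    session.install("mock", "pytest", "pytest-cov", "aiounittest")
    session.install("-e", ".")
    session.run("py.test", "--quiet", "tests/unit")

@nox.session(python="3.7")
def system(session):
    """Slow system tests (3-4 minutes): a candidate for a separate, parallel CI job."""
    session.install("mock", "pytest", "google-cloud-testutils", "pytest-asyncio")
    session.install("-e", ".")
    session.run("py.test", "--verbose", "tests/system")
```
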
3836,14674163917.0,IssuesEvent,2020-12-30 14:44:58,z0ph/status,https://api.github.com/repos/z0ph/status,closed,🛑 Home Automation is down,home-automation status,"In [`6277ed7`](https://github.com/z0ph/status/commit/6277ed7dc02f689ab1eeb45b909d514187a126ef), Home Automation ($HOME_AUTOMATION) was **down**: - HTTP code: 0 - Response time: 0 ms",1.0
63701,14656763821.0,IssuesEvent,2020-12-28 14:08:44,fu1771695yongxie/vue-router,https://api.github.com/repos/fu1771695yongxie/vue-router,opened,CVE-2019-14863 (Medium) detected in angular-1.4.2.min.js,security vulnerability,"## CVE-2019-14863 - Medium Severity Vulnerability
Vulnerable Library - angular-1.4.2.min.js

AngularJS is an MVC framework for building web applications. The core features include HTML enhanced with custom component and data-binding capabilities, dependency injection and strong focus on simplicity, testability, maintainability and boiler-plate reduction.

Library home page: https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.2/angular.min.js

Path to dependency file: vue-router/node_modules/autocomplete.js/test/playground_angular.html

Path to vulnerable library: vue-router/node_modules/autocomplete.js/test/playground_angular.html,vue-router/node_modules/autocomplete.js/examples/basic_angular.html

Dependency Hierarchy: - :x: **angular-1.4.2.min.js** (Vulnerable Library)

Found in HEAD commit: cfc84cbc0d997f1a421f1406c99645b1f6405595

Found in base branch: dev

Vulnerability Details

There is a vulnerability in all angular versions before 1.5.0-beta.0, where after escaping the context of the web application, the web application delivers data to its users along with other trusted dynamic content, without validating it.

Publish Date: 2020-01-02

URL: CVE-2019-14863

CVSS 3 Score Details (6.1)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Changed
- Impact Metrics:
  - Confidentiality Impact: Low
  - Integrity Impact: Low
  - Availability Impact: None

For more information on CVSS3 Scores, click here.
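
The 6.1 base score can be reproduced from the vector above (AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N). Below is a minimal Python sketch of the arithmetic, using the weights published in the CVSS v3.1 specification; the final rounding is simplified here to a plain ceiling to one decimal place:

```python
import math

# CVSS v3.1 weights for AV:N / AC:L / PR:N / UI:R
av, ac, pr, ui = 0.85, 0.77, 0.85, 0.62
# Weights for C:L / I:L / A:N
c, i, a = 0.22, 0.22, 0.0

iss = 1 - (1 - c) * (1 - i) * (1 - a)                      # 0.3916
impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15  # scope changed: ~2.727
exploitability = 8.22 * av * ac * pr * ui                  # ~2.835

base = min(1.08 * (impact + exploitability), 10)           # scope-changed formula
print(math.ceil(base * 10) / 10)                           # prints 6.1
```
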

Suggested Fix

Type: Upgrade version

Origin: https://github.com/angular/angular.js/pull/12524

Release Date: 2020-01-02

Fix Resolution: angular - v1.5.0-beta.1;org.webjars:angularjs:1.5.0-rc.0

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True
102015,4149722155.0,IssuesEvent,2016-06-15 15:14:40,ArdaCraft/IssueTracker,https://api.github.com/repos/ArdaCraft/IssueTracker,closed,No Physics,high priority plugin,A plugin to prevent vanilla physics/block updates so that blocks can be placed in special ways (such as floating torches),1.0
182312,14114604065.0,IssuesEvent,2020-11-07 16:46:29,compare-ci/admin,https://api.github.com/repos/compare-ci/admin,closed,Automated test 1604767395.825574,Test,"This is a tracking issue for the automated tests being run.

Test id: `automated-test-1604767395.825574`

|[python-sum](https://github.com/compare-ci/python-sum/pull/1188)|Pull Created|Check Start|Check End|Total|Check|
|-|-|-|-|-|-|
|CircleCI Checks|16:43:20|16:43:21|16:43:30|0:00:10|0:00:09|
|Travis CI|16:43:20|16:43:42|16:44:00|0:00:40|0:00:18|
|Azure Pipelines|16:43:20|16:45:54|16:46:03|0:02:43|0:00:09|

|[node-sum](https://github.com/compare-ci/node-sum/pull/1169)|Pull Created|Check Start|Check End|Total|Check|
|-|-|-|-|-|-|
|CircleCI Checks|16:43:25|16:43:26|16:44:26|0:01:01|0:01:00|
|Travis CI|16:43:25|16:43:45|16:44:27|0:01:02|0:00:42|
|GitHub Actions|16:43:25|16:43:38|16:43:58|0:00:33|0:00:20|
|Azure Pipelines|16:43:25|16:45:47|16:46:11|0:02:46|0:00:24|
",1.0
3274,13309679472.0,IssuesEvent,2020-08-26 04:45:37,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,insufficient privileges issue from powershell workflow runbook.,Pri2 automation/svc cxp product-question shared-capabilities/subsvc triaged,"The article does not describe how to use Azure AD registered applications from Azure PowerShell workflow runbooks when facing insufficient privileges. Connecting to Azure Active Directory with the Connect-AzureAD cmdlet works, but the subsequent calls then fail with an insufficient privileges error.

---
#### Document Details

⚠ *Do not edit this section.
It is required for docs.microsoft.com ➟ GitHub issue linking.*

* ID: 56e2500f-e1f5-bc87-6e5c-f41b59265049
* Version Independent ID: d212be48-7d05-847d-3045-cea82e6ba603
* Content: [Manage an Azure Automation Run As account](https://docs.microsoft.com/en-us/azure/automation/manage-runas-account)
* Content Source: [articles/automation/manage-runas-account.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/manage-runas-account.md)
* Service: **automation**
* Sub-service: **shared-capabilities**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**",1.0
51190,21582629884.0,IssuesEvent,2022-05-02 20:30:06,Azure/azure-powershell,https://api.github.com/repos/Azure/azure-powershell,closed,"Add-AzVHD - Error with path with ampersand (&), even if properly quoted",Compute Service Attention bug question customer-reported,"### Description

`Add-AzVHD` will fail at the MD5 calculation step if the path to the VHD contains an ampersand, which is a legal path character on Windows. The problem is here: https://github.com/Azure/azure-powershell/blob/ebc4710853f09e2e16d33ff479fd2ee9b45a8156/src/Compute/Compute/Models/PSSyncOutputEvents.cs#L108-L116

I'm not sure why it's doing this instead of just calling [WriteProgress](https://docs.microsoft.com/en-us/dotnet/api/system.management.automation.cmdlet.writeprogress), `WriteInformation`, or similar. There seems to be a lot of what is effectively ""eval""ing in this file and it should probably all be audited for injection vulnerabilities.

### Environment data

```PowerShell
Name                      Value
----                      -----
PSVersion                 5.1.20348.320
PSEdition                 Desktop
PSCompatibleVersions      {1.0, 2.0, 3.0, 4.0...}
BuildVersion              10.0.20348.320
CLRVersion                4.0.30319.42000
WSManStackVersion         3.0
PSRemotingProtocolVersion 2.3
SerializationVersion      1.1.0.1
```

### Module versions

```PowerShell
ModuleType Version Name
---------- ------- ----
Script     2.7.1   Az.Accounts
Script     4.23.0  Az.Compute
```

### Error output

```PowerShell
Message        : At line:1 char:79
                 + ... sh is being calculated for the file 'E:\my vms & more ...
                 +                                                           ~
                 The ampersand (&) character is not allowed. The & operator is
                 reserved for future use; wrap an ampersand in double quotation
                 marks (""&"") to pass it as part of a string.
StackTrace     : at System.Management.Automation.Runspaces.PipelineBase.Invoke(IEnumerable input)
                 at System.Management.Automation.PowerShell.Worker.ConstructPipelineAndDoWork(Runspace rs, Boolean performSyncInvoke)
                 at System.Management.Automation.PowerShell.Worker.CreateRunspaceIfNeededAndDoWork(Runspace rsToUse, Boolean isSync)
                 at System.Management.Automation.PowerShell.CoreInvokeHelper[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output, PSInvocationSettings settings)
                 at System.Management.Automation.PowerShell.CoreInvoke[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output, PSInvocationSettings settings)
                 at System.Management.Automation.PowerShell.Invoke(IEnumerable input, PSInvocationSettings settings)
                 at Microsoft.Azure.Commands.Compute.Models.PSSyncOutputEvents.LogMessage(String format, Object[] parameters)
                 at Microsoft.WindowsAzure.Commands.Sync.Upload.FileMetaData.CalculateMd5Hash(Stream stream, String filePath)
                 at Microsoft.WindowsAzure.Commands.Sync.Upload.FileMetaData.Create(String filePath)
                 at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.get_OperationMetaData()
                 at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.get_MD5HashOfLocalVhd()
                 at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.Create(FileInfo localVhd, PSPageBlobClient pageblob, Boolean overWrite)
                 at Microsoft.Azure.Commands.Compute.StorageServices.AddAzureVhdCommand.b__51_0()
                 at Microsoft.Azure.Commands.Compute.ComputeClientBaseCmdlet.ExecuteClientAction(Action action)
                 at Microsoft.WindowsAzure.Commands.Utilities.Common.AzurePSCmdlet.ProcessRecord()
Exception      : System.Management.Automation.ParseException
InvocationInfo : {Add-AzVhd}
Position       : At line:2 char:1
                 + Add-AzVhd -LocalFilePath ""E:\my vms & more\disk ...
                 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
HistoryId      : 20
Message        : At line:1 char:79
                 + ... sh is being calculated for the file 'E:\my vms & more...
                 +                                                           ~
                 The ampersand (&) character is not allowed. The & operator is
                 reserved for future use; wrap an ampersand in double quotation
                 marks (""&"") to pass it as part of a string.
StackTrace : at System.Management.Automation.Runspaces.PipelineBase.Invoke(IEnumerable input) at System.Management.Automation.PowerShell.Worker.ConstructPipelineAndDoWork(Runspace rs, Boolean performSyncInvoke) at System.Management.Automation.PowerShell.Worker.CreateRunspaceIfNeededAndDoWork(Runspace rsToUse, Boolean isSync) at System.Management.Automation.PowerShell.CoreInvokeHelper[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output, PSInvocationSettings settings) at System.Management.Automation.PowerShell.CoreInvoke[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output, PSInvocationSettings settings) at System.Management.Automation.PowerShell.Invoke(IEnumerable input, PSInvocationSettings settings) at Microsoft.Azure.Commands.Compute.Models.PSSyncOutputEvents.LogMessage(String format, Object[] parameters) at Microsoft.WindowsAzure.Commands.Sync.Upload.FileMetaData.CalculateMd5Hash(Stream stream, String filePath) at Microsoft.WindowsAzure.Commands.Sync.Upload.FileMetaData.Create(String filePath) at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.get_OperationMetaData() at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.get_MD5HashOfLocalVhd() at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.Create(FileInfo localVhd, PSPageBlobClient pageblob, Boolean overWrite) at Microsoft.Azure.Commands.Compute.StorageServices.AddAzureVhdCommand.b__51_0() at Microsoft.Azure.Commands.Compute.ComputeClientBaseCmdlet.ExecuteClientAction(Action action) at Microsoft.WindowsAzure.Commands.Utilities.Common.AzurePSCmdlet.ProcessRecord() Exception : System.Management.Automation.ParseException InvocationInfo : {Add-AzVhd} Line : Add-AzVhd -LocalFilePath ""E:\my vms & more\disk-0.vhdx"" -ResourceGroupName RG-PrintScan -Location australiasoutheast -DiskName dsubmel753_os -DiskSku StandardSSD_LRS ``` ",1.0,"Add-AzVHD - Error with path with ampersand (&), even if properly quoted - ### Description `Add-AzVHD` will fail at the MD5 calculation step if the path to the VHD contains an ampersand, which is a legal path character on Windows. The problem is here: https://github.com/Azure/azure-powershell/blob/ebc4710853f09e2e16d33ff479fd2ee9b45a8156/src/Compute/Compute/Models/PSSyncOutputEvents.cs#L108-L116 I'm not sure why it's doing this instead of just calling [WriteProgress](https://docs.microsoft.com/en-us/dotnet/api/system.management.automation.cmdlet.writeprogress), `WriteInformation`, or similar. There seems to be a lot of what is effectively ""eval""ing in this file and it should probably all be audited for injection vulnerabilities. ### Environment data ```PowerShell Name Value ---- ----- PSVersion 5.1.20348.320 PSEdition Desktop PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...} BuildVersion 10.0.20348.320 CLRVersion 4.0.30319.42000 WSManStackVersion 3.0 PSRemotingProtocolVersion 2.3 SerializationVersion 1.1.0.1 ``` ### Module versions ```PowerShell ModuleType Version Name ---------- ------- ---- Script 2.7.1 Az.Accounts Script 4.23.0 Az.Compute ``` ### Error output ```PowerShell Message : At line:1 char:79 + ... sh is being calculated for the file 'E:\my vms & more ... + ~ The ampersand (&) character is not allowed. The & operator is reserved for future use; wrap an ampersand in double quotation marks (""&"") to pass it as part of a string. 
StackTrace : at System.Management.Automation.Runspaces.PipelineBase.Invoke(IEnumerable input) at System.Management.Automation.PowerShell.Worker.ConstructPipelineAndDoWork(Runspace rs, Boolean performSyncInvoke) at System.Management.Automation.PowerShell.Worker.CreateRunspaceIfNeededAndDoWork(Runspace rsToUse, Boolean isSync) at System.Management.Automation.PowerShell.CoreInvokeHelper[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output, PSInvocationSettings settings) at System.Management.Automation.PowerShell.CoreInvoke[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output, PSInvocationSettings settings) at System.Management.Automation.PowerShell.Invoke(IEnumerable input, PSInvocationSettings settings) at Microsoft.Azure.Commands.Compute.Models.PSSyncOutputEvents.LogMessage(String format, Object[] parameters) at Microsoft.WindowsAzure.Commands.Sync.Upload.FileMetaData.CalculateMd5Hash(Stream stream, String filePath) at Microsoft.WindowsAzure.Commands.Sync.Upload.FileMetaData.Create(String filePath) at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.get_OperationMetaData() at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.get_MD5HashOfLocalVhd() at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.Create(FileInfo localVhd, PSPageBlobClient pageblob, Boolean overWrite) at Microsoft.Azure.Commands.Compute.StorageServices.AddAzureVhdCommand.b__51_0() at Microsoft.Azure.Commands.Compute.ComputeClientBaseCmdlet.ExecuteClientAction(Action action) at Microsoft.WindowsAzure.Commands.Utilities.Common.AzurePSCmdlet.ProcessRecord() Exception : System.Management.Automation.ParseException InvocationInfo : {Add-AzVhd} Position : At line:2 char:1 + Add-AzVhd -LocalFilePath ""E:\my vms & more\disk ... + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ HistoryId : 20 Message : At line:1 char:79 + ... sh is being calculated for the file 'E:\my vms & more... + ~ The ampersand (&) character is not allowed. The & operator is reserved for future use; wrap an ampersand in double quotation marks (""&"") to pass it as part of a string. 
StackTrace : at System.Management.Automation.Runspaces.PipelineBase.Invoke(IEnumerable input) at System.Management.Automation.PowerShell.Worker.ConstructPipelineAndDoWork(Runspace rs, Boolean performSyncInvoke) at System.Management.Automation.PowerShell.Worker.CreateRunspaceIfNeededAndDoWork(Runspace rsToUse, Boolean isSync) at System.Management.Automation.PowerShell.CoreInvokeHelper[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output, PSInvocationSettings settings) at System.Management.Automation.PowerShell.CoreInvoke[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output, PSInvocationSettings settings) at System.Management.Automation.PowerShell.Invoke(IEnumerable input, PSInvocationSettings settings) at Microsoft.Azure.Commands.Compute.Models.PSSyncOutputEvents.LogMessage(String format, Object[] parameters) at Microsoft.WindowsAzure.Commands.Sync.Upload.FileMetaData.CalculateMd5Hash(Stream stream, String filePath) at Microsoft.WindowsAzure.Commands.Sync.Upload.FileMetaData.Create(String filePath) at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.get_OperationMetaData() at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.get_MD5HashOfLocalVhd() at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.Create(FileInfo localVhd, PSPageBlobClient pageblob, Boolean overWrite) at Microsoft.Azure.Commands.Compute.StorageServices.AddAzureVhdCommand.b__51_0() at Microsoft.Azure.Commands.Compute.ComputeClientBaseCmdlet.ExecuteClientAction(Action action) at Microsoft.WindowsAzure.Commands.Utilities.Common.AzurePSCmdlet.ProcessRecord() Exception : System.Management.Automation.ParseException InvocationInfo : {Add-AzVhd} Line : Add-AzVhd -LocalFilePath ""E:\my vms & more\disk-0.vhdx"" -ResourceGroupName RG-PrintScan -Location australiasoutheast -DiskName dsubmel753_os -DiskSku StandardSSD_LRS ``` ",0,add azvhd error with path with ampersand even if properly quoted description add azvhd will fail at the calculation step if the path to the vhd contains an ampersand which is a legal path character on windows the problem is here i m not sure why it s doing this instead of just calling writeinformation or similar there seems to be a lot of what is effectively eval ing in this file and it should probably all be audited for injection vulnerabilities environment data powershell name value psversion psedition desktop pscompatibleversions buildversion clrversion wsmanstackversion psremotingprotocolversion serializationversion module versions powershell moduletype version name script az accounts script az compute error output powershell message at line char sh is being calculated for the file e my vms more the ampersand character is not allowed the operator is reserved for future use wrap an ampersand in double quotation marks to pass it as part of a string stacktrace at system management automation runspaces pipelinebase invoke ienumerable input at system management automation powershell worker constructpipelineanddowork runspace rs boolean performsyncinvoke at system management automation powershell worker createrunspaceifneededanddowork runspace rstouse boolean issync at system management automation powershell coreinvokehelper psdatacollection input psdatacollection output psinvocationsettings settings at system management automation powershell coreinvoke psdatacollection input psdatacollection output psinvocationsettings settings at system management automation powershell invoke ienumerable input psinvocationsettings settings at microsoft 
azure commands compute models pssyncoutputevents logmessage string format object parameters at microsoft windowsazure commands sync upload filemetadata stream stream string filepath at microsoft windowsazure commands sync upload filemetadata create string filepath at microsoft azure commands compute sync upload diskuploadcreator get operationmetadata at microsoft azure commands compute sync upload diskuploadcreator get at microsoft azure commands compute sync upload diskuploadcreator create fileinfo localvhd pspageblobclient pageblob boolean overwrite at microsoft azure commands compute storageservices addazurevhdcommand b at microsoft azure commands compute computeclientbasecmdlet executeclientaction action action at microsoft windowsazure commands utilities common azurepscmdlet processrecord exception system management automation parseexception invocationinfo add azvhd position at line char add azvhd localfilepath e my vms more disk historyid message at line char sh is being calculated for the file e my vms more the ampersand character is not allowed the operator is reserved for future use wrap an ampersand in double quotation marks to pass it as part of a string stacktrace at system management automation runspaces pipelinebase invoke ienumerable input at system management automation powershell worker constructpipelineanddowork runspace rs boolean performsyncinvoke at system management automation powershell worker createrunspaceifneededanddowork runspace rstouse boolean issync at system management automation powershell coreinvokehelper psdatacollection input psdatacollection output psinvocationsettings settings at system management automation powershell coreinvoke psdatacollection input psdatacollection output psinvocationsettings settings at system management automation powershell invoke ienumerable input psinvocationsettings settings at microsoft azure commands compute models pssyncoutputevents logmessage string format object parameters at microsoft windowsazure commands sync upload filemetadata stream stream string filepath at microsoft windowsazure commands sync upload filemetadata create string filepath at microsoft azure commands compute sync upload diskuploadcreator get operationmetadata at microsoft azure commands compute sync upload diskuploadcreator get at microsoft azure commands compute sync upload diskuploadcreator create fileinfo localvhd pspageblobclient pageblob boolean overwrite at microsoft azure commands compute storageservices addazurevhdcommand b at microsoft azure commands compute computeclientbasecmdlet executeclientaction action action at microsoft windowsazure commands utilities common azurepscmdlet processrecord exception system management automation parseexception invocationinfo add azvhd line add azvhd localfilepath e my vms more disk vhdx resourcegroupname rg printscan location australiasoutheast diskname os disksku standardssd lrs ,0 2440,11962793398.0,IssuesEvent,2020-04-05 13:45:08,BuildingCityDashboards/bcd-dd-v2.1,https://api.github.com/repos/BuildingCityDashboards/bcd-dd-v2.1,opened,Add blob storage,automation enhancement,"Move static data documents to blob storage e.g. all csvs and json. This should be linked to a DB that serves as a lookup. Refs can be changed to reflect latest version of the document e.g. with update figures. Need to define which data will be stored (S3 is cheap, so don't be conservative) and which will remain as a call to an external API. Realtime data being archived should also be persisted to blob storage as a source of truth. 
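For illustration only, a minimal sketch of the upload step under the assumption that Azure blob storage is picked (S3 would be analogous); the account, container and versioned blob names below are placeholders: ```PowerShell # Hedged sketch: push one static document to blob storage with the Az.Storage module. # All names are placeholders; a versioned blob path keeps older refs resolvable while the DB lookup points consumers at the latest version. $ctx = New-AzStorageContext -StorageAccountName 'bcdstatic' -UseConnectedAccount Set-AzStorageBlobContent -File 'data/traffic.csv' -Container 'static-data' -Blob 'traffic/v1/traffic.csv' -Context $ctx -Force ```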
Related #126 #638 #209 #207 ",1.0,"Add blob storage - Move static data documents to blob storage e.g. all csvs and json. This should be linked to a DB that serves as a lookup. Refs can be changed to reflect latest version of the document e.g. with update figures. Need to define which data will be stored (S3 is cheap, so don't be conservative) and which will remain as a call to an external API. Realtime data being archived should also be persisted to blob storage as a source of truth. Related #126 #638 #209 #207 ",1,add blob storage move static data documents to blob storage e g all csvs and json this should be linked to a db that serves as a lookup refs can be changed to reflect latest version of the document e g with update figures need to define which data will be stored is cheap so don t be conservative and which will remain as a call to an external api realtime data being archived should also be persisted to blob storage as a source of truth related ,1 106678,23265828431.0,IssuesEvent,2022-08-04 17:16:29,objectos/objectos,https://api.github.com/repos/objectos/objectos,reopened,AsciiDoc: support inline macros,t:feature c:code a:objectos-asciidoc,"## Test cases - [x] tc01: well formed https - [x] tc02: not an inline macro (rollback)",1.0,"AsciiDoc: support inline macros - ## Test cases - [x] tc01: well formed https - [x] tc02: not an inline macro (rollback)",0,asciidoc support inline macros test cases well formed https not an inline macro rollback ,0 1564,10343118877.0,IssuesEvent,2019-09-04 08:15:33,a-t-0/Taskwarrior-installation,https://api.github.com/repos/a-t-0/Taskwarrior-installation,opened,"Change the way the arguments are read, when running from a cronjob",Automation bug,"When running a cronjob, it can be difficult to pass all the arguments to the `javaServerSort.jar` file. As a solution, you can either: 1. Put all the arguments between `""""` quotation marks to put them in a single string element, as currently is the case in the way the args are read. 2. Put them all separately after the `java -jar JavaServerSort.jar -argName0 -argValue0 -argName1 -argValue1..` command. 3. After the arguments are passed to the `JavaServerSort.jar` file the first time, create a config file that contains the arguments which is checked for existence, before checking the incoming input arguments.",1.0,"Change the way the arguments are read, when running from a cronjob - When running a cronjob, it can be difficult to pass all the arguments to the `javaServerSort.jar` file. As a solution, you can either: 1. Put all the arguments between `""""` quotation marks to put them in a single string element, as currently is the case in the way the args are read. 2. Put them all separately after the `java -jar JavaServerSort.jar -argName0 -argValue0 -argName1 -argValue1..` command. 3. 
After the arguments are passed to the `JavaServerSort.jar` file the first time, create a config file that contains the arguments which is checked for existence, before checking the incoming input arguments.",1,change the way the arguments are read when running from a cronjob when running a cronjob it can be difficult to pass all the arguments to the javaserversort jar file as a solution you can either put all the arguments between quotation marks to put them in a single string element as currently is the case in the way the args are read put them all separately after the java jar javaserversort jar command after the arguments are passed to the javaserversort jar file the first time create a config file that contains the arguments which is checked for existence before checking the incoming input arguments ,1 62762,8639383533.0,IssuesEvent,2018-11-23 18:41:26,erlang/rebar3,https://api.github.com/repos/erlang/rebar3,closed,Document rebar_compiler behaviour,documentation,"The new rebar_compiler behaviour is the recommended way to teach rebar to compile new file types, but use of the behaviour is non-obvious. It would be great to get some documentation (such as comments on the behaviour callbacks https://github.com/erlang/rebar3/blob/311ee6b1371c3eea3611dc5d7945b1b5667c75bd/src/rebar_compiler.erl#L17-L24) so that it's clearer how to make use of it :)",1.0,"Document rebar_compiler behaviour - The new rebar_compiler behaviour is the recommended way to teach rebar to compile new file types, but use of the behaviour is non-obvious. It would be great to get some documentation (such as comments on the behaviour callbacks https://github.com/erlang/rebar3/blob/311ee6b1371c3eea3611dc5d7945b1b5667c75bd/src/rebar_compiler.erl#L17-L24) so that it's clearer how to make use of it :)",0,document rebar compiler behaviour the new rebar compiler behaviour is the recommended way to teach rebar to compile new file types but use of the behaviour is non obvious it would be great to get some documentation such as comments on the behaviour callbacks so that it s clearer how to make use of it ,0 18661,5683750037.0,IssuesEvent,2017-04-13 13:31:10,fabric8io/fabric8-ux,https://api.github.com/repos/fabric8io/fabric8-ux,opened,Hover state for iteration side panel,code,Add a hover state on the iteration side panel and submit PR,1.0,Hover state for iteration side panel - Add a hover state on the iteration side panel and submit PR,0,hover state for iteration side panel add a hover state on the iteration side panel and submit pr,0 3712,14403345024.0,IssuesEvent,2020-12-03 15:56:54,keptn/keptn,https://api.github.com/repos/keptn/keptn,closed,Only run integration tests on travis for nightlies to save some build credits,automation type:chore,"Right now integration tests will run every time we push something to master (e.g., when merging PRs). This consumes an abnormal amount of credits on travis-ci, and we need to save them. As a quick fix, we can disable all integration tests to only run once per day (e.g., nightlies).",1.0,"Only run integration tests on travis for nightlies to save some build credits - Right now integration tests will run every time we push something to master (e.g., when merging PRs). This consumes an abnormal amount of credits on travis-ci, and we need to save them. 
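A scheduled-build guard would make this cheap to implement; below is a hedged sketch (the runner script name is hypothetical), relying on TRAVIS_EVENT_TYPE, which Travis sets to cron for scheduled builds: ```PowerShell # Hedged sketch: run the integration suite only in scheduled (nightly) builds. if ($env:TRAVIS_EVENT_TYPE -eq 'cron') { & './run-integration-tests.ps1' # hypothetical runner script } else { Write-Output 'Skipping integration tests outside the nightly cron build.' } ```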
As a quick fix, we can disable all integration tests to only run once per day (e.g., nightlies).",1,only run integration tests on travis for nightlies to save some build credits right now integration tests will run every time we push something to master e g when merging prs this consumes an abnormal amount of credits on travis ci and we need to save them as a quick fix we can disable all integration tests to only run once per day e g nightlies ,1 430866,30204303712.0,IssuesEvent,2023-07-05 08:23:42,PaloAltoNetworks/pan.dev,https://api.github.com/repos/PaloAltoNetworks/pan.dev,opened,"Issue/Help with ""List Host Findings""",documentation,"## Documentation link https://pan.dev/prisma-cloud/api/cspm/get-host-findings/#list-host-findings ## Describe the problem There is an issue with this page : - Typo : The response is included within the block next to """"An example request body for all finding types is:"" - Behaviour : I cannot use this endpoint, I get a 400 Bad Request HTTP Error with no payload. We need a clarification how to use this API. ## Suggested fix - Correct the documentation : give the expected answer and clarify responses of this API endpoint ",1.0,"Issue/Help with ""List Host Findings"" - ## Documentation link https://pan.dev/prisma-cloud/api/cspm/get-host-findings/#list-host-findings ## Describe the problem There is an issue with this page : - Typo : The response is included within the block next to """"An example request body for all finding types is:"" - Behaviour : I cannot use this endpoint, I get a 400 Bad Request HTTP Error with no payload. We need a clarification how to use this API. ## Suggested fix - Correct the documentation : give the expected answer and clarify responses of this API endpoint ",0,issue help with list host findings documentation link describe the problem there is an issue with this page typo the response is included within the block next to an example request body for all finding types is behaviour i cannot use this endpoint i get a bad request http error with no payload we need a clarification how to use this api suggested fix correct the documentation give the expected answer and clarify responses of this api endpoint ,0 2361,11825240745.0,IssuesEvent,2020-03-21 11:42:55,tajmone/hugo-book,https://api.github.com/repos/tajmone/hugo-book,closed,Enable Images Previews in Coalesced AsciiDoc Book,:bulb: enhancement :hammer: Travis CI :star: assets :star: automation :star: images,"- [x] Edit [`docs_src/hugo-book.asciidoc`][hugo-book.asciidoc]: + [x] Define `imagesdir` attr. via conditional preprocessor directives so that images are viewable in GitHub's WebUI, in both the sources inside `docs_src/` folder as well as in the [standalone AsciiDoc version]. - [x] Edit [`docs_src/build.sh`][build.sh]: + [x] After creating the [standalone AsciiDoc version], test that it would convert to HTML without errors — i.e. run it through Asciidoctor redirecting output to `>/dev/null`, using `--failure-level WARN`, so that failure to find the required images would fail the build on Travis CI. - [x] Manually verify that all conversion scripts are producing correct output, and that images are displayed as expected: + [x] [`docs_src/build.sh`][build.sh]: * [x] `docs/index.html` * [x] `hugo-book.html` * [x] `hugo-book.asciidoc ` + [x] [`docs_src/preview.sh`][preview.sh]: * [x] `docs_src/preview.html` - [x] Document above changes: + [x] Main `README.md` Changelog. 
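For reference, a minimal sketch of the validation step described in the checklist above, assuming the build script shells out to the Asciidoctor CLI (--failure-level and --out-file are standard Asciidoctor options): ```PowerShell # Hedged sketch: convert the coalesced file, discard the HTML output, and let # any warning (e.g. a missing image target) fail the CI build via the exit code. asciidoctor --failure-level WARN --out-file /dev/null hugo-book.asciidoc if ($LASTEXITCODE -ne 0) { exit 1 } ```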
------------------------------------------------------------------------------- Currently, previewing the [standalone AsciiDoc version] on GitHub doesn't show the images diagrams. This should be easily fixable by adding the right attribute in the header, providing the relative path to find the images. When I initially worked on the AsciiDoc sources, I ensured that previewing each chapter source on GitHub's WebUI would correctly show the diagrams. At the time I didn't consider that I would be adding also a standalone coalesced version of the document. If possible, it would be great if the diagrams could be shown correctly both in the standalone version as well as in the single chapters sources. But this might either be not achievable (due to limitations in the GitHub previewer or the AsciiDoc Coalescer), or require too complex hacks; in this case I should give precedence to the standalone AsciiDoc version over the single sources. ### References - [Asciidoctor Manual]: + [§29.1. Setting the Location of Images] + [§48. Conditional Preprocessor Directives] [standalone AsciiDoc version]: ../blob/master/hugo-book.asciidoc ""hugo-book.asciidoc"" [hugo-book.asciidoc]: ../blob/master/docs_src/hugo-book.asciidoc ""View source file"" [build.sh]: ../blob/master/docs_src/build.sh ""View source file"" [preview.sh]: ../blob/master/docs_src/preview.sh ""View source file"" [Asciidoctor Manual]: https://asciidoctor.org/docs/user-manual/#setting-the-location-of-images [§29.1. Setting the Location of Images]: https://asciidoctor.org/docs/user-manual/#setting-the-location-of-images [§48. Conditional Preprocessor Directives]: https://asciidoctor.org/docs/user-manual/#conditional-preprocessor-directives ",1.0,"Enable Images Previews in Coalesced AsciiDoc Book - - [x] Edit [`docs_src/hugo-book.asciidoc`][hugo-book.asciidoc]: + [x] Define `imagesdir` attr. via conditional preprocessor directives so that images are viewable in GitHub's WebUI, in both the sources inside `docs_src/` folder as well as in the [standalone AsciiDoc version]. - [x] Edit [`docs_src/build.sh`][build.sh]: + [x] After creating the [standalone AsciiDoc version], test that it would convert to HTML without errors — i.e. run it through Asciidoctor redirecting output to `>/dev/null`, using `--failure-level WARN`, so that failure to find the required images would fail the build on Travis CI. - [x] Manually verify that all conversion scripts are producing correct output, and that images are displayed as expected: + [x] [`docs_src/build.sh`][build.sh]: * [x] `docs/index.html` * [x] `hugo-book.html` * [x] `hugo-book.asciidoc ` + [x] [`docs_src/preview.sh`][preview.sh]: * [x] `docs_src/preview.html` - [x] Document above changes: + [x] Main `README.md` Changelog. ------------------------------------------------------------------------------- Currently, previewing the [standalone AsciiDoc version] on GitHub doesn't show the images diagrams. This should be easily fixable by adding the right attribute in the header, providing the relative path to find the images. When I initially worked on the AsciiDoc sources, I ensured that previewing each chapter source on GitHub's WebUI would correctly show the diagrams. At the time I didn't consider that I would be adding also a standalone coalesced version of the document. If possible, it would be great if the diagrams could be shown correctly both in the standalone version as well as in the single chapters sources. 
But this might either be not achievable (due to limitations in the GitHub previewer or the AsciiDoc Coalescer), or require too complex hacks; in this case I should give precedence to the standalone AsciiDoc version over the single sources. ### References - [Asciidoctor Manual]: + [§29.1. Setting the Location of Images] + [§48. Conditional Preprocessor Directives] [standalone AsciiDoc version]: ../blob/master/hugo-book.asciidoc ""hugo-book.asciidoc"" [hugo-book.asciidoc]: ../blob/master/docs_src/hugo-book.asciidoc ""View source file"" [build.sh]: ../blob/master/docs_src/build.sh ""View source file"" [preview.sh]: ../blob/master/docs_src/preview.sh ""View source file"" [Asciidoctor Manual]: https://asciidoctor.org/docs/user-manual/#setting-the-location-of-images [§29.1. Setting the Location of Images]: https://asciidoctor.org/docs/user-manual/#setting-the-location-of-images [§48. Conditional Preprocessor Directives]: https://asciidoctor.org/docs/user-manual/#conditional-preprocessor-directives ",1,enable images previews in coalesced asciidoc book edit define imagesdir attr via conditional preprocessor directives so that images are viewable in github s webui in both the sources inside docs src folder as well as in the edit after creating the test that it would convert to html without errors — i e run it through asciidoctor redirecting output to dev null using failure level warn so that failure to find the required images would fail the build on travis ci manually verify that all conversion scripts are producing correct output and that images are displayed as expected docs index html hugo book html hugo book asciidoc docs src preview html document above changes main readme md changelog currently previewing the on github doesn t show the images diagrams this should be easily fixable by adding the right attribute in the header providing the relative path to find the images when i initially worked on the asciidoc sources i ensured that previewing each chapter source on github s webui would correctly show the diagrams at the time i didn t consider that i would be adding also a standalone coalesced version of the document if possible it would be great if the diagrams could be shown correctly both in the standalone version as well as in the single chapters sources but this might either be not achievable due to limitations in the github previewer or the asciidoc coalescer or require too complex hacks in this case i should give precedence to the standalone asciidoc version over the single sources references reference links blob master hugo book asciidoc hugo book asciidoc blob master docs src hugo book asciidoc view source file blob master docs src build sh view source file blob master docs src preview sh view source file ,1 174380,27630826059.0,IssuesEvent,2023-03-10 10:41:58,Regalis11/Barotrauma,https://api.github.com/repos/Regalis11/Barotrauma,closed,Forbidden word list not checking descriptions,Design,"### Disclaimers - [X] I have searched the issue tracker to check if the issue has already been reported. - [ ] My issue happened while using mods. ### What happened? I've searched for others with this issue, no one seems to have commented on this. As the title says, if you try blocking words with the forbiddenwordlist.txt it wont hide servers like it should. though if the servers name has a forbidden word in it, then it will block it. (Small issue) ### Reproduction steps 1. Enable hide forbidden words 2. 
See servers with horrible words ### Bug prevalence Happens every time I play ### Version 0.20.16.1 ### - _No response_ ### Which operating system did you encounter this bug on? Windows ### Relevant error messages and crash reports _No response_",1.0,"Forbidden word list not checking descriptions - ### Disclaimers - [X] I have searched the issue tracker to check if the issue has already been reported. - [ ] My issue happened while using mods. ### What happened? I've searched for others with this issue, no one seems to have commented on this. As the title says, if you try blocking words with the forbiddenwordlist.txt it wont hide servers like it should. though if the servers name has a forbidden word in it, then it will block it. (Small issue) ### Reproduction steps 1. Enable hide forbidden words 2. See servers with horrible words ### Bug prevalence Happens every time I play ### Version 0.20.16.1 ### - _No response_ ### Which operating system did you encounter this bug on? Windows ### Relevant error messages and crash reports _No response_",0,forbidden word list not checking descriptions disclaimers i have searched the issue tracker to check if the issue has already been reported my issue happened while using mods what happened i ve searched for others with this issue no one seems to have commented on this as the title says if you try blocking words with the forbiddenwordlist txt it wont hide servers like it should though if the servers name has a forbidden word in it then it will block it small issue reproduction steps enable hide forbidden words see servers with horrible words bug prevalence happens every time i play version no response which operating system did you encounter this bug on windows relevant error messages and crash reports no response ,0 691392,23695616386.0,IssuesEvent,2022-08-29 14:25:36,Adyen/adyen-magento2,https://api.github.com/repos/Adyen/adyen-magento2,closed,[PW-5393] sales_order_payment table size keeps growing,Enhancement Priority: medium Confirmed,"**Is your feature request related to a problem? Please describe.** As far as our e-commerce grows, we have noticed that `sales_order_payment` table is the largest one on our e-commerce platform. After taking a look deeper in which fields contains a large amount of data, we have noticed that Adyen module is storing a lot of data on `additional_information` column. **Describe the solution you'd like** After placing payment, we may don't need anymore whole data which is stored on `additional_information` column. Most important data is PSP Reference and it is stored in a separate column.",1.0,"[PW-5393] sales_order_payment table size keeps growing - **Is your feature request related to a problem? Please describe.** As far as our e-commerce grows, we have noticed that `sales_order_payment` table is the largest one on our e-commerce platform. After taking a look deeper in which fields contains a large amount of data, we have noticed that Adyen module is storing a lot of data on `additional_information` column. **Describe the solution you'd like** After placing payment, we may don't need anymore whole data which is stored on `additional_information` column. 
Most important data is PSP Reference and it is stored in a separate column.",0, sales order payment table size keeps growing is your feature request related to a problem please describe as far as our e commerce grows we have noticed that sales order payment table is the largest one on our e commerce platform after taking a look deeper in which fields contains a large amount of data we have noticed that adyen module is storing a lot of data on additional information column describe the solution you d like after placing payment we may don t need anymore whole data which is stored on additional information column most important data is psp reference and it is stored in a separate column ,0 777186,27270904679.0,IssuesEvent,2023-02-22 22:14:21,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,beta.character.ai - Account login is not performed,browser-firefox-mobile priority-normal severity-critical browser-fenix engine-gecko diagnosis-priority-p1 trend-login," **URL**: https://beta.character.ai **Browser / Version**: Firefox Mobile 108.0 **Operating System**: Android 10 **Tested Another Browser**: Yes Other **Problem type**: Site is not usable **Description**: Unable to login **Steps to Reproduce**: First, I entered the website and clicked the ""log in"" button on the top right corner of the page. Then, I entered my email and password. Seemingly, the site accepted them as valid. However, after the site's main page reloaded as a result of logging in, the site acted as if I still weren't logged in at all. In fact, every time I try to pick an AI, the site simply asks me to log in again. This process loops continuously and thus, I can't access my account or any of the site's content. I can confirm that the website works correctly on Firefox for PC.
Browser Configuration
  • gfx.webrender.all: false
  • gfx.webrender.blob-images: true
  • gfx.webrender.enabled: false
  • image.mem.shared: true
  • buildID: 20221020093353
  • channel: nightly
  • hasTouchScreen: true
  • mixed active content blocked: false
  • mixed passive content blocked: false
  • tracking content blocked: false
[View console log messages](https://webcompat.com/console_logs/2022/10/224dcaf3-1e82-4697-a328-59d035de0aa2) _From [webcompat.com](https://webcompat.com/) with ❤️_",2.0,"beta.character.ai - Account login is not performed - **URL**: https://beta.character.ai **Browser / Version**: Firefox Mobile 108.0 **Operating System**: Android 10 **Tested Another Browser**: Yes Other **Problem type**: Site is not usable **Description**: Unable to login **Steps to Reproduce**: First, I entered the website and clicked the ""log in"" button on the top right corner of the page. Then, I entered my email and password. Seemingly, the site accepted them as valid. However, after the site's main page reloaded as a result of logging in, the site acted as if I still weren't logged in at all. In fact, every time I try to pick an AI, the site simply asks me to log in again. This process loops continuously and thus, I can't access my account or any of the site's content. I can confirm that the website works correctly on Firefox for PC.
[View console log messages](https://webcompat.com/console_logs/2022/10/224dcaf3-1e82-4697-a328-59d035de0aa2) _From [webcompat.com](https://webcompat.com/) with ❤️_",0,beta character ai account login is not performed url browser version firefox mobile operating system android tested another browser yes other problem type site is not usable description unable to login steps to reproduce first i entered the website and clicked the log in button on the top right corner of the page then i entered my email and password seemingly the site accepted them as valid however after the site s main page reloaded as a result of logging in the site acted as if i still weren t logged in at all in fact every time i ty to pick an ai the site simply asks me to log in again this process loops continuously and thus i can t access my account or any of the site s content i can confirm that the website works correctly on firefox for pc browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ ,0 818,8211503200.0,IssuesEvent,2018-09-04 13:59:26,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed," ""The element that matches the specified selector is not visible"" error on attempt to drag visible element",AREA: client SYSTEM: automations TYPE: bug,"### Are you requesting a feature or reporting a bug? bug ### What is the current behavior? Trying to drag ""seekbar knob"" selector raising: ""The element that matches the specified selector is not visible"" error ### What is the expected behavior? Should be able to perform drag operation ### How would you reproduce the current behavior (if this is a bug)? #### Provide the test code and the tested page URL (if applicable) Tested page URL: https://kabbalahmedia.info/en/lessons/cu/iIqZE7y7?language=en Test code ```js test('timeCodeUpdateByDrag', async t => { const getSeekbarRect = ClientFunction((selector) => { const {top, left, bottom, right} = document.querySelector(selector).getBoundingClientRect(); return {top, left, bottom, right}; }); await player_utils.waitForPlayerToLoad(); let rect = await getSeekbarRect('.seekbar__knob'); console.debug(""Rect >> top: "" + rect.top + "" left: "" + rect.left + "" bottom: "" + rect.bottom + "" right: "" + rect.right); let current_mouse_x = rect.left + ((rect.right - rect.left) / 2); let current_mouse_y = rect.top + ((rect.top - rect.bottom) / 2); const seekbarSelector = await Selector('.seekbar__knob'); await t.drag(seekbarSelector, current_mouse_x + 100, parseInt(current_mouse_y)); }); ``` ### Specify your * operating system: MacOS HighSierra * testcafe version:0.21.1 * node.js version:9.8",1.0," ""The element that matches the specified selector is not visible"" error on attempt to drag visible element - ### Are you requesting a feature or reporting a bug? bug ### What is the current behavior? Trying to drag ""seekbar knob"" selector raising: ""The element that matches the specified selector is not visible"" error ### What is the expected behavior? Should be able to perform drag operation ### How would you reproduce the current behavior (if this is a bug)? 
#### Provide the test code and the tested page URL (if applicable) Tested page URL: https://kabbalahmedia.info/en/lessons/cu/iIqZE7y7?language=en Test code ```js test('timeCodeUpdateByDrag', async t => { const getSeekbarRect = ClientFunction((selector) => { const {top, left, bottom, right} = document.querySelector(selector).getBoundingClientRect(); return {top, left, bottom, right}; }); await player_utils.waitForPlayerToLoad(); let rect = await getSeekbarRect('.seekbar__knob'); console.debug(""Rect >> top: "" + rect.top + "" left: "" + rect.left + "" bottom: "" + rect.bottom + "" right: "" + rect.right); let current_mouse_x = rect.left + ((rect.right - rect.left) / 2); let current_mouse_y = rect.top + ((rect.top - rect.bottom) / 2); const seekbarSelector = await Selector('.seekbar__knob'); await t.drag(seekbarSelector, current_mouse_x + 100, parseInt(current_mouse_y)); }); ``` ### Specify your * operating system: MacOS HighSierra * testcafe version:0.21.1 * node.js version:9.8",1, the element that matches the specified selector is not visible error on attempt to drag visible element are you requesting a feature or reporting a bug bug what is the current behavior trying to drag seekbar knob selector raising the element that matches the specified selector is not visible error what is the expected behavior should be able to perform drag operation how would you reproduce the current behavior if this is a bug provide the test code and the tested page url if applicable tested page url test code js test timecodeupdatebydrag async t const getseekbarrect clientfunction selector const top left bottom right document queryselector selector getboundingclientrect return top left bottom right await player utils waitforplayertoload let rect await getseekbarrect seekbar knob console debug rect top rect top left rect left bottom rect bottom right rect right let current mouse x rect left rect right rect left let current mouse y rect top rect top rect bottom const seekbarselector await selector seekbar knob await t drag seekbarselector current mouse x parseint current mouse y specify your operating system macos highsierra testcafe version node js version ,1 5132,18717583808.0,IssuesEvent,2021-11-03 07:55:45,extratone/extratone,https://api.github.com/repos/extratone/extratone,opened,Join us Day 2 Into Focus at Microsoft Ignite! [(Open email in Spark)](readdlespark://bl=QTphc3BoYWx0YXBvc3RsZUBpY2xvdWQuY29tO0lEOnQycXNodWhGUW9pWjBpRThw%0D%0AWW04VmdAZ2VvcG9kLWlzbXRwZC02LTI7MzkzMjc4ODk3OQ%3D%3D),automation,03-Nov-2021 07:51:16 - 2274454577 -,1.0,Join us Day 2 Into Focus at Microsoft Ignite! 
[(Open email in Spark)](readdlespark://bl=QTphc3BoYWx0YXBvc3RsZUBpY2xvdWQuY29tO0lEOnQycXNodWhGUW9pWjBpRThw%0D%0AWW04VmdAZ2VvcG9kLWlzbXRwZC02LTI7MzkzMjc4ODk3OQ%3D%3D) - 03-Nov-2021 07:51:16 - 2274454577 -,1,join us day into focus at microsoft ignite readdlespark bl nov ,1 5686,20750088142.0,IssuesEvent,2022-03-15 06:15:26,EthanThatOneKid/acmcsuf.com,https://api.github.com/repos/EthanThatOneKid/acmcsuf.com,closed,[OFFICER_AUTOMATION],automation:officer,"### >>Officer Name<< Angel Armendariz ### >>Term to Overwrite<< Spring 2022 ### >>Overwrite Officer Position Title<< Dev Project Manager ### >>Overwrite Officer Position Tier<< Dev Project Manager ### >>Overwrite Officer Picture<< ![image](https://user-images.githubusercontent.com/95111582/158317216-867b01bf-d09e-40fe-810e-ce987a0ae7f8.png) ### >>Overwrite Officer GitHub Username<< Angel-Armendariz",1.0,"[OFFICER_AUTOMATION] - ### >>Officer Name<< Angel Armendariz ### >>Term to Overwrite<< Spring 2022 ### >>Overwrite Officer Position Title<< Dev Project Manager ### >>Overwrite Officer Position Tier<< Dev Project Manager ### >>Overwrite Officer Picture<< ![image](https://user-images.githubusercontent.com/95111582/158317216-867b01bf-d09e-40fe-810e-ce987a0ae7f8.png) ### >>Overwrite Officer GitHub Username<< Angel-Armendariz",1, officer name angel armendariz term to overwrite spring overwrite officer position title dev project manager overwrite officer position tier dev project manager overwrite officer picture overwrite officer github username angel armendariz,1 5652,20608213674.0,IssuesEvent,2022-03-07 04:37:11,Studio-Ops-Org/Studio-2022-S1-Repo,https://api.github.com/repos/Studio-Ops-Org/Studio-2022-S1-Repo,opened,Develop ARM templates for OE1,Automation,"Create an editable ARM template that can be used for OE1. This should do the following: - [ ] Deploy x amount of Linux VMs using B1ls - [ ] All VMs be in the same VNET, subnet and resource group - [ ] Unique public IP addresses for each machine - [ ] NSG rules that allow access only from polytech - [ ] Management username as password (sudo access) - [ ] Student account (no authority) -> unsure if this can be done via ARM template What are ARM templates? [This will help](https://www.varonis.com/blog/arm-template#:~:text=deleting%20Azure%20resources.-,What%20are%20ARM%20templates%3F,how%20the%20resources%20are%20created.) Need some Azure documentation? [Here you go!](https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/overview)",1.0,"Develop ARM templates for OE1 - Create an editable ARM template that can be used for OE1. This should do the following: - [ ] Deploy x amount of Linux VMs using B1ls - [ ] All VMs be in the same VNET, subnet and resource group - [ ] Unique public IP addresses for each machine - [ ] NSG rules that allow access only from polytech - [ ] Management username as password (sudo access) - [ ] Student account (no authority) -> unsure if this can be done via ARM template What are ARM templates? [This will help](https://www.varonis.com/blog/arm-template#:~:text=deleting%20Azure%20resources.-,What%20are%20ARM%20templates%3F,how%20the%20resources%20are%20created.) Need some Azure documentation? 
[Here you go!](https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/overview)",1,develop arm templates for create an editable arm template that can be used for this should do the following deploy x amount of linux vms using all vms be in the same vnet subnet and resource group unique public ip addresses for each machine nsg rules that allow access only from polytech management username as password sudo access student account no authority unsure if this can be done via arm template what are arm templates need some azure documentation ,1 349367,10468028011.0,IssuesEvent,2019-09-22 10:42:09,googleapis/google-cloud-ruby,https://api.github.com/repos/googleapis/google-cloud-ruby,closed,Synthesis failed for redis,api: redis autosynth failure priority: p1 type: bug,"Hello! Autosynth couldn't regenerate redis. :broken_heart: Here's the output from running `synth.py`: ``` Cloning into 'working_repo'... Switched to branch 'autosynth-redis' Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--'] synthtool > Executing /tmpfs/src/git/autosynth/working_repo/google-cloud-redis/synth.py. synthtool > Ensuring dependencies. synthtool > Pulling artman image. latest: Pulling from googleapis/artman Digest: sha256:66ca01f27ef7dc50fbfb7743b67028115a6a8acf43b2d82f9fc826de008adac4 Status: Image is up to date for googleapis/artman:latest synthtool > Cloning googleapis. synthtool > Running generator for google/cloud/redis/artman_redis_v1.yaml. synthtool > Failed executing docker run --name artman-docker --rm -i -e HOST_USER_ID=1000 -e HOST_GROUP_ID=1000 -e RUNNING_IN_ARTMAN_DOCKER=True -v /home/kbuilder/.cache/synthtool/googleapis:/home/kbuilder/.cache/synthtool/googleapis -v /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles:/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles -w /home/kbuilder/.cache/synthtool/googleapis googleapis/artman:latest /bin/bash -c artman --local --config google/cloud/redis/artman_redis_v1.yaml generate ruby_gapic: artman> Final args: artman> api_name: redis artman> api_version: v1 artman> artifact_type: GAPIC artman> aspect: ALL artman> gapic_code_dir: /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/ruby/google-cloud-ruby/google-cloud-redis artman> gapic_yaml: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/v1/redis_gapic.yaml artman> generator_args: null artman> import_proto_path: artman> - /home/kbuilder/.cache/synthtool/googleapis artman> language: ruby artman> organization_name: google-cloud artman> output_dir: /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles artman> proto_deps: artman> - name: google-common-protos artman> proto_package: '' artman> root_dir: /home/kbuilder/.cache/synthtool/googleapis artman> samples: '' artman> service_yaml: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/redis_v1.yaml artman> src_proto_path: artman> - /home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/v1 artman> toolkit_path: /toolkit artman> artman> Creating GapicClientPipeline. artman.output > WARNING: toplevel: (lint) control-presence: Service redis.googleapis.com does not have control environment configured. 
ERROR: toplevel: Reference to unknown type ""locations.googleapis.com/Location"" on field google.cloud.redis.v1.ListInstancesRequest.google.cloud.redis.v1.ListInstancesRequest.parent ERROR: toplevel: Reference to unknown type ""locations.googleapis.com/Location"" on field google.cloud.redis.v1.CreateInstanceRequest.google.cloud.redis.v1.CreateInstanceRequest.parent WARNING: toplevel: (lint) control-presence: Service redis.googleapis.com does not have control environment configured. ERROR: toplevel: Reference to unknown type ""locations.googleapis.com/Location"" on field google.cloud.redis.v1.ListInstancesRequest.google.cloud.redis.v1.ListInstancesRequest.parent ERROR: toplevel: Reference to unknown type ""locations.googleapis.com/Location"" on field google.cloud.redis.v1.CreateInstanceRequest.google.cloud.redis.v1.CreateInstanceRequest.parent artman> Traceback (most recent call last): File ""/artman/artman/cli/main.py"", line 72, in main engine.run() File ""/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/engine.py"", line 247, in run for _state in self.run_iter(timeout=timeout): File ""/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/engine.py"", line 340, in run_iter failure.Failure.reraise_if_any(er_failures) File ""/usr/local/lib/python3.5/dist-packages/taskflow/types/failure.py"", line 339, in reraise_if_any failures[0].reraise() File ""/usr/local/lib/python3.5/dist-packages/taskflow/types/failure.py"", line 346, in reraise six.reraise(*self._exc_info) File ""/usr/local/lib/python3.5/dist-packages/six.py"", line 693, in reraise raise value File ""/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/executor.py"", line 53, in _execute_task result = task.execute(**arguments) File ""/artman/artman/tasks/gapic_tasks.py"", line 146, in execute task_utils.gapic_gen_task(toolkit_path, [gapic_artifact] + args)) File ""/artman/artman/tasks/task_base.py"", line 64, in exec_command raise e File ""/artman/artman/tasks/task_base.py"", line 56, in exec_command output = subprocess.check_output(args, stderr=subprocess.STDOUT) File ""/usr/lib/python3.5/subprocess.py"", line 626, in check_output **kwargs).stdout File ""/usr/lib/python3.5/subprocess.py"", line 708, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '['java', '-cp', '/toolkit/build/libs/gapic-generator-latest-fatjar.jar', 'com.google.api.codegen.GeneratorMain', 'LEGACY_GAPIC_AND_PACKAGE', '--descriptor_set=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/google-cloud-redis-v1.desc', '--package_yaml2=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/ruby_google-cloud-redis-v1_package2.yaml', '--output=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/ruby/google-cloud-ruby/google-cloud-redis', '--language=ruby', '--service_yaml=/home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/redis_v1.yaml', '--gapic_yaml=/home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/v1/redis_gapic.yaml']' returned non-zero exit status 1 Traceback (most recent call last): File ""/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py"", line 193, in _run_module_as_main ""__main__"", mod_spec) File ""/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py"", line 85, in _run_code exec(code, run_globals) File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py"", line 87, in main() File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py"", line 764, in __call__ return 
self.main(*args, **kwargs) File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py"", line 717, in main rv = self.invoke(ctx) File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py"", line 956, in invoke return ctx.invoke(self.callback, **ctx.params) File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py"", line 555, in invoke return callback(*args, **kwargs) File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py"", line 79, in main spec.loader.exec_module(synth_module) # type: ignore File """", line 678, in exec_module File """", line 205, in _call_with_frames_removed File ""/tmpfs/src/git/autosynth/working_repo/google-cloud-redis/synth.py"", line 30, in artman_output_name='google-cloud-ruby/google-cloud-redis' File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/gapic_generator.py"", line 58, in ruby_library return self._generate_code(service, version, ""ruby"", **kwargs) File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/gapic_generator.py"", line 138, in _generate_code generator_args=generator_args, File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/artman.py"", line 141, in run shell.run(cmd, cwd=root_dir) File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py"", line 39, in run raise exc File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py"", line 33, in run encoding=""utf-8"", File ""/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py"", line 418, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '['docker', 'run', '--name', 'artman-docker', '--rm', '-i', '-e', 'HOST_USER_ID=1000', '-e', 'HOST_GROUP_ID=1000', '-e', 'RUNNING_IN_ARTMAN_DOCKER=True', '-v', '/home/kbuilder/.cache/synthtool/googleapis:/home/kbuilder/.cache/synthtool/googleapis', '-v', '/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles:/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles', '-w', PosixPath('/home/kbuilder/.cache/synthtool/googleapis'), 'googleapis/artman:latest', '/bin/bash', '-c', 'artman --local --config google/cloud/redis/artman_redis_v1.yaml generate ruby_gapic']' returned non-zero exit status 32. synthtool > Cleaned up 0 temporary directories. synthtool > Wrote metadata to synth.metadata. Synthesis failed ``` Google internal developers can see the full log [here](https://sponge/33d501c0-dede-4a59-8f50-73663fbab2f6). ",1.0,"Synthesis failed for redis - Hello! Autosynth couldn't regenerate redis. :broken_heart: Here's the output from running `synth.py`: ``` Cloning into 'working_repo'... Switched to branch 'autosynth-redis' Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--'] synthtool > Executing /tmpfs/src/git/autosynth/working_repo/google-cloud-redis/synth.py. synthtool > Ensuring dependencies. synthtool > Pulling artman image. latest: Pulling from googleapis/artman Digest: sha256:66ca01f27ef7dc50fbfb7743b67028115a6a8acf43b2d82f9fc826de008adac4 Status: Image is up to date for googleapis/artman:latest synthtool > Cloning googleapis. synthtool > Running generator for google/cloud/redis/artman_redis_v1.yaml. 
synthtool > Failed executing docker run --name artman-docker --rm -i -e HOST_USER_ID=1000 -e HOST_GROUP_ID=1000 -e RUNNING_IN_ARTMAN_DOCKER=True -v /home/kbuilder/.cache/synthtool/googleapis:/home/kbuilder/.cache/synthtool/googleapis -v /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles:/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles -w /home/kbuilder/.cache/synthtool/googleapis googleapis/artman:latest /bin/bash -c artman --local --config google/cloud/redis/artman_redis_v1.yaml generate ruby_gapic: artman> Final args: artman> api_name: redis artman> api_version: v1 artman> artifact_type: GAPIC artman> aspect: ALL artman> gapic_code_dir: /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/ruby/google-cloud-ruby/google-cloud-redis artman> gapic_yaml: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/v1/redis_gapic.yaml artman> generator_args: null artman> import_proto_path: artman> - /home/kbuilder/.cache/synthtool/googleapis artman> language: ruby artman> organization_name: google-cloud artman> output_dir: /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles artman> proto_deps: artman> - name: google-common-protos artman> proto_package: '' artman> root_dir: /home/kbuilder/.cache/synthtool/googleapis artman> samples: '' artman> service_yaml: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/redis_v1.yaml artman> src_proto_path: artman> - /home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/v1 artman> toolkit_path: /toolkit artman> artman> Creating GapicClientPipeline. artman.output > WARNING: toplevel: (lint) control-presence: Service redis.googleapis.com does not have control environment configured. ERROR: toplevel: Reference to unknown type ""locations.googleapis.com/Location"" on field google.cloud.redis.v1.ListInstancesRequest.google.cloud.redis.v1.ListInstancesRequest.parent ERROR: toplevel: Reference to unknown type ""locations.googleapis.com/Location"" on field google.cloud.redis.v1.CreateInstanceRequest.google.cloud.redis.v1.CreateInstanceRequest.parent WARNING: toplevel: (lint) control-presence: Service redis.googleapis.com does not have control environment configured. 
ERROR: toplevel: Reference to unknown type ""locations.googleapis.com/Location"" on field google.cloud.redis.v1.ListInstancesRequest.google.cloud.redis.v1.ListInstancesRequest.parent ERROR: toplevel: Reference to unknown type ""locations.googleapis.com/Location"" on field google.cloud.redis.v1.CreateInstanceRequest.google.cloud.redis.v1.CreateInstanceRequest.parent artman> Traceback (most recent call last): File ""/artman/artman/cli/main.py"", line 72, in main engine.run() File ""/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/engine.py"", line 247, in run for _state in self.run_iter(timeout=timeout): File ""/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/engine.py"", line 340, in run_iter failure.Failure.reraise_if_any(er_failures) File ""/usr/local/lib/python3.5/dist-packages/taskflow/types/failure.py"", line 339, in reraise_if_any failures[0].reraise() File ""/usr/local/lib/python3.5/dist-packages/taskflow/types/failure.py"", line 346, in reraise six.reraise(*self._exc_info) File ""/usr/local/lib/python3.5/dist-packages/six.py"", line 693, in reraise raise value File ""/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/executor.py"", line 53, in _execute_task result = task.execute(**arguments) File ""/artman/artman/tasks/gapic_tasks.py"", line 146, in execute task_utils.gapic_gen_task(toolkit_path, [gapic_artifact] + args)) File ""/artman/artman/tasks/task_base.py"", line 64, in exec_command raise e File ""/artman/artman/tasks/task_base.py"", line 56, in exec_command output = subprocess.check_output(args, stderr=subprocess.STDOUT) File ""/usr/lib/python3.5/subprocess.py"", line 626, in check_output **kwargs).stdout File ""/usr/lib/python3.5/subprocess.py"", line 708, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '['java', '-cp', '/toolkit/build/libs/gapic-generator-latest-fatjar.jar', 'com.google.api.codegen.GeneratorMain', 'LEGACY_GAPIC_AND_PACKAGE', '--descriptor_set=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/google-cloud-redis-v1.desc', '--package_yaml2=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/ruby_google-cloud-redis-v1_package2.yaml', '--output=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/ruby/google-cloud-ruby/google-cloud-redis', '--language=ruby', '--service_yaml=/home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/redis_v1.yaml', '--gapic_yaml=/home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/v1/redis_gapic.yaml']' returned non-zero exit status 1 Traceback (most recent call last): File ""/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py"", line 193, in _run_module_as_main ""__main__"", mod_spec) File ""/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py"", line 85, in _run_code exec(code, run_globals) File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py"", line 87, in <module> main() File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py"", line 764, in __call__ return self.main(*args, **kwargs) File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py"", line 717, in main rv = self.invoke(ctx) File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py"", line 956, in invoke return ctx.invoke(self.callback, **ctx.params) File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py"", line 555, in invoke return callback(*args, **kwargs) File 
""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py"", line 79, in main spec.loader.exec_module(synth_module) # type: ignore File """", line 678, in exec_module File """", line 205, in _call_with_frames_removed File ""/tmpfs/src/git/autosynth/working_repo/google-cloud-redis/synth.py"", line 30, in artman_output_name='google-cloud-ruby/google-cloud-redis' File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/gapic_generator.py"", line 58, in ruby_library return self._generate_code(service, version, ""ruby"", **kwargs) File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/gapic_generator.py"", line 138, in _generate_code generator_args=generator_args, File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/artman.py"", line 141, in run shell.run(cmd, cwd=root_dir) File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py"", line 39, in run raise exc File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py"", line 33, in run encoding=""utf-8"", File ""/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py"", line 418, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '['docker', 'run', '--name', 'artman-docker', '--rm', '-i', '-e', 'HOST_USER_ID=1000', '-e', 'HOST_GROUP_ID=1000', '-e', 'RUNNING_IN_ARTMAN_DOCKER=True', '-v', '/home/kbuilder/.cache/synthtool/googleapis:/home/kbuilder/.cache/synthtool/googleapis', '-v', '/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles:/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles', '-w', PosixPath('/home/kbuilder/.cache/synthtool/googleapis'), 'googleapis/artman:latest', '/bin/bash', '-c', 'artman --local --config google/cloud/redis/artman_redis_v1.yaml generate ruby_gapic']' returned non-zero exit status 32. synthtool > Cleaned up 0 temporary directories. synthtool > Wrote metadata to synth.metadata. Synthesis failed ``` Google internal developers can see the full log [here](https://sponge/33d501c0-dede-4a59-8f50-73663fbab2f6). 
",0,synthesis failed for redis hello autosynth couldn t regenerate redis broken heart here s the output from running synth py cloning into working repo switched to branch autosynth redis running synthtool synthtool executing tmpfs src git autosynth working repo google cloud redis synth py synthtool ensuring dependencies synthtool pulling artman image latest pulling from googleapis artman digest status image is up to date for googleapis artman latest synthtool cloning googleapis synthtool running generator for google cloud redis artman redis yaml synthtool failed executing docker run name artman docker rm i e host user id e host group id e running in artman docker true v home kbuilder cache synthtool googleapis home kbuilder cache synthtool googleapis v home kbuilder cache synthtool googleapis artman genfiles home kbuilder cache synthtool googleapis artman genfiles w home kbuilder cache synthtool googleapis googleapis artman latest bin bash c artman local config google cloud redis artman redis yaml generate ruby gapic artman final args artman api name redis artman api version artman artifact type gapic artman aspect all artman gapic code dir home kbuilder cache synthtool googleapis artman genfiles ruby google cloud ruby google cloud redis artman gapic yaml home kbuilder cache synthtool googleapis google cloud redis redis gapic yaml artman generator args null artman import proto path artman home kbuilder cache synthtool googleapis artman language ruby artman organization name google cloud artman output dir home kbuilder cache synthtool googleapis artman genfiles artman proto deps artman name google common protos artman proto package artman root dir home kbuilder cache synthtool googleapis artman samples artman service yaml home kbuilder cache synthtool googleapis google cloud redis redis yaml artman src proto path artman home kbuilder cache synthtool googleapis google cloud redis artman toolkit path toolkit artman artman creating gapicclientpipeline artman output warning toplevel lint control presence service redis googleapis com does not have control environment configured error toplevel reference to unknown type locations googleapis com location on field google cloud redis listinstancesrequest google cloud redis listinstancesrequest parent error toplevel reference to unknown type locations googleapis com location on field google cloud redis createinstancerequest google cloud redis createinstancerequest parent warning toplevel lint control presence service redis googleapis com does not have control environment configured error toplevel reference to unknown type locations googleapis com location on field google cloud redis listinstancesrequest google cloud redis listinstancesrequest parent error toplevel reference to unknown type locations googleapis com location on field google cloud redis createinstancerequest google cloud redis createinstancerequest parent artman traceback most recent call last file artman artman cli main py line in main engine run file usr local lib dist packages taskflow engines action engine engine py line in run for state in self run iter timeout timeout file usr local lib dist packages taskflow engines action engine engine py line in run iter failure failure reraise if any er failures file usr local lib dist packages taskflow types failure py line in reraise if any failures reraise file usr local lib dist packages taskflow types failure py line in reraise six reraise self exc info file usr local lib dist packages six py line in reraise raise value file usr local lib 
dist packages taskflow engines action engine executor py line in execute task result task execute arguments file artman artman tasks gapic tasks py line in execute task utils gapic gen task toolkit path args file artman artman tasks task base py line in exec command raise e file artman artman tasks task base py line in exec command output subprocess check output args stderr subprocess stdout file usr lib subprocess py line in check output kwargs stdout file usr lib subprocess py line in run output stdout stderr stderr subprocess calledprocesserror command returned non zero exit status traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src git autosynth env lib site packages synthtool main py line in main file tmpfs src git autosynth env lib site packages click core py line in call return self main args kwargs file tmpfs src git autosynth env lib site packages click core py line in main rv self invoke ctx file tmpfs src git autosynth env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src git autosynth env lib site packages click core py line in invoke return callback args kwargs file tmpfs src git autosynth env lib site packages synthtool main py line in main spec loader exec module synth module type ignore file line in exec module file line in call with frames removed file tmpfs src git autosynth working repo google cloud redis synth py line in artman output name google cloud ruby google cloud redis file tmpfs src git autosynth env lib site packages synthtool gcp gapic generator py line in ruby library return self generate code service version ruby kwargs file tmpfs src git autosynth env lib site packages synthtool gcp gapic generator py line in generate code generator args generator args file tmpfs src git autosynth env lib site packages synthtool gcp artman py line in run shell run cmd cwd root dir file tmpfs src git autosynth env lib site packages synthtool shell py line in run raise exc file tmpfs src git autosynth env lib site packages synthtool shell py line in run encoding utf file home kbuilder pyenv versions lib subprocess py line in run output stdout stderr stderr subprocess calledprocesserror command returned non zero exit status synthtool cleaned up temporary directories synthtool wrote metadata to synth metadata synthesis failed google internal developers can see the full log ,0 372816,26021595169.0,IssuesEvent,2022-12-21 13:07:04,mathew-fleisch/bashbot,https://api.github.com/repos/mathew-fleisch/bashbot,closed,Document Makefile,documentation,The makefile has some useful helper command/targets that can be used to build or install existing binaries on a host machine.,1.0,Document Makefile - The makefile has some useful helper command/targets that can be used to build or install existing binaries on a host machine.,0,document makefile the makefile has some useful helper command targets that can be used to build or install existing binaries on a host machine ,0 312756,9552835901.0,IssuesEvent,2019-05-02 17:40:47,WoWManiaUK/Blackwing-Lair,https://api.github.com/repos/WoWManiaUK/Blackwing-Lair,closed,[Quest/Order] Hero of the Sin'dorei - prerequisite missing,Fixed in Dev Priority zone 1-20,"https://www.wowhead.com/quest=9328/hero-of-the-sindorei Should only become available to pick up from https://www.wowhead.com/npc=16239/magister-kaendris once you completed 
https://www.wowhead.com/quest=9167/the-traitors-destruction Currently available to pick up without completing the previous chain.",1.0,"[Quest/Order] Hero of the Sin'dorei - prerequisite missing - https://www.wowhead.com/quest=9328/hero-of-the-sindorei Should only become available to pick up from https://www.wowhead.com/npc=16239/magister-kaendris once you completed https://www.wowhead.com/quest=9167/the-traitors-destruction Currently available to pick up without completing the previous chain.",0, hero of the sin dorei prerequisite missing should only become available to pick up from once you completed currently available to pick up without completing the previous chain ,0 6693,23744158689.0,IssuesEvent,2022-08-31 14:41:11,home-assistant/home-assistant.io,https://api.github.com/repos/home-assistant/home-assistant.io,closed,Information about trigger duration can not be found on the trigger page,automation,"### Feedback When adding a trigger to an automation in the Home Assistant UI, you are given a field labeled ""Duration (optional)"" I clicked the ""learn more about triggers"" link to figure out what that meant, but could not find anything relevant on the linked page. I'm sure I could try hunting with Google, but it would be much more accessible to just have help about that field on the linked page. Thank you :) ### URL https://www.home-assistant.io/docs/automation/trigger/ ### Version 2022.8.7 ### Additional information ![Screenshot 2022-08-29 113059](https://user-images.githubusercontent.com/108598670/187272916-860525a8-6e78-4021-95f8-50133f203d5d.png) ",1.0,"Information about trigger duration can not be found on the trigger page - ### Feedback When adding a trigger to an automation in the Home Assistant UI, you are given a field labeled ""Duration (optional)"" I clicked the ""learn more about triggers"" link to figure out what that meant, but could not find anything relevant on the linked page. I'm sure I could try hunting with Google, but it would be much more accessible to just have help about that field on the linked page. Thank you :) ### URL https://www.home-assistant.io/docs/automation/trigger/ ### Version 2022.8.7 ### Additional information ![Screenshot 2022-08-29 113059](https://user-images.githubusercontent.com/108598670/187272916-860525a8-6e78-4021-95f8-50133f203d5d.png) ",1,information about trigger duration can not be found on the trigger page feedback adding a trigger to an automation in the home assistant ui you are given a field labeled duration optional i clicked the learn more about triggers link to figure out what that meant but could not find anything relevant on the linked page i m sure i could try hunting with google but it would be much more accessible to just have help about that field on the linked page thank you url version additional information ,1 333369,10121030483.0,IssuesEvent,2019-07-31 14:50:59,BWRat/DES506_Oneiro,https://api.github.com/repos/BWRat/DES506_Oneiro,closed,Rudimentary ladder,Priority C,The ladder acts like a trap. Player cannot cancel climbing and must finish the whole action before they can do anything else. ,1.0,Rudimentary ladder - The ladder acts like a trap. Player cannot cancel climbing and must finish the whole action before they can do anything else.
,0,rudimentry ladder the ladder acts like a trap player cannot cancel climbing and must finish the whole action before they can do anything else ,0 8905,27190125544.0,IssuesEvent,2023-02-19 17:50:01,AnthonyMonterrosa/C-sharp-service-stack,https://api.github.com/repos/AnthonyMonterrosa/C-sharp-service-stack,closed,Separate GitHub Actions Build and Test into Separate Jobs.,automation enhancement,"Currently, the GitHub Action that is run as a PR check does the build and tests in one job. It is preferred that they are separate jobs so that, if either fails, we can see which one failed at a glance while still on the PR's webpage.",1.0,"Separate GitHub Actions Build and Test into Separate Jobs. - Currently, the GitHub Action that is run as a PR check does the build and tests in one job. It is preferred that they are separate jobs so that, if either fails, we can see which one failed at a glance while still on the PR's webpage.",1,separate github actions build and test into separate jobs currently the github action that is run as a pr check does the build and tests in one job it is preferred that they are separate jobs so if either fail we can see which did fail at a glance while still in the pr s webpage ,1 100974,21562551057.0,IssuesEvent,2022-05-01 11:36:46,joomla/joomla-cms,https://api.github.com/repos/joomla/joomla-cms,closed,[4.1.x] Cassiopea Registration page Privacy/Terms alignment,New Feature No Code Attached Yet J4 Frontend Template,"Hi guys, about the [Cassiopea Registration page Privacy/Terms alignment](https://photos.app.goo.gl/6VZuH7Ja7RQ8yfKt5 ""Registration Privacy/Terms""): wouldn't it be better to add this by default: .required.radio { display: inline-flex; gap: 1rem; } to align them horizontally and not waste precious space? ",1.0,"[4.1.x] Cassiopea Registration page Privacy/Terms alignment - Hi guys, about the [Cassiopea Registration page Privacy/Terms alignment](https://photos.app.goo.gl/6VZuH7Ja7RQ8yfKt5 ""Registration Privacy/Terms""): wouldn't it be better to add this by default: .required.radio { display: inline-flex; gap: 1rem; } to align them horizontally and not waste precious space? ",0, cassiopea registration page privacy terms alignment hi guys about the registration privacy terms should not be better to add by default required radio display inline flex gap to align them horizontally and don t waste precious space ,0 108645,11597422869.0,IssuesEvent,2020-02-24 20:50:45,BIAPT/Scripts,https://api.github.com/repos/BIAPT/Scripts,closed,Visualize the step-wise wPLI matrices to ensure that the analysis is correct,documentation enhancement,"Here the objectives are simple: for the first experiment, we need to generate wPLI matrices with a small step (1 second to start) and visualize the resulting properties.
The analysis documentation should be done on the README.md.",0,visualize the step wise wpli matrices to ensure that the analysis is correct here the objectives are simple for the first experiment we need to generate wpli matrices with a small step seconds for starting and visualize the result properties the analysis documentation should be done on the readme md ,0 113072,11787059955.0,IssuesEvent,2020-03-17 13:25:42,zilliztech/arctern,https://api.github.com/repos/zilliztech/arctern,opened,Set up a local Conda channel for installing the Arctern,arctern-0.1.0 documentation,"## Report needed documentation **Describe the documentation you'd like** Set up a local Conda channel for installing the Arctern",1.0,"Set up a local Conda channel for installing the Arctern - ## Report needed documentation **Describe the documentation you'd like** Set up a local Conda channel for installing the Arctern",0,set up a local conda channel for installing the arctern report needed documentation describe the documentation you d like set up a local conda channel for installing the arctern,0 250867,27115567772.0,IssuesEvent,2023-02-15 18:17:21,cosmos/ibc-rs,https://api.github.com/repos/cosmos/ibc-rs,closed,Remove `todo!()`s for tendermint `ClientState`,A: good-first-issue A: urgent A: critical O: security,"There are 3 `todo!()`s to be removed ([one](https://github.com/cosmos/ibc-rs/blob/51ddc415db14241790459208f74451628491ee6c/crates/ibc/src/clients/ics07_tendermint/client_state.rs#L740), [two](https://github.com/cosmos/ibc-rs/blob/51ddc415db14241790459208f74451628491ee6c/crates/ibc/src/clients/ics07_tendermint/client_state.rs#L803) and [three](https://github.com/cosmos/ibc-rs/blob/51ddc415db14241790459208f74451628491ee6c/crates/ibc/src/clients/ics07_tendermint/client_state.rs#L830)).",True,"Remove `todo!()`s for tendermint `ClientState` - There are 3 `todo!()`s to be removed ([one](https://github.com/cosmos/ibc-rs/blob/51ddc415db14241790459208f74451628491ee6c/crates/ibc/src/clients/ics07_tendermint/client_state.rs#L740), [two](https://github.com/cosmos/ibc-rs/blob/51ddc415db14241790459208f74451628491ee6c/crates/ibc/src/clients/ics07_tendermint/client_state.rs#L803) and [three](https://github.com/cosmos/ibc-rs/blob/51ddc415db14241790459208f74451628491ee6c/crates/ibc/src/clients/ics07_tendermint/client_state.rs#L830)).",0,remove todo s for tendermint clientstate there are todo s to be removed and ,0 20055,13643200242.0,IssuesEvent,2020-09-25 16:43:30,niconoe/pyinaturalist,https://api.github.com/repos/niconoe/pyinaturalist,opened,Drop support for Python 3.5,dependencies infrastructure,"[Python 3.5 reached EOL on 2020-09-13](https://devguide.python.org/#status-of-python-branches). I think it would be reasonable to keep pyinaturalist compatible with Python 3.5 through v0.11 For v0.12+, we can remove 3.5 from the tox tests and start making use of Python 3.6 features: f-strings, type annotations for variables, etc.",1.0,"Drop support for Python 3.5 - [Python 3.5 reached EOL on 2020-09-13](https://devguide.python.org/#status-of-python-branches). 
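The pyinaturalist record that begins above goes on to justify the drop with Python 3.6-only syntax: f-strings and variable type annotations. For concreteness, a tiny sketch of both features; the values are illustrative and nothing here is taken from pyinaturalist's codebase:

```python
# f-strings (PEP 498) and variable annotations (PEP 526) both require
# Python 3.6+, which is why they are gated on dropping 3.5 support.
from typing import List

per_page: int = 30                  # annotated variable (hypothetical setting)
taxa: List[str] = ["Aves", "Fungi"]
query = f"taxa={','.join(taxa)}&per_page={per_page}"  # f-string interpolation
print(query)                        # taxa=Aves,Fungi&per_page=30
```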
I think it would be reasonable to keep pyinaturalist compatible with Python 3.5 through v0.11. For v0.12+, we can remove 3.5 from the tox tests and start making use of Python 3.6 features: f-strings, type annotations for variables, etc.",1.0,"Drop support for Python 3.5 - [Python 3.5 reached EOL on 2020-09-13](https://devguide.python.org/#status-of-python-branches). I think it would be reasonable to keep pyinaturalist compatible with Python 3.5 through v0.11. For v0.12+, we can remove 3.5 from the tox tests and start making use of Python 3.6 features: f-strings, type annotations for variables, etc.",0,drop support for python i think it would be reasonable to keep pyinaturalist compatible with python through for we can remove from the tox tests and start making use of python features f strings type annotations for variables etc ,0 256591,19429091448.0,IssuesEvent,2021-12-21 09:50:33,Schlaue-Lise-IT-Project/schlaue-lise,https://api.github.com/repos/Schlaue-Lise-IT-Project/schlaue-lise,closed,Revising the README & the User Manuals,documentation,"The [README](https://github.com/Schlaue-Lise-IT-Project/schlaue-lise/blob/main/README.md) and the [User Manual](https://github.com/Schlaue-Lise-IT-Project/schlaue-lise/blob/main/user-manual.md) need to be revised and filled in. Keep the following in mind: - The whole thing is our _report_, so make sure it contains meaningful content - Use gender-inclusive language; we use the gender colon (e.g. Anwender:innen) - Use paragraphs in long texts, otherwise they get too tiring to read - Use the backticks (\`\`) when you want to highlight something code-related, or in place of quotation marks (e.g. `conda`) Current tasks (please extend) - [x] User Manual (Schlafen) - [x] User Manual (Hygiene) - [x] User Manual (Spenden) - [x] User Manual (Medizin) - [x] README",1.0,"Revising the README & the User Manuals - The [README](https://github.com/Schlaue-Lise-IT-Project/schlaue-lise/blob/main/README.md) and the [User Manual](https://github.com/Schlaue-Lise-IT-Project/schlaue-lise/blob/main/user-manual.md) need to be revised and filled in. Keep the following in mind: - The whole thing is our _report_, so make sure it contains meaningful content - Use gender-inclusive language; we use the gender colon (e.g. Anwender:innen) - Use paragraphs in long texts, otherwise they get too tiring to read - Use the backticks (\`\`) when you want to highlight something code-related, or in place of quotation marks (e.g. `conda`) Current tasks (please extend) - [x] User Manual (Schlafen) - [x] User Manual (Hygiene) - [x] User Manual (Spenden) - [x] User Manual (Medizin) - [x] README",0,revising the readme the user manuals the and the need to be revised and filled in keep the following in mind the whole thing is our report so make sure it contains meaningful content use gender inclusive language we use the gender colon e g anwender innen use paragraphs in long texts otherwise they get too tiring to read use the backticks when you want to highlight something code related or in place of quotation marks e g conda current tasks please extend user manual schlafen user manual hygiene user manual spenden user manual medizin readme,0 1580,10352913521.0,IssuesEvent,2019-09-05 10:17:14,big-neon/bn-web,https://api.github.com/repos/big-neon/bn-web,opened,Automation: Big Neon : Test 20: Refund Tickets,Automation,"**Pre-conditions:** - User should have Admin access - Event the user is selecting should have tickets that have been purchased. **Steps:** 1. Log in as Admin with permission 2. Click on event on the left side bar 3. Click on the event 4. Go to ""Dashboard"" 5. Click on ""Tools"" 6.
Click on ""Manage Orders"" 7. Select the ticket which needs to be refunded 8. Confirm if refund amount is correct 9. Click on ""Refund"" in the bottom right corner after ticket/s are selected 10. ReFund must be successful",1.0,"Automation: Big Neon : Test 20: Refund Tickets - **Pre-conditions:** - User should have Admin access - Event the user is selecting should have tickets that have been purchased. **Steps:** 1. Log in as Admin with permission 2. Click on event on the left side bar 3. Click on the event 4. Go to ""Dashboard"" 5. Click on ""Tools"" 6. Click on ""Manage Orders"" 7. Select the ticket which needs to be refunded 8. Confirm if refund amount is correct 9. Click on ""Refund"" in the bottom right corner after ticket/s are selected 10. ReFund must be successful",1,automation big neon test refund tickets pre conditions user should have admin access event the user is selecting should have tickets that have been purchased steps log in as admin with permission click on event on the left side bar click on the event go to dashboard click on tools click on manage orders select the ticket which needs to be refunded confirm if refund amount is correct click on refund in the bottom right corner after ticket s are selected refund must be successful,1 259681,22504665755.0,IssuesEvent,2022-06-23 14:34:51,MPMG-DCC-UFMG/F01,https://api.github.com/repos/MPMG-DCC-UFMG/F01,opened,Teste de generalizacao para a tag Terceiro Setor - Dados de Parcerias - Minduri,generalization test development,DoD: Realizar o teste de Generalização do validador da tag Terceiro Setor - Dados de Parcerias para o Município de Minduri.,1.0,Teste de generalizacao para a tag Terceiro Setor - Dados de Parcerias - Minduri - DoD: Realizar o teste de Generalização do validador da tag Terceiro Setor - Dados de Parcerias para o Município de Minduri.,0,teste de generalizacao para a tag terceiro setor dados de parcerias minduri dod realizar o teste de generalização do validador da tag terceiro setor dados de parcerias para o município de minduri ,0 2412,11899473458.0,IssuesEvent,2020-03-30 09:03:54,elastic/beats,https://api.github.com/repos/elastic/beats,closed,[ci] Enable Jenkinsfile Pipeline for master and 7.x,[zube]: In Review automation ci,"We want to enable the Jenkinsfile based pipeline build for changes that affect master and 7.x, including pull-requests. And then disable the old Jenkins build for those same targets. ",1.0,"[ci] Enable Jenkinsfile Pipeline for master and 7.x - We want to enable the Jenkinsfile based pipeline build for changes that affect master and 7.x, including pull-requests. And then disable the old Jenkins build for those same targets. ",1, enable jenkinsfile pipeline for master and x we want to enable the jenkinsfile based pipeline build for changes that affect master and x including pull requests and then disable the old jenkins build for those same targets ,1 416111,28067274243.0,IssuesEvent,2023-03-29 16:18:50,microsoft/studentambassadors,https://api.github.com/repos/microsoft/studentambassadors,closed,Cognitive Services API frontend bug,documentation wontfix AI,"## Describe the bug Every time the drop-down menu in ""Name"" is clicked, the display text changes. The same text is displayed for both 1st and 2nd options in the drop-down menu. ## To Reproduce Steps to reproduce the behavior: 1. Go to https://westeurope.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription/console 2. Click on drop-down menu of ""Name"" 4. See the text displayed in box 5. 
Select the 1st and 2nd options (same text is displayed) ## Expected behavior The selected option from the drop-down menu should be displayed correctly ## Screenshots ![image](https://user-images.githubusercontent.com/93262556/221005064-18f2ea4f-795d-4126-baf4-269ce581ebb6.png) ![image](https://user-images.githubusercontent.com/93262556/221005146-20f5dafe-fd2c-4eb6-8094-5d8161bee64c.png) ### Desktop (please complete the following information): - OS: Windows 10 - Browser: Microsoft Edge - Version: 110.0.1587.50 #### 🎓 Add a tag to this issue for your current education role: **Student Ambassador** *** ",1.0,"Cognitive Services API frontend bug - ## Describe the bug Every time the drop-down menu in ""Name"" is clicked, the display text changes. The same text is displayed for both 1st and 2nd options in the drop-down menu. ## To Reproduce Steps to reproduce the behavior: 1. Go to https://westeurope.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription/console 2. Click on drop-down menu of ""Name"" 4. See the text displayed in box 5. Select the 1st and 2nd options (same text is displayed) ## Expected behavior The selected option from the drop-down menu should be displayed correctly ## Screenshots ![image](https://user-images.githubusercontent.com/93262556/221005064-18f2ea4f-795d-4126-baf4-269ce581ebb6.png) ![image](https://user-images.githubusercontent.com/93262556/221005146-20f5dafe-fd2c-4eb6-8094-5d8161bee64c.png) ### Desktop (please complete the following information): - OS: Windows 10 - Browser: Microsoft Edge - Version: 110.0.1587.50 #### 🎓 Add a tag to this issue for your current education role: **Student Ambassador** *** ",0,cognitive services api frontend bug describe the bug every time the drop down menu in name is clicked the display text changes the same text is displayed for both and options in the drop down menu to reproduce steps to reproduce the behavior go to click on drop down menu of name see the text displayed in box select the and options same text is displayed expected behavior the selected option from the drop down menu should be displayed correctly screenshots desktop please complete the following information os windows browser microsoft edge version 🎓 add a tag to this issue for your current education role student ambassador ,0 4448,16566018550.0,IssuesEvent,2021-05-29 12:28:31,home-assistant/core,https://api.github.com/repos/home-assistant/core,closed,Missing illuminance in Automation (GZCGQ01LM),integration: device_automation,"### The problem I'm missing the ""illuminance"" option in automations for the Xiaomi (GZCGQ01LM). There are 4 entities, but in the automation I only see 3. At the beginning of 2021 the option was there, but now it's gone in the UI. I can use the ""illuminance"" function if I use it manually in the YAML file. ![image](https://user-images.githubusercontent.com/76258622/118364323-4871b500-b598-11eb-805f-c181223d46fb.png) And these are the options in automations: ![image](https://user-images.githubusercontent.com/76258622/118364356-70611880-b598-11eb-86b1-05d49ac2b6c0.png) ### What version of Home Assistant Core has the issue? core-2021.5.4 ### What was the last working version of Home Assistant Core? _No response_ ### What type of installation are you running?
Home Assistant Core ### Integration causing the issue Automation ### Link to integration documentation on our website https://www.home-assistant.io/integrations/device_automation/ ### Example YAML snippet _No response_ ### Anything in the logs that might be useful for us? _No response_ ### Additional information _No response_",1.0,"Missing illuminance in Automation (GZCGQ01LM) - ### The problem I'm missing the ""illuminance"" option in automations for the Xiaomi (GZCGQ01LM). There are 4 entities, but in the automation I only see 3. At the beginning of 2021 the option was there, but now it's gone in the UI. I can use the ""illuminance"" function if I use it manually in the YAML file. ![image](https://user-images.githubusercontent.com/76258622/118364323-4871b500-b598-11eb-805f-c181223d46fb.png) And these are the options in automations: ![image](https://user-images.githubusercontent.com/76258622/118364356-70611880-b598-11eb-86b1-05d49ac2b6c0.png) ### What version of Home Assistant Core has the issue? core-2021.5.4 ### What was the last working version of Home Assistant Core? _No response_ ### What type of installation are you running? Home Assistant Core ### Integration causing the issue Automation ### Link to integration documentation on our website https://www.home-assistant.io/integrations/device_automation/ ### Example YAML snippet _No response_ ### Anything in the logs that might be useful for us? _No response_ ### Additional information _No response_",1,missing illuminance in automation the problem i m missing with xiaomi in automations the option illuminance there are entities but in the automation i only see at the beginning of the option was there but now it s gone in the ui i can use the fuction illuminance if i use it manual in the yaml file and this are the options in automations what is version of home assistant core has the issue core what was the last working version of home assistant core no response what type of installation are you running home assistant core integration causing the issue automation link to integration documentation on our website example yaml snippet no response anything in the logs that might be useful for us no response additional information no response ,1 740279,25741744891.0,IssuesEvent,2022-12-08 06:44:07,encorelab/ck-board,https://api.github.com/repos/encorelab/ck-board,opened,Create TODO message field and modal pop-up,enhancement high priority,"1. When creating or editing a TODO item, add a description field that can contain links (just like posts) 2. When an item is clicked, open a modal pop-up for viewing the item, containing all fields - title - description - type (value or ""None"") - group (group name or ""None"") - date",1.0,"Create TODO message field and modal pop-up - 1.
When creating or editing a TODO item, add a description field that can contain links (just like posts) 2. When an item is clicked, open a modal pop-up for viewing the item, containing all fields - title - description - type (value or ""None"") - group (group name or ""None"") - date",0,create todo message field and modal pop up when creating or editing a todo item add a description field that can contain links just like posts img width alt screen shot at am src when item item is clicked open a modal pop up for viewing the item containing all fields title description type value or none group group name or none date ,0 942,8781396552.0,IssuesEvent,2018-12-19 20:20:48,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,404 errors when clicking on links on this page,assigned-to-author automation/svc doc-bug triaged,"When on the page ""https://docs.microsoft.com/en-us/azure/automation/automation-connections#windows-powershell-cmdlets"" and clicking on links to the Cmdlets, you are taken to a 404 page. For example ""Get-AzureRmAutomationConnection"" links to ""https://docs.microsoft.com/en-us/powershell/module/azurerm.automation/get-azurermautomationconnection"" In particular, for our business, we need a working link to the cmdlet ""Remove-AzureRmAutomationModule"". The previous working link, ""https://docs.microsoft.com/en-us/powershell/module/azurerm.automation/remove-azurermautomationmodule"", now returns a 404 error as well. --- #### Document Details ⚠ *Do not edit this section.
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 81c284ea-3656-836f-7eab-388773e0e382 * Version Independent ID: 71329bef-2d4f-4ff6-5a03-83b99b2269e9 * Content: [Connection assets in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-connections) * Content Source: [articles/automation/automation-connections.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-connections.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1, errors when clicking on links on this page when on the page and clicking on links to the cmdlets you are taken to a page for example get azurermautomationconnection links to in particular for our business we need a working link to the cmdlet remove azurermautomationmodule the previous working link now returns a error as well document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login georgewallace microsoft alias gwallace ,1 3436,13765494911.0,IssuesEvent,2020-10-07 13:29:08,home-assistant/core,https://api.github.com/repos/home-assistant/core,closed,Automation: Holding state trigger breaks on automation reloading,integration: automation," ## The problem Hi! I noticed that a holding state trigger (i.e. with `for` condition) does not fire if the `automation.reload` service is called when the entity is already in target state but the time has not passed yet. For example, consider a simple heater that is turned on and off every minute. If I then edit any automation via UI (even an unrelated automation), it will implicitly call the `automation.reload` service, and the heater will stay in one of the states forever (or until I toggle it manually), which could potentially lead to disaster results (freeze or overheat something). ## Environment - Home Assistant Core release with the issue: 0.115 - Last working Home Assistant Core release (if known): N/A - Operating environment (OS/Container/Supervised/Core): OpenWRT, Python 3.7.8 - Integration causing this issue: automation - Link to integration documentation on our website: https://www.home-assistant.io/docs/automation/trigger/ ## Problem-relevant `configuration.yaml` ```yaml automation: - trigger: platform: state entity_id: switch.heater to: ""on"" for: ""00:01:00"" action: service: switch.turn_off entity_id: switch.heater - trigger: platform: state entity_id: switch.heater to: ""off"" for: ""00:01:00"" action: service: switch.turn_on entity_id: switch.heater ``` ## Traceback/Error logs ```txt ``` ## Additional information On load, the automation should lookup history to find out the remaining time. At the very least, it should restart the timer from scratch. It should also lookup its own history in order to avoid triggering twice on the same event, but it should never miss an action even if I reload the automations just a millisecond before it would have fired. Same should apply to HA restarts in the ideal world.",1.0,"Automation: Holding state trigger breaks on automation reloading - ## The problem Hi! I noticed that a holding state trigger (i.e. with `for` condition) does not fire if the `automation.reload` service is called when the entity is already in target state but the time has not passed yet. For example, consider a simple heater that is turned on and off every minute. 
If I then edit any automation via UI (even an unrelated automation), it will implicitly call the `automation.reload` service, and the heater will stay in one of the states forever (or until I toggle it manually), which could potentially lead to disaster results (freeze or overheat something). ## Environment - Home Assistant Core release with the issue: 0.115 - Last working Home Assistant Core release (if known): N/A - Operating environment (OS/Container/Supervised/Core): OpenWRT, Python 3.7.8 - Integration causing this issue: automation - Link to integration documentation on our website: https://www.home-assistant.io/docs/automation/trigger/ ## Problem-relevant `configuration.yaml` ```yaml automation: - trigger: platform: state entity_id: switch.heater to: ""on"" for: ""00:01:00"" action: service: switch.turn_off entity_id: switch.heater - trigger: platform: state entity_id: switch.heater to: ""off"" for: ""00:01:00"" action: service: switch.turn_on entity_id: switch.heater ``` ## Traceback/Error logs ```txt ``` ## Additional information On load, the automation should lookup history to find out the remaining time. At the very least, it should restart the timer from scratch. It should also lookup its own history in order to avoid triggering twice on the same event, but it should never miss an action even if I reload the automations just a millisecond before it would have fired. Same should apply to HA restarts in the ideal world.",1,automation holding state trigger breaks on automation reloading read this first if you need additional help with this template please refer to make sure you are running the latest version of home assistant before reporting an issue do not report issues for integrations if you are using custom components or integrations provide as many details as possible paste logs configuration samples and code into the backticks do not delete any text from this template otherwise your issue may be closed without comment the problem describe the issue you are experiencing here to communicate to the maintainers tell us what you were trying to do and what happened hi i noticed that a holding state trigger i e with for condition does not fire if the automation reload service is called when the entity is already in target state but the time has not passed yet for example consider a simple heater that is turned on and off every minute if i then edit any automation via ui even an unrelated automation it will implicitly call the automation reload service and the heater will stay in one of the states forever or until i toggle it manually which could potentially lead to disaster results freeze or overheat something environment provide details about the versions you are using which helps us to reproduce and find the issue quicker version information is found in the home assistant frontend configuration info home assistant core release with the issue last working home assistant core release if known n a operating environment os container supervised core openwrt python integration causing this issue automation link to integration documentation on our website problem relevant configuration yaml an example configuration that caused the problem for you fill this out even if it seems unimportant to you please be sure to remove personal information like passwords private urls and other credentials yaml automation trigger platform state entity id switch heater to on for action service switch turn off entity id switch heater trigger platform state entity id switch heater to off for action service 
switch turn on entity id switch heater traceback error logs if you come across any trace or error logs please provide them txt additional information on load the automation should lookup history to find out the remaining time at the very least it should restart the timer from scratch it should also lookup its own history in order to avoid triggering twice on the same event but it should never miss an action even if i reload the automations just a millisecond before it would have fired same should apply to ha restarts in the ideal world ,1 6761,23865171437.0,IssuesEvent,2022-09-07 10:22:41,smcnab1/op-question-mark,https://api.github.com/repos/smcnab1/op-question-mark,opened,[FR] Implement Bed & Presence Detection in Automations,Status: Confirmed Type: Feature Priority: Low For: Automations,"**Bed Sensors** - [ ] Implement in turning off automations - [ ] Implement in turning on automations - [ ] Implement in managing security automations - [ ] Set up security automation for overnight **Presence Sensor** - [ ] Implement maintaining light automation until presence moves room",1.0,"[FR] Implement Bed & Presence Detection in Automations - **Bed Sensors** - [ ] Implement in turning off automations - [ ] Implement in turning on automations - [ ] Implement in managing security automations - [ ] Set up security automation for overnight **Presence Sensor** - [ ] Implement maintaining light automation until presence moves room",1, implement bed presence detection in automations bed sensors implement in turning off automations implement in turning on automations implement in managing security automations set up security automation for overnight presence sensor implement maintaining light automation until presence moves room,1 5086,18530515822.0,IssuesEvent,2021-10-21 04:59:56,astropy/astropy,https://api.github.com/repos/astropy/astropy,closed,MNT: Have a bot to auto-backport as PR,Feature Request needs-discussion dev-automation,"When a PR against `master` is merged, it would be desirable to have a bot to automatically open up follow-up PR(s) to backport changes that are just merged against older release branch(es), depending on the relevant PR milestone. If the automatic backport PR(s) encounter difficulties, such as failed CI or conflicts, the bot should then take follow-up actions (apply special labels or create comments) so that manual intervention can be done. If possible, we should not ""roll our own,"" but rather look at how other major projects are doing their backports, and see how we can reuse existing solutions. If all fails, @Cadair said he has a special hack that works ""half the time""... It is something we should aim for sooner than later, so I am going to add a milestone to this issue.",1.0,"MNT: Have a bot to auto-backport as PR - When a PR against `master` is merged, it would be desirable to have a bot to automatically open up follow-up PR(s) to backport changes that are just merged against older release branch(es), depending on the relevant PR milestone. If the automatic backport PR(s) encounter difficulties, such as failed CI or conflicts, the bot should then take follow-up actions (apply special labels or create comments) so that manual intervention can be done. If possible, we should not ""roll our own,"" but rather look at how other major projects are doing their backports, and see how we can reuse existing solutions. If all fails, @Cadair said he has a special hack that works ""half the time""... 
It is something we should aim for sooner than later, so I am going to add a milestone to this issue.",1,mnt have a bot to auto backport as pr when a pr against master is merged it would be desirable to have a bot to automatically open up follow up pr s to backport changes that are just merged against older release branch es depending on the relevant pr milestone if the automatic backport pr s encounter difficulties such as failed ci or conflicts the bot should then take follow up actions apply special labels or create comments so that manual intervention can be done if possible we should not roll our own but rather look at how other major projects are doing their backports and see how we can reuse existing solutions if all fails cadair said he has a special hack that works half the time it is something we should aim for sooner than later so i am going to add a milestone to this issue ,1 96561,12139371273.0,IssuesEvent,2020-04-23 18:44:54,solex2006/SELIProject,https://api.github.com/repos/solex2006/SELIProject,opened,Error Suggestion,1 - Planning Feature Design Notes :notebook: discussion,"This is a ""not-end"" requirement. What I want to mean is that this should be considered for any new page you create that Student can access. Also should correct the old pages. I will use the label **Feature Design Notes** to this cases *************** If an input error is automatically detected and suggestions for correction are known, then the suggestions are provided to the user, unless it would jeopardize the security or purpose of the content. ### Examples #### Example: # :beetle: Test Procedures ### Expected Results: # :busts_in_silhouette: Benefits",1.0,"Error Suggestion - This is a ""not-end"" requirement. What I want to mean is that this should be considered for any new page you create that Student can access. Also should correct the old pages. I will use the label **Feature Design Notes** to this cases *************** If an input error is automatically detected and suggestions for correction are known, then the suggestions are provided to the user, unless it would jeopardize the security or purpose of the content. ### Examples #### Example: # :beetle: Test Procedures ### Expected Results: # :busts_in_silhouette: Benefits",0,error suggestion this is a not end requirement what i want to mean is that this should be considered for any new page you create that student can access also should correct the old pages i will use the label feature design notes to this cases if an input error is automatically detected and suggestions for correction are known then the suggestions are provided to the user unless it would jeopardize the security or purpose of the content examples example beetle test procedures expected results busts in silhouette benefits,0 365640,25545815629.0,IssuesEvent,2022-11-29 18:42:11,PhilanthropyDataCommons/service,https://api.github.com/repos/PhilanthropyDataCommons/service,closed,Setup instructions omit a necessary `.env.test` change,documentation,"I just followed our [setup instructions](https://github.com/PhilanthropyDataCommons/service#setup) mostly-successfully. The only hiccup was when I ran tests for the first time and received multiple errors with the same block of failing code: ``` ● Test suite failed to run ENOENT: no such file or directory, open 'secret_api_keys.txt' 2 | 3 | const validKeysFile = process.env.API_KEYS_FILE ?? 
'test_keys.txt'; > 4 | const data = fs.readFileSync(validKeysFile, 'utf8').split('\n'); | ^ 5 | export const dummyApiKey = { 'x-api-key': data[0] }; 6 | at Object. (src/test/dummyApiKey.ts:4:17) ``` Skipping down to the [API Keys section](https://github.com/PhilanthropyDataCommons/service#setup) made it clear I should have set `API_KEYS_FILE=test_keys.txt` in `.env.test`. We should make that explicit in the setup block.",1.0,"Setup instructions omit a necessary `.env.test` change - I just followed our [setup instructions](https://github.com/PhilanthropyDataCommons/service#setup) mostly-successfully. The only hiccup was when I ran tests for the first time and received multiple errors with the same block of failing code: ``` ● Test suite failed to run ENOENT: no such file or directory, open 'secret_api_keys.txt' 2 | 3 | const validKeysFile = process.env.API_KEYS_FILE ?? 'test_keys.txt'; > 4 | const data = fs.readFileSync(validKeysFile, 'utf8').split('\n'); | ^ 5 | export const dummyApiKey = { 'x-api-key': data[0] }; 6 | at Object. (src/test/dummyApiKey.ts:4:17) ``` Skipping down to the [API Keys section](https://github.com/PhilanthropyDataCommons/service#setup) made it clear I should have set `API_KEYS_FILE=test_keys.txt` in `.env.test`. We should make that explicit in the setup block.",0,setup instructions omit a necessary env test change i just followed our mostly successfully the only hiccup was when i ran tests for the first time and received multiple errors with the same block of failing code ● test suite failed to run enoent no such file or directory open secret api keys txt const validkeysfile process env api keys file test keys txt const data fs readfilesync validkeysfile split n export const dummyapikey x api key data at object src test dummyapikey ts skipping down to the made it clear i should have set api keys file test keys txt in env test we should make that explicit in the setup block ,0 10459,26992473992.0,IssuesEvent,2023-02-09 21:08:58,MicrosoftDocs/architecture-center,https://api.github.com/repos/MicrosoftDocs/architecture-center,closed,Add numbers to diagram,assigned-to-author triaged architecture-center/svc example-scenario/subsvc Pri1,"there is a numerical listing below the diagram, it would be helpful if the diagram showed the numbers so that the text descriptions could be more easily correlated. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: d2c472ec-9be4-6e0e-eac6-0236e2e1044a * Version Independent ID: d2c472ec-9be4-6e0e-eac6-0236e2e1044a * Content: [Network-hardened web app - Azure Example Scenarios](https://docs.microsoft.com/en-us/azure/architecture/example-scenario/security/hardened-web-app) * Content Source: [docs/example-scenario/security/hardened-web-app.yml](https://github.com/microsoftdocs/architecture-center/blob/main/docs/example-scenario/security/hardened-web-app.yml) * Service: **architecture-center** * Sub-service: **example-scenario** * GitHub Login: @damaccar * Microsoft Alias: **damaccar**",1.0,"Add numbers to diagram - there is a numerical listing below the diagram, it would be helpful if the diagram showed the numbers so that the text descriptions could be more easily correlated. --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: d2c472ec-9be4-6e0e-eac6-0236e2e1044a * Version Independent ID: d2c472ec-9be4-6e0e-eac6-0236e2e1044a * Content: [Network-hardened web app - Azure Example Scenarios](https://docs.microsoft.com/en-us/azure/architecture/example-scenario/security/hardened-web-app) * Content Source: [docs/example-scenario/security/hardened-web-app.yml](https://github.com/microsoftdocs/architecture-center/blob/main/docs/example-scenario/security/hardened-web-app.yml) * Service: **architecture-center** * Sub-service: **example-scenario** * GitHub Login: @damaccar * Microsoft Alias: **damaccar**",0,add numbers to diagram there is a numerical listing below the diagram it would be helpful if the diagram showed the numbers so that the text descriptions could be more easily correlated document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service architecture center sub service example scenario github login damaccar microsoft alias damaccar ,0 1657,10542530407.0,IssuesEvent,2019-10-02 13:22:06,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,closed,apm-server + logstash + ILM,automation subtask,"When using logstash to index apm-server output while ILM is enabled, this change is needed: ```patch diff --git a/docker/logstash/pipeline/apm.conf b/docker/logstash/pipeline/apm.conf index 1db7fdc..31fc1df 100644 --- a/docker/logstash/pipeline/apm.conf +++ b/docker/logstash/pipeline/apm.conf @@ -34,6 +34,6 @@ filter { output { elasticsearch { hosts => [""elasticsearch:9200""] - index => ""%{[@metadata][beat]}-%{[@metadata][version]}%{[@metadata][index_suffix]}-%{+YYYY.MM.dd}"" + index => ""%{[@metadata][beat]}-%{[@metadata][version]}%{[@metadata][index_suffix]}"" } } ``` it would also be nice to include `pipeline => ""apm""` for versions that install that pipeline.",1.0,"apm-server + logstash + ILM - When using logstash to index apm-server output while ILM is enabled, this change is needed: ```patch diff --git a/docker/logstash/pipeline/apm.conf b/docker/logstash/pipeline/apm.conf index 1db7fdc..31fc1df 100644 --- a/docker/logstash/pipeline/apm.conf +++ b/docker/logstash/pipeline/apm.conf @@ -34,6 +34,6 @@ filter { output { elasticsearch { hosts => [""elasticsearch:9200""] - index => ""%{[@metadata][beat]}-%{[@metadata][version]}%{[@metadata][index_suffix]}-%{+YYYY.MM.dd}"" + index => ""%{[@metadata][beat]}-%{[@metadata][version]}%{[@metadata][index_suffix]}"" } } ``` it would also be nice to include `pipeline => ""apm""` for versions that install that pipeline.",1,apm server logstash ilm when using logstash to index apm server output while ilm is enabled this change is needed patch diff git a docker logstash pipeline apm conf b docker logstash pipeline apm conf index a docker logstash pipeline apm conf b docker logstash pipeline apm conf filter output elasticsearch hosts index yyyy mm dd index it would also be nice to include pipeline apm for versions that install that pipeline ,1 620,7549323598.0,IssuesEvent,2018-04-18 13:58:04,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Multiple Subscription Update management ,assigned-to-author automation product-question triaged,"I have Multiple subscription (Prod,Dev,Test),now am in confusion of how to use Azure Update management across the subscription. 
Is that possible to have single Azure Automation account enabled with Update management for all my subscription or do i need to have Azure Automation account enabled with Update management in All my subscription. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: c3461048-c7fc-3979-a818-39af99d5e6bb * Version Independent ID: d0e5e766-ef63-d934-b21b-678933a5cc65 * Content: [Manage updates and patches for your Azure Windows VMs](https://docs.microsoft.com/en-us/azure/automation/automation-tutorial-update-management) * Content Source: [articles/automation/automation-tutorial-update-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-tutorial-update-management.md) * Service: **automation** * GitHub Login: @zjalexander * Microsoft Alias: **zachal**",1.0,"Multiple Subscription Update management - I have Multiple subscription (Prod,Dev,Test),now am in confusion of how to use Azure Update management across the subscription. Is that possible to have single Azure Automation account enabled with Update management for all my subscription or do i need to have Azure Automation account enabled with Update management in All my subscription. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: c3461048-c7fc-3979-a818-39af99d5e6bb * Version Independent ID: d0e5e766-ef63-d934-b21b-678933a5cc65 * Content: [Manage updates and patches for your Azure Windows VMs](https://docs.microsoft.com/en-us/azure/automation/automation-tutorial-update-management) * Content Source: [articles/automation/automation-tutorial-update-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-tutorial-update-management.md) * Service: **automation** * GitHub Login: @zjalexander * Microsoft Alias: **zachal**",1,multiple subscription update management i have multiple subscription prod dev test now am in confusion of how to use azure update management across the subscription is that possible to have single azure automation account enabled with update management for all my subscription or do i need to have azure automation account enabled with update management in all my subscription document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login zjalexander microsoft alias zachal ,1 40874,6875291691.0,IssuesEvent,2017-11-19 12:14:18,junit-team/junit5,https://api.github.com/repos/junit-team/junit5,closed,Update Asciidoctor PDF backend of the user-guide,status: blocked theme: documentation,"## Overview The PDF backend of is disabled at the moment, as does not work with Java 9. See `documentation/documentation.gradle:83` for backend configuration. See https://github.com/jruby/jruby/issues/4805 for the underlying issue. ## Deliverables - [ ] Enable PDF backend when https://github.com/jruby/jruby/issues/4805 is solved and JRuby **9.1.14** is released. ",1.0,"Update Asciidoctor PDF backend of the user-guide - ## Overview The PDF backend of is disabled at the moment, as does not work with Java 9. See `documentation/documentation.gradle:83` for backend configuration. See https://github.com/jruby/jruby/issues/4805 for the underlying issue. ## Deliverables - [ ] Enable PDF backend when https://github.com/jruby/jruby/issues/4805 is solved and JRuby **9.1.14** is released. 
",0,update asciidoctor pdf backend of the user guide overview the pdf backend of is disabled at the moment as does not work with java see documentation documentation gradle for backend configuration see for the underlying issue deliverables enable pdf backend when is solved and jruby is released ,0 450467,31925861097.0,IssuesEvent,2023-09-19 01:40:50,vercel/next.js,https://api.github.com/repos/vercel/next.js,closed,Docs: Get static paths fallback value issue,template: documentation,"### What is the improvement or update you wish to see? There is an error with example od Dynamic Routes fallback value for Pages Router (value `true` is not working, giving build errors). ### Is there any context that might help us understand? Reproducing guide: 1. Clone [this repo](https://github.com/z4nr34l/nextjs-preprender-reproduce.git) 2. Run `next build` or `pnpm run build` inside 3. Watch it giving SSG errors - [x] I'll prepare PR shortly to fix that. ### Does the docs page already exist? Please link to it. https://nextjs.org/docs/pages/building-your-application/data-fetching/get-static-paths",1.0,"Docs: Get static paths fallback value issue - ### What is the improvement or update you wish to see? There is an error with example od Dynamic Routes fallback value for Pages Router (value `true` is not working, giving build errors). ### Is there any context that might help us understand? Reproducing guide: 1. Clone [this repo](https://github.com/z4nr34l/nextjs-preprender-reproduce.git) 2. Run `next build` or `pnpm run build` inside 3. Watch it giving SSG errors - [x] I'll prepare PR shortly to fix that. ### Does the docs page already exist? Please link to it. https://nextjs.org/docs/pages/building-your-application/data-fetching/get-static-paths",0,docs get static paths fallback value issue what is the improvement or update you wish to see there is an error with example od dynamic routes fallback value for pages router value true is not working giving build errors is there any context that might help us understand reproducing guide clone run next build or pnpm run build inside watch it giving ssg errors i ll prepare pr shortly to fix that does the docs page already exist please link to it ,0 1841,10924371270.0,IssuesEvent,2019-11-22 09:59:41,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,closed,Replacer service is never started by server in the docker image,automation bug,"`replacer` is missing in the server globals: https://sourcegraph.com/github.com/sourcegraph/sourcegraph@master/-/blob/cmd/server/shared/globals.go#L12:5",1.0,"Replacer service is never started by server in the docker image - `replacer` is missing in the server globals: https://sourcegraph.com/github.com/sourcegraph/sourcegraph@master/-/blob/cmd/server/shared/globals.go#L12:5",1,replacer service is never started by server in the docker image replacer is missing in the server globals ,1 161491,12546168066.0,IssuesEvent,2020-06-05 20:13:07,pytorch/pytorch,https://api.github.com/repos/pytorch/pytorch,closed,DISABLED test_backward_node_failure (__main__.TensorPipeAgentDistAutogradTestWithSpawn),high priority module: rpc module: tensorpipe topic: flaky-tests triage review triaged," https://app.circleci.com/pipelines/github/pytorch/pytorch/176463/workflows/013c36ff-c568-4726-a10f-fc6fc342ac0c/jobs/5670885/steps ``` Jun 03 17:38:08 test_backward_node_failure (__main__.TensorPipeAgentDistAutogradTestWithSpawn) ... 
[W tensorpipe_agent.cpp:312] RPC agent for worker3 encountered error when reading incoming request: pipe closed Jun 03 17:38:08 [E container.cpp:248] Could not release Dist Autograd Context on node 0: pipe closed Jun 03 17:38:08 [W tensorpipe_agent.cpp:312] RPC agent for worker0 encountered error when reading incoming request: EOF: end of file Jun 03 17:38:08 [W tensorpipe_agent.cpp:280] RPC agent for worker0 encountered error when writing outgoing response: EOF: end of file Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: ECONNREFUSED: connection refused Jun 03 17:38:08 [E container.cpp:248] Could not release Dist Autograd Context on node 2: pipe closed Jun 03 17:38:08 [W tensorpipe_agent.cpp:312] RPC agent for worker1 encountered error when reading incoming request: pipe closed Jun 03 17:38:08 [W tensorpipe_agent.cpp:312] RPC agent for worker2 encountered error when reading incoming request: EOF: end of file Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: ECONNREFUSED: connection refused Jun 03 17:38:08 [W tensorpipe_agent.cpp:280] RPC agent for worker2 encountered error when writing outgoing response: EOF: end of file Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: ECONNREFUSED: connection refused Jun 03 17:38:08 [W tensorpipe_agent.cpp:453] RPC agent for worker0 encountered error when reading incoming request: EOF: end of file Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: EPIPE: broken pipe Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: EPIPE: broken pipe Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: ECONNREFUSED: connection refused Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: ECONNREFUSED: connection refused Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: EPIPE: broken pipe Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: EOF: end of file Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: ECONNREFUSED: connection refused Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: EOF: end of file Jun 03 17:39:47 Timing out after 100 seconds and killing subprocesses. 
Jun 03 17:39:47 ERROR (100.070s) ``` ``` Jun 03 17:43:08 ====================================================================== Jun 03 17:43:08 ERROR [100.070s]: test_backward_node_failure (__main__.TensorPipeAgentDistAutogradTestWithSpawn) Jun 03 17:43:08 ---------------------------------------------------------------------- Jun 03 17:43:08 Traceback (most recent call last): Jun 03 17:43:08 File ""/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py"", line 204, in wrapper Jun 03 17:43:08 self._join_processes(fn) Jun 03 17:43:08 File ""/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py"", line 306, in _join_processes Jun 03 17:43:08 self._check_return_codes(elapsed_time) Jun 03 17:43:08 File ""/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py"", line 344, in _check_return_codes Jun 03 17:43:08 raise RuntimeError('Process {} terminated or timed out after {} seconds'.format(i, elapsed_time)) Jun 03 17:43:08 RuntimeError: Process 2 terminated or timed out after 100.05238389968872 seconds ``` cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @xush6528 @jjlilley @osalpekar @jiayisuse @lw @beauby",1.0,"DISABLED test_backward_node_failure (__main__.TensorPipeAgentDistAutogradTestWithSpawn) - https://app.circleci.com/pipelines/github/pytorch/pytorch/176463/workflows/013c36ff-c568-4726-a10f-fc6fc342ac0c/jobs/5670885/steps ``` Jun 03 17:38:08 test_backward_node_failure (__main__.TensorPipeAgentDistAutogradTestWithSpawn) ... [W tensorpipe_agent.cpp:312] RPC agent for worker3 encountered error when reading incoming request: pipe closed Jun 03 17:38:08 [E container.cpp:248] Could not release Dist Autograd Context on node 0: pipe closed Jun 03 17:38:08 [W tensorpipe_agent.cpp:312] RPC agent for worker0 encountered error when reading incoming request: EOF: end of file Jun 03 17:38:08 [W tensorpipe_agent.cpp:280] RPC agent for worker0 encountered error when writing outgoing response: EOF: end of file Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: ECONNREFUSED: connection refused Jun 03 17:38:08 [E container.cpp:248] Could not release Dist Autograd Context on node 2: pipe closed Jun 03 17:38:08 [W tensorpipe_agent.cpp:312] RPC agent for worker1 encountered error when reading incoming request: pipe closed Jun 03 17:38:08 [W tensorpipe_agent.cpp:312] RPC agent for worker2 encountered error when reading incoming request: EOF: end of file Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: ECONNREFUSED: connection refused Jun 03 17:38:08 [W tensorpipe_agent.cpp:280] RPC agent for worker2 encountered error when writing outgoing response: EOF: end of file Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: ECONNREFUSED: connection refused Jun 03 17:38:08 [W tensorpipe_agent.cpp:453] RPC agent for worker0 encountered error when reading incoming request: EOF: end of file Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: EPIPE: broken pipe Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: EPIPE: broken pipe Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: 
ECONNREFUSED: connection refused Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: ECONNREFUSED: connection refused Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: EPIPE: broken pipe Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: EOF: end of file Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: ECONNREFUSED: connection refused Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: EOF: end of file Jun 03 17:39:47 Timing out after 100 seconds and killing subprocesses. Jun 03 17:39:47 ERROR (100.070s) ``` ``` Jun 03 17:43:08 ====================================================================== Jun 03 17:43:08 ERROR [100.070s]: test_backward_node_failure (__main__.TensorPipeAgentDistAutogradTestWithSpawn) Jun 03 17:43:08 ---------------------------------------------------------------------- Jun 03 17:43:08 Traceback (most recent call last): Jun 03 17:43:08 File ""/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py"", line 204, in wrapper Jun 03 17:43:08 self._join_processes(fn) Jun 03 17:43:08 File ""/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py"", line 306, in _join_processes Jun 03 17:43:08 self._check_return_codes(elapsed_time) Jun 03 17:43:08 File ""/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py"", line 344, in _check_return_codes Jun 03 17:43:08 raise RuntimeError('Process {} terminated or timed out after {} seconds'.format(i, elapsed_time)) Jun 03 17:43:08 RuntimeError: Process 2 terminated or timed out after 100.05238389968872 seconds ``` cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @xush6528 @jjlilley @osalpekar @jiayisuse @lw @beauby",0,disabled test backward node failure main tensorpipeagentdistautogradtestwithspawn jun test backward node failure main tensorpipeagentdistautogradtestwithspawn rpc agent for encountered error when reading incoming request pipe closed jun could not release dist autograd context on node pipe closed jun rpc agent for encountered error when reading incoming request eof end of file jun rpc agent for encountered error when writing outgoing response eof end of file jun rpc agent for encountered error when writing outgoing request econnrefused connection refused jun could not release dist autograd context on node pipe closed jun rpc agent for encountered error when reading incoming request pipe closed jun rpc agent for encountered error when reading incoming request eof end of file jun rpc agent for encountered error when writing outgoing request econnrefused connection refused jun rpc agent for encountered error when writing outgoing response eof end of file jun rpc agent for encountered error when writing outgoing request econnrefused connection refused jun rpc agent for encountered error when reading incoming request eof end of file jun rpc agent for encountered error when writing outgoing request epipe broken pipe jun rpc agent for encountered error when writing outgoing request epipe broken pipe jun rpc agent for encountered error when writing outgoing request econnrefused connection refused jun rpc agent for encountered error when writing outgoing 
request econnrefused connection refused jun rpc agent for encountered error when writing outgoing request epipe broken pipe jun rpc agent for encountered error when writing outgoing request eof end of file jun rpc agent for encountered error when writing outgoing request econnrefused connection refused jun rpc agent for encountered error when writing outgoing request eof end of file jun timing out after seconds and killing subprocesses jun error jun jun error test backward node failure main tensorpipeagentdistautogradtestwithspawn jun jun traceback most recent call last jun file opt conda lib site packages torch testing internal common distributed py line in wrapper jun self join processes fn jun file opt conda lib site packages torch testing internal common distributed py line in join processes jun self check return codes elapsed time jun file opt conda lib site packages torch testing internal common distributed py line in check return codes jun raise runtimeerror process terminated or timed out after seconds format i elapsed time jun runtimeerror process terminated or timed out after seconds cc ezyang gchanan pietern mrshenli zhaojuanmao satgera gqchen aazzolini rohan varma jjlilley osalpekar jiayisuse lw beauby,0 251581,8017426787.0,IssuesEvent,2018-07-25 15:53:39,CARLI/vufind,https://api.github.com/repos/CARLI/vufind,closed,"Remove color of former ""Live Status Unavailable"" gray box on results page",Accepted Ready for Prod priority issue,"In #161 we decided to remove the wording from the ""Live Status Unavailable"" box on the results page because it was confusing and didn't add any value. However, that will leave us with a small, gray box (see example in 11 and 12 in screenshot below). Can we remove the color from that box or hide it? ![screen shot 2018-07-03 at 9 24 49 am](https://user-images.githubusercontent.com/17598357/42592281-a600d09c-850e-11e8-8a48-b65667fb20ee.png) The CSS that controls that gray color appears to be: < span class=""status""> < span class=""label label-default"">< /span> < /span> .label-default { background-color: #777; } ",1.0,"Remove color of former ""Live Status Unavailable"" gray box on results page - In #161 we decided to remove the wording from the ""Live Status Unavailable"" box on the results page because it was confusing and didn't add any value. However, that will leave us with a small, gray box (see example in 11 and 12 in screenshot below). Can we remove the color from that box or hide it? 
![screen shot 2018-07-03 at 9 24 49 am](https://user-images.githubusercontent.com/17598357/42592281-a600d09c-850e-11e8-8a48-b65667fb20ee.png) The CSS that controls that gray color appears to be: < span class=""status""> < span class=""label label-default"">< /span> < /span> .label-default { background-color: #777; } ",0,remove color of former live status unavailable gray box on results page in we decided to remove the wording from the live status unavailable box on the results page because it was confusing and didn t add any value however that will leave us with a small gray box see example in and in screenshot below can we remove the color from that box or hide it the css that controls that gray color appears to be label default background color ,0 529773,15395204320.0,IssuesEvent,2021-03-03 18:54:41,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,closed,[BUG] Updating the config map in the Longhorn yaml doesn't set the values of the settings.,priority/2 wontfix,"**Describe the bug** Deploy Longhorn-master after setting some values like `concurrent-automatic-engine-upgrade-per-node-limit` in the Config map in the Longhorn yaml file. The values are not reflected the Longhorn setting once deployed. **To Reproduce** Steps to reproduce the behavior: 1. Deploy Longhorn v1.1.0 on a K8s cluster. 2. Create some volumes and attach them to pods. 3. Change the `concurrent-automatic-engine-upgrade-per-node-limit` value to 2 in the config map. Change some more values in the Config map like below. ``` apiVersion: v1 kind: ConfigMap metadata: name: longhorn-default-setting namespace: longhorn-system data: default-setting.yaml: |- backup-target: backup-target-credential-secret: allow-recurring-job-while-volume-detached: create-default-disk-labeled-nodes: default-data-path: replica-soft-anti-affinity: storage-over-provisioning-percentage: storage-minimal-available-percentage: upgrade-checker: default-replica-count: default-data-locality: guaranteed-engine-cpu: default-longhorn-static-storage-class: backupstore-poll-interval: taint-toleration: priority-class: auto-salvage: auto-delete-pod-when-volume-detached-unexpectedly: disable-scheduling-on-cordoned-node: replica-zone-soft-anti-affinity: volume-attachment-recovery-policy: node-down-pod-deletion-policy: allow-node-drain-with-last-healthy-replica:true mkfs-ext4-parameters:'abc' disable-replica-rebuild: replica-replenishment-wait-interval:100 disable-revision-counter: system-managed-pods-image-pull-policy: allow-volume-creation-with-degraded-availability:false auto-cleanup-system-generated-snapshot: concurrent-automatic-engine-upgrade-per-node-limit:2 backing-image-cleanup-wait-interval: ``` 4. Deploy Longhorn using kubectl command. Check the Longhorn setting, the values are not reflected. **Expected behavior** User should be able to change the `longhorn-default-setting` and deploy Longhorn with those values. **Environment:** - Longhorn version: Longhorn-master `03/01/2021` - Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: K8s v1.20.4 - RKE - Number of management node in the cluster: 1 - Number of worker node in the cluster: 3 - Node config - OS type and version: Ubuntu 1.20 - CPU per node: 2 vcpus - Memory per node: 4 GB - Disk type(e.g. SSD/NVMe): SSD - Network bandwidth between the nodes: 5 Gigabyte - Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal): DO - Number of Longhorn volumes in the cluster: 10 **Additional context** Add any other context about the problem here. 
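One thing worth checking before anything else: several entries in the config map above are written as `key:value` with no space after the colon (for example `concurrent-automatic-engine-upgrade-per-node-limit:2` and `allow-node-drain-with-last-healthy-replica:true`). In YAML, `key:value` without the space is a plain scalar rather than a key/value pair, so those settings may never be parsed as settings at all. A minimal sketch of the same excerpt with standard `key: value` spacing (values kept from the repro above):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: longhorn-default-setting
  namespace: longhorn-system
data:
  default-setting.yaml: |-
    # note the space after every colon; without it the whole line
    # is a single scalar, not a mapping entry
    allow-node-drain-with-last-healthy-replica: true
    mkfs-ext4-parameters: 'abc'
    replica-replenishment-wait-interval: 100
    allow-volume-creation-with-degraded-availability: false
    concurrent-automatic-engine-upgrade-per-node-limit: 2
```
If the settings still fail to apply with valid YAML, the report stands as described.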
",1.0,"[BUG] Updating the config map in the Longhorn yaml doesn't set the values of the settings. - **Describe the bug** Deploy Longhorn-master after setting some values like `concurrent-automatic-engine-upgrade-per-node-limit` in the Config map in the Longhorn yaml file. The values are not reflected the Longhorn setting once deployed. **To Reproduce** Steps to reproduce the behavior: 1. Deploy Longhorn v1.1.0 on a K8s cluster. 2. Create some volumes and attach them to pods. 3. Change the `concurrent-automatic-engine-upgrade-per-node-limit` value to 2 in the config map. Change some more values in the Config map like below. ``` apiVersion: v1 kind: ConfigMap metadata: name: longhorn-default-setting namespace: longhorn-system data: default-setting.yaml: |- backup-target: backup-target-credential-secret: allow-recurring-job-while-volume-detached: create-default-disk-labeled-nodes: default-data-path: replica-soft-anti-affinity: storage-over-provisioning-percentage: storage-minimal-available-percentage: upgrade-checker: default-replica-count: default-data-locality: guaranteed-engine-cpu: default-longhorn-static-storage-class: backupstore-poll-interval: taint-toleration: priority-class: auto-salvage: auto-delete-pod-when-volume-detached-unexpectedly: disable-scheduling-on-cordoned-node: replica-zone-soft-anti-affinity: volume-attachment-recovery-policy: node-down-pod-deletion-policy: allow-node-drain-with-last-healthy-replica:true mkfs-ext4-parameters:'abc' disable-replica-rebuild: replica-replenishment-wait-interval:100 disable-revision-counter: system-managed-pods-image-pull-policy: allow-volume-creation-with-degraded-availability:false auto-cleanup-system-generated-snapshot: concurrent-automatic-engine-upgrade-per-node-limit:2 backing-image-cleanup-wait-interval: ``` 4. Deploy Longhorn using kubectl command. Check the Longhorn setting, the values are not reflected. **Expected behavior** User should be able to change the `longhorn-default-setting` and deploy Longhorn with those values. **Environment:** - Longhorn version: Longhorn-master `03/01/2021` - Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: K8s v1.20.4 - RKE - Number of management node in the cluster: 1 - Number of worker node in the cluster: 3 - Node config - OS type and version: Ubuntu 1.20 - CPU per node: 2 vcpus - Memory per node: 4 GB - Disk type(e.g. SSD/NVMe): SSD - Network bandwidth between the nodes: 5 Gigabyte - Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal): DO - Number of Longhorn volumes in the cluster: 10 **Additional context** Add any other context about the problem here. 
",0, updating the config map in the longhorn yaml doesn t set the values of the settings describe the bug deploy longhorn master after setting some values like concurrent automatic engine upgrade per node limit in the config map in the longhorn yaml file the values are not reflected the longhorn setting once deployed to reproduce steps to reproduce the behavior deploy longhorn on a cluster create some volumes and attach them to pods change the concurrent automatic engine upgrade per node limit value to in the config map change some more values in the config map like below apiversion kind configmap metadata name longhorn default setting namespace longhorn system data default setting yaml backup target backup target credential secret allow recurring job while volume detached create default disk labeled nodes default data path replica soft anti affinity storage over provisioning percentage storage minimal available percentage upgrade checker default replica count default data locality guaranteed engine cpu default longhorn static storage class backupstore poll interval taint toleration priority class auto salvage auto delete pod when volume detached unexpectedly disable scheduling on cordoned node replica zone soft anti affinity volume attachment recovery policy node down pod deletion policy allow node drain with last healthy replica true mkfs parameters abc disable replica rebuild replica replenishment wait interval disable revision counter system managed pods image pull policy allow volume creation with degraded availability false auto cleanup system generated snapshot concurrent automatic engine upgrade per node limit backing image cleanup wait interval deploy longhorn using kubectl command check the longhorn setting the values are not reflected expected behavior user should be able to change the longhorn default setting and deploy longhorn with those values environment longhorn version longhorn master kubernetes distro e g rke eks openshift and version rke number of management node in the cluster number of worker node in the cluster node config os type and version ubuntu cpu per node vcpus memory per node gb disk type e g ssd nvme ssd network bandwidth between the nodes gigabyte underlying infrastructure e g on aws gce eks gke vmware kvm baremetal do number of longhorn volumes in the cluster additional context add any other context about the problem here ,0 9475,28502087666.0,IssuesEvent,2023-04-18 18:10:29,keycloak/keycloak-benchmark,https://api.github.com/repos/keycloak/keycloak-benchmark,opened,Add support to openshift provisioning module to create base entities to support gatling benchmark and dataset scripts,enhancement provision automation dataset,"### Description Add support to openshift provisioning module to create base entities to support gatling benchmark and dataset scripts - Similar to minikube module we need to have support to create the needed service account enabled client, test-realm and users to start a benchmark against the Openshift cluster based Keycloak instance. - Akin to the above point, we also need to be able to deploy dataset module to the Openshift cluster based Keycloak instance and create needed datasets for various entities. ### Discussion _No response_ ### Motivation _No response_ ### Details The question I have for this particular feature request is along the lines of implementation for gatlinguser task from minikube/Taskfile.yaml Do we want to create these `tasks` under the `common/Taskfile.yml` ? 
That way we can simply modify the `gatlinguser` to pick up the `KC_HOSTNAME_SUFFIX` from the `.env` file to be under a conditional if block to only run the varied version of the below bash command when the `KC_HOSTNAME_SUFFIX` variable exists in the context. Suggested Change: ``` - > bash -c ' if [ ""{{.KC_HOSTNAME_SUFFIX}}"" != """" ]; then ../keycloak-cli/keycloak/bin/kcadm.sh config credentials --server https://keycloak.{{.KC_HOSTNAME_SUFFIX}}/ --realm master --user admin --password admin; else ../keycloak-cli/keycloak/bin/kcadm.sh config credentials --server https://keycloak.{{.IP}}.nip.io/ --realm master --user admin --password admin; fi' ``` For the dataset module, I believe there would be no changes needed, but would need some testing to make sure we don't hit any runtime issues.",1.0,"Add support to openshift provisioning module to create base entities to support gatling benchmark and dataset scripts - ### Description Add support to openshift provisioning module to create base entities to support gatling benchmark and dataset scripts - Similar to minikube module we need to have support to create the needed service account enabled client, test-realm and users to start a benchmark against the Openshift cluster based Keycloak instance. - Akin to the above point, we also need to be able to deploy dataset module to the Openshift cluster based Keycloak instance and create needed datasets for various entities. ### Discussion _No response_ ### Motivation _No response_ ### Details The question I have for this particular feature request is along the lines of implementation for gatlinguser task from minikube/Taskfile.yaml Do we want to create these `tasks` under the `common/Taskfile.yml` ? That way we can simply modify the `gatlinguser` to pick up the `KC_HOSTNAME_SUFFIX` from the `.env` file to be under a conditional if block to only run the varied version of the below bash command when the `KC_HOSTNAME_SUFFIX` variable exists in the context. 
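As a rough sketch of the same idea lifted into a Taskfile variable (assuming Task's Go-template evaluation of `vars`; the `KEYCLOAK_URL` name is hypothetical), before the concrete task change below:
```yaml
# hypothetical helper var: falls back to the nip.io URL when
# KC_HOSTNAME_SUFFIX is not present in .env
vars:
  KEYCLOAK_URL: '{{if .KC_HOSTNAME_SUFFIX}}https://keycloak.{{.KC_HOSTNAME_SUFFIX}}{{else}}https://keycloak.{{.IP}}.nip.io{{end}}'
```
Every kcadm invocation could then point at `--server {{.KEYCLOAK_URL}}/` instead of branching in bash.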
Suggested Change: ``` - > bash -c ' if [ ""{{.KC_HOSTNAME_SUFFIX}}"" != """" ]; then ../keycloak-cli/keycloak/bin/kcadm.sh config credentials --server https://keycloak.{{.KC_HOSTNAME_SUFFIX}}/ --realm master --user admin --password admin; else ../keycloak-cli/keycloak/bin/kcadm.sh config credentials --server https://keycloak.{{.IP}}.nip.io/ --realm master --user admin --password admin; fi' ``` For the dataset module, I believe there would be no changes needed, but would need some testing to make sure we don't hit any runtime issues.",1,add support to openshift provisioning module to create base entities to support gatling benchmark and dataset scripts description add support to openshift provisioning module to create base entities to support gatling benchmark and dataset scripts similar to minikube module we need to have support to create the needed service account enabled client test realm and users to start a benchmark against the openshift cluster based keycloak instance akin to the above point we also need to be able to deploy dataset module to the openshift cluster based keycloak instance and create needed datasets for various entities discussion no response motivation no response details the question i have for this particular feature request is along the lines of implementation for gatlinguser task from minikube taskfile yaml do we want to create these tasks under the common taskfile yml that way we can simply modify the gatlinguser to pick up the kc hostname suffix from the env file to be under a conditional if block to only run the varied version of the below bash command when the kc hostname suffix variable exists in the context suggested change bash c if then keycloak cli keycloak bin kcadm sh config credentials server realm master user admin password admin else keycloak cli keycloak bin kcadm sh config credentials server realm master user admin password admin fi for the dataset module i believe there would be no changes needed but would need some testing to make sure we don t hit any runtime issues ,1 8687,2611535966.0,IssuesEvent,2015-02-27 06:05:51,chrsmith/hedgewars,https://api.github.com/repos/chrsmith/hedgewars,closed,Keyboard Layout,auto-migrated Priority-Medium Type-Defect,"``` IF i have russian keyboard layout turned on my OSX 10.7.5 some buttons in game e.g. P,T etc doesnt work. ``` Original issue reported on code.google.com by `maxis...@gmail.com` on 18 Jan 2014 at 10:08 * Merged into: #192",1.0,"Keyboard Layout - ``` IF i have russian keyboard layout turned on my OSX 10.7.5 some buttons in game e.g. P,T etc doesnt work. ``` Original issue reported on code.google.com by `maxis...@gmail.com` on 18 Jan 2014 at 10:08 * Merged into: #192",0,keyboard layout if i have russian keyboard layout turned on my osx some buttons in game e g p t etc doesnt work original issue reported on code google com by maxis gmail com on jan at merged into ,0 1828,10888576553.0,IssuesEvent,2019-11-18 16:30:30,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,opened,DRAFT!/WIP! 
Automation Tracking Issue 3.11,automation,"### Goals * Improve stability & performance * Improve developer speed & UX by refactoring and paying off tech debt * Catch up on features ### Backend - [ ] For some repositories `gitserver` cannot apply the diff #6625 - [ ] Continuously refactor existing code, pay off tech debt and continuously improve developer UX: #6572 - [ ] Improve performance, stability and observability when executing `CampaignPlans` and `createCampaign` - [ ] Set an upper time limit on `CampaignJob` execution - [ ] Add metrics/tracing to `previewCampaignPlan` and `createCampaign` - [ ] Use a persistent queue instead of goroutines (see #6572 ""No persistent queue"") - [ ] Execute `ChangesetJob`s in parallel (see #6572 ""GitHub rate limit and abuse detection"") - [ ] Correctly handle deletions - [ ] Define and implement what happens when a {repository,external-service} gets deleted - [ ] Non-manual Campaigns cannot be deleted due to foreign-key constraint #6659 - [ ] Implement `retryCampaign` that retries the subprocesses of `createCampaign` - [ ] Make `ChangesetJob` execution idempotent: check that new commits are not added to same branch, check for `ErrAlreadyExists` response from code hosts - [ ] Implement `cancelCampaignPlan` so that all jobs are cancelled - [ ] Create changesets for a repositories default branch (right now we open the PR for `master`, [see here](https://github.com/sourcegraph/sourcegraph/blob/5ade8c1edc52688387673eebe0fcd2db033720bc/enterprise/pkg/a8n/service.go#L210)) - [ ] More efficient and stable `Changeset` syncing - [ ] Bitbucket Server webhooks ([RFC](https://docs.google.com/document/d/18RStJNmD9BswkjDwDVDDe792j2UavgqHB_hHE9i3npc/edit)) - [ ] Syncing 24 GitHub pull requests fails due to GraphQL node limit reached #6658 - [ ] Gracefully handle changeset deletion in code hosts #6396 - [ ] Heuristic syncing of Changesets and ChangesetEvents #6388 - [ ] Only ""update"" a changeset when it actually changed. ### Frontend - [ ] Add snapshot tests - [ ] Cancel the previous preview with `switchMap` - [ ] ... ### Frontend & Backend - [ ] Show the combined event timeline from all changesets - [ ] GraphQL schema - [ ] Show comments - [ ] Show reviews - [ ] Shows status of all changesets in the campaign and allow querying/filtering by the following fields - [ ] Show ""commit statuses"" - [ ] Show ""labels"" - [ ] Filtering fields via GraphQL API - [ ] Filter by ""open/merged/closed"" - [ ] Filter by ""commit statuses"" - [ ] Filter by ""review status"" - [ ] Filter by ""labels"" - [ ] Show the set of participants involved in the campaign - [ ] GraphQL schema - [ ] Rename `campaign.{name,description}` to `campaign.{title,body}` ### Internal User Testing - [ ] Test the user flow with a colleague at Sourcegraph ",1.0,"DRAFT!/WIP! 
Automation Tracking Issue 3.11 - ### Goals * Improve stability & performance * Improve developer speed & UX by refactoring and paying off tech debt * Catch up on features ### Backend - [ ] For some repositories `gitserver` cannot apply the diff #6625 - [ ] Continuously refactor existing code, pay off tech debt and continuously improve developer UX: #6572 - [ ] Improve performance, stability and observability when executing `CampaignPlans` and `createCampaign` - [ ] Set an upper time limit on `CampaignJob` execution - [ ] Add metrics/tracing to `previewCampaignPlan` and `createCampaign` - [ ] Use a persistent queue instead of goroutines (see #6572 ""No persistent queue"") - [ ] Execute `ChangesetJob`s in parallel (see #6572 ""GitHub rate limit and abuse detection"") - [ ] Correctly handle deletions - [ ] Define and implement what happens when a {repository,external-service} gets deleted - [ ] Non-manual Campaigns cannot be deleted due to foreign-key constraint #6659 - [ ] Implement `retryCampaign` that retries the subprocesses of `createCampaign` - [ ] Make `ChangesetJob` execution idempotent: check that new commits are not added to same branch, check for `ErrAlreadyExists` response from code hosts - [ ] Implement `cancelCampaignPlan` so that all jobs are cancelled - [ ] Create changesets for a repositories default branch (right now we open the PR for `master`, [see here](https://github.com/sourcegraph/sourcegraph/blob/5ade8c1edc52688387673eebe0fcd2db033720bc/enterprise/pkg/a8n/service.go#L210)) - [ ] More efficient and stable `Changeset` syncing - [ ] Bitbucket Server webhooks ([RFC](https://docs.google.com/document/d/18RStJNmD9BswkjDwDVDDe792j2UavgqHB_hHE9i3npc/edit)) - [ ] Syncing 24 GitHub pull requests fails due to GraphQL node limit reached #6658 - [ ] Gracefully handle changeset deletion in code hosts #6396 - [ ] Heuristic syncing of Changesets and ChangesetEvents #6388 - [ ] Only ""update"" a changeset when it actually changed. ### Frontend - [ ] Add snapshot tests - [ ] Cancel the previous preview with `switchMap` - [ ] ... 
### Frontend & Backend - [ ] Show the combined event timeline from all changesets - [ ] GraphQL schema - [ ] Show comments - [ ] Show reviews - [ ] Shows status of all changesets in the campaign and allow querying/filtering by the following fields - [ ] Show ""commit statuses"" - [ ] Show ""labels"" - [ ] Filtering fields via GraphQL API - [ ] Filter by ""open/merged/closed"" - [ ] Filter by ""commit statuses"" - [ ] Filter by ""review status"" - [ ] Filter by ""labels"" - [ ] Show the set of participants involved in the campaign - [ ] GraphQL schema - [ ] Rename `campaign.{name,description}` to `campaign.{title,body}` ### Internal User Testing - [ ] Test the user flow with a colleague at Sourcegraph ",1,draft wip automation tracking issue goals improve stability performance improve developer speed ux by refactoring and paying off tech debt catch up on features backend for some repositories gitserver cannot apply the diff continuously refactor existing code pay off tech debt and continuously improve developer ux improve performance stability and observability when executing campaignplans and createcampaign set an upper time limit on campaignjob execution add metrics tracing to previewcampaignplan and createcampaign use a persistent queue instead of goroutines see no persistent queue execute changesetjob s in parallel see github rate limit and abuse detection correctly handle deletions define and implement what happens when a repository external service gets deleted non manual campaigns cannot be deleted due to foreign key constraint implement retrycampaign that retries the subprocesses of createcampaign make changesetjob execution idempotent check that new commits are not added to same branch check for erralreadyexists response from code hosts implement cancelcampaignplan so that all jobs are cancelled create changesets for a repositories default branch right now we open the pr for master more efficient and stable changeset syncing bitbucket server webhooks syncing github pull requests fails due to graphql node limit reached gracefully handle changeset deletion in code hosts heuristic syncing of changesets and changesetevents only update a changeset when it actually changed frontend add snapshot tests cancel the previous preview with switchmap frontend backend show the combined event timeline from all changesets graphql schema show comments show reviews shows status of all changesets in the campaign and allow querying filtering by the following fields show commit statuses show labels filtering fields via graphql api filter by open merged closed filter by commit statuses filter by review status filter by labels show the set of participants involved in the campaign graphql schema rename campaign name description to campaign title body internal user testing test the user flow with a colleague at sourcegraph ,1 105786,13217755994.0,IssuesEvent,2020-08-17 07:26:38,shopsys/shopsys,https://api.github.com/repos/shopsys/shopsys,closed,Admin header wrapping without buttons,Design & Apperance," ### What is happening The text is wrapping in admin header. ![image](https://user-images.githubusercontent.com/6003253/83846604-27a7ab00-a70b-11ea-8902-0010ac7b333a.png) ### Expected result I believe that we should not have the empty div for buttons (if it is empty) ![image](https://user-images.githubusercontent.com/6003253/83846782-6c334680-a70b-11ea-95b6-556c537b255b.png) maybe that would help a bit. 
![image](https://user-images.githubusercontent.com/6003253/83846682-473ed380-a70b-11ea-8c8c-f0eb6c13198c.png) ",1.0,"Admin header wrapping without buttons - ### What is happening The text is wrapping in admin header. ![image](https://user-images.githubusercontent.com/6003253/83846604-27a7ab00-a70b-11ea-8902-0010ac7b333a.png) ### Expected result I believe that we should not have the empty div for buttons (if it is empty) ![image](https://user-images.githubusercontent.com/6003253/83846782-6c334680-a70b-11ea-95b6-556c537b255b.png) maybe that would help a bit. ![image](https://user-images.githubusercontent.com/6003253/83846682-473ed380-a70b-11ea-8c8c-f0eb6c13198c.png) ",0,admin header wrapping without buttons what is happening the text is wrapping in admin header expected result i believe that we should not have the empty div for buttons if it is empty maybe that would help a bit ,0 748585,26128726632.0,IssuesEvent,2022-12-28 23:25:05,Ore-Design/Ore-3D-Reports-Changelog,https://api.github.com/repos/Ore-Design/Ore-3D-Reports-Changelog,closed,Bug: Capsule/You Products using Incorrect BOM Components [1.5.6],bug in progress medium priority,"Example: Anything with Bowl ex. A1206 - too much sheet material, no bowl S1206 - too much sheet material, no bowl A1235 - too much sheet material, no bowl A1237 - too much sheet material, no bowl A1209 - too much sheet material, no bowl",1.0,"Bug: Capsule/You Products using Incorrect BOM Components [1.5.6] - Example: Anything with Bowl ex. A1206 - too much sheet material, no bowl S1206 - too much sheet material, no bowl A1235 - too much sheet material, no bowl A1237 - too much sheet material, no bowl A1209 - too much sheet material, no bowl",0,bug capsule you products using incorrect bom components example anything with bowl ex too much sheet material no bowl too much sheet material no bowl too much sheet material no bowl too much sheet material no bowl too much sheet material no bowl,0 3002,12966077422.0,IssuesEvent,2020-07-20 23:52:15,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,closed,[BUG] Backup List issue when failing to retrieve backup names for backup volume,bug priority/2 require/automation-e2e require/automation-engine,"**Describe the bug** When trying to List all Backup Volumes and there is an error to retrieve the backupnames via `getBackupNamesForVolume` for a volume, we currently return an error back to the caller instead of setting the error as part of the VolumeInfo object. This blocks the ui/api from showing all available backup volumes. **To Reproduce** This is just one of many possible failure cases, but this one is easy to reproduce Setup: - create vol `bak1`, `bak2`, `bak3` and attach to nodes - write some data to all volumes - take a backup of each volume Repro: - create a file named: `backup_1234@failure.cfg` inside of the backups folder for volume `bak2` - now list backup volumes will fail and you should no longer see backup volumes - you should still be able to see backups for `bak1`, `bak3` if manually requested via the api **Expected behavior** Show available backup volumes and backups even if a single backup volume has issues. ",2.0,"[BUG] Backup List issue when failing to retrieve backup names for backup volume - **Describe the bug** When trying to List all Backup Volumes and there is an error to retrieve the backupnames via `getBackupNamesForVolume` for a volume, we currently return an error back to the caller instead of setting the error as part of the VolumeInfo object. 
This blocks the ui/api from showing all available backup volumes. **To Reproduce** This is just one of many possible failure cases, but this one is easy to reproduce Setup: - create vol `bak1`, `bak2`, `bak3` and attach to nodes - write some data to all volumes - take a backup of each volume Repro: - create a file named: `backup_1234@failure.cfg` inside of the backups folder for volume `bak2` - now list backup volumes will fail and you should no longer see backup volumes - you should still be able to see backups for `bak1`, `bak3` if manually requested via the api **Expected behavior** Show available backup volumes and backups even if a single backup volume has issues. ",1, backup list issue when failing to retrieve backup names for backup volume describe the bug when trying to list all backup volumes and there is an error to retrieve the backupnames via getbackupnamesforvolume for a volume we currently return an error back to the caller instead of setting the error as part of the volumeinfo object this blocks the ui api from showing all available backup volumes to reproduce this is just one of many possible failure cases but this one is easy to reproduce setup create vol and attach to nodes write some data to all volumes take a backup of each volume repro create a file named backup failure cfg inside of the backups folder for volume now list backup volumes will fail and you should no longer see backup volumes you should still be able to see backups for if manually requested via the api expected behavior show available backup volumes and backups even if a single backup volume has issues ,1 3302,2610060252.0,IssuesEvent,2015-02-26 18:17:45,chrsmith/jsjsj122,https://api.github.com/repos/chrsmith/jsjsj122,opened,路桥治疗不育哪里效果最好,auto-migrated Priority-Medium Type-Defect,"``` 路桥治疗不育哪里效果最好【台州五洲生殖医院】24小时健康 咨询热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台 州市椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、1 08、118、198及椒江一金清公交车直达枫南小区,乘坐107、105、 109、112、901、 902公交车到星星广场下车,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 ``` ----- Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 7:08",1.0,"路桥治疗不育哪里效果最好 - ``` 路桥治疗不育哪里效果最好【台州五洲生殖医院】24小时健康 咨询热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台 州市椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、1 08、118、198及椒江一金清公交车直达枫南小区,乘坐107、105、 109、112、901、 902公交车到星星广场下车,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 ``` ----- Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 7:08",0,路桥治疗不育哪里效果最好 路桥治疗不育哪里效果最好【台州五洲生殖医院】 咨询热线 微信号tzwzszyy 医院地址 台 (枫南大转盘旁)乘车线路 、 、 、 , 、 、 、 、 、 ,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 original issue reported on code google com by poweragr gmail com on may at ,0 23536,4955656969.0,IssuesEvent,2016-12-01 21:04:00,easy-updates-manager/easy-updates-manager,https://api.github.com/repos/easy-updates-manager/easy-updates-manager,closed,Easy Updates Manager doesn't log updates done through Jetpack Manage,documentation,"When using the Jetpack Manage part of Jetpack, you have the ability to update plugins remotely through wordpress.com. 
If you use this, it doesn't log the updates through Easy Updates Manager. Perhaps a FAQ should be created to let users know that Easy Updates Manager doesn't log updates that are done through external software. ",1.0,"Easy Updates Manager doesn't log updates done through Jetpack Manage - When using the Jetpack Manage part of Jetpack, you have the ability to update plugins remotely through wordpress.com. If you use this, it doesn't log the updates through Easy Updates Manager. Perhaps a FAQ should be created to let users know that Easy Updates Manager doesn't log updates that are done through external software. ",0,easy updates manager doesn t log updates done through jetpack manage when using the jetpack manage part of jetpack you have the ability to update plugins remotely through wordpress com if you use this it doesn t log the updates through easy updates manager perhaps a faq should be created to let users know that easy updates manager doesn t log updates that are done through external software ,0 267783,23319576194.0,IssuesEvent,2022-08-08 15:12:57,splendo/kaluga,https://api.github.com/repos/splendo/kaluga,opened,Delayed verification for mock methods,component:test-utils,"Sometimes you want to verify a method is called a number of times within a certain period (e.g. at least twice within a second, `verifyWithin(duration = 1.seconds, times = 2)` or over a certain time (e.g. between 2 and 5 times over a second, `verfiyOver(duration = 1.seconds, times = 2...5)` This also covers the specific case of #541",1.0,"Delayed verification for mock methods - Sometimes you want to verify a method is called a number of times within a certain period (e.g. at least twice within a second, `verifyWithin(duration = 1.seconds, times = 2)` or over a certain time (e.g. between 2 and 5 times over a second, `verfiyOver(duration = 1.seconds, times = 2...5)` This also covers the specific case of #541",0,delayed verification for mock methods sometimes you want to verify a method is called a number of times within a certain period e g at least twice within a second verifywithin duration seconds times or over a certain time e g between and times over a second verfiyover duration seconds times this also covers the specific case of ,0 4190,15770441328.0,IssuesEvent,2021-03-31 19:26:17,jessicamorris/jessicamorris.github.io,https://api.github.com/repos/jessicamorris/jessicamorris.github.io,closed,Automate gh-pages update on commits to main branch,automation,"GitHub's got support for repo automation, surely I can make changes auto-deploy. More on GitHub actions: https://docs.github.com/en/actions Acceptance criteria: - The page at jessicamorris.github.io/ automatically updates when the `main` branch changes.",1.0,"Automate gh-pages update on commits to main branch - GitHub's got support for repo automation, surely I can make changes auto-deploy. 
More on GitHub actions: https://docs.github.com/en/actions Acceptance criteria: - The page at jessicamorris.github.io/ automatically updates when the `main` branch changes.",1,automate gh pages update on commits to main branch github s got support for repo automation surely i can make changes auto deploy more on github actions acceptance criteria the page at jessicamorris github io automatically updates when the main branch changes ,1 8070,26149172922.0,IssuesEvent,2022-12-30 10:46:23,elastic/e2e-testing,https://api.github.com/repos/elastic/e2e-testing,closed,Download stack logs from the AWS instance to the Jenkins worker,enhancement Team:Automation size:S triaged area:ci,"It will allow troubleshooting the Stack deployment better, as now it's needed to SSH into the machine, and monitor the logs",1.0,"Download stack logs from the AWS instance to the Jenkins worker - It will allow troubleshooting the Stack deployment better, as now it's needed to SSH into the machine, and monitor the logs",1,download stack logs from the aws instance to the jenkins worker it will allow troubleshooting the stack deployment better as now it s needed to ssh into the machine and monitor the logs,1 108,3779429684.0,IssuesEvent,2016-03-18 08:15:17,eclipse/smarthome,https://api.github.com/repos/eclipse/smarthome,opened,Propolsals for changes in Automation engine API,Automation,"1) Type of Input/output defaultValue property is better to be changed from Object to String. In this way the default value will be presented in same way as default value of ConfigDescriptionParameter. Also the rule engine does not know what kind of object to create. Conversion from string to object has to be served by the handler because it knows how to handle this value. 2) At the moment, types of inputs and outputs are defined as fully qualified names. I’m not sure if it usable for the people which does not know java (i.e. javascript developer defining rules through the JSON definition). The proposal is the type to be just a string and validation to be based on equality of input and output as strings. 3) Configuration (which contains values for configuration properties) of Module, Rule, RuleTemplate at the moment is defined as Map. Our proposal is the configuration values to be presented as Map and stored as String. In that way the type of the will be defined and it will be presented in the same way as default value of configuration property. Also the configuration values will be easily serialized/deserialized. ",1.0,"Propolsals for changes in Automation engine API - 1) Type of Input/output defaultValue property is better to be changed from Object to String. In this way the default value will be presented in same way as default value of ConfigDescriptionParameter. Also the rule engine does not know what kind of object to create. Conversion from string to object has to be served by the handler because it knows how to handle this value. 2) At the moment, types of inputs and outputs are defined as fully qualified names. I’m not sure if it usable for the people which does not know java (i.e. javascript developer defining rules through the JSON definition). The proposal is the type to be just a string and validation to be based on equality of input and output as strings. 3) Configuration (which contains values for configuration properties) of Module, Rule, RuleTemplate at the moment is defined as Map. Our proposal is the configuration values to be presented as Map and stored as String. 
In that way the type of the will be defined and it will be presented in the same way as default value of configuration property. Also the configuration values will be easily serialized/deserialized. ",1,propolsals for changes in automation engine api type of input output defaultvalue property is better to be changed from object to string in this way the default value will be presented in same way as default value of configdescriptionparameter also the rule engine does not know what kind of object to create conversion from string to object has to be served by the handler because it knows how to handle this value at the moment types of inputs and outputs are defined as fully qualified names i’m not sure if it usable for the people which does not know java i e javascript developer defining rules through the json definition the proposal is the type to be just a string and validation to be based on equality of input and output as strings configuration which contains values for configuration properties of module rule ruletemplate at the moment is defined as map our proposal is the configuration values to be presented as map and stored as string in that way the type of the will be defined and it will be presented in the same way as default value of configuration property also the configuration values will be easily serialized deserialized ,1 802487,28964130953.0,IssuesEvent,2023-05-10 06:32:30,alkem-io/client-web,https://api.github.com/repos/alkem-io/client-web,reopened,BUG: Banners on Space cards incorrect on pages,bug client User High Priority,"**Describe the bug** The banners on the cards for the Spaces are showing different dimensions for the banners on the various pages (search, home, profile). **To Reproduce** Steps to reproduce the behavior: 1. Search for the Publieke Dienstverlening Space on the search page 2. See the card with incorrectly cropped banner 3. Go to User profile page of Jet Klaver 4. See correct banner on Publieke Dienstverlening Space card **Expected behavior** Cards for Spaces on the Home page and Search page must use Card banner instead of Page banner (I think this solves it?) **Screenshots** ![image.png](https://images.zenhubusercontent.com/6200e10a5561c893d9e84591/ae954d9c-8aa1-4c87-b196-933dcced1dac)",1.0,"BUG: Banners on Space cards incorrect on pages - **Describe the bug** The banners on the cards for the Spaces are showing different dimensions for the banners on the various pages (search, home, profile). **To Reproduce** Steps to reproduce the behavior: 1. Search for the Publieke Dienstverlening Space on the search page 2. See the card with incorrectly cropped banner 3. Go to User profile page of Jet Klaver 4. See correct banner on Publieke Dienstverlening Space card **Expected behavior** Cards for Spaces on the Home page and Search page must use Card banner instead of Page banner (I think this solves it?) 
**Screenshots** ![image.png](https://images.zenhubusercontent.com/6200e10a5561c893d9e84591/ae954d9c-8aa1-4c87-b196-933dcced1dac)",0,bug banners on space cards incorrect on pages describe the bug the banners on the cards for the spaces are showing different dimensions for the banners on the various pages search home profile to reproduce steps to reproduce the behavior search for the publieke dienstverlening space on the search page see the card with incorrectly cropped banner go to user profile page of jet klaver see correct banner on publieke dienstverlening space card expected behavior cards for spaces on the home page and search page must use card banner instead of page banner i think this solves it screenshots ,0 6646,3038729620.0,IssuesEvent,2015-08-07 01:05:59,atom/atom,https://api.github.com/repos/atom/atom,closed,What version of Jasmine is Atom using?,documentation,"In [vendor/jasmine.js](https://github.com/atom/atom/blob/52abb4afc9098454cea8e220a363be3a9b958934/vendor/jasmine.js#L2662), I found that the Jasmine version is 1.3. In [docs/writing-specs.md](https://raw.githubusercontent.com/atom/atom/2f62346c585361591e6a9de7349401c7ebe360eb/docs/writing-specs.md), I found links to both Jasmine 1.3 and 2.0. It would be nice to document the version of Jasmine used in Atom and correct the wrong link(s) in `writing-specs.md`.",1.0,"What version of Jasmine is Atom using? - In [vendor/jasmine.js](https://github.com/atom/atom/blob/52abb4afc9098454cea8e220a363be3a9b958934/vendor/jasmine.js#L2662), I found that the Jasmine version is 1.3. In [docs/writing-specs.md](https://raw.githubusercontent.com/atom/atom/2f62346c585361591e6a9de7349401c7ebe360eb/docs/writing-specs.md), I found links to both Jasmine 1.3 and 2.0. It would be nice to document the version of Jasmine used in Atom and correct the wrong link(s) in `writing-specs.md`.",0,what version of jasmine is atom using in i found that the jasmine version is in i found links to both jasmine and it would be nice to document the version of jasmine used in atom and correct the wrong link s in writing specs md ,0 169840,20841949756.0,IssuesEvent,2022-03-21 01:55:58,turkdevops/graphql-tools,https://api.github.com/repos/turkdevops/graphql-tools,opened,"CVE-2022-24771 (High) detected in forge0.10.0, zimphonyzimbra-domain-admin-1.2",security vulnerability,"## CVE-2022-24771 - High Severity Vulnerability
Vulnerable Libraries - forge0.10.0, zimphonyzimbra-domain-admin-1.2

Vulnerability Details

Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, the RSA PKCS#1 v1.5 signature verification code is lenient in checking the digest algorithm structure. This can allow a crafted structure that steals padding bytes and uses an unchecked portion of the PKCS#1 encoded message to forge a signature when a low public exponent is being used. The issue has been addressed in `node-forge` version 1.3.0. There are currently no known workarounds.

Publish Date: 2022-03-18

URL: CVE-2022-24771

CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24771

Release Date: 2022-03-18

Fix Resolution: node-forge - 1.3.0
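Where the upgrade has to wait on a parent dependency, a quick guard at least makes the vulnerable copy visible in CI. A minimal sketch, assuming the consuming project has `semver` installed and can require `node-forge/package.json` (both are assumptions, not part of this report):

```js
// Sketch: fail fast when the resolved node-forge predates the fixed 1.3.0 release.
const semver = require('semver'); // assumed to be installed
const { version } = require('node-forge/package.json');

if (semver.lt(version, '1.3.0')) {
  throw new Error('node-forge ' + version + ' is affected by CVE-2022-24771; upgrade to >= 1.3.0');
}
console.log('node-forge ' + version + ' includes the CVE-2022-24771 fix');
```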

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-24771 (High) detected in forge0.10.0, zimphonyzimbra-domain-admin-1.2 - ## CVE-2022-24771 - High Severity Vulnerability
Vulnerable Libraries - forge0.10.0, zimphonyzimbra-domain-admin-1.2

Vulnerability Details

Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, the RSA PKCS#1 v1.5 signature verification code is lenient in checking the digest algorithm structure. This can allow a crafted structure that steals padding bytes and uses an unchecked portion of the PKCS#1 encoded message to forge a signature when a low public exponent is being used. The issue has been addressed in `node-forge` version 1.3.0. There are currently no known workarounds.

Publish Date: 2022-03-18

URL: CVE-2022-24771

CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24771

Release Date: 2022-03-18

Fix Resolution: node-forge - 1.3.0

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in zimphonyzimbra domain admin cve high severity vulnerability vulnerable libraries zimphonyzimbra domain admin vulnerability details forge also called node forge is a native implementation of transport layer security in javascript prior to version rsa pkcs signature verification code is lenient in checking the digest algorithm structure this can allow a crafted structure that steals padding bytes and uses unchecked portion of the pkcs encoded message to forge a signature when a low public exponent is being used the issue has been addressed in node forge version there are currently no known workarounds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node forge step up your open source security game with whitesource ,0 152782,19696981458.0,IssuesEvent,2022-01-12 13:11:40,jtimberlake/serverless-artillery,https://api.github.com/repos/jtimberlake/serverless-artillery,closed,WS-2019-0318 (High) detected in handlebars-4.1.0.tgz - autoclosed,security vulnerability,"## WS-2019-0318 - High Severity Vulnerability
Vulnerable Library - handlebars-4.1.0.tgz

Handlebars provides the power necessary to let you build semantic templates effectively with no frustration

Library home page: https://registry.npmjs.org/handlebars/-/handlebars-4.1.0.tgz

Path to dependency file: serverless-artillery/package.json

Path to vulnerable library: serverless-artillery/node_modules/nyc/node_modules/handlebars/package.json

Dependency Hierarchy: - nyc-13.3.0.tgz (Root Library) - istanbul-reports-2.1.1.tgz - :x: **handlebars-4.1.0.tgz** (Vulnerable Library)

Found in HEAD commit: c4de98a3ee33ed933132ba45998d6d4f54aa4e6d

Found in base branch: master

Vulnerability Details

In handlebars, versions prior to 4.4.5 are vulnerable to Regular Expression Denial of Service (ReDoS) when processing specially-crafted templates.
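For context, a minimal sketch of this vulnerability class (the pattern below is the textbook catastrophic-backtracking example, not the actual handlebars regex):

```js
// Nested quantifiers backtrack exponentially on a crafted non-matching input.
const evil = /^(a+)+$/;
const crafted = 'a'.repeat(28) + '!';

console.time('backtracking');
evil.test(crafted); // already takes seconds; each extra 'a' roughly doubles it
console.timeEnd('backtracking');
```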

Publish Date: 2019-10-20

URL: WS-2019-0318

CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://www.npmjs.com/advisories/1300

Release Date: 2019-12-01

Fix Resolution: handlebars - 4.4.5
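Because the vulnerable copy sits behind `nyc`, it helps to confirm which paths resolve it before and after upgrading. A minimal sketch, assuming npm 7+ (for `--all`) and that `npm ls` exits cleanly in the project (both assumptions):

```js
// Print every dependency path that resolves a handlebars copy in the tree.
const { execSync } = require('child_process');

const tree = JSON.parse(
  execSync('npm ls handlebars --all --json', { encoding: 'utf8' })
);

function walk(node, path) {
  for (const [name, child] of Object.entries(node.dependencies || {})) {
    const here = path.concat(name + '@' + (child.version || '?'));
    if (name === 'handlebars') {
      console.log(here.join(' -> '));
    }
    walk(child, here);
  }
}

walk(tree, []);
```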

",True,"WS-2019-0318 (High) detected in handlebars-4.1.0.tgz - autoclosed - ## WS-2019-0318 - High Severity Vulnerability
Vulnerable Library - handlebars-4.1.0.tgz

Handlebars provides the power necessary to let you build semantic templates effectively with no frustration

Library home page: https://registry.npmjs.org/handlebars/-/handlebars-4.1.0.tgz

Path to dependency file: serverless-artillery/package.json

Path to vulnerable library: serverless-artillery/node_modules/nyc/node_modules/handlebars/package.json

Dependency Hierarchy: - nyc-13.3.0.tgz (Root Library) - istanbul-reports-2.1.1.tgz - :x: **handlebars-4.1.0.tgz** (Vulnerable Library)

Found in HEAD commit: c4de98a3ee33ed933132ba45998d6d4f54aa4e6d

Found in base branch: master

Vulnerability Details

In handlebars, versions prior to 4.4.5 are vulnerable to Regular Expression Denial of Service (ReDoS) when processing specially-crafted templates.

Publish Date: 2019-10-20

URL: WS-2019-0318

CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://www.npmjs.com/advisories/1300

Release Date: 2019-12-01

Fix Resolution: handlebars - 4.4.5

",0,ws high detected in handlebars tgz autoclosed ws high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file serverless artillery package json path to vulnerable library serverless artillery node modules nyc node modules handlebars package json dependency hierarchy nyc tgz root library istanbul reports tgz x handlebars tgz vulnerable library found in head commit a href found in base branch master vulnerability details in showdownjs showdown versions prior to are vulnerable against regular expression denial of service redos once receiving specially crafted templates publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree nyc istanbul reports handlebars isminimumfixversionavailable true minimumfixversion handlebars basebranches vulnerabilityidentifier ws vulnerabilitydetails in showdownjs showdown versions prior to are vulnerable against regular expression denial of service redos once receiving specially crafted templates vulnerabilityurl ,0 3537,13924603387.0,IssuesEvent,2020-10-21 15:45:31,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,closed,--no-kibana does not remove APM server kibana flags,automation bug team:automation,"Running the following command ``` python3 scripts/compose.py start 8.0.0 --no-kibana ``` This is the command line for the APM Sever, it contains Kibana settings and should not ``` /usr/local/bin/docker-entrypoint apm-server ... -E apm-server.kibana.enabled=true -E apm-server.kibana.host=kibana:5601 -E apm-server.kibana.username=apm_server_user -E apm-server.kibana.password=changeme ... ```",2.0,"--no-kibana does not remove APM server kibana flags - Running the following command ``` python3 scripts/compose.py start 8.0.0 --no-kibana ``` This is the command line for the APM Sever, it contains Kibana settings and should not ``` /usr/local/bin/docker-entrypoint apm-server ... -E apm-server.kibana.enabled=true -E apm-server.kibana.host=kibana:5601 -E apm-server.kibana.username=apm_server_user -E apm-server.kibana.password=changeme ... ```",1, no kibana does not remove apm server kibana flags running the following command scripts compose py start no kibana this is the command line for the apm sever it contains kibana settings and should not usr local bin docker entrypoint apm server e apm server kibana enabled true e apm server kibana host kibana e apm server kibana username apm server user e apm server kibana password changeme ,1 282012,21315455539.0,IssuesEvent,2022-04-16 07:31:26,kaiyichen/pe,https://api.github.com/repos/kaiyichen/pe,opened,Wrong format of class diagram for add command in developer guide,type.DocumentationBug severity.VeryLow,"![image.png](https://raw.githubusercontent.com/kaiyichen/pe/main/files/59915101-a91b-4956-857a-b35fb792256f.png) abstract classes should EITHER include `{abstract}` or be italic. 
They should not be both. ",1.0,"Wrong format of class diagram for add command in developer guide - ![image.png](https://raw.githubusercontent.com/kaiyichen/pe/main/files/59915101-a91b-4956-857a-b35fb792256f.png) abstract classes should EITHER include `{abstract}` or be italic. They should not be both. ",0,wrong format of class diagram for add command in developer guide abstract classes should either include abstract or be italic they should not be both ,0 1662,10550431956.0,IssuesEvent,2019-10-03 11:02:43,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,opened,Can't scroll to the drag target element in iframe in IE11,AREA: client FREQUENCY: level 1 HAS WORKAROUND SYSTEM: automations TYPE: bug," ### What is your Test Scenario? Drag an element located in an iframe that is not visible in the viewport. ### What is the Current behavior? TestCafe fails with the `Element doesn't exist` error. Workaround: add a step that hovers the iframe's `<body>`. ```js await t.switchToIframe('iframe'); await t.hover('body') ``` ### What is the Expected behavior? TestCafe should scroll the iframe's parent and be able to drag the target element. ### What is your web application and your TestCafe test code? Your website URL (or attach your complete example): https://demos.devexpress.com/Bootstrap/GridView/Adaptivity.aspx
Your complete test code (or attach your test files): ```js fixture`test`.page`https://demos.devexpress.com/Bootstrap/GridView/Adaptivity.aspx`; test('test', async t => { await t.resizeWindow(1420, 760); await t.switchToIframe('#content > div:nth-child(11) > div.demo-device-container > div.demo-device.bg-secondary.border.border-secondary.qrcode-container > div > iframe'); // NOTE: uncomment the line below to fix the test // await t.hover('body') await t.drag('#ctl05_GridViewAdaptiveLayout_col1', 200, 0); }); ```
Your complete configuration file (if any): ``` ```
Your complete test report: ``` ```
Screenshots: ``` ```
### Steps to Reproduce: 1. Go to my website ... 3. Execute this command... 4. See the error... ### Your Environment details: * testcafe version: 1.5.0 * node.js version: 10.15.0 * command-line arguments: testcafe ie test.js * browser name and version: IE 11 * platform and version: * other: ",1.0,"Can't scroll to the drag target element in iframe in IE11 - ### What is your Test Scenario? Drag an element located in an iframe that is not visible in the viewport. ### What is the Current behavior? TestCafe fails with the `Element doesn't exist` error. Workaround: add a step that hovers the iframe's `<body>`. ```js await t.switchToIframe('iframe'); await t.hover('body') ``` ### What is the Expected behavior? TestCafe should scroll the iframe's parent and be able to drag the target element. ### What is your web application and your TestCafe test code? Your website URL (or attach your complete example): https://demos.devexpress.com/Bootstrap/GridView/Adaptivity.aspx
Your complete test code (or attach your test files): ```js fixture`test`.page`https://demos.devexpress.com/Bootstrap/GridView/Adaptivity.aspx`; test('test', async t => { await t.resizeWindow(1420, 760); await t.switchToIframe('#content > div:nth-child(11) > div.demo-device-container > div.demo-device.bg-secondary.border.border-secondary.qrcode-container > div > iframe'); // NOTE: uncomment the line below to fix the test // await t.hover('body') await t.drag('#ctl05_GridViewAdaptiveLayout_col1', 200, 0); }); ```
Your complete configuration file (if any): ``` ```
Your complete test report: ``` ```
Screenshots: ``` ```
### Steps to Reproduce: 1. Go to my website ... 3. Execute this command... 4. See the error... ### Your Environment details: * testcafe version: 1.5.0 * node.js version: 10.15.0 * command-line arguments: testcafe ie test.js * browser name and version: IE 11 * platform and version: * other: ",1,can t scroll to the drag target element in iframe in if you have all reproduction steps with a complete sample app please share as many details as possible in the sections below make sure that you tried using the latest testcafe version where this behavior might have been already addressed before submitting an issue please check contributing md and existing issues in this repository  in case a similar issue exists or was already addressed this may save your time and ours what is your test scenario drag an element located in an iframe that is not visible in the viewport what is the current behavior testcafe fails with the element doesn t exist error workaround add a step that hovers iframe s js await t switchtoiframe iframe await t hover body what is the expected behavior testcafe should scroll iframe s parent and be able to drag the target element what is your web application and your testcafe test code your website url or attach your complete example your complete test code or attach your test files js fixture test page test test async t await t resizewindow await t switchtoiframe content div nth child div demo device container div demo device bg secondary border border secondary qrcode container div iframe note uncomment the line below to fix the test await t hover body await t drag gridviewadaptivelayout your complete configuration file if any your complete test report screenshots steps to reproduce go to my website execute this command see the error your environment details testcafe version node js version command line arguments testcafe ie test js browser name and version ie platform and version other ,1 2812,12626373627.0,IssuesEvent,2020-06-14 16:17:02,pysal/submodule_template,https://api.github.com/repos/pysal/submodule_template,opened,Automated merging for conda-forge feedstock,automation,New functionality in `conda-forge` allows for the automated merging of passing PRs in a package's feedstock. It is enabled through the opening of an issue with a specific copy/pasted message. See these issues in the [`mapclassify`](https://github.com/conda-forge/mapclassify-feedstock/issues/11) and [`spaghetti`](https://github.com/conda-forge/spaghetti-feedstock/issues/21#issuecomment-643786627) feedstocks for examples.,1.0,Automated merging for conda-forge feedstock - New functionality in `conda-forge` allows for the automated merging of passing PRs in a package's feedstock. It is enabled through the opening of an issue with a specific copy/pasted message. 
See these issues in the [`mapclassify`](https://github.com/conda-forge/mapclassify-feedstock/issues/11) and [`spaghetti`](https://github.com/conda-forge/spaghetti-feedstock/issues/21#issuecomment-643786627) feedstocks for examples.,1,automated merging for conda forge feedstock new functionality in conda forge allows for the automated merging of passing prs in a package s feedstock it is enabled through the opening of an issue with a specific copy pasted message see these issues in the and feedstocks for examples ,1 6738,23816331511.0,IssuesEvent,2022-09-05 07:08:53,pingcap/tiflow,https://api.github.com/repos/pingcap/tiflow,closed,cdc cli changefeed remove: Error: [CDC:ErrPDEtcdAPIError]etcd api call error: context deadline exceeded,type/bug severity/minor found/automation area/ticdc,"### What did you do? - Create 300 changefeed with kafka sink - Remove all changefeeds one by one ### What did you expect to see? _No response_ ### What did you see instead? - When removing changefeed 6, cli failed: Error: [CDC:ErrPDEtcdAPIError]etcd api call error: context deadline exceeded ### Versions of the cluster Upstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client): ```console (paste TiDB cluster version here) ``` Upstream TiKV version (execute `tikv-server --version`): ```console (paste TiKV version here) v5.4.2 ``` TiCDC version (execute `cdc version`): ```console (paste TiCDC version here) v5.4.2 ```",1.0,"cdc cli changefeed remove: Error: [CDC:ErrPDEtcdAPIError]etcd api call error: context deadline exceeded - ### What did you do? - Create 300 changefeed with kafka sink - Remove all changefeeds one by one ### What did you expect to see? _No response_ ### What did you see instead? - When removing changefeed 6, cli failed: Error: [CDC:ErrPDEtcdAPIError]etcd api call error: context deadline exceeded ### Versions of the cluster Upstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client): ```console (paste TiDB cluster version here) ``` Upstream TiKV version (execute `tikv-server --version`): ```console (paste TiKV version here) v5.4.2 ``` TiCDC version (execute `cdc version`): ```console (paste TiCDC version here) v5.4.2 ```",1,cdc cli changefeed remove: error etcd api call error context deadline exceeded what did you do create changefeed with kafka sink remove all changefeeds one by one what did you expect to see no response what did you see instead when removing changefeed cli failed error etcd api call error context deadline exceeded versions of the cluster upstream tidb cluster version execute select tidb version in a mysql client console paste tidb cluster version here upstream tikv version execute tikv server version console paste tikv version here ticdc version execute cdc version console paste ticdc version here ,1 8277,26603757324.0,IssuesEvent,2023-01-23 17:36:49,o3de/o3de,https://api.github.com/repos/o3de/o3de,closed,AR Bug Report,kind/bug needs-triage kind/automation,"**Describe the bug** A clear and concise description of what the bug is. **Failed Jenkins Job Information:** The name of the job that failed, job build number, and code snippit of the failure. **Attachments** Attach the Jenkins job log as a .txt file and any other relevant information. **Additional context** Add any other context about the problem here. ",1.0,"AR Bug Report - **Describe the bug** A clear and concise description of what the bug is. **Failed Jenkins Job Information:** The name of the job that failed, job build number, and code snippit of the failure. 
**Attachments** Attach the Jenkins job log as a .txt file and any other relevant information. **Additional context** Add any other context about the problem here. ",1,ar bug report describe the bug a clear and concise description of what the bug is failed jenkins job information the name of the job that failed job build number and code snippit of the failure attachments attach the jenkins job log as a txt file and any other relevant information additional context add any other context about the problem here ,1 9715,30327371721.0,IssuesEvent,2023-07-11 01:55:35,astropy/astropy,https://api.github.com/repos/astropy/astropy,opened,pre-commit check cannot see missing import,Bug dev-automation,"pre-commit on the PR was green, and my editor hooked up to flake8 did not catch it anymore (it used to). What is going on here? This sounds like a bug in the pre-commit/ruff checks. I feel like we went overboard with such checks, making them so complicated that it is starting to fail because half the devs don't know how to read all the settings. `E NameError: name 'nullcontext' is not defined` (The problem above as I forgot to add `from contextlib import nullcontext` but it was not caught until CI ran.) @nstarman , @eerovaher , or @WilliamJamieson , do you know how to fix this in the settings?",1.0,"pre-commit check cannot see missing import - pre-commit on the PR was green, and my editor hooked up to flake8 did not catch it anymore (it used to). What is going on here? This sounds like a bug in the pre-commit/ruff checks. I feel like we went overboard with such checks, making them so complicated that it is starting to fail because half the devs don't know how to read all the settings. `E NameError: name 'nullcontext' is not defined` (The problem above as I forgot to add `from contextlib import nullcontext` but it was not caught until CI ran.) @nstarman , @eerovaher , or @WilliamJamieson , do you know how to fix this in the settings?",1,pre commit check cannot see missing import pre commit on the pr was green and my editor hooked up to did not catch it anymore it used to what is going on here this sounds like a bug in the pre commit ruff checks i feel like we went overboard with such checks making them so complicated that it is starting to fail because half the devs don t know how to read all the settings e nameerror name nullcontext is not defined the problem above as i forgot to add from contextlib import nullcontext but it was not caught until ci ran nstarman eerovaher or williamjamieson do you know how to fix this in the settings ,1 5180,18821302320.0,IssuesEvent,2021-11-10 08:39:04,home-assistant/core,https://api.github.com/repos/home-assistant/core,closed,trigger.to_state.context only filled in on dimmer light.entitites?,integration: automation,"### The problem I am using the fact that the ""trigger.to_state.context.user_id != None"" for several of my light entities, when they are manually modified via lovelace UI, in order to disable automations from overriding the manual settings (for a while) But it seems that the value is only filled in for some light entitites never get the context value filled in for some reason? I've got four qubino dimmers, and a few qubino switches. The switches are manually added configged as lights entities as well. 
I then have an automation that triggers if the dimmers or switches change (as included in examples), which works fine for the dimmer lights, **but not for the two switches**, because it turns out the trigger doesn't contain the context value in the to_state, as it does for the dimmers. The full trigger for a switch, contains: ` {'id': '1', 'idx': '1', 'platform': 'state', 'entity_id': 'switch.taklampa_kallartrappa', 'from_state': , 'to_state': , 'for': datetime.timedelta(seconds=1), 'attribute': None, 'description': 'state of switch.taklampa_kallartrappa'} ` ### What is version of Home Assistant Core has the issue? 2021.9.7 ### What was the last working version of Home Assistant Core? _No response_ ### What type of installation are you running? Home Assistant Container ### Integration causing the issue automation ### Link to integration documentation on our website _No response_ ### Example YAML snippet ```yaml alias: AutoLights Disable on Manual Override description: '' trigger: - platform: state entity_id: light.flush_dimmer_koket attribute: brightness for: hours: 0 minutes: 0 seconds: 1 milliseconds: 0 - platform: state attribute: brightness entity_id: light.flush_dimmer_sovrummet for: hours: 0 minutes: 0 seconds: 1 milliseconds: 0 - platform: state entity_id: light.flush_dimmer_vardagsrum_plus attribute: brightness for: hours: 0 minutes: 0 seconds: 1 milliseconds: 0 - platform: state entity_id: light.tvattstuga to: 'on' for: hours: 0 minutes: 0 seconds: 1 milliseconds: 0 from: 'off' condition: - condition: template value_template: '{{ trigger.to_state.context.user_id != None }}' action: - service: timer.start data: duration: '01:30:00' target: entity_id: | {% if trigger.entity_id is search (""vardagsrum"") -%} timer.autolights_disable_vrum {% elif trigger.entity_id is search (""kitchen|koket"") -%} timer.autolights_disable_kitchen {% elif trigger.entity_id is search(""sovrum"") -%} timer.autolights_disable_sovrum {% elif trigger.entity_id is search(""tvattstuga"") -%} timer.autolights_disable_tvattstuga {%- else -%} timer.DoesntExistFailure {%- endif -%} mode: single ``` ### Anything in the logs that might be useful for us? _No response_ ### Additional information I've tried switching between using the *switch* instead of the created light.entitiy, but no change.",1.0,"trigger.to_state.context only filled in on dimmer light.entitites? - ### The problem I am using the fact that the ""trigger.to_state.context.user_id != None"" for several of my light entities, when they are manually modified via lovelace UI, in order to disable automations from overriding the manual settings (for a while) But it seems that the value is only filled in for some light entitites never get the context value filled in for some reason? I've got four qubino dimmers, and a few qubino switches. The switches are manually added configged as lights entities as well. I then have an automation that triggers if the dimmers or switches change (as included in examples), which works fine for the dimmer lights, **but not for the two switches**, because it turns out the trigger doesn't contain the context value in the to_state, as it does for the dimmers. The full trigger for a switch, contains: ` {'id': '1', 'idx': '1', 'platform': 'state', 'entity_id': 'switch.taklampa_kallartrappa', 'from_state': , 'to_state': , 'for': datetime.timedelta(seconds=1), 'attribute': None, 'description': 'state of switch.taklampa_kallartrappa'} ` ### What is version of Home Assistant Core has the issue? 
2021.9.7 ### What was the last working version of Home Assistant Core? _No response_ ### What type of installation are you running? Home Assistant Container ### Integration causing the issue automation ### Link to integration documentation on our website _No response_ ### Example YAML snippet ```yaml alias: AutoLights Disable on Manual Override description: '' trigger: - platform: state entity_id: light.flush_dimmer_koket attribute: brightness for: hours: 0 minutes: 0 seconds: 1 milliseconds: 0 - platform: state attribute: brightness entity_id: light.flush_dimmer_sovrummet for: hours: 0 minutes: 0 seconds: 1 milliseconds: 0 - platform: state entity_id: light.flush_dimmer_vardagsrum_plus attribute: brightness for: hours: 0 minutes: 0 seconds: 1 milliseconds: 0 - platform: state entity_id: light.tvattstuga to: 'on' for: hours: 0 minutes: 0 seconds: 1 milliseconds: 0 from: 'off' condition: - condition: template value_template: '{{ trigger.to_state.context.user_id != None }}' action: - service: timer.start data: duration: '01:30:00' target: entity_id: | {% if trigger.entity_id is search (""vardagsrum"") -%} timer.autolights_disable_vrum {% elif trigger.entity_id is search (""kitchen|koket"") -%} timer.autolights_disable_kitchen {% elif trigger.entity_id is search(""sovrum"") -%} timer.autolights_disable_sovrum {% elif trigger.entity_id is search(""tvattstuga"") -%} timer.autolights_disable_tvattstuga {%- else -%} timer.DoesntExistFailure {%- endif -%} mode: single ``` ### Anything in the logs that might be useful for us? _No response_ ### Additional information I've tried switching between using the *switch* instead of the created light.entitiy, but no change.",1,trigger to state context only filled in on dimmer light entitites the problem i am using the fact that the trigger to state context user id none for several of my light entities when they are manually modified via lovelace ui in order to disable automations from overriding the manual settings for a while but it seems that the value is only filled in for some light entitites never get the context value filled in for some reason i ve got four qubino dimmers and a few qubino switches the switches are manually added configged as lights entities as well i then have an automation that triggers if the dimmers or switches change as included in examples which works fine for the dimmer lights but not for the two switches because it turns out the trigger doesn t contain the context value in the to state as it does for the dimmers the full trigger for a switch contains id idx platform state entity id switch taklampa kallartrappa from state to state for datetime timedelta seconds attribute none description state of switch taklampa kallartrappa what is version of home assistant core has the issue what was the last working version of home assistant core no response what type of installation are you running home assistant container integration causing the issue automation link to integration documentation on our website no response example yaml snippet yaml alias autolights disable on manual override description trigger platform state entity id light flush dimmer koket attribute brightness for hours minutes seconds milliseconds platform state attribute brightness entity id light flush dimmer sovrummet for hours minutes seconds milliseconds platform state entity id light flush dimmer vardagsrum plus attribute brightness for hours minutes seconds milliseconds platform state entity id light tvattstuga to on for hours minutes seconds milliseconds from 
off condition condition template value template trigger to state context user id none action service timer start data duration target entity id if trigger entity id is search vardagsrum timer autolights disable vrum elif trigger entity id is search kitchen koket timer autolights disable kitchen elif trigger entity id is search sovrum timer autolights disable sovrum elif trigger entity id is search tvattstuga timer autolights disable tvattstuga else timer doesntexistfailure endif mode single anything in the logs that might be useful for us no response additional information i ve tried switching between using the switch instead of the created light entitiy but no change ,1 8152,26282663827.0,IssuesEvent,2023-01-07 13:31:23,ita-social-projects/TeachUA,https://api.github.com/repos/ita-social-projects/TeachUA,closed,"[Club, API] PATCH /api/club/{id} endpoint is not performing direct function of updating club",bug Backend Priority: Medium API Automation,"**Environment:** Windows 11, Google Chrome Version 108.0.5359.125 (Official Build) (64-bit). **Reproducible:** always. **Build found:** last commit [5757356](https://github.com/ita-social-projects/TeachUA/commit/57573565fd58d1553fa880a969c94f7cafa0204b) **Preconditions** 1. Open Swagger UI. **Steps to reproduce** 1. Go to 'club' section. 2. Click on '/api/club/{id}' endpoint. 3. Pay attention to the example value of the Request body. **Actual result** Endpoint updates the user who is assigned to the club. **Expected result** Based on the endpoint, it should update the club fields similar to the PUT method, but not all fields, only specific ones. ",1.0,"[Club, API] PATCH /api/club/{id} endpoint is not performing direct function of updating club - **Environment:** Windows 11, Google Chrome Version 108.0.5359.125 (Official Build) (64-bit). **Reproducible:** always. **Build found:** last commit [5757356](https://github.com/ita-social-projects/TeachUA/commit/57573565fd58d1553fa880a969c94f7cafa0204b) **Preconditions** 1. Open Swagger UI. **Steps to reproduce** 1. Go to 'club' section. 2. Click on '/api/club/{id}' endpoint. 3. Pay attention to the example value of the Request body. **Actual result** Endpoint updates the user who is assigned to the club. **Expected result** Based on the endpoint, it should update the club fields similar to the PUT method, but not all fields, only specific ones. ",1, patch api club id endpoint is not performing direct function of updating club environment windows google chrome version official build bit reproducible always build found last commit preconditions open swagger ui steps to reproduce go to club section click on api club id endpoint pay attention to the example value of the request body actual result endpoint updates the user who is assigned to the club img width alt image src expected result based on the endpoint it should update the club fields similar to the put method but not all fields only specific ones ,1 6468,23212778177.0,IssuesEvent,2022-08-02 11:40:56,submariner-io/releases,https://api.github.com/repos/submariner-io/releases,opened,Automate waiting for images to build,enhancement automation size:medium,"We currently have an ability to detect if any open PRs from a previous stage are still open. On the same note, we could detect if necessary images are finished building. We could either try to query the jobs on the CI, or piggy back on the dependency tracking bot. 
For this, image building jobs for projects that build images for a specific tag could open an `automated` issue when they start, and close it when the job ends successfully. We could then use ""Depends on"" (similar to #457) on the PR created by `make release` to track these ""tracker"" issues. The only problem is that the E2E would still fail, but that could easily be manually re-run (and perhaps this can be further automated in the future).",1.0,"Automate waiting for images to build - We currently have an ability to detect if any open PRs from a previous stage are still open. On the same note, we could detect if necessary images are finished building. We could either try to query the jobs on the CI, or piggy back on the dependency tracking bot. For this, image building jobs for projects that build images for a specific tag could open an `automated` issue when they start, and close it when the job ends successfully. We could then use ""Depends on"" (similar to #457) on the PR created by `make release` to track these ""tracker"" issues. The only problem is that the E2E would still fail, but that could easily be manually re-run (and perhaps this can be further automated in the future).",1,automate waiting for images to build we currently have an ability to detect if any open prs from a previous stage are still open on the same note we could detect if necessary images are finished building we could either try to query the jobs on the ci or piggy back on the dependency tracking bot for this image building jobs for projects that build images for a specific tag could open an automated issue when they start and close it when the job ends successfully we could then use depends on similar to on the pr created by make release to track these tracker issues the only problem is that the would still fail but that could easily be manually re run and perhaps this can be further automated in the future ,1 144963,22586942094.0,IssuesEvent,2022-06-28 16:00:13,blockframes/blockframes,https://api.github.com/repos/blockframes/blockframes,closed,Calendar Improvements,App - Festival 🎪 Design - UX July clean up,"_Estimated priority: medium, but can tend to high_ List of improvements / features that wireframes should reflect : - [ ] able the list view for calendar (prepare screens for switching); - [ ] prepare the list view screen; - [ ] block user if wants to create an event in the past (+ what message to show?); - [ ] able different period views (day, week, month ?) (prepare screens for each).",1.0,"Calendar Improvements - _Estimated priority: medium, but can tend to high_ List of improvements / features that wireframes should reflect : - [ ] able the list view for calendar (prepare screens for switching); - [ ] prepare the list view screen; - [ ] block user if wants to create an event in the past (+ what message to show?); - [ ] able different period views (day, week, month ?) 
(prepare screens for each).",0,calendar improvements estimated priority medium but can tend to high list of improvements features that wireframes should reflect able the list view for calendar prepare screens for switching prepare the list view screen block user if wants to create an event in the past what message to show able different period views day week month prepare screens for each ,0 789977,27811804032.0,IssuesEvent,2023-03-18 07:40:02,AY2223S2-CS2103T-W11-3/tp,https://api.github.com/repos/AY2223S2-CS2103T-W11-3/tp,closed,Update Card::isSameCard to check for both question and answer of the card,priority.Low,"This means a unique card is defined not just by the question, but also the answer. Enables users to have the same question but with different answers. Useful for situations where the same question might have different answer under different contexts - e.g. What is a bat? (Deck - Baseball vs Deck - Mammals)",1.0,"Update Card::isSameCard to check for both question and answer of the card - This means a unique card is defined not just by the question, but also the answer. Enables users to have the same question but with different answers. Useful for situations where the same question might have different answer under different contexts - e.g. What is a bat? (Deck - Baseball vs Deck - Mammals)",0,update card issamecard to check for both question and answer of the card this means a unique card is defined not just by the question but also the answer enables users to have the same question but with different answers useful for situations where the same question might have different answer under different contexts e g what is a bat deck baseball vs deck mammals ,0 278964,30702429141.0,IssuesEvent,2023-07-27 01:29:27,nidhi7598/linux-3.0.35_CVE-2018-13405,https://api.github.com/repos/nidhi7598/linux-3.0.35_CVE-2018-13405,closed,CVE-2020-29660 (Medium) detected in linux-stable-rtv3.8.6 - autoclosed,Mend: dependency security vulnerability,"## CVE-2020-29660 - Medium Severity Vulnerability
Vulnerable Library - linux-stable-rtv3.8.6

Julia Cartwright's fork of linux-stable-rt.git

Library home page: https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git

Found in HEAD commit: 662fbf6e1ed61fd353add2f52e2dd27e990364c7

Found in base branch: master

Vulnerable Source Files (1)

/drivers/tty/tty_io.c

Vulnerability Details

A locking inconsistency issue was discovered in the tty subsystem of the Linux kernel through 5.9.13. drivers/tty/tty_io.c and drivers/tty/tty_jobctrl.c may allow a read-after-free attack against TIOCGSID, aka CID-c8bcd9c5be24.

Publish Date: 2020-12-09

URL: CVE-2020-29660

CVSS 3 Score Details (4.4)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Release Date: 2020-12-09

Fix Resolution: v5.10-rc7

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2020-29660 (Medium) detected in linux-stable-rtv3.8.6 - autoclosed - ## CVE-2020-29660 - Medium Severity Vulnerability
Vulnerable Library - linux-stable-rtv3.8.6

Julia Cartwright's fork of linux-stable-rt.git

Library home page: https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git

Found in HEAD commit: 662fbf6e1ed61fd353add2f52e2dd27e990364c7

Found in base branch: master

Vulnerable Source Files (1)

/drivers/tty/tty_io.c

Vulnerability Details

A locking inconsistency issue was discovered in the tty subsystem of the Linux kernel through 5.9.13. drivers/tty/tty_io.c and drivers/tty/tty_jobctrl.c may allow a read-after-free attack against TIOCGSID, aka CID-c8bcd9c5be24.

Publish Date: 2020-12-09

URL: CVE-2020-29660

CVSS 3 Score Details (4.4)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Release Date: 2020-12-09

Fix Resolution: v5.10-rc7

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in linux stable autoclosed cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files drivers tty tty io c vulnerability details a locking inconsistency issue was discovered in the tty subsystem of the linux kernel through drivers tty tty io c and drivers tty tty jobctrl c may allow a read after free attack against tiocgsid aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend ,0 5914,21640437202.0,IssuesEvent,2022-05-05 18:11:49,willowtreeapps/vocable-ios,https://api.github.com/repos/willowtreeapps/vocable-ios,opened,Refactor PresetPhrasesTests to use injection data,automation,"This ticket is for refactoring the test functions in PresetPhrasesTests to use injected preset data. This work is part of the overall effort outlined in https://github.com/willowtreeapps/vocable-ios/issues/590 (the parent ticket to this one) Injection data is hardcoded as type `Presets` in setup(). The ids are assigned to the cell element for category and phrase. `func injectPresetData() -> Presets { return Presets { Category(id: ""general_category"", ""General"") { Phrase(id: ""general_be_patient"", ""Please be patient"") Phrase(id: ""general_donde_estoy"", languageCode: ""es"", ""No sé donde estoy"") }` **Acceptance Criteria**: All tests in SettingsScreenTests pass",1.0,"Refactor PresetPhrasesTests to use injection data - This ticket is for refactoring the test functions in PresetPhrasesTests to use injected preset data. This work is part of the overall effort outlined in https://github.com/willowtreeapps/vocable-ios/issues/590 (the parent ticket to this one) Injection data is hardcoded as type `Presets` in setup(). The ids are assigned to the cell element for category and phrase. `func injectPresetData() -> Presets { return Presets { Category(id: ""general_category"", ""General"") { Phrase(id: ""general_be_patient"", ""Please be patient"") Phrase(id: ""general_donde_estoy"", languageCode: ""es"", ""No sé donde estoy"") }` **Acceptance Criteria**: All tests in SettingsScreenTests pass",1,refactor presetphrasestests to use injection data this ticket is for refactoring the test functions in presetphrasestests to use injected preset data this work is part of the overall effort outlined in the parent ticket to this one injection data is hardcoded as type presets in setup the ids are assigned to the cell element for category and phrase func injectpresetdata presets return presets category id general category general phrase id general be patient please be patient phrase id general donde estoy languagecode es no sé donde estoy acceptance criteria all tests in settingsscreentests pass,1 3087,13062864627.0,IssuesEvent,2020-07-30 15:44:02,geosolutions-it/geoserver,https://api.github.com/repos/geosolutions-it/geoserver,closed,WFS 1.0 test package build,CITE CITE_AUTOMATION,"Build to deploy in repositories the test suite for this protocol. 
* Repository: https://github.com/opengeospatial/ets-wfs10 * Version, latest Parametes: both repo and branch/tag to build Deploy: on OSGeo Setup for the deploy: ``` false osgeo Open Source Geospatial Foundation - WebDAV upload dav:http://download.osgeo.org/upload/geotools/ ``` ",1.0,"WFS 1.0 test package build - Build to deploy in repositories the test suite for this protocol. * Repository: https://github.com/opengeospatial/ets-wfs10 * Version, latest Parametes: both repo and branch/tag to build Deploy: on OSGeo Setup for the deploy: ``` false osgeo Open Source Geospatial Foundation - WebDAV upload dav:http://download.osgeo.org/upload/geotools/ ``` ",1,wfs test package build build to deploy in repositories the test suite for this protocol repository version latest parametes both repo and branch tag to build deploy on osgeo setup for the deploy false osgeo open source geospatial foundation webdav upload dav ,1 441798,30799639382.0,IssuesEvent,2023-07-31 23:31:52,risingwavelabs/risingwave-docs,https://api.github.com/repos/risingwavelabs/risingwave-docs,opened,`access_key` and `secret_key` are required fields for AWS auth ,documentation,"### Related code PR https://github.com/risingwavelabs/risingwave/pull/11120 ### Which part(s) of the docs might be affected or should be updated? And how? Document that `access_key` and `secret_key` are required fields for sources and sinks that use AWS auth ### Reference _No response_",1.0,"`access_key` and `secret_key` are required fields for AWS auth - ### Related code PR https://github.com/risingwavelabs/risingwave/pull/11120 ### Which part(s) of the docs might be affected or should be updated? And how? Document that `access_key` and `secret_key` are required fields for sources and sinks that use AWS auth ### Reference _No response_",0, access key and secret key are required fields for aws auth related code pr which part s of the docs might be affected or should be updated and how document that access key and secret key are required fields for sources and sinks that use aws auth reference no response ,0 109222,16833831118.0,IssuesEvent,2021-06-18 09:16:17,AlexRogalskiy/qiitos,https://api.github.com/repos/AlexRogalskiy/qiitos,opened,CVE-2020-7753 (High) detected in trim-0.0.1.tgz,security vulnerability,"## CVE-2020-7753 - High Severity Vulnerability
Vulnerable Library - trim-0.0.1.tgz

Trim string whitespace

Library home page: https://registry.npmjs.org/trim/-/trim-0.0.1.tgz

Path to dependency file: qiitos/package.json

Path to vulnerable library: qiitos/node_modules/trim/package.json

Dependency Hierarchy: - remark-preset-davidtheclark-0.12.0.tgz (Root Library) - remark-cli-7.0.1.tgz - remark-11.0.2.tgz - remark-parse-7.0.2.tgz - :x: **trim-0.0.1.tgz** (Vulnerable Library)

Found in HEAD commit: 872c80fd58e83cfbcf073db571d49033ca056550

Vulnerability Details

All versions of package trim are vulnerable to Regular Expression Denial of Service (ReDoS) via trim().
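Since the helper only strips leading and trailing whitespace, the usual mitigation is to drop the dependency entirely: the built-in `String.prototype.trim()` is linear-time. A minimal sketch:

```js
// Safe, linear-time equivalent of the vulnerable helper.
// const trim = require('trim'); // versions before 0.0.3 are vulnerable
const trim = (s) => s.trim();

console.log(trim('   hello world   ')); // -> 'hello world'
```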

Publish Date: 2020-10-27

URL: CVE-2020-7753

CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/component/trim/pull/8

Release Date: 2020-10-27

Fix Resolution: trim - 0.0.3

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2020-7753 (High) detected in trim-0.0.1.tgz - ## CVE-2020-7753 - High Severity Vulnerability
Vulnerable Library - trim-0.0.1.tgz

Trim string whitespace

Library home page: https://registry.npmjs.org/trim/-/trim-0.0.1.tgz

Path to dependency file: qiitos/package.json

Path to vulnerable library: qiitos/node_modules/trim/package.json

Dependency Hierarchy: - remark-preset-davidtheclark-0.12.0.tgz (Root Library) - remark-cli-7.0.1.tgz - remark-11.0.2.tgz - remark-parse-7.0.2.tgz - :x: **trim-0.0.1.tgz** (Vulnerable Library)

Found in HEAD commit: 872c80fd58e83cfbcf073db571d49033ca056550

Vulnerability Details

All versions of package trim are vulnerable to Regular Expression Denial of Service (ReDoS) via trim().

Publish Date: 2020-10-27

URL: CVE-2020-7753

CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/component/trim/pull/8

Release Date: 2020-10-27

Fix Resolution: trim - 0.0.3

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in trim tgz cve high severity vulnerability vulnerable library trim tgz trim string whitespace library home page a href path to dependency file qiitos package json path to vulnerable library qiitos node modules trim package json dependency hierarchy remark preset davidtheclark tgz root library remark cli tgz remark tgz remark parse tgz x trim tgz vulnerable library found in head commit a href vulnerability details all versions of package trim are vulnerable to regular expression denial of service redos via trim publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution trim step up your open source security game with whitesource ,0 135066,19485475379.0,IssuesEvent,2021-12-26 09:21:15,zainfathoni/kelas.rumahberbagi.com,https://api.github.com/repos/zainfathoni/kelas.rumahberbagi.com,closed,CTA,enhancement design ui,"## Description Call to action to purchase the course. ## Narrative - **As an** authenticated user - **I want** it to be obvious how to purchase the course - **so that** I can start the purchase transaction flow easily. ## Acceptance Criteria [Dashboard page](app/routes/dashboard.tsx) should render this [Single price with details](https://tailwindui.com/components/marketing/sections/pricing#component-56cbd4f191ac0d54e5a5c0287481d5b9) call to action. ![Single price with details](https://tailwindui.com/img/components/pricing.02-single-price-with-details-xl.png) ### Scenario 1 - **Given** I am an authenticated user, - **and** I have not purchased the course yet, - **when** I click the CTA button, - **then** it redirects to the `/dashboard/purchase` route for me to start the transaction. ## Implementation Model
Code snippet
```jsx
/* This example requires Tailwind CSS v2.0+ */
// Simplified markup: the fully styled component is at the Tailwind UI link above.
import { CheckCircleIcon } from '@heroicons/react/solid'

const includedFeatures = [
  'Private forum access',
  'Member resources',
  'Entry to annual conference',
  'Official member t-shirt',
]

export default function Example() {
  return (
    <div>
      <h2>Simple no-tricks pricing</h2>
      <p>If you're not satisfied, contact us within the first 14 days and we'll send you a full refund.</p>
      <h3>Lifetime Membership</h3>
      <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Itaque amet indis perferendis blanditiis repellendus etur quidem assumenda.</p>
      <h4>What's included</h4>
      <ul>
        {includedFeatures.map((feature) => (
          <li key={feature}>
            <CheckCircleIcon aria-hidden={true} />
            {feature}
          </li>
        ))}
      </ul>
    </div>
  )
}
```
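For the redirect in Scenario 1 and the Tasks below, a hypothetical sketch (assuming a Remix-style route module, which the `app/routes/*.tsx` layout suggests; the import and names are illustrative, not the project's actual code):

```jsx
// Hypothetical: a route action that sends the CTA click into the purchase flow.
import { redirect } from 'remix'

export const action = async () => {
  return redirect('/dashboard/purchase')
}
```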
## Tasks - [ ] Render the [CTA](https://tailwindui.com/components/marketing/sections/pricing#component-56cbd4f191ac0d54e5a5c0287481d5b9) inside the content section of the [dashboard.tsx](app/routes/dashboard.tsx) page - [ ] Implement the redirect action to `/dashboard/purchase` route - [ ] Implement an empty `/dashboard/purchase` route page - [ ] Write an end-to-end test case for Scenario 1 under `e2e/cta.spec.ts` file - [ ] Move the edit profile functionality out of the `/dashboard/index.tsx` page and put it in the `/dashboard/settings.tsx` route instead. - [ ] Update the e2e tests accordingly while preserving the edit profile scenarios and functionality.",1.0,"CTA - ## Description Call to action to purchase the course. ## Narrative - **As an** authenticated user - **I want** it to be obvious how to purchase the course - **so that** I can start the purchase transaction flow easily. ## Acceptance Criteria [Dashboard page](app/routes/dashboard.tsx) should render this [Single price with details](https://tailwindui.com/components/marketing/sections/pricing#component-56cbd4f191ac0d54e5a5c0287481d5b9) call to action. ![Single price with details](https://tailwindui.com/img/components/pricing.02-single-price-with-details-xl.png) ### Scenario 1 - **Given** I am an authenticated user, - **and** I have not purchased the course yet, - **when** I click the CTA button, - **then** it redirects to the `/dashboard/purchase` route for me to start the transaction. ## Implementation Model
Code snippet ```jsx /* This example requires Tailwind CSS v2.0+ */
/* NOTE: the original Tailwind markup was stripped when this issue was captured; the JSX below is a condensed reconstruction of the "single price with details" section, not the exact upstream code. */
import { CheckCircleIcon } from '@heroicons/react/solid'
const includedFeatures = [
  'Private forum access',
  'Member resources',
  'Entry to annual conference',
  'Official member t-shirt',
]
export default function Example() {
  return (
    <div className="bg-gray-100 py-16 px-4">
      <div className="max-w-3xl mx-auto text-center">
        <h2 className="text-3xl font-extrabold text-gray-900">Simple no-tricks pricing</h2>
        <p className="mt-4 text-xl text-gray-600">If you're not satisfied, contact us within the first 14 days and we'll send you a full refund.</p>
      </div>
      <div className="max-w-lg mx-auto mt-10 bg-white rounded-lg shadow-lg p-8">
        <h3 className="text-2xl font-extrabold text-gray-900">Lifetime Membership</h3>
        <p className="mt-4 text-gray-500">Lorem ipsum dolor sit amet consect etur adipisicing elit. Itaque amet indis perferendis blanditiis repellendus etur quidem assumenda.</p>
        <h4 className="mt-8 text-sm font-semibold uppercase text-indigo-600">What's included</h4>
        <ul className="mt-4 space-y-2">
          {includedFeatures.map((feature) => (
            <li key={feature} className="flex items-center">
              <CheckCircleIcon className="h-5 w-5 text-green-400" aria-hidden="true" />
              <span className="ml-2 text-gray-500">{feature}</span>
            </li>
          ))}
        </ul>
      </div>
    </div>
  )
} ```
## Tasks - [ ] Render the [CTA](https://tailwindui.com/components/marketing/sections/pricing#component-56cbd4f191ac0d54e5a5c0287481d5b9) inside the content section of the [dashboard.tsx](app/routes/dashboard.tsx) page - [ ] Implement the redirect action to `/dashboard/purchase` route - [ ] Implement an empty `/dashboard/purchase` route page - [ ] Write an end-to-end test case for Scenario 1 under `e2e/cta.spec.ts` file - [ ] Move the edit profile functionality out of the `/dashboard/index.tsx` page and put it in the `/dashboard/settings.tsx` route instead. - [ ] Update the e2e tests accordingly while preserving the edit profile scenarios and functionality.",0,cta description call to action to purchase the course narrative as an authenticated user i want it to be obvious how to purchase the course so that i can start the purchase transaction flow easily acceptance criteria app routes dashboard tsx should render this call to action scenario given i am an authenticated user and i have not purchased the course yet when i click the cta button then it redirects to the dashboard purchase route for me to start the transaction implementation model code snippet jsx this example requires tailwind css import checkcircleicon from heroicons react solid const includedfeatures private forum access member resources entry to annual conference official member t shirt export default function example return simple no tricks pricing if you re not satisfied contact us within the first days and we ll send you a full refund lifetime membership lorem ipsum dolor sit amet consect etur adipisicing elit itaque amet indis perferendis blanditiis repellendus etur quidem assumenda what s included includedfeatures map feature feature pay once own it forever usd learn about our membership policy a href classname flex items center justify center px py border border transparent text base font medium rounded md text white bg gray hover bg gray get access get a free sample tasks render the inside the content section of the app routes dashboard tsx page implement the redirect action to dashboard purchase route implement an empty dashboard purchase route page write an end to end test case for scenario under cta spec ts file move the edit profile functionality out of the dashboard index tsx page and put it in the dashboard settings tsx route instead update the tests accordingly while preserving the edit profile scenarios and functionality ,0 10231,32030411137.0,IssuesEvent,2023-09-22 11:56:50,dcaribou/transfermarkt-datasets,https://api.github.com/repos/dcaribou/transfermarkt-datasets,opened,Add useful git hooks,automations,"Git hooks can be useful to avoid committing untested components. For example [dbt-checkpoint](https://github.com/dbt-checkpoint/dbt-checkpoint) can be configured to run and test dbt models before they get committed.",1.0,"Add useful git hooks - Git hooks can be useful to avoid committing untested components. 
For example [dbt-checkpoint](https://github.com/dbt-checkpoint/dbt-checkpoint) can be configured to run and test dbt models before they get committed.",1,add useful git hooks git hooks can be useful to avoid committing untested components for example can be configured to run and test dbt models before they get committed ,1 5877,21529745279.0,IssuesEvent,2022-04-28 22:41:51,rancher-sandbox/rancher-desktop,https://api.github.com/repos/rancher-sandbox/rancher-desktop,closed,rdctl start doesn't return and doesn't change container engine,kind/bug platform/windows area/automation,"Before I shut down RD I was running it with the moby runtime. Then I started it up from the CLI: ```console PS C:\Users\Jan\Downloads> rdctl start --container-engine containerd About to launch C:\Users\Jan\AppData\Local/Programs/Rancher Desktop/Rancher Desktop.exe --kubernetes-container-engine containerd ... [8380:0426/202102.226:ERROR:gpu_init.cc(446)] Passthrough is not supported, GL is disabled, ANGLE is (node:7088) [DEP0123] DeprecationWarning: Setting the TLS ServerName to an IP address is not permitted by RFC 6066. This will be ignored in a future version. (Use `Rancher Desktop --trace-deprecation ...` to show where the warning was created) ``` It does start up RD again, but it did not change the container engine to `containerd`. The `rdctl start` command also never returned to the command prompt. When I aborted it with Ctrl-C then RD was stopped as well (not just the Window, but also the background app). Finally there is the issue of the noisy output, but that is secondary to the functional issues.",1.0,"rdctl start doesn't return and doesn't change container engine - Before I shut down RD I was running it with the moby runtime. Then I started it up from the CLI: ```console PS C:\Users\Jan\Downloads> rdctl start --container-engine containerd About to launch C:\Users\Jan\AppData\Local/Programs/Rancher Desktop/Rancher Desktop.exe --kubernetes-container-engine containerd ... [8380:0426/202102.226:ERROR:gpu_init.cc(446)] Passthrough is not supported, GL is disabled, ANGLE is (node:7088) [DEP0123] DeprecationWarning: Setting the TLS ServerName to an IP address is not permitted by RFC 6066. This will be ignored in a future version. (Use `Rancher Desktop --trace-deprecation ...` to show where the warning was created) ``` It does start up RD again, but it did not change the container engine to `containerd`. The `rdctl start` command also never returned to the command prompt. When I aborted it with Ctrl-C then RD was stopped as well (not just the Window, but also the background app).
Finally there is the issue of the noisy output, but that is secondary to the functional issues.",1,rdctl start doesn t return and doesn t change container engine before i shut down rd i was running it with the moby runtime then i started it up from the cli console ps c users jan downloads rdctl start container engine containerd about to launch c users jan appdata local programs rancher desktop rancher desktop exe kubernetes container engine containerd passthrough is not supported gl is disabled angle is node deprecationwarning setting the tls servername to an ip address is not permitted by rfc this will be ignored in a future version use rancher desktop trace deprecation to show where the warning was created it does start up rd again but it did not change the container engine to containerd the rdctl start command also never returned to the command prompt when i aborted it with ctrl c then rd was stopped as well not just the window but also the background app finally there is the issue of the noisy output but that is secondary to the functional issues ,1 280312,30820648222.0,IssuesEvent,2023-08-01 16:05:39,momo-tong/jackson-databind-2.13.0,https://api.github.com/repos/momo-tong/jackson-databind-2.13.0,opened,jackson-databind-2.13.0.jar: 5 vulnerabilities (highest severity is: 7.5),Mend: dependency security vulnerability,"
Vulnerable Library - jackson-databind-2.13.0.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.13.0/jackson-databind-2.13.0.jar

Found in HEAD commit: f314799500ec3cd8f6906b348be5d4117350af3b

## Vulnerabilities | CVE | Severity | CVSS | Dependency | Type | Fixed in (jackson-databind version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2022-42004](https://www.mend.io/vulnerability-database/CVE-2022-42004) | High | 7.5 | jackson-databind-2.13.0.jar | Direct | 2.13.4 | ❌ | | [CVE-2022-42003](https://www.mend.io/vulnerability-database/CVE-2022-42003) | High | 7.5 | jackson-databind-2.13.0.jar | Direct | 2.13.4.1 | ❌ | | [CVE-2020-36518](https://www.mend.io/vulnerability-database/CVE-2020-36518) | High | 7.5 | jackson-databind-2.13.0.jar | Direct | 2.13.2.1 | ❌ | | [CVE-2021-46877](https://www.mend.io/vulnerability-database/CVE-2021-46877) | High | 7.5 | jackson-databind-2.13.0.jar | Direct | 2.13.1 | ❌ | | [WS-2021-0616](https://github.com/FasterXML/jackson-databind/commit/3ccde7d938fea547e598fdefe9a82cff37fed5cb) | Medium | 5.9 | jackson-databind-2.13.0.jar | Direct | 2.13.1 | ❌ | ## Details
CVE-2022-42004 ### Vulnerable Library - jackson-databind-2.13.0.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.13.0/jackson-databind-2.13.0.jar

Dependency Hierarchy: - :x: **jackson-databind-2.13.0.jar** (Vulnerable Library)

Found in HEAD commit: f314799500ec3cd8f6906b348be5d4117350af3b

Found in base branch: master

### Vulnerability Details

In FasterXML jackson-databind before 2.13.4, resource exhaustion can occur because of a lack of a check in BeanDeserializer._deserializeFromArray to prevent use of deeply nested arrays. An application is vulnerable only with certain customized choices for deserialization.

Publish Date: 2022-10-02

URL: CVE-2022-42004

### CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Release Date: 2022-10-02

Fix Resolution: 2.13.4

Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
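To make the attack shape concrete: CVE-2022-42004 is triggered by JSON whose arrays nest deeply enough that `BeanDeserializer._deserializeFromArray` recurses once per level. The sketch below only builds such a payload as a string to show how cheap it is for an attacker; it is an illustration, not jackson-specific code.
```ts
// Illustration only: the deeply nested array payload shape behind CVE-2022-42004.
// Each '[' costs the attacker one byte, but costs a vulnerable deserializer one
// level of recursion when it unwraps the value into a bean.
function nestedArrayPayload(depth: number): string {
  return '['.repeat(depth) + '0' + ']'.repeat(depth)
}

console.log(nestedArrayPayload(4))            // [[[[0]]]]
console.log(nestedArrayPayload(10000).length) // 20001 characters
```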
CVE-2022-42003 ### Vulnerable Library - jackson-databind-2.13.0.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.13.0/jackson-databind-2.13.0.jar

Dependency Hierarchy: - :x: **jackson-databind-2.13.0.jar** (Vulnerable Library)

Found in HEAD commit: f314799500ec3cd8f6906b348be5d4117350af3b

Found in base branch: master

### Vulnerability Details

In FasterXML jackson-databind before 2.14.0-rc1, resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting, when the UNWRAP_SINGLE_VALUE_ARRAYS feature is enabled. Additional fix versions are 2.13.4.1 and 2.12.17.1.

Publish Date: 2022-10-02

URL: CVE-2022-42003

### CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Release Date: 2022-10-02

Fix Resolution: 2.13.4.1

Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
CVE-2020-36518 ### Vulnerable Library - jackson-databind-2.13.0.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.13.0/jackson-databind-2.13.0.jar

Dependency Hierarchy: - :x: **jackson-databind-2.13.0.jar** (Vulnerable Library)

Found in HEAD commit: f314799500ec3cd8f6906b348be5d4117350af3b

Found in base branch: master

### Vulnerability Details

jackson-databind before 2.13.0 allows a Java StackOverflow exception and denial of service via a large depth of nested objects. Mend Note: After conducting further research, Mend has determined that all versions of com.fasterxml.jackson.core:jackson-databind up to version 2.13.2 are vulnerable to CVE-2020-36518.

Publish Date: 2022-03-11

URL: CVE-2020-36518

### CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Release Date: 2022-03-11

Fix Resolution: 2.13.2.1

Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
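CVE-2020-36518 has the same flavor, but with nested objects instead of arrays: each `{"a":` level adds one frame to the recursive parse, so a modest payload can trigger the StackOverflow described above. Again, a hedged illustration of the payload shape only, not jackson code:
```ts
// Illustration only: nested-object payload shape behind CVE-2020-36518.
function nestedObjectPayload(depth: number): string {
  return '{"a":'.repeat(depth) + '1' + '}'.repeat(depth)
}

console.log(nestedObjectPayload(3)) // {"a":{"a":{"a":1}}}
```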
CVE-2021-46877 ### Vulnerable Library - jackson-databind-2.13.0.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.13.0/jackson-databind-2.13.0.jar

Dependency Hierarchy: - :x: **jackson-databind-2.13.0.jar** (Vulnerable Library)

Found in HEAD commit: f314799500ec3cd8f6906b348be5d4117350af3b

Found in base branch: master

### Vulnerability Details

jackson-databind 2.10.x through 2.12.x before 2.12.6 and 2.13.x before 2.13.1 allows attackers to cause a denial of service (2 GB transient heap usage per read) in uncommon situations involving JsonNode JDK serialization.

Publish Date: 2023-03-18

URL: CVE-2021-46877

### CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Origin: https://www.cve.org/CVERecord?id=CVE-2021-46877

Release Date: 2023-03-18

Fix Resolution: 2.13.1

Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
WS-2021-0616 ### Vulnerable Library - jackson-databind-2.13.0.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.13.0/jackson-databind-2.13.0.jar

Dependency Hierarchy: - :x: **jackson-databind-2.13.0.jar** (Vulnerable Library)

Found in HEAD commit: f314799500ec3cd8f6906b348be5d4117350af3b

Found in base branch: master

### Vulnerability Details

In FasterXML jackson-databind before 2.12.6 and 2.13.1, there is a DoS when using JDK serialization to serialize JsonNode.

Publish Date: 2021-11-20

URL: WS-2021-0616

### CVSS 3 Score Details (5.9)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Release Date: 2021-11-20

Fix Resolution: 2.13.1

Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
",True,"jackson-databind-2.13.0.jar: 5 vulnerabilities (highest severity is: 7.5) -
Vulnerable Library - jackson-databind-2.13.0.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.13.0/jackson-databind-2.13.0.jar

Found in HEAD commit: f314799500ec3cd8f6906b348be5d4117350af3b

## Vulnerabilities | CVE | Severity | CVSS | Dependency | Type | Fixed in (jackson-databind version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2022-42004](https://www.mend.io/vulnerability-database/CVE-2022-42004) | High | 7.5 | jackson-databind-2.13.0.jar | Direct | 2.13.4 | ❌ | | [CVE-2022-42003](https://www.mend.io/vulnerability-database/CVE-2022-42003) | High | 7.5 | jackson-databind-2.13.0.jar | Direct | 2.13.4.1 | ❌ | | [CVE-2020-36518](https://www.mend.io/vulnerability-database/CVE-2020-36518) | High | 7.5 | jackson-databind-2.13.0.jar | Direct | 2.13.2.1 | ❌ | | [CVE-2021-46877](https://www.mend.io/vulnerability-database/CVE-2021-46877) | High | 7.5 | jackson-databind-2.13.0.jar | Direct | 2.13.1 | ❌ | | [WS-2021-0616](https://github.com/FasterXML/jackson-databind/commit/3ccde7d938fea547e598fdefe9a82cff37fed5cb) | Medium | 5.9 | jackson-databind-2.13.0.jar | Direct | 2.13.1 | ❌ | ## Details
CVE-2022-42004 ### Vulnerable Library - jackson-databind-2.13.0.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.13.0/jackson-databind-2.13.0.jar

Dependency Hierarchy: - :x: **jackson-databind-2.13.0.jar** (Vulnerable Library)

Found in HEAD commit: f314799500ec3cd8f6906b348be5d4117350af3b

Found in base branch: master

### Vulnerability Details

In FasterXML jackson-databind before 2.13.4, resource exhaustion can occur because of a lack of a check in BeanDeserializer._deserializeFromArray to prevent use of deeply nested arrays. An application is vulnerable only with certain customized choices for deserialization.

Publish Date: 2022-10-02

URL: CVE-2022-42004

### CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Release Date: 2022-10-02

Fix Resolution: 2.13.4

Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
CVE-2022-42003 ### Vulnerable Library - jackson-databind-2.13.0.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.13.0/jackson-databind-2.13.0.jar

Dependency Hierarchy: - :x: **jackson-databind-2.13.0.jar** (Vulnerable Library)

Found in HEAD commit: f314799500ec3cd8f6906b348be5d4117350af3b

Found in base branch: master

### Vulnerability Details

In FasterXML jackson-databind before 2.14.0-rc1, resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting, when the UNWRAP_SINGLE_VALUE_ARRAYS feature is enabled. Additional fix versions are 2.13.4.1 and 2.12.17.1.

Publish Date: 2022-10-02

URL: CVE-2022-42003

### CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Release Date: 2022-10-02

Fix Resolution: 2.13.4.1

Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
CVE-2020-36518 ### Vulnerable Library - jackson-databind-2.13.0.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.13.0/jackson-databind-2.13.0.jar

Dependency Hierarchy: - :x: **jackson-databind-2.13.0.jar** (Vulnerable Library)

Found in HEAD commit: f314799500ec3cd8f6906b348be5d4117350af3b

Found in base branch: master

### Vulnerability Details

jackson-databind before 2.13.0 allows a Java StackOverflow exception and denial of service via a large depth of nested objects. Mend Note: After conducting further research, Mend has determined that all versions of com.fasterxml.jackson.core:jackson-databind up to version 2.13.2 are vulnerable to CVE-2020-36518.

Publish Date: 2022-03-11

URL: CVE-2020-36518

### CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Release Date: 2022-03-11

Fix Resolution: 2.13.2.1

Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
CVE-2021-46877 ### Vulnerable Library - jackson-databind-2.13.0.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.13.0/jackson-databind-2.13.0.jar

Dependency Hierarchy: - :x: **jackson-databind-2.13.0.jar** (Vulnerable Library)

Found in HEAD commit: f314799500ec3cd8f6906b348be5d4117350af3b

Found in base branch: master

### Vulnerability Details

jackson-databind 2.10.x through 2.12.x before 2.12.6 and 2.13.x before 2.13.1 allows attackers to cause a denial of service (2 GB transient heap usage per read) in uncommon situations involving JsonNode JDK serialization.

Publish Date: 2023-03-18

URL: CVE-2021-46877

### CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Origin: https://www.cve.org/CVERecord?id=CVE-2021-46877

Release Date: 2023-03-18

Fix Resolution: 2.13.1

Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
WS-2021-0616 ### Vulnerable Library - jackson-databind-2.13.0.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.13.0/jackson-databind-2.13.0.jar

Dependency Hierarchy: - :x: **jackson-databind-2.13.0.jar** (Vulnerable Library)

Found in HEAD commit: f314799500ec3cd8f6906b348be5d4117350af3b

Found in base branch: master

### Vulnerability Details

In FasterXML jackson-databind before 2.12.6 and 2.13.1, there is a DoS when using JDK serialization to serialize JsonNode.

Publish Date: 2021-11-20

URL: WS-2021-0616

### CVSS 3 Score Details (5.9)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Release Date: 2021-11-20

Fix Resolution: 2.13.1

Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
",0,jackson databind jar vulnerabilities highest severity is vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in jackson databind version remediation available high jackson databind jar direct high jackson databind jar direct high jackson databind jar direct high jackson databind jar direct medium jackson databind jar direct details cve vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details in fasterxml jackson databind before resource exhaustion can occur because of a lack of a check in beandeserializer deserializefromarray to prevent use of deeply nested arrays an application is vulnerable only with certain customized choices for deserialization publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend cve vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details in fasterxml jackson databind before resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting when the unwrap single value arrays feature is enabled additional fix version in and publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend cve vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details jackson databind before allows a java stackoverflow exception and denial of service via a large depth of nested objects mend note after conducting further research mend has determined that all 
versions of com fasterxml jackson core jackson databind up to version are vulnerable to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend cve vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details jackson databind x through x before and x before allows attackers to cause a denial of service gb transient heap usage per read in uncommon situations involving jsonnode jdk serialization publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ws vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind before and there is dos when using jdk serialization to serialize jsonnode publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend ,0 8931,27241506061.0,IssuesEvent,2023-02-21 20:54:29,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,Cypress Tests for Authorization Profile Client Role,automation,"1. Set roles in Keycloak auth client 1.1 Authenticates Admin owner 1.2 Add ""Read"" role to the client of the authorization profile 1.3 Add ""Write"" role to the client of the authorization profile 2. Apply client roles to the Authorization Profile 2.1 Authenticates Wendy (Credential-Issuer) 2.2 Select the namespace created for client credential 2.3 Clear the Client Scope 2.4 Set the roles to the authorization profile 3. Developer creates an access request for Client ID/Secret authenticator to verify read role 3.1 Developer logs in 3.2 Creates an application 3.3 Creates an access request 4. 
Access manager apply ""Read"" role and approves developer access request 4.1 Access Manager logs in 4.2 Access Manager approves developer access request 4.3 Select scopes in Authorization Tab 4.4 approves an access request 5. Update Kong plugin and verify that only the GET method is allowed for the Read role 5.1 Set allowed method ""GET"" in kong plugin 5.2 Set authorization roles in plugin file 5.3 Set allowed audience in plugin file 5.4 applies authorization plugin to service published to Kong Gateway 5.5 Make ""GET"" call and verify that Kong allows user to access the resources 5.6 Make ""POST"" call and verify that Kong does not allow user to access the resources 6. Developer creates an access request for Client ID/Secret authenticator to verify write role 6.1 Developer logs in 6.2 Creates an application 6.3 Creates an access request 7. Access manager apply ""Write"" role and approves developer access request 7.1 Access Manager logs in 7.2 Access Manager approves developer access request 7.3 Select ""Write"" roles in Authorization Tab 7.4 approves an access request 8. Update Kong plugin and verify that only the PUT and POST methods are allowed for the Write role 8.1 Set allowed methods ""PUT"" and ""POST"" in kong plugin 8.2 Set authorization roles in plugin file 8.3 Set allowed audience in plugin file 8.4 applies authorization plugin to service published to Kong Gateway 8.5 Make ""GET"" call and verify that Kong does not allow user to access the resources 8.6 Make ""POST"" call and verify that Kong allows user to access the resources 8.7 Make ""PUT"" call and verify that Kong allows user to access the resources ",1.0,"Cypress Tests for Authorization Profile Client Role - 1. Set roles in Keycloak auth client 1.1 Authenticates Admin owner 1.2 Add ""Read"" role to the client of the authorization profile 1.3 Add ""Write"" role to the client of the authorization profile 2. Apply client roles to the Authorization Profile 2.1 Authenticates Wendy (Credential-Issuer) 2.2 Select the namespace created for client credential 2.3 Clear the Client Scope 2.4 Set the roles to the authorization profile 3. Developer creates an access request for Client ID/Secret authenticator to verify read role 3.1 Developer logs in 3.2 Creates an application 3.3 Creates an access request 4. Access manager apply ""Read"" role and approves developer access request 4.1 Access Manager logs in 4.2 Access Manager approves developer access request 4.3 Select scopes in Authorization Tab 4.4 approves an access request 5. Update Kong plugin and verify that only the GET method is allowed for the Read role 5.1 Set allowed method ""GET"" in kong plugin 5.2 Set authorization roles in plugin file 5.3 Set allowed audience in plugin file 5.4 applies authorization plugin to service published to Kong Gateway 5.5 Make ""GET"" call and verify that Kong allows user to access the resources 5.6 Make ""POST"" call and verify that Kong does not allow user to access the resources 6. Developer creates an access request for Client ID/Secret authenticator to verify write role 6.1 Developer logs in 6.2 Creates an application 6.3 Creates an access request 7. Access manager apply ""Write"" role and approves developer access request 7.1 Access Manager logs in 7.2 Access Manager approves developer access request 7.3 Select ""Write"" roles in Authorization Tab 7.4 approves an access request 8.
Update Kong plugin and verify that only the PUT and POST methods are allowed for the Write role 8.1 Set allowed methods ""PUT"" and ""POST"" in kong plugin 8.2 Set authorization roles in plugin file 8.3 Set allowed audience in plugin file 8.4 applies authorization plugin to service published to Kong Gateway 8.5 Make ""GET"" call and verify that Kong does not allow user to access the resources 8.6 Make ""POST"" call and verify that Kong allows user to access the resources 8.7 Make ""PUT"" call and verify that Kong allows user to access the resources ",1,cypress tests for authorization profile client role set roles in keycloak auth client authenticates admin owner add read role to the client of the authorization profile add write role to the client of the authorization profile apply client roles to the authorization profile authenticates wendy credential issuer select the namespace created for client credential clear the client scope set the roles to the authorization profile developer creates an access request for client id secret authenticator to verify read role developer logs in creates an application creates an access request access manager apply read role and approves developer access request access manager logs in access manager approves developer access request select scopes in authorization tab approves an access request update kong plugin and verify that only the get method is allowed for the read role set allowed method get in kong plugin set authorization roles in plugin file set allowed audience in plugin file applies authorization plugin to service published to kong gateway make get call and verify that kong allows user to access the resources make post call and verify that kong does not allow user to access the resources developer creates an access request for client id secret authenticator to verify write role developer logs in creates an application creates an access request access manager apply write role and approves developer access request access manager logs in access manager approves developer access request select write roles in authorization tab approves an access request update kong plugin and verify that only the put and post methods are allowed for the write role set allowed methods put and post in kong plugin set authorization roles in plugin file set allowed audience in plugin file applies authorization plugin to service published to kong gateway make get call and verify that kong does not allow user to access the resources make post call and verify that kong allows user to access the resources make put call and verify that kong allows user to access the resources ,1 650848,21419034510.0,IssuesEvent,2022-04-22
13:52:33,consta-design-system/uikit,https://api.github.com/repos/consta-design-system/uikit,opened,Table: resize & scroll improvements,feature 🔥 priority,"- [ ] Add a property that controls how free space is distributed: - to the last column (by default) - to a specified column number - spread evenly across all columns - [ ] Fix the scroll bug ",1.0,"Table: resize & scroll improvements - - [ ] Add a property that controls how free space is distributed: - to the last column (by default) - to a specified column number - spread evenly across all columns - [ ] Fix the scroll bug ",0,table resize scroll improvements add a property that controls how free space is distributed to the last column by default to a specified column number spread evenly across all columns fix the scroll bug ,0 677785,23175410258.0,IssuesEvent,2022-07-31 10:48:36,fredo-ai/Fredo-Public,https://api.github.com/repos/fredo-ai/Fredo-Public,closed,Image uploads to #project,priority-4,"When I upload an image and provide some #hashtag, I want the image to be put in the correct project list in Workflowy. This might not be possible within our current bot system. Maybe in the future.",1.0,"Image uploads to #project - When I upload an image and provide some #hashtag, I want the image to be put in the correct project list in Workflowy. This might not be possible within our current bot system. Maybe in the future.",0,image uploads to project when i upload an image and provide some hashtag i want the image to be put in the correct project list in workflowy this might not be possible within our current bot system maybe in the future ,0 1718,10596459909.0,IssuesEvent,2019-10-09 21:18:12,rancher/rancher,https://api.github.com/repos/rancher/rancher,opened,Automation - test cluster and project monitoring ,kind/task setup/automation,add tests for cluster and project monitoring ,1.0,Automation - test cluster and project monitoring - add tests for cluster and project monitoring ,1,automation test cluster and project monitoring add tests for cluster and project monitoring ,1 4566,16869629514.0,IssuesEvent,2021-06-22 01:23:13,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,closed,Way to approve contributor pull requests for automation,P1 eng:automation wontfix,"Due to some configuration choices we do not run pull requests from people outside an approved group. It would be good to have some way to mark the PR as approved for testing. Possibly a label or comment that triggers the automation? Right now the only way I know of to run the automation would be to fork the user's PR and submit a request under an approved member.",1.0,"Way to approve contributor pull requests for automation - Due to some configuration choices we do not run pull requests from people outside an approved group. It would be good to have some way to mark the PR as approved for testing. Possibly a label or comment that triggers the automation?
Right now the only way I know of to run the automation would be to fork the user's PR and submit a request under an approved member.",1,way to approve contributor pull requests for automation due to some configuration choices we do not run pull requests from people outside an approved group it would be good to have some way to mark the pr as approved for testing possibly a label or comment that triggers the automation right now the only way i know of to run the automation would be to fork the user s pr and submit a request under an approved member ,1 4042,15242574087.0,IssuesEvent,2021-02-19 10:03:26,home-assistant/frontend,https://api.github.com/repos/home-assistant/frontend,closed,"""delay: 5"" in yaml is converted to 5 hours in UI editor",bug editor: automation,"**Checklist** - [X] I have updated to the latest available Home Assistant version. - [X] I have cleared the cache of my browser. - [X] I have tried a different browser to see if it is related to my browser. **Describe the issue you are experiencing** If the action ```- delay: 5``` is written in YAML and the automation is loaded in the UI editor, the editor will indicate that the delay is 5 hours. (```05:00:00:000```). **Describe the behavior you expected** I would expect it to be 5 seconds. **Steps to reproduce the issue** 1. Create a delay in YAML 2. Open in UI automation/script-editor **What version of Home Assistant Core has the issue?** core-2021.2.3 **What was the last working version of Home Assistant Core?** _No response_ **In which browser are you experiencing the issue with?** Google Chrome 88.0.4324.150 / Android companion app beta-580-5ee48f2-full **Which operating system are you using to run this browser?** Windows 10 / Android **State of relevant entities** ```yaml # Paste your state here. ``` **Problem-relevant frontend configuration** ```yaml - delay: 5 ``` **Javascript errors shown in your browser console/inspector** ```txt # Paste your logs here. ``` ",1.0,"""delay: 5"" in yaml is converted to 5 hours in UI editor - **Checklist** - [X] I have updated to the latest available Home Assistant version. - [X] I have cleared the cache of my browser. - [X] I have tried a different browser to see if it is related to my browser. **Describe the issue you are experiencing** If the action ```- delay: 5``` is written in YAML and the automation is loaded in the UI editor, the editor will indicate that the delay is 5 hours. (```05:00:00:000```). **Describe the behavior you expected** I would expect it to be 5 seconds. **Steps to reproduce the issue** 1. Create a delay in YAML 2. Open in UI automation/script-editor **What version of Home Assistant Core has the issue?** core-2021.2.3 **What was the last working version of Home Assistant Core?** _No response_ **In which browser are you experiencing the issue with?** Google Chrome 88.0.4324.150 / Android companion app beta-580-5ee48f2-full **Which operating system are you using to run this browser?** Windows 10 / Android **State of relevant entities** ```yaml # Paste your state here. ``` **Problem-relevant frontend configuration** ```yaml - delay: 5 ``` **Javascript errors shown in your browser console/inspector** ```txt # Paste your logs here.
``` ",1, delay in yaml is converted to hours in ui editor checklist i have updated to the latest available home assistant version i have cleared the cache of my browser i have tried a different browser to see if it is related to my browser describe the issue you are experiencing if the action delay is written in yaml and the automation is loaded in the ui editor the editor will indicate that the delay is hours describe the behavior you expected i would expect it to be seconds steps to reproduce the issue create a delay in yaml open in ui automation script editor what version of home assistant core has the issue core what was the last working version of home assistant core no response in which browser are you experiencing the issue with google chrome android companion app beta full which operating system are you using to run this browser windows android state of relevant entities yaml paste your state here problem relevant frontend configuration yaml delay javascript errors shown in your browser console inspector txt paste your logs here ,1 287765,31856358344.0,IssuesEvent,2023-09-15 07:45:24,Trinadh465/linux-4.1.15_CVE-2023-26607,https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-26607,opened,CVE-2020-13974 (High) detected in linuxlinux-4.6,Mend: dependency security vulnerability,"## CVE-2020-13974 - High Severity Vulnerability
Vulnerable Library - linuxlinux-4.6

The Linux Kernel

Library home page: https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux

Found in HEAD commit: 6fca0e3f2f14e1e851258fd815766531370084b0

Found in base branch: main

Vulnerable Source Files (2)

/drivers/tty/vt/keyboard.c /drivers/tty/vt/keyboard.c

Vulnerability Details

An issue was discovered in the Linux kernel 4.4 through 5.7.1. drivers/tty/vt/keyboard.c has an integer overflow if k_ascii is called several times in a row, aka CID-b86dab054059. NOTE: Members in the community argue that the integer overflow does not lead to a security issue in this case.
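The pattern described above is an unchecked accumulator: each k_ascii call shifts the pending value up by one digit, and with no bound the 32-bit value eventually wraps. A rough TypeScript illustration of that arithmetic follows (an assumption-level sketch of the mechanism, not the kernel code; the upstream fix, CID-b86dab054059, bounds the accumulator):
```ts
// Sketch of the unchecked-accumulator overflow; `| 0` emulates C's 32-bit
// signed-int wraparound. This mirrors the mechanism, not the kernel source.
let npadch = 0 // pending-character accumulator, assumed to grow one digit per call
for (let i = 0; i < 12; i++) {
  npadch = (npadch * 10 + 9) | 0 // never bounds-checked
}
console.log(npadch) // negative after enough repeated calls: the int wrapped
```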

Publish Date: 2020-06-09

URL: CVE-2020-13974

CVSS 3 Score Details (7.8)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://nvd.nist.gov/vuln/detail/CVE-2020-13974

Release Date: 2020-06-09

Fix Resolution: linux-libc-headers - 5.8;linux-yocto - 4.8.24+gitAUTOINC+c84532b647_f6329fd287

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2020-13974 (High) detected in linuxlinux-4.6 - ## CVE-2020-13974 - High Severity Vulnerability
Vulnerable Library - linuxlinux-4.6

The Linux Kernel

Library home page: https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux

Found in HEAD commit: 6fca0e3f2f14e1e851258fd815766531370084b0

Found in base branch: main

Vulnerable Source Files (2)

/drivers/tty/vt/keyboard.c /drivers/tty/vt/keyboard.c

Vulnerability Details

An issue was discovered in the Linux kernel 4.4 through 5.7.1. drivers/tty/vt/keyboard.c has an integer overflow if k_ascii is called several times in a row, aka CID-b86dab054059. NOTE: Members in the community argue that the integer overflow does not lead to a security issue in this case.

Publish Date: 2020-06-09

URL: CVE-2020-13974

CVSS 3 Score Details (7.8)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://nvd.nist.gov/vuln/detail/CVE-2020-13974

Release Date: 2020-06-09

Fix Resolution: linux-libc-headers - 5.8;linux-yocto - 4.8.24+gitAUTOINC+c84532b647_f6329fd287

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files drivers tty vt keyboard c drivers tty vt keyboard c vulnerability details an issue was discovered in the linux kernel through drivers tty vt keyboard c has an integer overflow if k ascii is called several times in a row aka cid note members in the community argue that the integer overflow does not lead to a security issue in this case publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux libc headers linux yocto gitautoinc step up your open source security game with mend ,0 111209,11726822251.0,IssuesEvent,2020-03-10 15:03:38,primefaces/primefaces,https://api.github.com/repos/primefaces/primefaces,closed,Docu: make sure up-to-date versions are delivered to the client,documentation,"Due to this (old versions) of the documentation are loaded from ""Cache Storage"". ![grafik](https://user-images.githubusercontent.com/10461942/75908039-b0335880-5e49-11ea-8ba8-5cbb764c7136.png) ![grafik](https://user-images.githubusercontent.com/10461942/75908071-bde8de00-5e49-11ea-9e86-19b1eb3147a9.png) When we update documentation users still have the old versions in their ""Cache Storage"". Only pushing Ctrl + F5 (or cleanup all Browser-Cache) helps. When we look at https://docsify.js.org/#/?id=docsify itself does not use a serviceworker. There´s a docsify-issue: https://github.com/docsifyjs/docsify/issues/190 It say´s we should remove the serviceworker because it´s not needed anymore. ",1.0,"Docu: make sure up-to-date versions are delivered to the client - Due to this (old versions) of the documentation are loaded from ""Cache Storage"". ![grafik](https://user-images.githubusercontent.com/10461942/75908039-b0335880-5e49-11ea-8ba8-5cbb764c7136.png) ![grafik](https://user-images.githubusercontent.com/10461942/75908071-bde8de00-5e49-11ea-9e86-19b1eb3147a9.png) When we update documentation users still have the old versions in their ""Cache Storage"". Only pushing Ctrl + F5 (or cleanup all Browser-Cache) helps. When we look at https://docsify.js.org/#/?id=docsify itself does not use a serviceworker. There´s a docsify-issue: https://github.com/docsifyjs/docsify/issues/190 It say´s we should remove the serviceworker because it´s not needed anymore. 
",0,docu make sure up to date versions are delivered to the client due to this old versions of the documentation are loaded from cache storage when we update documentation users still have the old versions in their cache storage only pushing ctrl or cleanup all browser cache helps when we look at itself does not use a serviceworker there´s a docsify issue it say´s we should remove the serviceworker because it´s not needed anymore ,0 315512,23583685486.0,IssuesEvent,2022-08-23 09:45:06,ONSdigital/design-system,https://api.github.com/repos/ONSdigital/design-system,opened,Remove links to downloadable resources pattern,Bug Documentation,"We removed the downloadable resources docs so the links to it need to be removed: - https://ons-design-system.netlify.app/components/document-list/ - (there maybe more)",1.0,"Remove links to downloadable resources pattern - We removed the downloadable resources docs so the links to it need to be removed: - https://ons-design-system.netlify.app/components/document-list/ - (there maybe more)",0,remove links to downloadable resources pattern we removed the downloadable resources docs so the links to it need to be removed there maybe more ,0 6433,23131650798.0,IssuesEvent,2022-07-28 10:55:37,elastic/apm-pipeline-library,https://api.github.com/repos/elastic/apm-pipeline-library,closed,[filebeat step] Usage is not deterministic,bug question Team:Automation impact:low,"When we use the filebeat step wrapping `dir(BASE_DIR)`, it finds the container log files. But if we move the step inside, it does not. ## Example Not archiving: https://github.com/elastic/e2e-testing/pull/1330, the step was moved out of the dir(BaseDir) Archiving: https://github.com/elastic/e2e-testing/pull/1487, restored the location",1.0,"[filebeat step] Usage is not deterministic - When we use the filebeat step wrapping `dir(BASE_DIR)`, it finds the container log files. But if we move the step inside, it does not. ## Example Not archiving: https://github.com/elastic/e2e-testing/pull/1330, the step was moved out of the dir(BaseDir) Archiving: https://github.com/elastic/e2e-testing/pull/1487, restored the location",1, usage is not deterministic when we use the filebeat step wrapping dir base dir it finds the container log files but if we move the step inside it does not example not archiving the step was moved out of the dir basedir archiving restored the location,1 4383,16375023129.0,IssuesEvent,2021-05-15 23:16:16,IBM/FHIR,https://api.github.com/repos/IBM/FHIR,closed,Javadocs site doesn't update version number.,automation bug,"**Describe the bug** Javadocs site doesn't update version number. https://ibm.github.io/FHIR/javadocs/4.7.1/index.html?overview-summary.html **Expected behavior** Should show the version number properly **Additional context** Should show the version number properly ",1.0,"Javadocs site doesn't update version number. - **Describe the bug** Javadocs site doesn't update version number. 
https://ibm.github.io/FHIR/javadocs/4.7.1/index.html?overview-summary.html **Expected behavior** Should show the version number properly **Additional context** Should show the version number properly ",1,javadocs site doesn t update version number describe the bug javadocs site doesn t update version number expected behavior should show the version number properly additional context should show the version number properly ,1 8835,27172311406.0,IssuesEvent,2023-02-17 20:39:47,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Proper way to get item metadata by path while avoiding the url length limit,Needs: Attention :wave: automation:Closed,"#### Category - [x] Question - [ ] Documentation issue - [ ] Bug I use the `https://${SHAREPOINT_SITE_ID}/_api/v2.0/drives/${DRIVE_ID}/root:@path:?path='${URL_ENCODED_PATH}' ` endpoint to get an item by its path. When a file name contains unicode characters, it's quite easy for the url to exceed the 2048-character length limit. For example, the character ""鵝"" is usually encoded to ""%E9%B5%9D"", and a filename consisting of 300 ""鵝"" encodes to an url longer than 2700 characters, so when I try to get the item's metadata the API returns a 401 error with an empty response body. I did find a way to shorten the url by encoding ""鵝"" as ""%u9D5D"", however there's 3 problems with it: 1. I _think_ this is UTF-16? 2. is this encoding supported for all onedrive apis? 3. the character U+1F4A9 ""💩"" is ""%uD83D%uDCA9"", so a filename with 300 ""💩""s would be more than 300 * 6 * 2 = 3600 characters and still exceed the 2048-character limit [ ]: http://aka.ms/onedrive-api-issues [x]: http://aka.ms/onedrive-api-issues",1.0,"Proper way to get item metadata by path while avoiding the url length limit - #### Category - [x] Question - [ ] Documentation issue - [ ] Bug I use the `https://${SHAREPOINT_SITE_ID}/_api/v2.0/drives/${DRIVE_ID}/root:@path:?path='${URL_ENCODED_PATH}' ` endpoint to get an item by its path. When a file name contains unicode characters, it's quite easy for the url to exceed the 2048-character length limit. For example, the character ""鵝"" is usually encoded to ""%E9%B5%9D"", and a filename consisting of 300 ""鵝"" encodes to an url longer than 2700 characters, so when I try to get the item's metadata the API returns a 401 error with an empty response body. I did find a way to shorten the url by encoding ""鵝"" as ""%u9D5D"", however there's 3 problems with it: 1. I _think_ this is UTF-16? 2. is this encoding supported for all onedrive apis? 3. 
the character U+1F4A9 ""💩"" is ""%uD83D%uDCA9"", so a filename with 300 ""💩""s would be more than 300 * 6 * 2 = 3600 characters and still exceed the 2048-character limit [ ]: http://aka.ms/onedrive-api-issues [x]: http://aka.ms/onedrive-api-issues",1.0,"Proper way to get item metadata by path while avoiding the url length limit - #### Category - [x] Question - [ ] Documentation issue - [ ] Bug I use the `https://${SHAREPOINT_SITE_ID}/_api/v2.0/drives/${DRIVE_ID}/root:@path:?path='${URL_ENCODED_PATH}' ` endpoint to get an item by its path. When a file name contains unicode characters, it's quite easy for the url to exceed the 2048-character length limit. For example, the character ""鵝"" is usually encoded to ""%E9%B5%9D"", and a filename consisting of 300 ""鵝"" encodes to a url longer than 2700 characters, so when I try to get the item's metadata the API returns a 401 error with an empty response body. I did find a way to shorten the url by encoding ""鵝"" as ""%u9D5D"", however there are 3 problems with it: 1. I _think_ this is UTF-16? 2. is this encoding supported for all onedrive apis? 3. the character U+1F4A9 ""💩"" is ""%uD83D%uDCA9"", so a filename with 300 ""💩""s would be more than 300 * 6 * 2 = 3600 characters and still exceed the 2048-character limit [ ]: http://aka.ms/onedrive-api-issues [x]: http://aka.ms/onedrive-api-issues",1,proper way to get item metadata by path while avoiding the url length limit category question documentation issue bug i use the endpoint to get an item by its path when a file name contains unicode characters it s quite easy for the url to exceed the character length limit for example the character 鵝 is usually encoded to and a filename consisting of 鵝 encodes to a url longer than characters so when i try to get the item s metadata the api returns a error with an empty response body i did find a way to shorten the url by encoding 鵝 as however there are problems with it i think this is utf is this encoding supported for all onedrive apis the character u 💩 is so a filename with 💩 s would be more than characters and still exceed the character limit ,1 2178,11518171552.0,IssuesEvent,2020-02-14 09:58:08,elastic/opbeans-frontend,https://api.github.com/repos/elastic/opbeans-frontend,opened,Allow to disable random errors generator,automation enhancement,"There is an instrumented error that is generated randomly; being able to disable it helps to use the Opbeans-front end in a predictable way. https://github.com/elastic/opbeans-frontend/blob/472f914f5529d64ccf4aad0fc4a76ec27fa0a135/src/components/ProductDetail/index.js#L9",1.0,"Allow to disable random errors generator - There is an instrumented error that is generated randomly; being able to disable it helps to use the Opbeans-front end in a predictable way. https://github.com/elastic/opbeans-frontend/blob/472f914f5529d64ccf4aad0fc4a76ec27fa0a135/src/components/ProductDetail/index.js#L9",1,allow to disable random errors generator there is an instrumented error that is generated randomly being able to disable it helps to use the opbeans front end in a predictable way ,1 2142,11459600144.0,IssuesEvent,2020-02-07 07:44:10,apache/druid,https://api.github.com/repos/apache/druid,closed,"Prohibit HashMap(capacity), HashMap(capacity, loadFactor), HashSet, LinkedHashMap constructors",Area - Automation/Static Analysis Contributions Welcome Performance Starter,"They are pretty much always misused. See [this SO answer](https://stackoverflow.com/a/30220944/648955) for the explanation.
They should be prohibited using forbidden-apis with suggested alternatives: Guava's `Maps.new(Linked)HashMapWithExpectedSize()`, `Sets.newHashSetWithExpectedSize()`.",1.0,"Prohibit HashMap(capacity), HashMap(capacity, loadFactor), HashSet, LinkedHashMap constructors - They are pretty much always misused. See [this SO answer](https://stackoverflow.com/a/30220944/648955) for the explanation. They should be prohibited using forbidden-apis with suggested alternatives: Guava's `Maps.new(Linked)HashMapWithExpectedSize()`, `Sets.newHashSetWithExpectedSize()`.",1,prohibit hashmap capacity hashmap capacity loadfactor hashset linkedhashmap constructors they are pretty much always misused see for the explanation they should be prohibited using forbidden apis with suggested alternatives guava s maps new linked hashmapwithexpectedsize sets newhashsetwithexpectedsize ,1 9095,27540698294.0,IssuesEvent,2023-03-07 08:20:37,elastic/apm-pipeline-library,https://api.github.com/repos/elastic/apm-pipeline-library,closed,GitHub PR comment with the name of the stage where the step failed,automation ci,"For instance, when running the same step in several parallel stages then the information might not be relevant but duplicated, even though they are totally different steps, from the user experience it might seem a bit weird ![image](https://user-images.githubusercontent.com/2871786/82548576-69edbb80-9b53-11ea-86e6-7b1cf6082a2a.png) I'd like to add some improvements to provide the stage where the step failed. What do you think?",1.0,"GitHub PR comment with the name of the stage where the step failed - For instance, when running the same step in several parallel stages then the information might not be relevant but duplicated, even though they are totally different steps, from the user experience it might seem a bit weird ![image](https://user-images.githubusercontent.com/2871786/82548576-69edbb80-9b53-11ea-86e6-7b1cf6082a2a.png) I'd like to add some improvements to provide the stage where the step failed. What do you think?",1,github pr comment with the name of the stage where the step failed for instance when running the same step in several parallel stages then the information might not be relevant but duplicated even though they are totally different steps from the user experience it might seem a bit weird i d like to add some improvements to provide the stage where the step failed what do you think ,1 9545,29522343207.0,IssuesEvent,2023-06-05 03:54:59,pingcap/tidb,https://api.github.com/repos/pingcap/tidb,closed,The Plan of TPCH Q3 changes without any data update leading to 13s performance regression,type/enhancement type/performance sig/planner found/automation affects-6.3,"## Bug Report Please answer these questions before submitting your issue. Thanks! ### 1. Minimal reproduce step (Required) 1. deploy a tidb cluster: 1 tidb (16c) + 3 TiKV(16c) + 1 PD 2. restore tpch 50g data 3. run tpch for 30 mins ### 2. What did you expect to see? (Required) The plans of all queries would not change and the plan for Q3 should be stuck to d14988835227e68de9bb1194760cee8e. ### 3. What did you see instead (Required) The plan for Q3 would change in some of the daily runs. 
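The URL-length arithmetic in the OneDrive row above can be checked directly. A minimal Java sketch (class name hypothetical; it assumes Java 11+ for `String.repeat` and standard UTF-8 percent-encoding via `java.net.URLEncoder`, rather than the non-standard `%uXXXX` form the reporter experimented with):
```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class UrlLengthDemo {
    private static final int URL_LIMIT = 2048; // the limit cited in the issue

    public static void main(String[] args) {
        // U+9D5D is 3 bytes in UTF-8, so it percent-encodes to 9 characters (%E9%B5%9D);
        // 300 of them encode to 2700 characters, already past the limit.
        String bmp = URLEncoder.encode("鵝".repeat(300), StandardCharsets.UTF_8);
        System.out.println(bmp.length() + " over limit: " + (bmp.length() > URL_LIMIT));

        // U+1F4A9 is 4 bytes in UTF-8, i.e. 12 encoded characters (%F0%9F%92%A9);
        // 300 of them encode to 3600 characters.
        String astral = URLEncoder.encode("💩".repeat(300), StandardCharsets.UTF_8);
        System.out.println(astral.length() + " over limit: " + (astral.length() > URL_LIMIT));
    }
}
```
Under UTF-8 percent-encoding a 3-byte BMP character costs 9 characters and a 4-byte supplementary character costs 12, which is the same 3600-character total the reporter reached via the 300 * 6 * 2 surrogate-pair count.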
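The apache/druid row above targets a real pitfall: the `HashMap(int)` argument is a table capacity, not an element count. With the default load factor of 0.75, `new HashMap<>(100)` rounds up to a 128-slot table whose resize threshold is 96, so it still rehashes before holding 100 entries. A minimal sketch of the Guava alternatives the issue suggests (class name hypothetical; assumes Guava on the classpath):
```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import com.google.common.collect.Maps;
import com.google.common.collect.Sets;

public class ExpectedSizeDemo {
    public static void main(String[] args) {
        int expected = 100;

        // Misuse: 100 is a capacity, rounded up to a 128-slot table; with the
        // default load factor of 0.75 the map resizes once it holds more than
        // 96 entries, so 100 puts still trigger a rehash.
        Map<Integer, Integer> naive = new HashMap<>(expected);

        // Guava translates an element count into a capacity large enough that
        // `expected` entries fit without resizing under the default load factor.
        Map<Integer, Integer> sized = Maps.newHashMapWithExpectedSize(expected);
        Set<Integer> sizedSet = Sets.newHashSetWithExpectedSize(expected);

        for (int i = 0; i < expected; i++) {
            naive.put(i, i);
            sized.put(i, i);
            sizedSet.add(i);
        }
        System.out.println(naive.size() + " " + sized.size() + " " + sizedSet.size());
    }
}
```
A forbidden-apis rule can then flag the raw constructors mechanically instead of leaving the distinction to code review.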
![image](https://user-images.githubusercontent.com/84501897/187379690-b78ee887-3440-4eb8-bf5d-9da1c597437c.png) TPCH Q3 ``` q3 = ` /*PLACEHOLDER*/ select l_orderkey, sum(l_extendedprice * (1 - l_discount)) as revenue, o_orderdate, o_shippriority from customer, orders, lineitem where c_mktsegment = 'AUTOMOBILE' and c_custkey = o_custkey and l_orderkey = o_orderkey and o_orderdate < '1995-03-13' and l_shipdate > '1995-03-13' group by l_orderkey, o_orderdate, o_shippriority order by revenue desc, o_orderdate limit 10; ` ``` ``` Olap_Detail_Log_ID: 2792562 Plan_Digest: d14988835227e68de9bb1194760cee8e Elapsed_Time (s): 26.5 +--------------------------------------+-------------+----------+-----------+---------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+ | ID | ESTROWS | ACTROWS | TASK | ACCESS OBJECT | EXECUTION INFO | OPERATOR INFO | MEMORY | DISK | +--------------------------------------+-------------+----------+-----------+---------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+ | Projection_14 | 10.00 | 10 | root | | time:26.5s, loops:2, Concurrency:OFF | test.lineitem.l_orderkey, Column#35, test.orders.o_orderdate, test.orders.o_shippriority | 2.52 KB | N/A | | └─TopN_17 | 10.00 | 10 | root | | time:26.5s, loops:2 | Column#35:desc, test.orders.o_orderdate, offset:0, count:10 | 76.8 KB | N/A | | └─HashAgg_22 | 39991142.90 | 565763 | root | | time:26.5s, loops:555, partial_worker:{wall_time:26.060572975s, concurrency:5, task_num:1461, tot_wait:2m8.697732643s, tot_exec:1.436736417s, tot_time:2m10.299361175s, max:26.060536947s, p95:26.060536947s}, final_worker:{wall_time:26.522530518s, concurrency:5, task_num:25, tot_wait:2m10.294821085s, tot_exec:2.216177038s, tot_time:2m12.511018685s, max:26.522479673s, p95:26.522479673s} | group 
by:Column#48, Column#49, Column#50, funcs:sum(Column#44)->Column#35, funcs:firstrow(Column#45)->test.orders.o_orderdate, funcs:firstrow(Column#46)->test.orders.o_shippriority, funcs:firstrow(Column#47)->test.lineitem.l_orderkey | 378.4 MB | N/A | | └─Projection_82 | 92857210.61 | 1495049 | root | | time:26s, loops:1462, Concurrency:5 | mul(test.lineitem.l_extendedprice, minus(1, test.lineitem.l_discount))->Column#44, test.orders.o_orderdate, test.orders.o_shippriority, test.lineitem.l_orderkey, test.lineitem.l_orderkey, test.orders.o_orderdate, test.orders.o_shippriority | 1.09 MB | N/A | | └─IndexHashJoin_30 | 92857210.61 | 1495049 | root | | time:26s, loops:1462, inner:{total:2m3.7s, concurrency:5, task:292, construct:8.58s, fetch:1m51.9s, build:1.94s, join:3.19s} | inner join, inner:IndexLookUp_27, outer key:test.orders.o_orderkey, inner key:test.lineitem.l_orderkey, equal cond:eq(test.orders.o_orderkey, test.lineitem.l_orderkey) | 35.5 MB | N/A | | ├─HashJoin_70(Build) | 22875928.63 | 7274323 | root | | time:6.91s, loops:7108, build_hash_table:{total:891.1ms, fetch:158.5ms, build:732.6ms}, probe:{concurrency:5, total:2m7.2s, max:25.4s, probe:1m53.5s, fetch:13.7s} | inner join, equal:[eq(test.customer.c_custkey, test.orders.o_custkey)] | 141.3 MB | 0 Bytes | | │ ├─TableReader_76(Build) | 1502320.19 | 1501166 | root | | time:214.1ms, loops:1463, cop_task: {num: 150, max: 199.7ms, min: 1.34ms, avg: 51.1ms, p95: 173.9ms, max_proc_keys: 185477, p95_proc_keys: 183486, tot_proc: 7.46s, tot_wait: 24ms, rpc_num: 150, rpc_time: 7.66s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_75 | 9.18 MB | N/A | | │ │ └─Selection_75 | 1502320.19 | 1501166 | cop[tikv] | | tikv_task:{proc max:193ms, min:0s, avg: 48.8ms, p80:98ms, p95:168ms, iters:7937, tasks:150}, scan_detail: {total_process_keys: 7500000, total_process_keys_size: 1526085547, total_keys: 7500150, get_snapshot_time: 14.3ms, rocksdb: {key_skipped_count: 7500000, block: {cache_hit_count: 25190}}} | eq(test.customer.c_mktsegment, ""AUTOMOBILE"") | N/A | N/A | | │ │ └─TableFullScan_74 | 7500000.00 | 7500000 | cop[tikv] | table:customer | tikv_task:{proc max:169ms, min:0s, avg: 43.1ms, p80:87ms, p95:149ms, iters:7937, tasks:150} | keep order:false | N/A | N/A | | │ └─TableReader_73(Probe) | 36347384.33 | 36374625 | root | | time:2.31s, loops:35402, cop_task: {num: 1615, max: 272.3ms, min: 1.22ms, avg: 63.1ms, p95: 173.8ms, max_proc_keys: 104416, p95_proc_keys: 104416, tot_proc: 1m37.8s, tot_wait: 597ms, rpc_num: 1615, rpc_time: 1m41.9s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_72 | 24.6 MB | N/A | | │ └─Selection_72 | 36347384.33 | 36374625 | cop[tikv] | | tikv_task:{proc max:234ms, min:0s, avg: 57.5ms, p80:117ms, p95:160ms, iters:79749, tasks:1615}, scan_detail: {total_process_keys: 75000000, total_process_keys_size: 11391895327, total_keys: 75001615, get_snapshot_time: 84.9ms, rocksdb: {key_skipped_count: 75000000, block: {cache_hit_count: 172276, read_count: 20178, read_byte: 337.6 MB, read_time: 146.8ms}}} | lt(test.orders.o_orderdate, 1995-03-13 00:00:00.000000) | N/A | N/A | | │ └─TableFullScan_71 | 75000000.00 | 75000000 | cop[tikv] | table:orders | tikv_task:{proc max:223ms, min:0s, avg: 54ms, p80:109ms, p95:150ms, iters:79749, tasks:1615} | keep order:false | N/A | N/A | | └─IndexLookUp_27(Probe) | 4.06 | 1495049 | root | | time:1m45.4s, loops:1958, index_task: {total_time: 1m24.1s, fetch_handle: 1m24.1s, build: 4.73ms, wait: 13.1ms}, table_task: {total_time: 1m51.7s, num: 2583, 
concurrency: 5} | | 155.7 KB | N/A | | ├─IndexRangeScan_24(Build) | 7.50 | 29096047 | cop[tikv] | table:lineitem, index:PRIMARY(L_ORDERKEY, L_LINENUMBER) | time:1m21.7s, loops:30459, cop_task: {num: 12290, max: 156.9ms, min: 482.9µs, avg: 21ms, p95: 58.5ms, max_proc_keys: 26443, p95_proc_keys: 9184, tot_proc: 2m58s, tot_wait: 13.5s, rpc_num: 12290, rpc_time: 4m18.1s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15}, tikv_task:{proc max:135ms, min:0s, avg: 13.6ms, p80:22ms, p95:48ms, iters:74689, tasks:12290}, scan_detail: {total_process_keys: 29096047, total_process_keys_size: 1542090491, total_keys: 36380592, get_snapshot_time: 931ms, rocksdb: {key_skipped_count: 29096047, block: {cache_hit_count: 14364485, read_count: 226140, read_byte: 1023.0 MB, read_time: 1.03s}}} | range: decided by [eq(test.lineitem.l_orderkey, test.orders.o_orderkey)], keep order:false | N/A | N/A | | └─Selection_26(Probe) | 4.06 | 1495049 | cop[tikv] | | time:1m41.8s, loops:5703, cop_task: {num: 10316, max: 193.4ms, min: 379.4µs, avg: 20.8ms, p95: 63.1ms, max_proc_keys: 20984, p95_proc_keys: 9360, tot_proc: 2m59.9s, tot_wait: 13.6s, rpc_num: 10316, rpc_time: 3m33.9s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15}, tikv_task:{proc max:186ms, min:0s, avg: 16.5ms, p80:28ms, p95:56ms, iters:73336, tasks:10316}, scan_detail: {total_process_keys: 29096047, total_process_keys_size: 5779934528, total_keys: 34821877, get_snapshot_time: 1.43s, rocksdb: {key_skipped_count: 28246157, block: {cache_hit_count: 12914527, read_count: 59918, read_byte: 1.29 GB, read_time: 697.6ms}}} | gt(test.lineitem.l_shipdate, 1995-03-13 00:00:00.000000) | N/A | N/A | | └─TableRowIDScan_25 | 7.50 | 29096047 | cop[tikv] | table:lineitem | tikv_task:{proc max:185ms, min:0s, avg: 16.2ms, p80:28ms, p95:55ms, iters:73336, tasks:10316} | keep order:false | N/A | N/A | +--------------------------------------+-------------+----------+-----------+---------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+ Olap_Detail_Log_ID: 2792540 Plan_Digest: f8e52347ef089dc357e3ff1704d5415e Elapsed_Time (s): 39.0 
+-----------------------------------+--------------+-----------+-----------+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+ | ID | ESTROWS | ACTROWS | TASK | ACCESS OBJECT | EXECUTION INFO | OPERATOR INFO | MEMORY | DISK | +-----------------------------------+--------------+-----------+-----------+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+ | Projection_14 | 10.00 | 10 | root | | time:39s, loops:2, Concurrency:OFF | test.lineitem.l_orderkey, Column#35, test.orders.o_orderdate, test.orders.o_shippriority | 2.52 KB | N/A | | └─TopN_17 | 10.00 | 10 | root | | time:39s, loops:2 | Column#35:desc, test.orders.o_orderdate, offset:0, count:10 | 76.8 KB | N/A | | └─HashAgg_22 | 39759090.21 | 565763 | root | | time:39s, loops:555, partial_worker:{wall_time:38.571025254s, concurrency:5, task_num:1461, tot_wait:3m11.057657942s, tot_exec:1.598260816s, tot_time:3m12.821384773s, max:38.570971871s, p95:38.570971871s}, final_worker:{wall_time:39.010329063s, concurrency:5, task_num:25, tot_wait:3m12.77921226s, tot_exec:2.193400817s, tot_time:3m14.97263345s, max:39.010246209s, p95:39.010246209s} | group by:Column#48, Column#49, Column#50, funcs:sum(Column#44)->Column#35, funcs:firstrow(Column#45)->test.orders.o_orderdate, funcs:firstrow(Column#46)->test.orders.o_shippriority, funcs:firstrow(Column#47)->test.lineitem.l_orderkey | 378.4 MB | N/A | | └─Projection_82 | 92857210.61 | 1495049 | root | | time:38.5s, loops:1462, Concurrency:5 | mul(test.lineitem.l_extendedprice, minus(1, test.lineitem.l_discount))->Column#44, test.orders.o_orderdate, test.orders.o_shippriority, test.lineitem.l_orderkey, test.lineitem.l_orderkey, test.orders.o_orderdate, test.orders.o_shippriority | 1.09 MB | N/A | | └─HashJoin_39 | 92857210.61 | 1495049 | root | | time:38.5s, loops:1462, build_hash_table:{total:9.95s, fetch:5.45s, build:4.49s}, probe:{concurrency:5, total:3m12.6s, max:38.5s, probe:54.8s, fetch:2m17.8s} | inner join, equal:[eq(test.orders.o_orderkey, test.lineitem.l_orderkey)] | 567.5 MB | 0 Bytes | | ├─HashJoin_70(Build) | 22875928.63 | 7274323 | root | | time:7.69s, loops:7107, build_hash_table:{total:899.1ms, fetch:108ms, build:791.1ms}, probe:{concurrency:5, total:49.7s, max:9.94s, probe:27.5s, fetch:22.3s} | inner join, equal:[eq(test.customer.c_custkey, test.orders.o_custkey)] | 141.3 MB | 0 
Bytes | | │ ├─TableReader_76(Build) | 1502320.19 | 1501166 | root | | time:193ms, loops:1463, cop_task: {num: 150, max: 189ms, min: 805.7µs, avg: 51.9ms, p95: 173.7ms, max_proc_keys: 185477, p95_proc_keys: 183486, tot_proc: 7.57s, tot_wait: 26ms, rpc_num: 150, rpc_time: 7.78s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_75 | 10.5 MB | N/A | | │ │ └─Selection_75 | 1502320.19 | 1501166 | cop[tikv] | | tikv_task:{proc max:181ms, min:1ms, avg: 49.4ms, p80:117ms, p95:167ms, iters:7937, tasks:150}, scan_detail: {total_process_keys: 7500000, total_process_keys_size: 1526085547, total_keys: 7500150, get_snapshot_time: 15ms, rocksdb: {key_skipped_count: 7500000, block: {cache_hit_count: 25190}}} | eq(test.customer.c_mktsegment, ""AUTOMOBILE"") | N/A | N/A | | │ │ └─TableFullScan_74 | 7500000.00 | 7500000 | cop[tikv] | table:customer | tikv_task:{proc max:159ms, min:0s, avg: 43.4ms, p80:104ms, p95:148ms, iters:7937, tasks:150} | keep order:false | N/A | N/A | | │ └─TableReader_73(Probe) | 36347384.33 | 36374625 | root | | time:4s, loops:35403, cop_task: {num: 1615, max: 284.6ms, min: 1.41ms, avg: 69ms, p95: 165.6ms, max_proc_keys: 104416, p95_proc_keys: 104416, tot_proc: 1m47.7s, tot_wait: 168ms, rpc_num: 1615, rpc_time: 1m51.5s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_72 | 13.9 MB | N/A | | │ └─Selection_72 | 36347384.33 | 36374625 | cop[tikv] | | tikv_task:{proc max:216ms, min:0s, avg: 64.4ms, p80:132ms, p95:157ms, iters:79749, tasks:1615}, scan_detail: {total_process_keys: 75000000, total_process_keys_size: 11391895327, total_keys: 75001615, get_snapshot_time: 104.3ms, rocksdb: {key_skipped_count: 75000000, block: {cache_hit_count: 3121, read_count: 189333, read_byte: 3.09 GB, read_time: 1.28s}}} | lt(test.orders.o_orderdate, 1995-03-13 00:00:00.000000) | N/A | N/A | | │ └─TableFullScan_71 | 75000000.00 | 75000000 | cop[tikv] | table:orders | tikv_task:{proc max:203ms, min:0s, avg: 61.5ms, p80:126ms, p95:150ms, iters:79749, tasks:1615} | keep order:false | N/A | N/A | | └─TableReader_79(Probe) | 161388779.98 | 161995407 | root | | time:19.9s, loops:157662, cop_task: {num: 7962, max: 359.2ms, min: 855.3µs, avg: 50.4ms, p95: 136.1ms, max_proc_keys: 95200, p95_proc_keys: 94176, tot_proc: 5m57.4s, tot_wait: 833ms, rpc_num: 7962, rpc_time: 6m40.9s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_78 | 29.2 MB | N/A | | └─Selection_78 | 161388779.98 | 161995407 | cop[tikv] | | tikv_task:{proc max:173ms, min:0s, avg: 40.4ms, p80:87ms, p95:112ms, iters:324826, tasks:7962}, scan_detail: {total_process_keys: 300005811, total_process_keys_size: 59595430182, total_keys: 300013773, get_snapshot_time: 482.3ms, rocksdb: {key_skipped_count: 300005811, block: {cache_hit_count: 803520, read_count: 185706, read_byte: 2.93 GB, read_time: 1.27s}}} | gt(test.lineitem.l_shipdate, 1995-03-13 00:00:00.000000) | N/A | N/A | | └─TableFullScan_77 | 300005811.00 | 300005811 | cop[tikv] | table:lineitem | tikv_task:{proc max:163ms, min:0s, avg: 38.2ms, p80:83ms, p95:106ms, iters:324826, tasks:7962} | keep order:false | N/A | N/A | 
+-----------------------------------+--------------+-----------+-----------+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+ ``` ### 4. What is your TiDB version? (Required) nightly ",1.0,"The Plan of TPCH Q3 changes without any data update leading to 13s performance regression - ## Bug Report Please answer these questions before submitting your issue. Thanks! ### 1. Minimal reproduce step (Required) 1. deploy a tidb cluster: 1 tidb (16c) + 3 TiKV(16c) + 1 PD 2. restore tpch 50g data 3. run tpch for 30 mins ### 2. What did you expect to see? (Required) The plans of all queries would not change and the plan for Q3 should be stuck to d14988835227e68de9bb1194760cee8e. ### 3. What did you see instead (Required) The plan for Q3 would change in some of the daily runs. ![image](https://user-images.githubusercontent.com/84501897/187379690-b78ee887-3440-4eb8-bf5d-9da1c597437c.png) TPCH Q3 ``` q3 = ` /*PLACEHOLDER*/ select l_orderkey, sum(l_extendedprice * (1 - l_discount)) as revenue, o_orderdate, o_shippriority from customer, orders, lineitem where c_mktsegment = 'AUTOMOBILE' and c_custkey = o_custkey and l_orderkey = o_orderkey and o_orderdate < '1995-03-13' and l_shipdate > '1995-03-13' group by l_orderkey, o_orderdate, o_shippriority order by revenue desc, o_orderdate limit 10; ` ``` ``` Olap_Detail_Log_ID: 2792562 Plan_Digest: d14988835227e68de9bb1194760cee8e Elapsed_Time (s): 26.5 +--------------------------------------+-------------+----------+-----------+---------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+ | ID | ESTROWS | ACTROWS | TASK | ACCESS OBJECT | EXECUTION INFO | OPERATOR INFO | MEMORY | DISK | 
+--------------------------------------+-------------+----------+-----------+---------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+ | Projection_14 | 10.00 | 10 | root | | time:26.5s, loops:2, Concurrency:OFF | test.lineitem.l_orderkey, Column#35, test.orders.o_orderdate, test.orders.o_shippriority | 2.52 KB | N/A | | └─TopN_17 | 10.00 | 10 | root | | time:26.5s, loops:2 | Column#35:desc, test.orders.o_orderdate, offset:0, count:10 | 76.8 KB | N/A | | └─HashAgg_22 | 39991142.90 | 565763 | root | | time:26.5s, loops:555, partial_worker:{wall_time:26.060572975s, concurrency:5, task_num:1461, tot_wait:2m8.697732643s, tot_exec:1.436736417s, tot_time:2m10.299361175s, max:26.060536947s, p95:26.060536947s}, final_worker:{wall_time:26.522530518s, concurrency:5, task_num:25, tot_wait:2m10.294821085s, tot_exec:2.216177038s, tot_time:2m12.511018685s, max:26.522479673s, p95:26.522479673s} | group by:Column#48, Column#49, Column#50, funcs:sum(Column#44)->Column#35, funcs:firstrow(Column#45)->test.orders.o_orderdate, funcs:firstrow(Column#46)->test.orders.o_shippriority, funcs:firstrow(Column#47)->test.lineitem.l_orderkey | 378.4 MB | N/A | | └─Projection_82 | 92857210.61 | 1495049 | root | | time:26s, loops:1462, Concurrency:5 | mul(test.lineitem.l_extendedprice, minus(1, test.lineitem.l_discount))->Column#44, test.orders.o_orderdate, test.orders.o_shippriority, test.lineitem.l_orderkey, test.lineitem.l_orderkey, test.orders.o_orderdate, test.orders.o_shippriority | 1.09 MB | N/A | | └─IndexHashJoin_30 | 92857210.61 | 1495049 | root | | time:26s, loops:1462, inner:{total:2m3.7s, concurrency:5, task:292, construct:8.58s, fetch:1m51.9s, build:1.94s, join:3.19s} | inner join, inner:IndexLookUp_27, outer key:test.orders.o_orderkey, inner key:test.lineitem.l_orderkey, equal cond:eq(test.orders.o_orderkey, test.lineitem.l_orderkey) | 35.5 MB | N/A | | ├─HashJoin_70(Build) | 22875928.63 | 7274323 | root | | time:6.91s, loops:7108, build_hash_table:{total:891.1ms, fetch:158.5ms, build:732.6ms}, probe:{concurrency:5, total:2m7.2s, max:25.4s, probe:1m53.5s, fetch:13.7s} | inner join, equal:[eq(test.customer.c_custkey, test.orders.o_custkey)] | 141.3 MB | 0 Bytes | | │ ├─TableReader_76(Build) | 1502320.19 | 1501166 | root | | time:214.1ms, loops:1463, cop_task: {num: 150, max: 199.7ms, min: 1.34ms, avg: 51.1ms, p95: 173.9ms, max_proc_keys: 185477, p95_proc_keys: 183486, tot_proc: 7.46s, tot_wait: 24ms, rpc_num: 150, rpc_time: 7.66s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_75 | 9.18 MB | N/A | | │ │ └─Selection_75 | 1502320.19 | 1501166 | cop[tikv] | | tikv_task:{proc max:193ms, min:0s, avg: 48.8ms, 
p80:98ms, p95:168ms, iters:7937, tasks:150}, scan_detail: {total_process_keys: 7500000, total_process_keys_size: 1526085547, total_keys: 7500150, get_snapshot_time: 14.3ms, rocksdb: {key_skipped_count: 7500000, block: {cache_hit_count: 25190}}} | eq(test.customer.c_mktsegment, ""AUTOMOBILE"") | N/A | N/A | | │ │ └─TableFullScan_74 | 7500000.00 | 7500000 | cop[tikv] | table:customer | tikv_task:{proc max:169ms, min:0s, avg: 43.1ms, p80:87ms, p95:149ms, iters:7937, tasks:150} | keep order:false | N/A | N/A | | │ └─TableReader_73(Probe) | 36347384.33 | 36374625 | root | | time:2.31s, loops:35402, cop_task: {num: 1615, max: 272.3ms, min: 1.22ms, avg: 63.1ms, p95: 173.8ms, max_proc_keys: 104416, p95_proc_keys: 104416, tot_proc: 1m37.8s, tot_wait: 597ms, rpc_num: 1615, rpc_time: 1m41.9s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_72 | 24.6 MB | N/A | | │ └─Selection_72 | 36347384.33 | 36374625 | cop[tikv] | | tikv_task:{proc max:234ms, min:0s, avg: 57.5ms, p80:117ms, p95:160ms, iters:79749, tasks:1615}, scan_detail: {total_process_keys: 75000000, total_process_keys_size: 11391895327, total_keys: 75001615, get_snapshot_time: 84.9ms, rocksdb: {key_skipped_count: 75000000, block: {cache_hit_count: 172276, read_count: 20178, read_byte: 337.6 MB, read_time: 146.8ms}}} | lt(test.orders.o_orderdate, 1995-03-13 00:00:00.000000) | N/A | N/A | | │ └─TableFullScan_71 | 75000000.00 | 75000000 | cop[tikv] | table:orders | tikv_task:{proc max:223ms, min:0s, avg: 54ms, p80:109ms, p95:150ms, iters:79749, tasks:1615} | keep order:false | N/A | N/A | | └─IndexLookUp_27(Probe) | 4.06 | 1495049 | root | | time:1m45.4s, loops:1958, index_task: {total_time: 1m24.1s, fetch_handle: 1m24.1s, build: 4.73ms, wait: 13.1ms}, table_task: {total_time: 1m51.7s, num: 2583, concurrency: 5} | | 155.7 KB | N/A | | ├─IndexRangeScan_24(Build) | 7.50 | 29096047 | cop[tikv] | table:lineitem, index:PRIMARY(L_ORDERKEY, L_LINENUMBER) | time:1m21.7s, loops:30459, cop_task: {num: 12290, max: 156.9ms, min: 482.9µs, avg: 21ms, p95: 58.5ms, max_proc_keys: 26443, p95_proc_keys: 9184, tot_proc: 2m58s, tot_wait: 13.5s, rpc_num: 12290, rpc_time: 4m18.1s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15}, tikv_task:{proc max:135ms, min:0s, avg: 13.6ms, p80:22ms, p95:48ms, iters:74689, tasks:12290}, scan_detail: {total_process_keys: 29096047, total_process_keys_size: 1542090491, total_keys: 36380592, get_snapshot_time: 931ms, rocksdb: {key_skipped_count: 29096047, block: {cache_hit_count: 14364485, read_count: 226140, read_byte: 1023.0 MB, read_time: 1.03s}}} | range: decided by [eq(test.lineitem.l_orderkey, test.orders.o_orderkey)], keep order:false | N/A | N/A | | └─Selection_26(Probe) | 4.06 | 1495049 | cop[tikv] | | time:1m41.8s, loops:5703, cop_task: {num: 10316, max: 193.4ms, min: 379.4µs, avg: 20.8ms, p95: 63.1ms, max_proc_keys: 20984, p95_proc_keys: 9360, tot_proc: 2m59.9s, tot_wait: 13.6s, rpc_num: 10316, rpc_time: 3m33.9s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15}, tikv_task:{proc max:186ms, min:0s, avg: 16.5ms, p80:28ms, p95:56ms, iters:73336, tasks:10316}, scan_detail: {total_process_keys: 29096047, total_process_keys_size: 5779934528, total_keys: 34821877, get_snapshot_time: 1.43s, rocksdb: {key_skipped_count: 28246157, block: {cache_hit_count: 12914527, read_count: 59918, read_byte: 1.29 GB, read_time: 697.6ms}}} | gt(test.lineitem.l_shipdate, 1995-03-13 00:00:00.000000) | N/A | N/A | | └─TableRowIDScan_25 | 7.50 | 29096047 | cop[tikv] | table:lineitem | tikv_task:{proc max:185ms, min:0s, avg: 
16.2ms, p80:28ms, p95:55ms, iters:73336, tasks:10316} | keep order:false | N/A | N/A | +--------------------------------------+-------------+----------+-----------+---------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+ Olap_Detail_Log_ID: 2792540 Plan_Digest: f8e52347ef089dc357e3ff1704d5415e Elapsed_Time (s): 39.0 +-----------------------------------+--------------+-----------+-----------+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+ | ID | ESTROWS | ACTROWS | TASK | ACCESS OBJECT | EXECUTION INFO | OPERATOR INFO | MEMORY | DISK | +-----------------------------------+--------------+-----------+-----------+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+ | Projection_14 | 10.00 | 10 | root | | time:39s, loops:2, Concurrency:OFF | test.lineitem.l_orderkey, Column#35, test.orders.o_orderdate, test.orders.o_shippriority | 2.52 KB | N/A | | └─TopN_17 | 10.00 | 10 | root | | time:39s, loops:2 | Column#35:desc, test.orders.o_orderdate, offset:0, count:10 | 76.8 KB | N/A | | └─HashAgg_22 | 39759090.21 | 565763 | root | | time:39s, loops:555, partial_worker:{wall_time:38.571025254s, concurrency:5, task_num:1461, tot_wait:3m11.057657942s, tot_exec:1.598260816s, tot_time:3m12.821384773s, max:38.570971871s, p95:38.570971871s}, final_worker:{wall_time:39.010329063s, concurrency:5, task_num:25, tot_wait:3m12.77921226s, tot_exec:2.193400817s, tot_time:3m14.97263345s, max:39.010246209s, p95:39.010246209s} | 
group by:Column#48, Column#49, Column#50, funcs:sum(Column#44)->Column#35, funcs:firstrow(Column#45)->test.orders.o_orderdate, funcs:firstrow(Column#46)->test.orders.o_shippriority, funcs:firstrow(Column#47)->test.lineitem.l_orderkey | 378.4 MB | N/A | | └─Projection_82 | 92857210.61 | 1495049 | root | | time:38.5s, loops:1462, Concurrency:5 | mul(test.lineitem.l_extendedprice, minus(1, test.lineitem.l_discount))->Column#44, test.orders.o_orderdate, test.orders.o_shippriority, test.lineitem.l_orderkey, test.lineitem.l_orderkey, test.orders.o_orderdate, test.orders.o_shippriority | 1.09 MB | N/A | | └─HashJoin_39 | 92857210.61 | 1495049 | root | | time:38.5s, loops:1462, build_hash_table:{total:9.95s, fetch:5.45s, build:4.49s}, probe:{concurrency:5, total:3m12.6s, max:38.5s, probe:54.8s, fetch:2m17.8s} | inner join, equal:[eq(test.orders.o_orderkey, test.lineitem.l_orderkey)] | 567.5 MB | 0 Bytes | | ├─HashJoin_70(Build) | 22875928.63 | 7274323 | root | | time:7.69s, loops:7107, build_hash_table:{total:899.1ms, fetch:108ms, build:791.1ms}, probe:{concurrency:5, total:49.7s, max:9.94s, probe:27.5s, fetch:22.3s} | inner join, equal:[eq(test.customer.c_custkey, test.orders.o_custkey)] | 141.3 MB | 0 Bytes | | │ ├─TableReader_76(Build) | 1502320.19 | 1501166 | root | | time:193ms, loops:1463, cop_task: {num: 150, max: 189ms, min: 805.7µs, avg: 51.9ms, p95: 173.7ms, max_proc_keys: 185477, p95_proc_keys: 183486, tot_proc: 7.57s, tot_wait: 26ms, rpc_num: 150, rpc_time: 7.78s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_75 | 10.5 MB | N/A | | │ │ └─Selection_75 | 1502320.19 | 1501166 | cop[tikv] | | tikv_task:{proc max:181ms, min:1ms, avg: 49.4ms, p80:117ms, p95:167ms, iters:7937, tasks:150}, scan_detail: {total_process_keys: 7500000, total_process_keys_size: 1526085547, total_keys: 7500150, get_snapshot_time: 15ms, rocksdb: {key_skipped_count: 7500000, block: {cache_hit_count: 25190}}} | eq(test.customer.c_mktsegment, ""AUTOMOBILE"") | N/A | N/A | | │ │ └─TableFullScan_74 | 7500000.00 | 7500000 | cop[tikv] | table:customer | tikv_task:{proc max:159ms, min:0s, avg: 43.4ms, p80:104ms, p95:148ms, iters:7937, tasks:150} | keep order:false | N/A | N/A | | │ └─TableReader_73(Probe) | 36347384.33 | 36374625 | root | | time:4s, loops:35403, cop_task: {num: 1615, max: 284.6ms, min: 1.41ms, avg: 69ms, p95: 165.6ms, max_proc_keys: 104416, p95_proc_keys: 104416, tot_proc: 1m47.7s, tot_wait: 168ms, rpc_num: 1615, rpc_time: 1m51.5s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_72 | 13.9 MB | N/A | | │ └─Selection_72 | 36347384.33 | 36374625 | cop[tikv] | | tikv_task:{proc max:216ms, min:0s, avg: 64.4ms, p80:132ms, p95:157ms, iters:79749, tasks:1615}, scan_detail: {total_process_keys: 75000000, total_process_keys_size: 11391895327, total_keys: 75001615, get_snapshot_time: 104.3ms, rocksdb: {key_skipped_count: 75000000, block: {cache_hit_count: 3121, read_count: 189333, read_byte: 3.09 GB, read_time: 1.28s}}} | lt(test.orders.o_orderdate, 1995-03-13 00:00:00.000000) | N/A | N/A | | │ └─TableFullScan_71 | 75000000.00 | 75000000 | cop[tikv] | table:orders | tikv_task:{proc max:203ms, min:0s, avg: 61.5ms, p80:126ms, p95:150ms, iters:79749, tasks:1615} | keep order:false | N/A | N/A | | └─TableReader_79(Probe) | 161388779.98 | 161995407 | root | | time:19.9s, loops:157662, cop_task: {num: 7962, max: 359.2ms, min: 855.3µs, avg: 50.4ms, p95: 136.1ms, max_proc_keys: 95200, p95_proc_keys: 94176, tot_proc: 5m57.4s, tot_wait: 833ms, rpc_num: 7962, rpc_time: 6m40.9s, 
copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_78 | 29.2 MB | N/A | | └─Selection_78 | 161388779.98 | 161995407 | cop[tikv] | | tikv_task:{proc max:173ms, min:0s, avg: 40.4ms, p80:87ms, p95:112ms, iters:324826, tasks:7962}, scan_detail: {total_process_keys: 300005811, total_process_keys_size: 59595430182, total_keys: 300013773, get_snapshot_time: 482.3ms, rocksdb: {key_skipped_count: 300005811, block: {cache_hit_count: 803520, read_count: 185706, read_byte: 2.93 GB, read_time: 1.27s}}} | gt(test.lineitem.l_shipdate, 1995-03-13 00:00:00.000000) | N/A | N/A | | └─TableFullScan_77 | 300005811.00 | 300005811 | cop[tikv] | table:lineitem | tikv_task:{proc max:163ms, min:0s, avg: 38.2ms, p80:83ms, p95:106ms, iters:324826, tasks:7962} | keep order:false | N/A | N/A | +-----------------------------------+--------------+-----------+-----------+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+ ``` ### 4. What is your TiDB version? (Required) nightly ",1,the plan of tpch changes without any data update leading to performance regression bug report please answer these questions before submitting your issue thanks minimal reproduce step required deploy a tidb cluster tidb tikv pd restore tpch data run tpch for mins what did you expect to see required the plans of all queries would not change and the plan for should be stuck to what did you see instead required the plan for would change in some of the daily runs tpch placeholder select l orderkey sum l extendedprice l discount as revenue o orderdate o shippriority from customer orders lineitem where c mktsegment automobile and c custkey o custkey and l orderkey o orderkey and o orderdate and l shipdate group by l orderkey o orderdate o shippriority order by revenue desc o orderdate limit olap detail log id plan digest elapsed time s id estrows actrows task access object execution info operator info memory disk projection root time loops concurrency off test lineitem l orderkey column test orders o orderdate test orders o shippriority kb n a └─topn root time loops column desc test orders o orderdate offset count kb n a └─hashagg root time loops partial worker wall time concurrency task num tot wait tot exec tot time max final worker wall time concurrency task num tot wait tot exec tot time max group by column column column funcs sum column column funcs firstrow column test orders o orderdate funcs firstrow column test orders o shippriority funcs firstrow column test lineitem l orderkey mb n a └─projection root time loops concurrency mul test lineitem l extendedprice minus test lineitem l discount column test orders o orderdate test orders o shippriority test lineitem l orderkey test lineitem l orderkey test orders o orderdate test orders o shippriority mb n a └─indexhashjoin root time loops inner total concurrency task construct fetch build join inner join inner indexlookup outer key test orders o orderkey inner key test 
lineitem l orderkey equal cond eq test orders o orderkey test lineitem l orderkey mb n a ├─hashjoin build root time loops build hash table total fetch build probe concurrency total max probe fetch inner join equal mb bytes │ ├─tablereader build root time loops cop task num max min avg max proc keys proc keys tot proc tot wait rpc num rpc time copr cache hit ratio distsql concurrency data selection mb n a │ │ └─selection cop tikv task proc max min avg iters tasks scan detail total process keys total process keys size total keys get snapshot time rocksdb key skipped count block cache hit count eq test customer c mktsegment automobile n a n a │ │ └─tablefullscan cop table customer tikv task proc max min avg iters tasks keep order false n a n a │ └─tablereader probe root time loops cop task num max min avg max proc keys proc keys tot proc tot wait rpc num rpc time copr cache hit ratio distsql concurrency data selection mb n a │ └─selection cop tikv task proc max min avg iters tasks scan detail total process keys total process keys size total keys get snapshot time rocksdb key skipped count block cache hit count read count read byte mb read time lt test orders o orderdate n a n a │ └─tablefullscan cop table orders tikv task proc max min avg iters tasks keep order false n a n a └─indexlookup probe root time loops index task total time fetch handle build wait table task total time num concurrency kb n a ├─indexrangescan build cop table lineitem index primary l orderkey l linenumber time loops cop task num max min avg max proc keys proc keys tot proc tot wait rpc num rpc time copr cache hit ratio distsql concurrency tikv task proc max min avg iters tasks scan detail total process keys total process keys size total keys get snapshot time rocksdb key skipped count block cache hit count read count read byte mb read time range decided by keep order false n a n a └─selection probe cop time loops cop task num max min avg max proc keys proc keys tot proc tot wait rpc num rpc time copr cache hit ratio distsql concurrency tikv task proc max min avg iters tasks scan detail total process keys total process keys size total keys get snapshot time rocksdb key skipped count block cache hit count read count read byte gb read time gt test lineitem l shipdate n a n a └─tablerowidscan cop table lineitem tikv task proc max min avg iters tasks keep order false n a n a olap detail log id plan digest elapsed time s id estrows actrows task access object execution info operator info memory disk projection root time loops concurrency off test lineitem l orderkey column test orders o orderdate test orders o shippriority kb n a └─topn root time loops column desc test orders o orderdate offset count kb n a └─hashagg root time loops partial worker wall time concurrency task num tot wait tot exec tot time max final worker wall time concurrency task num tot wait tot exec tot time max group by column column column funcs sum column column funcs firstrow column test orders o orderdate funcs firstrow column test orders o shippriority funcs firstrow column test lineitem l orderkey mb n a └─projection root time loops concurrency mul test lineitem l extendedprice minus test lineitem l discount column test orders o orderdate test orders o shippriority test lineitem l orderkey test lineitem l orderkey test orders o orderdate test orders o shippriority mb n a └─hashjoin root time loops build hash table total fetch build probe concurrency total max probe fetch inner join equal mb bytes ├─hashjoin build root time loops build hash table 
total fetch build probe concurrency total max probe fetch inner join equal mb bytes │ ├─tablereader build root time loops cop task num max min avg max proc keys proc keys tot proc tot wait rpc num rpc time copr cache hit ratio distsql concurrency data selection mb n a │ │ └─selection cop tikv task proc max min avg iters tasks scan detail total process keys total process keys size total keys get snapshot time rocksdb key skipped count block cache hit count eq test customer c mktsegment automobile n a n a │ │ └─tablefullscan cop table customer tikv task proc max min avg iters tasks keep order false n a n a │ └─tablereader probe root time loops cop task num max min avg max proc keys proc keys tot proc tot wait rpc num rpc time copr cache hit ratio distsql concurrency data selection mb n a │ └─selection cop tikv task proc max min avg iters tasks scan detail total process keys total process keys size total keys get snapshot time rocksdb key skipped count block cache hit count read count read byte gb read time lt test orders o orderdate n a n a │ └─tablefullscan cop table orders tikv task proc max min avg iters tasks keep order false n a n a └─tablereader probe root time loops cop task num max min avg max proc keys proc keys tot proc tot wait rpc num rpc time copr cache hit ratio distsql concurrency data selection mb n a └─selection cop tikv task proc max min avg iters tasks scan detail total process keys total process keys size total keys get snapshot time rocksdb key skipped count block cache hit count read count read byte gb read time gt test lineitem l shipdate n a n a └─tablefullscan cop table lineitem tikv task proc max min avg iters tasks keep order false n a n a what is your tidb version required nightly ,1 23756,3851867537.0,IssuesEvent,2016-04-06 05:28:51,GPF/imame4all,https://api.github.com/repos/GPF/imame4all,closed,Rom List doesn't work with Touch Overlay disabled using Custom Rom Path,auto-migrated Priority-Medium Type-Defect,"``` What steps will reproduce the problem? 1. Change to a custom rom location 2. Turn OFF Landscape and Portrait overlays 3. Exit MAME. 4. Reload MAME. 5. Touch screen. No rom list appears, only menu. NOTE: If you go into OPTIONS and turn the overlays BACK ON ... the rom list will appear -- so it is definitely a glitch. What is the expected output? What do you see instead? ROM LIST will appear without enabling overlays. What version of the product are you using? On what operating system? This occurs in 1.4.1 and 1.5.x .. any build that accepts custom ROM paths. This is on an Asus Transformer 3.2 latest build. Please provide any additional information below. Fully reproducable. Will troubleshoot, test builds, shoot video - **whatever** it takes to get this fixed. ``` Original issue reported on code.google.com by `dark...@gmail.com` on 3 Jan 2012 at 4:12",1.0,"Rom List doesn't work with Touch Overlay disabled using Custom Rom Path - ``` What steps will reproduce the problem? 1. Change to a custom rom location 2. Turn OFF Landscape and Portrait overlays 3. Exit MAME. 4. Reload MAME. 5. Touch screen. No rom list appears, only menu. NOTE: If you go into OPTIONS and turn the overlays BACK ON ... the rom list will appear -- so it is definitely a glitch. What is the expected output? What do you see instead? ROM LIST will appear without enabling overlays. What version of the product are you using? On what operating system? This occurs in 1.4.1 and 1.5.x .. any build that accepts custom ROM paths. This is on an Asus Transformer 3.2 latest build. 
Please provide any additional information below. Fully reproducable. Will troubleshoot, test builds, shoot video - **whatever** it takes to get this fixed. ``` Original issue reported on code.google.com by `dark...@gmail.com` on 3 Jan 2012 at 4:12",0,rom list doesn t work with touch overlay disabled using custom rom path what steps will reproduce the problem change to a custom rom location turn off landscape and portrait overlays exit mame reload mame touch screen no rom list appears only menu note if you go into options and turn the overlays back on the rom list will appear so it is definitely a glitch what is the expected output what do you see instead rom list will appear without enabling overlays what version of the product are you using on what operating system this occurs in and x any build that accepts custom rom paths this is on an asus transformer latest build please provide any additional information below fully reproducable will troubleshoot test builds shoot video whatever it takes to get this fixed original issue reported on code google com by dark gmail com on jan at ,0 400,6190432579.0,IssuesEvent,2017-07-04 15:22:20,eclipse/smarthome,https://api.github.com/repos/eclipse/smarthome,closed,Correction for ScriptedRuleProvider.xml,Automation,"In the file `ScriptedRuleProvider.xml` the following definition `` should be replaced with `` (... false `.internal.shared` in the name definition)",1.0,"Correction for ScriptedRuleProvider.xml - In the file `ScriptedRuleProvider.xml` the following definition `` should be replaced with `` (... false `.internal.shared` in the name definition)",1,correction for scriptedruleprovider xml in the file scriptedruleprovider xml the following definition should be replaced with false internal shared in the name definition ,1 7633,25312611366.0,IssuesEvent,2022-11-17 18:43:11,tigerbeetledb/tigerbeetle,https://api.github.com/repos/tigerbeetledb/tigerbeetle,opened,Run sample programs in docker-compose in Github CI,automation,"I got this started [here](https://github.com/tigerbeetledb/tigerbeetle-go/pull/17/files). Now that we've got a monorepo this can even be simplified a bit more. Instead of running against the latest built Docker image for the TigerBeetle server, it should build against the current commit so we are truly running integration tests. And it should run for every sample program so that we can ensure our sample programs are correct. Incidentally this also provides integration tests for the entire database.",1.0,"Run sample programs in docker-compose in Github CI - I got this started [here](https://github.com/tigerbeetledb/tigerbeetle-go/pull/17/files). Now that we've got a monorepo this can even be simplified a bit more. Instead of running against the latest built Docker image for the TigerBeetle server, it should build against the current commit so we are truly running integration tests. And it should run for every sample program so that we can ensure our sample programs are correct. 
Incidentally this also provides integration tests for the entire database.",1.0,"Run sample programs in docker-compose in Github CI - I got this started [here](https://github.com/tigerbeetledb/tigerbeetle-go/pull/17/files). Now that we've got a monorepo this can even be simplified a bit more. Instead of running against the latest built Docker image for the TigerBeetle server, it should build against the current commit so we are truly running integration tests. And it should run for every sample program so that we can ensure our sample programs are correct. Incidentally this also provides integration tests for the entire database.",1,run sample programs in docker compose in github ci i got this started now that we ve got a monorepo this can even be simplified a bit more instead of running against the latest built docker image for the tigerbeetle server it should build against the current commit so we are truly running integration tests and it should run for every sample program so that we can ensure our sample programs are correct incidentally this also provides integration tests for the entire database ,1 705,3041314556.0,IssuesEvent,2015-08-07 20:32:10,brunobuzzi/OrbeonPersistenceLayer,https://api.github.com/repos/brunobuzzi/OrbeonPersistenceLayer,opened,REST: Orbeon Form Runner Summary,GemStone Service Orbeon Service Call,Implement service for form runner summary (user created applications and forms),2.0,REST: Orbeon Form Runner Summary - Implement service for form runner summary (user created applications and forms),0,rest orbeon form runner summary implement service for form runner summary user created applications and forms ,0 10148,31810023253.0,IssuesEvent,2023-09-13 16:10:49,red-hat-storage/ocs-ci,https://api.github.com/repos/red-hat-storage/ocs-ci,closed,Adjust ui tests to Object Storage ui elements,ui_automation Squad/Black,"![Screenshot 2023-07-17 at 15 24 45](https://github.com/red-hat-storage/ocs-ci/assets/61454420/a38be2aa-b7ea-43a2-b0f5-fbe097b72575) 4.14 ODF deployment has changes so we need to adjust PageNavigator and MCG, Object related tests",1.0,"Adjust ui tests to Object Storage ui elements - ![Screenshot 2023-07-17 at 15 24 45](https://github.com/red-hat-storage/ocs-ci/assets/61454420/a38be2aa-b7ea-43a2-b0f5-fbe097b72575) 4.14 ODF deployment has changes so we need to adjust PageNavigator and MCG, Object related tests",1,adjust ui tests to object storage ui elements odf deployment has changes so we need to adjust pagenavigator and mcg object related tests,1 394551,11645091782.0,IssuesEvent,2020-02-29 22:44:28,grpc/grpc,https://api.github.com/repos/grpc/grpc,closed,node test failures,disposition/stale kind/bug lang/node priority/P2,"https://source.cloud.google.com/results/invocations/0231cf31-76b9-40fe-91f7-5c474356d328/targets Looks like there are a few failures here, but I don't understand the output well enough to know how to split them up.",1.0,"node test failures - https://source.cloud.google.com/results/invocations/0231cf31-76b9-40fe-91f7-5c474356d328/targets Looks like there are a few failures here, but I don't understand the output well enough to know how to split them up.",0,node test failures looks like there are a few failures here but i don t understand the output well enough to know how to split them up ,0 324,5409902957.0,IssuesEvent,2017-03-01 06:33:37,eclipse/smarthome,https://api.github.com/repos/eclipse/smarthome,closed,Automation: Context map should use Object values,Automation,"With respect to the (offtopic) discussion at #3007 I would like to create a new issue. 
The values of the elements in the context map should be of type Object (so, a known type) and not `?`.",1,automation context map should use object values with respect to the offtopic discussion at i would like to create a new issue the values of the elements in the context map should be of type object so a known type and not ,1 634410,20360933428.0,IssuesEvent,2022-02-20 17:19:52,ReliaQualAssociates/ramstk,https://api.github.com/repos/ReliaQualAssociates/ramstk,closed,Hardware module allows adding a sibling to the top-level item,type: fix priority: high status: inprogress bump: patch dobranch,"**Describe the bug** The hardware module allows the creation of a sibling item to the top-level (system) item. There should only be one top-level item for each revision in a program database. ***Expected Behavior*** As a RAMSTK analyst, I want only one system level item per revision so there is only one hardware BoM per revision. ***Actual Behavior*** Pressing the 'Add Sibling' button with the top-level item selected results in the creation of a second top-level item. **Reproduce** 1. Launch RAMSTK 2. Open a Program database 3. Select the Hardware module 4. Select the top-level item in the Module Book 5. Press the 'Add Sibling' button or select 'Add Sibling' from the pop-up menu > Steps to reproduce the behavior. **Logs** None **Additional Comments** The _do_request_insert_sibling() method in the HardwareModuleView() class should check the parent ID of the selected item, raise an information dialog telling the user a sibling can't be added to the top-level item, and then exit without sending the request insert message. dobranch priority: high type: fix",1.0,"Hardware module allows adding a sibling to the top-level item - **Describe the bug** The hardware module allows the creation of a sibling item to the top-level (system) item. There should only be one top-level item for each revision in a program database. ***Expected Behavior*** As a RAMSTK analyst, I want only one system level item per revision so there is only one hardware BoM per revision. ***Actual Behavior*** Pressing the 'Add Sibling' button with the top-level item selected results in the creation of a second top-level item. **Reproduce** 1. Launch RAMSTK 2. Open a Program database 3. Select the Hardware module 4. Select the top-level item in the Module Book 5. Press the 'Add Sibling' button or select 'Add Sibling' from the pop-up menu > Steps to reproduce the behavior. **Logs** None **Additional Comments** The _do_request_insert_sibling() method in the HardwareModuleView() class should check the parent ID of the selected item, raise an information dialog telling the user a sibling can't be added to the top-level item, and then exit without sending the request insert message. 
dobranch priority: high type: fix",0,hardware module allows adding a sibling to the top level item describe the bug the hardware module allows the creation of a sibling item to the top level system item there should only be one top level item for each revision in a program database expected behavior as a ramstk analyst i want only one system level item per revision so there is only one hardware bom per revision actual behavior pressing the add sibling button with the top level item selected results in the creation of a second top level item reproduce launch ramstk open a program database select the hardware module select the top level item in the module book press the add sibling button or select add sibling from the pop up menu steps to reproduce the behavior logs none additional comments the do request insert sibling method in the hardwaremoduleview class should check the parent id of the selected item raise an information dialog telling the user a sibling can t be added to the top level item and then exit without sending the request insert message dobranch priority high type fix,0 392176,11584550198.0,IssuesEvent,2020-02-22 18:00:01,ayumi-cloud/oc-security-module,https://api.github.com/repos/ayumi-cloud/oc-security-module,opened,Add multi-level tabs - part of the new ui in October CMS II,Firewall New UI Priority: Medium enhancement in-progress,"### Enhancement idea - [ ] e.g. Add this to firewall or virus definitions ",1.0,"Add multi-level tabs - part of the new ui in October CMS II - ### Enhancement idea - [ ] e.g. Add this to firewall or virus definitions ",0,add multi level tabs part of the new ui in october cms ii enhancement idea e g add this to firewall or virus definitions ,0 4360,16165164054.0,IssuesEvent,2021-05-01 10:35:08,davepl/Primes,https://api.github.com/repos/davepl/Primes,closed,Setup CI for this project,automation,"Hello, It would be nice to set up some CI to run these implementations in a controlled environment periodically. We're getting benchmarks from different machines, and it is hard to keep track of all the numbers for all the different implementations. We need one source of truth. ",1.0,"Setup CI for this project - Hello, It would be nice to set up some CI to run these implementations in a controlled environment periodically. We're getting benchmarks from different machines, and it is hard to keep track of all the numbers for all the different implementations. We need one source of truth. ",1,setup ci for this project hello it would be nice to set up some ci to run these implementations in a controlled environment periodically we re getting benchmarks from different machines and it is hard to keep track of all the numbers for all the different implementations we need one source of truth ,1 2947,12856722916.0,IssuesEvent,2020-07-09 08:07:45,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,opened,Error while loading shared libraries: libXss.so.1,automation bug team:automation,"Today we start to show the following error on RUM test ``` AssertionError: Expected done, got Failed to launch chrome! 
/rumjs-integration-test/node_modules/puppeteer/.local-chromium/linux-686378/chrome-linux/chrome: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory TROUBLESHOOTING: https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md ```",2.0,"Error while loading shared libraries: libXss.so.1 - Today we start to show the following error on RUM test ``` AssertionError: Expected done, got Failed to launch chrome! /rumjs-integration-test/node_modules/puppeteer/.local-chromium/linux-686378/chrome-linux/chrome: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory TROUBLESHOOTING: https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md ```",1,error while loading shared libraries libxss so today we start to show the following error on rum test assertionerror expected done got failed to launch chrome rumjs integration test node modules puppeteer local chromium linux chrome linux chrome error while loading shared libraries libxss so cannot open shared object file no such file or directory troubleshooting ,1 162481,12677681923.0,IssuesEvent,2020-06-19 08:15:03,SymbiFlow/sv-tests,https://api.github.com/repos/SymbiFlow/sv-tests,closed,Add advanced simulation tests,enhancement tests,"Currently we have only a basic set of simulation tests, we should add more tests for the following chapters: - [x] 16. Assertions PR #821 - [x] 18. Constrained random value generation PR #820 Besides those chapters we should also cover some advanced simulation aspects like: - [x] UVM Scoreboards PR #836 - [x] Bus functional models #836 While adding those tests we should also try to incorporate some tests utilizing UVM as it is used in real life simulation flows but its current test coverage needs improvement (#560). Please update this issue with links to PRs/Issues to track status.",1.0,"Add advanced simulation tests - Currently we have only a basic set of simulation tests, we should add more tests for the following chapters: - [x] 16. Assertions PR #821 - [x] 18. Constrained random value generation PR #820 Besides those chapters we should also cover some advanced simulation aspects like: - [x] UVM Scoreboards PR #836 - [x] Bus functional models #836 While adding those tests we should also try to incorporate some tests utilizing UVM as it is used in real life simulation flows but its current test coverage needs improvement (#560). Please update this issue with links to PRs/Issues to track status.",0,add advanced simulation tests currently we have only a basic set of simulation tests we should add more tests for the following chapters assertions pr constrained random value generation pr besides those chapters we should also cover some advanced simulation aspects like uvm scoreboards pr bus functional models while adding those tests we should also try to incorporate some tests utilizing uvm as it is used in real life simulation flows but its current test coverage needs improvement please update this issue with links to prs issues to track status ,0 953,8824294194.0,IssuesEvent,2019-01-02 16:31:43,arcus-azure/arcus.eventgrid.sidecar,https://api.github.com/repos/arcus-azure/arcus.eventgrid.sidecar,closed,Define branch policies,automation management,"Define branch policies on the `master` branch. It should: - [x] Build every PR with our CI - [x] Be approved by 1 person""",1.0,"Define branch policies - Define branch policies on the `master` branch. 
It should: - [x] Build every PR with our CI - [x] Be approved by 1 person""",1,define branch policies define branch policies on the master branch it should build every pr with our ci be approved by person ,1 5495,19808262754.0,IssuesEvent,2022-01-19 09:27:13,jibebe-jkuat/internship2022,https://api.github.com/repos/jibebe-jkuat/internship2022,reopened,A drawing of the chassis for the robot car ,Automation,AutoCAD 2D drawing will be drafted for 2D printing in the prototyping Lab,1.0,A drawing of the chassis for the robot car - AutoCAD 2D drawing will be drafted for 2D printing in the prototyping Lab,1,a drawing of the chassis for the robot car autocad drawing will be drafted for printing in the prototyping lab,1 740794,25767820986.0,IssuesEvent,2022-12-09 04:24:02,WeMakeDevs/classroom-monitor-bot,https://api.github.com/repos/WeMakeDevs/classroom-monitor-bot,closed,[BUG] Website not deploying,🟧 priority: high 🔒 staff only 💣type: bug,"### Describe the bug Looks like the website's not deploying anymore. CC: @kaiwalyakoparkar, @siddhant-khisty. ### To Reproduce _No response_ ### Expected Behavior _No response_ ### Screenshot/ Video _No response_ ### Desktop (please complete the following information) _No response_ ### Additional context _No response_",1.0,"[BUG] Website not deploying - ### Describe the bug Looks like the website's not deploying anymore. CC: @kaiwalyakoparkar, @siddhant-khisty. ### To Reproduce _No response_ ### Expected Behavior _No response_ ### Screenshot/ Video _No response_ ### Desktop (please complete the following information) _No response_ ### Additional context _No response_",0, website not deploying describe the bug looks like the website s not deploying anymore cc kaiwalyakoparkar siddhant khisty to reproduce no response expected behavior no response screenshot video no response desktop please complete the following information no response additional context no response ,0 324377,23996038571.0,IssuesEvent,2022-09-14 07:38:12,Yun-SeYeong/Bitcoin-Trading-System,https://api.github.com/repos/Yun-SeYeong/Bitcoin-Trading-System,closed,Change the 200-item limit in Sync API count,documentation enhancement,Due to the nature of the current Upbit API a maximum of 200 items can be fetched per request. Change the sync to request in batches of 200 over multiple calls and then combine the results.,1.0,Change the 200-item limit in Sync API count - Due to the nature of the current Upbit API a maximum of 200 items can be fetched per request. Change the sync to request in batches of 200 over multiple calls and then combine the results.,0,change the item limit in sync api count due to the nature of the current upbit api a maximum of items can be fetched per request change the sync to request in batches of over multiple calls and then combine the results,0 801123,28454023228.0,IssuesEvent,2023-04-17 05:07:09,magento/magento2,https://api.github.com/repos/magento/magento2,reopened,Read-only app/etc/,Triage: Dev.Experience Priority: P3 Progress: done Issue: ready for confirmation Issue: needs update,"Is there a reason for the `app/etc/` path to be writable during `php bin/magento setup:upgrade --keep-generated`? 
Looking into `Magento\Framework\Setup\FilePermissions`, the [getMissingWritableDirectoriesForDbUpgrade](https://github.com/magento/magento2/blob/2.4-develop/lib/internal/Magento/Framework/Setup/FilePermissions.php#L282-L300) asks for `app/etc/` to be writable, but it's not clear what is being written to that folder. My goal is to deploy magento in a read-only environment (except for the `var/` folder), for an already installed Magento, so theoretically none of those files should be changed compared to what the CI builds. ",0,read only app etc is there a reason for the app etc path to be writable during php bin magento setup upgrade keep generated looking into magento framework setup filepermissions the asks for app etc to be writable but it s not clear what is being written to that folder my goal is to deploy magento in a read only environment except for the var folder for an already installed magento so theoretically none of those files should be changed compared to what the ci builds ,0 2295,11722915642.0,IssuesEvent,2020-03-10 08:00:16,wazuh/wazuh-qa,https://api.github.com/repos/wazuh/wazuh-qa,closed,Syscheck automated tests: Synchronization disabled,automation component/fim,"## Description Add test that checks that synchronization is disabled when set to disabled in the configuration. This message must not appear in the log: ``` 2020/03/06 11:51:58 ossec-syscheckd[23205] fim_sync.c:56 at fim_run_integrity(): DEBUG: Initializing FIM Integrity Synchronization check. Sync interval is 300 seconds. ``` This is the required configuration for the synchronization: ```xml no 5m 1h 10 ``` ",1.0,"Syscheck automated tests: Synchronization disabled - ## Description Add test that checks that synchronization is disabled when set to disabled in the configuration. This message must not appear in the log: ``` 2020/03/06 11:51:58 ossec-syscheckd[23205] fim_sync.c:56 at fim_run_integrity(): DEBUG: Initializing FIM Integrity Synchronization check. Sync interval is 300 seconds. ``` This is the required configuration for the synchronization: ```xml no 5m 1h 10 ``` ",1,syscheck automated tests synchronization disabled description add test that checks that synchronization is disabled when set to disabled in the configuration this message must not appear in the log ossec syscheckd fim sync c at fim run integrity debug initializing fim integrity synchronization check sync interval is seconds this is the required configuration for the synchronization xml no ,1 7552,25110239078.0,IssuesEvent,2022-11-08 19:53:06,o3de/o3de,https://api.github.com/repos/o3de/o3de,closed,Linux/Mac/iOS asset_profile from clean failing,kind/bug sig/platform sig/graphics-audio triage/accepted priority/major kind/automation,"**Describe the bug** asset_profile job for Linux, Mac, iOS from clean has been failing for over 7 days **Failed Jenkins Job Information:** `asset_profile` from the nightly-clean jobs, jobs 55-61 ``` [2021-08-06T06:29:00.909Z] AssetProcessor: Processed ""ResourcePools/DefaultConstantBufferPool.resourcepool"" (""server"")... [2021-08-06T06:29:00.909Z] AssetProcessor: Processed ""LightingPresets/LowContrast/royal_esplanade_2k_iblskyboxcm.exr"" (""pc"")... [2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/AuxGeom/AuxGeomObjectLit.shadervariantlist, (server)... [2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shader/ImagePreview.shadervariantlist, (server)... [2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/SimpleTextured.shadervariantlist, (server)... 
[2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/Shadow/DepthExponentiation.shadervariantlist, (server)... [2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/SimpleTextured.shadervariantlist, (server)... [2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shader/ImagePreview.shadervariantlist, (server)... [2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/AuxGeom/AuxGeomObjectLit.shadervariantlist, (server)... [2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/Shadow/DepthExponentiation.shadervariantlist, (server)... ```",1.0,"Linux/Mac/iOS asset_profile from clean failing - **Describe the bug** asset_profile job for Linux, Mac, iOS from clean has been failing for over 7 days **Failed Jenkins Job Information:** `asset_profile` from the nightly-clean jobs, jobs 55-61 ``` [2021-08-06T06:29:00.909Z] AssetProcessor: Processed ""ResourcePools/DefaultConstantBufferPool.resourcepool"" (""server"")... [2021-08-06T06:29:00.909Z] AssetProcessor: Processed ""LightingPresets/LowContrast/royal_esplanade_2k_iblskyboxcm.exr"" (""pc"")... [2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/AuxGeom/AuxGeomObjectLit.shadervariantlist, (server)... [2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shader/ImagePreview.shadervariantlist, (server)... [2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/SimpleTextured.shadervariantlist, (server)... [2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/Shadow/DepthExponentiation.shadervariantlist, (server)... [2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/SimpleTextured.shadervariantlist, (server)... [2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shader/ImagePreview.shadervariantlist, (server)... [2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/AuxGeom/AuxGeomObjectLit.shadervariantlist, (server)... [2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/Shadow/DepthExponentiation.shadervariantlist, (server)... ```",1,linux mac ios asset profile from clean failing describe the bug asset profile job for linux mac ios from clean has been failing for over days failed jenkins job information asset profile from the nightly clean jobs jobs assetprocessor processed resourcepools defaultconstantbufferpool resourcepool server assetprocessor processed lightingpresets lowcontrast royal esplanade iblskyboxcm exr pc assetprocessor failed shaders auxgeom auxgeomobjectlit shadervariantlist server assetprocessor failed shader imagepreview shadervariantlist server assetprocessor failed shaders simpletextured shadervariantlist server assetprocessor failed shaders shadow depthexponentiation shadervariantlist server assetprocessor failed shaders simpletextured shadervariantlist server assetprocessor failed shader imagepreview shadervariantlist server assetprocessor failed shaders auxgeom auxgeomobjectlit shadervariantlist server assetprocessor failed shaders shadow depthexponentiation shadervariantlist server ,1 9555,6384314942.0,IssuesEvent,2017-08-03 04:19:51,upspin/upspin,https://api.github.com/repos/upspin/upspin,closed,"all: snapshots, docs, and easy creation",docs usability,"The Writers file must include the snapshot user explicitly, or else (better) implicitly, because otherwise the user cannot create the directory to store the snapshots. 
Whatever the result, the process must be implemented, tested, documented, and added to the signup and/or setup docs.",1.0,"all: snapshots, docs, and easy creation - The Writers file must include the snapshot user explicitly, or else (better) implicitly, because otherwise the user cannot create the directory to store the snapshots. Whatever the result, the process must be implemented, tested, documented, and added to the signup and/or setup docs.",0,all snapshots docs and easy creation the writers file must include the snapshot user explicitly or else better implicitly because otherwise the user cannot create the directory to store the snapshots whatever the result the process must be implemented tested documented and added to the signup and or setup docs ,0 9469,28491586962.0,IssuesEvent,2023-04-18 11:37:55,carpentries/amy,https://api.github.com/repos/carpentries/amy,closed,Update automated email triggers to remove supporting instructor,component: email automation,"We are deprecating the role of supporting instructor so any automated email that checks for this role should be updated. The supporting instructor role is no longer required. ",1.0,"Update automated email triggers to remove supporting instructor - We are deprecating the role of supporting instructor so any automated email that checks for this role should be updated. The supporting instructor role is no longer required. ",1,update automated email triggers to remove supporting instructor we are deprecating the role of supporting instructor so any automated email that checks for this role should be updated the supporting instructor role is no longer required ,1 602611,18476366077.0,IssuesEvent,2021-10-18 07:47:13,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,teams.live.com - site is not usable,browser-firefox priority-critical engine-gecko," **URL**: https://teams.live.com **Browser / Version**: Firefox 93.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Chrome **Problem type**: Site is not usable **Description**: Browser unsupported **Steps to Reproduce**: Just won't run on Firefox, forcing to use Edge
Browser Configuration
  • None
_From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"teams.live.com - site is not usable - **URL**: https://teams.live.com **Browser / Version**: Firefox 93.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Chrome **Problem type**: Site is not usable **Description**: Browser unsupported **Steps to Reproduce**: Just won't run on Firefox, forcing to use Edge
Browser Configuration
  • None
_From [webcompat.com](https://webcompat.com/) with ❤️_",0,teams live com site is not usable url browser version firefox operating system windows tested another browser yes chrome problem type site is not usable description browser unsupported steps to reproduce just won t run on firefox forcing to use edge browser configuration none from with ❤️ ,0 285104,8754809809.0,IssuesEvent,2018-12-14 12:59:03,zephyrproject-rtos/zephyr,https://api.github.com/repos/zephyrproject-rtos/zephyr,closed,"QEMU serial output is not reliable, may affect SLIP and thus network testing",area: Networking area: QEMU bug priority: low,"This ticket provides a (partial) answer of why the issue described in https://github.com/zephyrproject-rtos/zephyr/pull/7831#issuecomment-392067202 happens, specifically: 1. when running samples/net/socket/dumb_http_server sample app on qemu_cortex_m3, 2. running `ab -n1000 http://192.0.2.1:8080/`, 3. processing of requests gets stuck after just few dozens of requests, `ab` eventually times out 4. (ab can be restarted and number of requests can be processed still, i.e. the app keeps running, but requests get stuck soon) So, it's more or less know issue, but it's not always kept in mind: UART emulation in QEMU is sub-ideal, and there can be problems with serial communication, which is used by SLIP and loop-slip-tap.sh. This is what happens here. For example, SLIP driver logging: ~~~ [slip] [INF] slip_send: sent: pkt 0x20001ec4 llr: 14, len: 54 [slip] [INF] slip_send: sent: pkt 0x20001ec4 llr: 14, len: 1506 [slip] [INF] slip_send: sent: pkt 0x20001e78 llr: 14, len: 783 [slip] [INF] slip_send: sent: pkt 0x20001e2c llr: 14, len: 54 Connection from 192.0.2.2 closed [slip] [INF] slip_send: sent: pkt 0x20001e78 llr: 14, len: 783 ~~~ What we can see here is that pkt 0x20001e78 was transmitted twice. But here's what Wireshark sees: ![screenshot from 2018-06-05 23-50-25](https://user-images.githubusercontent.com/500451/41002030-4491a14c-691b-11e8-9c57-7f4b533d3864.png) As can be seen, instead of first 783 bytes packet it receives broken 275 bytes packet, which gets ignored by host. That's what causes retransmission, and next time the packet gets thru. ",1.0,"QEMU serial output is not reliable, may affect SLIP and thus network testing - This ticket provides a (partial) answer of why the issue described in https://github.com/zephyrproject-rtos/zephyr/pull/7831#issuecomment-392067202 happens, specifically: 1. when running samples/net/socket/dumb_http_server sample app on qemu_cortex_m3, 2. running `ab -n1000 http://192.0.2.1:8080/`, 3. processing of requests gets stuck after just few dozens of requests, `ab` eventually times out 4. (ab can be restarted and number of requests can be processed still, i.e. the app keeps running, but requests get stuck soon) So, it's more or less know issue, but it's not always kept in mind: UART emulation in QEMU is sub-ideal, and there can be problems with serial communication, which is used by SLIP and loop-slip-tap.sh. This is what happens here. For example, SLIP driver logging: ~~~ [slip] [INF] slip_send: sent: pkt 0x20001ec4 llr: 14, len: 54 [slip] [INF] slip_send: sent: pkt 0x20001ec4 llr: 14, len: 1506 [slip] [INF] slip_send: sent: pkt 0x20001e78 llr: 14, len: 783 [slip] [INF] slip_send: sent: pkt 0x20001e2c llr: 14, len: 54 Connection from 192.0.2.2 closed [slip] [INF] slip_send: sent: pkt 0x20001e78 llr: 14, len: 783 ~~~ What we can see here is that pkt 0x20001e78 was transmitted twice. 
But here's what Wireshark sees: ![screenshot from 2018-06-05 23-50-25](https://user-images.githubusercontent.com/500451/41002030-4491a14c-691b-11e8-9c57-7f4b533d3864.png) As can be seen, instead of first 783 bytes packet it receives broken 275 bytes packet, which gets ignored by host. That's what causes retransmission, and next time the packet gets thru. ",0,qemu serial output is not reliable may affect slip and thus network testing this ticket provides a partial answer of why the issue described in happens specifically when running samples net socket dumb http server sample app on qemu cortex running ab processing of requests gets stuck after just few dozens of requests ab eventually times out ab can be restarted and number of requests can be processed still i e the app keeps running but requests get stuck soon so it s more or less know issue but it s not always kept in mind uart emulation in qemu is sub ideal and there can be problems with serial communication which is used by slip and loop slip tap sh this is what happens here for example slip driver logging slip send sent pkt llr len slip send sent pkt llr len slip send sent pkt llr len slip send sent pkt llr len connection from closed slip send sent pkt llr len what we can see here is that pkt was transmitted twice but here s what wireshark sees as can be seen instead of first bytes packet it receives broken bytes packet which gets ignored by host that s what causes retransmission and next time the packet gets thru ,0 19070,3133303800.0,IssuesEvent,2015-09-10 00:18:58,beefproject/beef,https://api.github.com/repos/beefproject/beef,closed,`open_udp_socket': no datagram socket (RuntimeError),Defect,"could you please help me on this? this is the error that I get when i try to run beef in kali linux 2.0: ""root@ss:/usr/share/beef-xss# ./beef [18:50:30][*] Bind socket [imapeudora1] listening on [0.0.0.0:2000]. [18:50:30][*] Browser Exploitation Framework (BeEF) 0.4.6.1-alpha [18:50:30] | Twit: @beefproject [18:50:30] | Site: http://beefproject.com [18:50:30] | Blog: http://blog.beefproject.com [18:50:30] |_ Wiki: https://github.com/beefproject/beef/wiki [18:50:30][*] Project Creator: Wade Alcorn (@WadeAlcorn) [18:50:30][*] BeEF is loading. Wait a few seconds... [18:50:33][*] 12 extensions enabled. [18:50:33][*] 241 modules enabled. [18:50:33][*] 2 network interfaces were detected. 
[18:50:33][+] running on network interface: 127.0.0.1 [18:50:33] | Hook URL: http://127.0.0.1:3000/hook.js [18:50:33] |_ UI URL: http://127.0.0.1:3000/ui/panel [18:50:33][+] running on network interface: 192.168.0.10 [18:50:33] | Hook URL: http://192.168.0.10:3000/hook.js [18:50:33] |_ UI URL: http://192.168.0.10:3000/ui/panel [18:50:33][*] RESTful API key: a3ed2a9e5386081c6cd57842fccd93f1af55f60e [18:50:33][*] DNS Server: 127.0.0.1:5300 (udp) [18:50:33] | Upstream Server: 8.8.8.8:53 (udp) [18:50:33] |_ Upstream Server: 8.8.8.8:53 (tcp) [18:50:33][*] HTTP Proxy: http://127.0.0.1:6789 [18:50:33][*] BeEF server started (press control+c to stop) /usr/lib/ruby/vendor_ruby/eventmachine.rb:859:in `open_udp_socket': no datagram socket (RuntimeError) from /usr/lib/ruby/vendor_ruby/eventmachine.rb:859:in `open_datagram_socket' from /usr/lib/ruby/vendor_ruby/rubydns/server.rb:122:in `block in run' from /usr/lib/ruby/vendor_ruby/rubydns/server.rb:119:in `each' from /usr/lib/ruby/vendor_ruby/rubydns/server.rb:119:in `run' from /usr/share/beef-xss/extensions/dns/dns.rb:127:in `block (3 levels) in run' from /usr/lib/ruby/vendor_ruby/eventmachine.rb:959:in `call' from /usr/lib/ruby/vendor_ruby/eventmachine.rb:959:in `block in run_deferred_callbacks' from /usr/lib/ruby/vendor_ruby/eventmachine.rb:956:in `times' from /usr/lib/ruby/vendor_ruby/eventmachine.rb:956:in `run_deferred_callbacks' from /usr/lib/ruby/vendor_ruby/eventmachine.rb:187:in `run_machine' from /usr/lib/ruby/vendor_ruby/eventmachine.rb:187:in `run' from /usr/lib/ruby/vendor_ruby/thin/backends/base.rb:61:in `start' from /usr/lib/ruby/vendor_ruby/thin/server.rb:159:in `start' from /usr/share/beef-xss/core/main/server.rb:127:in `start' from ./beef:145:in `
' "" I`ve tried to reinstall from repos, git and reinstalled all needed gems and tried with different versions of ruby but at the end I still get this.",1.0,"`open_udp_socket': no datagram socket (RuntimeError) - could you please help me on this? this is the error that I get when i try to run beef in kali linux 2.0: ""root@ss:/usr/share/beef-xss# ./beef [18:50:30][*] Bind socket [imapeudora1] listening on [0.0.0.0:2000]. [18:50:30][*] Browser Exploitation Framework (BeEF) 0.4.6.1-alpha [18:50:30] | Twit: @beefproject [18:50:30] | Site: http://beefproject.com [18:50:30] | Blog: http://blog.beefproject.com [18:50:30] |_ Wiki: https://github.com/beefproject/beef/wiki [18:50:30][*] Project Creator: Wade Alcorn (@WadeAlcorn) [18:50:30][*] BeEF is loading. Wait a few seconds... [18:50:33][*] 12 extensions enabled. [18:50:33][*] 241 modules enabled. [18:50:33][*] 2 network interfaces were detected. [18:50:33][+] running on network interface: 127.0.0.1 [18:50:33] | Hook URL: http://127.0.0.1:3000/hook.js [18:50:33] |_ UI URL: http://127.0.0.1:3000/ui/panel [18:50:33][+] running on network interface: 192.168.0.10 [18:50:33] | Hook URL: http://192.168.0.10:3000/hook.js [18:50:33] |_ UI URL: http://192.168.0.10:3000/ui/panel [18:50:33][*] RESTful API key: a3ed2a9e5386081c6cd57842fccd93f1af55f60e [18:50:33][*] DNS Server: 127.0.0.1:5300 (udp) [18:50:33] | Upstream Server: 8.8.8.8:53 (udp) [18:50:33] |_ Upstream Server: 8.8.8.8:53 (tcp) [18:50:33][*] HTTP Proxy: http://127.0.0.1:6789 [18:50:33][*] BeEF server started (press control+c to stop) /usr/lib/ruby/vendor_ruby/eventmachine.rb:859:in `open_udp_socket': no datagram socket (RuntimeError) from /usr/lib/ruby/vendor_ruby/eventmachine.rb:859:in `open_datagram_socket' from /usr/lib/ruby/vendor_ruby/rubydns/server.rb:122:in `block in run' from /usr/lib/ruby/vendor_ruby/rubydns/server.rb:119:in `each' from /usr/lib/ruby/vendor_ruby/rubydns/server.rb:119:in `run' from /usr/share/beef-xss/extensions/dns/dns.rb:127:in `block (3 levels) in run' from /usr/lib/ruby/vendor_ruby/eventmachine.rb:959:in `call' from /usr/lib/ruby/vendor_ruby/eventmachine.rb:959:in `block in run_deferred_callbacks' from /usr/lib/ruby/vendor_ruby/eventmachine.rb:956:in `times' from /usr/lib/ruby/vendor_ruby/eventmachine.rb:956:in `run_deferred_callbacks' from /usr/lib/ruby/vendor_ruby/eventmachine.rb:187:in `run_machine' from /usr/lib/ruby/vendor_ruby/eventmachine.rb:187:in `run' from /usr/lib/ruby/vendor_ruby/thin/backends/base.rb:61:in `start' from /usr/lib/ruby/vendor_ruby/thin/server.rb:159:in `start' from /usr/share/beef-xss/core/main/server.rb:127:in `start' from ./beef:145:in `
' "" I`ve tried to reinstall from repos, git and reinstalled all needed gems and tried with different versions of ruby but at the end I still get this.",0, open udp socket no datagram socket runtimeerror could you please help me on this this is the error that i get when i try to run beef in kali linux root ss usr share beef xss beef bind socket listening on browser exploitation framework beef alpha twit beefproject site blog wiki project creator wade alcorn wadealcorn beef is loading wait a few seconds extensions enabled modules enabled network interfaces were detected running on network interface hook url ui url running on network interface hook url ui url restful api key dns server udp upstream server udp upstream server tcp http proxy beef server started press control c to stop usr lib ruby vendor ruby eventmachine rb in open udp socket no datagram socket runtimeerror from usr lib ruby vendor ruby eventmachine rb in open datagram socket from usr lib ruby vendor ruby rubydns server rb in block in run from usr lib ruby vendor ruby rubydns server rb in each from usr lib ruby vendor ruby rubydns server rb in run from usr share beef xss extensions dns dns rb in block levels in run from usr lib ruby vendor ruby eventmachine rb in call from usr lib ruby vendor ruby eventmachine rb in block in run deferred callbacks from usr lib ruby vendor ruby eventmachine rb in times from usr lib ruby vendor ruby eventmachine rb in run deferred callbacks from usr lib ruby vendor ruby eventmachine rb in run machine from usr lib ruby vendor ruby eventmachine rb in run from usr lib ruby vendor ruby thin backends base rb in start from usr lib ruby vendor ruby thin server rb in start from usr share beef xss core main server rb in start from beef in i ve tried to reinstall from repos git and reinstalled all needed gems and tried with different versions of ruby but at the end i still get this ,0 40217,2867572955.0,IssuesEvent,2015-06-05 14:07:07,Araq/Nim,https://api.github.com/repos/Araq/Nim,closed,nimsuggest should be fixed to work in separate repo,High Priority Tools,Nimsuggest was separated to its own repo and lives at http://github.com/nim-lang/nimsuggest. The only problem is that installing it via nimble fails. This should be fixed and the nimsuggest in this repo should stop diverging from the separated version.,1.0,nimsuggest should be fixed to work in separate repo - Nimsuggest was separated to its own repo and lives at http://github.com/nim-lang/nimsuggest. The only problem is that installing it via nimble fails. This should be fixed and the nimsuggest in this repo should stop diverging from the separated version.,0,nimsuggest should be fixed to work in separate repo nimsuggest was separated to its own repo and lives at the only problem is that installing it via nimble fails this should be fixed and the nimsuggest in this repo should stop diverging from the separated version ,0 221,4786941529.0,IssuesEvent,2016-10-29 18:22:12,rancher/rancher,https://api.github.com/repos/rancher/rancher,opened,"LB stuck in ""Reinitilaizing"" state when the environment is deactivated and ",kind/bug setup/automation,"Server version - Build from master. Steps to reproduce the problem: Create an environment with services and LB service . Deactivate environment. Activate environment. LB container is stuck in ""Reinitilaizing"" state forever. 
ha proxy logs: ```10/29/2016 9:27:42 AMtime=""2016-10-29T16:27:42Z"" level=info msg=""KUBERNETES_URL is not set, skipping init of kubernetes controller"" 10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""Starting Rancher LB service"" 10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""LB controller: rancher"" 10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""LB provider: haproxy"" 10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""starting rancher controller"" 10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""Healthcheck handler is listening on :10241"" 10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg="" -- staring haproxy\n * Starting haproxy haproxy\n ...done.\n"" 10/29/2016 9:27:44 AMtime=""2016-10-29T16:27:44Z"" level=info msg="" -- reloading haproxy config with the new config changes\n[WARNING] 302/162744 (43) : config : 'option forwardfor' ignored for proxy 'default' as it requires HTTP mode.\n"" 10/29/2016 9:27:48 AMtime=""2016-10-29T16:27:48Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:27:48 AMtime=""2016-10-29T16:27:48Z"" level=info msg="" -- no changes in haproxy config\n"" 10/29/2016 9:27:53 AMtime=""2016-10-29T16:27:53Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:27:53 AMtime=""2016-10-29T16:27:53Z"" level=info msg="" -- no changes in haproxy config\n"" 10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Received SIGTERM, shutting down"" 10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Shutting down rancher controller"" 10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Shutting down provider haproxy"" 10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Error during shutdown shutdown already in progress"" 10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Exiting with 1"" 10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""KUBERNETES_URL is not set, skipping init of kubernetes controller"" 10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""Starting Rancher LB service"" 10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""LB controller: rancher"" 10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""LB provider: haproxy"" 10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""starting rancher controller"" 10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""Healthcheck handler is listening on :10241"" 10/29/2016 9:28:16 AMtime=""2016-10-29T16:28:16Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:28:21 AMtime=""2016-10-29T16:28:21Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage"" 10/29/2016 9:28:21 AMtime=""2016-10-29T16:28:21Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:28:26 AMtime=""2016-10-29T16:28:26Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage"" 10/29/2016 9:28:26 AMtime=""2016-10-29T16:28:26Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:28:31 AMtime=""2016-10-29T16:28:31Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage"" 10/29/2016 9:28:36 AMtime=""2016-10-29T16:28:36Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:28:41 AMtime=""2016-10-29T16:28:41Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for 
haproxy to exit init stage"" 10/29/2016 9:28:46 AMtime=""2016-10-29T16:28:46Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:28:51 AMtime=""2016-10-29T16:28:51Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage"" 10/29/2016 9:29:01 AMtime=""2016-10-29T16:29:01Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:29:06 AMtime=""2016-10-29T16:29:06Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage"" 10/29/2016 9:29:11 AMtime=""2016-10-29T16:29:11Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:29:15 AMtime=""2016-10-29T16:29:15Z"" level=info msg="" -- staring haproxy\nPidfile (and pid) already exist.\n"" 10/29/2016 9:29:16 AMtime=""2016-10-29T16:29:16Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage"" 10/29/2016 9:29:16 AMtime=""2016-10-29T16:29:16Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:29:16 AMtime=""2016-10-29T16:29:16Z"" level=info msg="" -- no changes in haproxy config\n"" 10/29/2016 9:29:36 AMtime=""2016-10-29T16:29:36Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:29:36 AMtime=""2016-10-29T16:29:36Z"" level=info msg="" -- no changes in haproxy config\n"" 10/29/2016 9:29:51 AMtime=""2016-10-29T16:29:51Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:29:51 AMtime=""2016-10-29T16:29:51Z"" level=info msg="" -- no changes in haproxy config\n"" ```",1.0,"LB stuck in ""Reinitilaizing"" state when the environment is deactivated and - Server version - Build from master. Steps to reproduce the problem: Create an environment with services and LB service . Deactivate environment. Activate environment. LB container is stuck in ""Reinitilaizing"" state forever. ha proxy logs: ```10/29/2016 9:27:42 AMtime=""2016-10-29T16:27:42Z"" level=info msg=""KUBERNETES_URL is not set, skipping init of kubernetes controller"" 10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""Starting Rancher LB service"" 10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""LB controller: rancher"" 10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""LB provider: haproxy"" 10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""starting rancher controller"" 10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""Healthcheck handler is listening on :10241"" 10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg="" -- staring haproxy\n * Starting haproxy haproxy\n ...done.\n"" 10/29/2016 9:27:44 AMtime=""2016-10-29T16:27:44Z"" level=info msg="" -- reloading haproxy config with the new config changes\n[WARNING] 302/162744 (43) : config : 'option forwardfor' ignored for proxy 'default' as it requires HTTP mode.\n"" 10/29/2016 9:27:48 AMtime=""2016-10-29T16:27:48Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:27:48 AMtime=""2016-10-29T16:27:48Z"" level=info msg="" -- no changes in haproxy config\n"" 10/29/2016 9:27:53 AMtime=""2016-10-29T16:27:53Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:27:53 AMtime=""2016-10-29T16:27:53Z"" level=info msg="" -- no changes in haproxy config\n"" 10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Received SIGTERM, shutting down"" 10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Shutting down rancher controller"" 10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Shutting down provider haproxy"" 
10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Error during shutdown shutdown already in progress"" 10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Exiting with 1"" 10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""KUBERNETES_URL is not set, skipping init of kubernetes controller"" 10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""Starting Rancher LB service"" 10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""LB controller: rancher"" 10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""LB provider: haproxy"" 10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""starting rancher controller"" 10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""Healthcheck handler is listening on :10241"" 10/29/2016 9:28:16 AMtime=""2016-10-29T16:28:16Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:28:21 AMtime=""2016-10-29T16:28:21Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage"" 10/29/2016 9:28:21 AMtime=""2016-10-29T16:28:21Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:28:26 AMtime=""2016-10-29T16:28:26Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage"" 10/29/2016 9:28:26 AMtime=""2016-10-29T16:28:26Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:28:31 AMtime=""2016-10-29T16:28:31Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage"" 10/29/2016 9:28:36 AMtime=""2016-10-29T16:28:36Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:28:41 AMtime=""2016-10-29T16:28:41Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage"" 10/29/2016 9:28:46 AMtime=""2016-10-29T16:28:46Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:28:51 AMtime=""2016-10-29T16:28:51Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage"" 10/29/2016 9:29:01 AMtime=""2016-10-29T16:29:01Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:29:06 AMtime=""2016-10-29T16:29:06Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage"" 10/29/2016 9:29:11 AMtime=""2016-10-29T16:29:11Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:29:15 AMtime=""2016-10-29T16:29:15Z"" level=info msg="" -- staring haproxy\nPidfile (and pid) already exist.\n"" 10/29/2016 9:29:16 AMtime=""2016-10-29T16:29:16Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage"" 10/29/2016 9:29:16 AMtime=""2016-10-29T16:29:16Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:29:16 AMtime=""2016-10-29T16:29:16Z"" level=info msg="" -- no changes in haproxy config\n"" 10/29/2016 9:29:36 AMtime=""2016-10-29T16:29:36Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:29:36 AMtime=""2016-10-29T16:29:36Z"" level=info msg="" -- no changes in haproxy config\n"" 10/29/2016 9:29:51 AMtime=""2016-10-29T16:29:51Z"" level=info msg=""Syncing up LB"" 10/29/2016 9:29:51 AMtime=""2016-10-29T16:29:51Z"" level=info msg="" -- no changes in haproxy config\n"" ```",1,lb stuck in reinitilaizing state when the environment is deactivated and server version build from master steps to reproduce the problem create an environment with services and lb service deactivate environment activate environment lb container is stuck in reinitilaizing state forever ha proxy logs amtime level info msg kubernetes url is 
not set skipping init of kubernetes controller amtime level info msg starting rancher lb service amtime level info msg lb controller rancher amtime level info msg lb provider haproxy amtime level info msg starting rancher controller amtime level info msg healthcheck handler is listening on amtime level info msg syncing up lb amtime level info msg staring haproxy n starting haproxy haproxy n done n amtime level info msg reloading haproxy config with the new config changes n config option forwardfor ignored for proxy default as it requires http mode n amtime level info msg syncing up lb amtime level info msg no changes in haproxy config n amtime level info msg syncing up lb amtime level info msg no changes in haproxy config n amtime level info msg received sigterm shutting down amtime level info msg shutting down rancher controller amtime level info msg shutting down provider haproxy amtime level info msg error during shutdown shutdown already in progress amtime level info msg exiting with amtime level info msg kubernetes url is not set skipping init of kubernetes controller amtime level info msg starting rancher lb service amtime level info msg lb controller rancher amtime level info msg lb provider haproxy amtime level info msg starting rancher controller amtime level info msg healthcheck handler is listening on amtime level info msg syncing up lb amtime level error msg failed to apply lb config on provider failed to wait for haproxy to exit init stage amtime level info msg syncing up lb amtime level error msg failed to apply lb config on provider failed to wait for haproxy to exit init stage amtime level info msg syncing up lb amtime level error msg failed to apply lb config on provider failed to wait for haproxy to exit init stage amtime level info msg syncing up lb amtime level error msg failed to apply lb config on provider failed to wait for haproxy to exit init stage amtime level info msg syncing up lb amtime level error msg failed to apply lb config on provider failed to wait for haproxy to exit init stage amtime level info msg syncing up lb amtime level error msg failed to apply lb config on provider failed to wait for haproxy to exit init stage amtime level info msg syncing up lb amtime level info msg staring haproxy npidfile and pid already exist n amtime level error msg failed to apply lb config on provider failed to wait for haproxy to exit init stage amtime level info msg syncing up lb amtime level info msg no changes in haproxy config n amtime level info msg syncing up lb amtime level info msg no changes in haproxy config n amtime level info msg syncing up lb amtime level info msg no changes in haproxy config n ,1 7216,24459413514.0,IssuesEvent,2022-10-07 09:44:42,o3de/o3de,https://api.github.com/repos/o3de/o3de,opened,Nightly build Bug Report: Windows periodic_test_gpu_profile job red due to timeouts,kind/bug needs-triage sig/graphics-audio kind/automation,"**Failed Jenkins Job Information:** https://jenkins-pipeline.agscollab.com/blue/organizations/jenkins/O3DE-LY-Fork-development_periodic-incremental-daily-internal/detail/O3DE-LY-Fork-development_periodic-incremental-daily-internal/504/pipeline/799 ``` [2022-10-07T06:18:57.155Z] The following tests FAILED: [2022-10-07T06:18:57.155Z] 184 - AutomatedTesting::EditorLevelLoadingPerfTests_DX12.periodic::TEST_RUN (Failed) [2022-10-07T06:18:57.155Z] 185 - AutomatedTesting::EditorLevelLoadingPerfTests_Vulkan.periodic::TEST_RUN (Failed) 
..\..\..\..\..\..\AutomatedTesting\Gem\PythonTests\Performance\TestSuite_Periodic_DX12.py::TestAutomation::Time_EditorLevelLoading_10KEntityCpuPerfTest[windows-windows_editor-AutomatedTesting] FAILED [ 50%] Test ABORTED after not completing within 180 seconds ..\..\..\..\..\..\AutomatedTesting\Gem\PythonTests\Performance\TestSuite_Periodic_Vulkan.py::TestAutomation::Time_EditorLevelLoading_10KEntityCpuPerfTest[windows-windows_editor-AutomatedTesting] FAILED [ 50%] Test ABORTED after not completing within 600 seconds ``` **Attachments** [log.txt](https://github.com/o3de/o3de/files/9732784/log.txt)",1.0,"Nightly build Bug Report: Windows periodic_test_gpu_profile job red due to timeouts - **Failed Jenkins Job Information:** https://jenkins-pipeline.agscollab.com/blue/organizations/jenkins/O3DE-LY-Fork-development_periodic-incremental-daily-internal/detail/O3DE-LY-Fork-development_periodic-incremental-daily-internal/504/pipeline/799 ``` [2022-10-07T06:18:57.155Z] The following tests FAILED: [2022-10-07T06:18:57.155Z] 184 - AutomatedTesting::EditorLevelLoadingPerfTests_DX12.periodic::TEST_RUN (Failed) [2022-10-07T06:18:57.155Z] 185 - AutomatedTesting::EditorLevelLoadingPerfTests_Vulkan.periodic::TEST_RUN (Failed) ..\..\..\..\..\..\AutomatedTesting\Gem\PythonTests\Performance\TestSuite_Periodic_DX12.py::TestAutomation::Time_EditorLevelLoading_10KEntityCpuPerfTest[windows-windows_editor-AutomatedTesting] FAILED [ 50%] Test ABORTED after not completing within 180 seconds ..\..\..\..\..\..\AutomatedTesting\Gem\PythonTests\Performance\TestSuite_Periodic_Vulkan.py::TestAutomation::Time_EditorLevelLoading_10KEntityCpuPerfTest[windows-windows_editor-AutomatedTesting] FAILED [ 50%] Test ABORTED after not completing within 600 seconds ``` **Attachments** [log.txt](https://github.com/o3de/o3de/files/9732784/log.txt)",1,nightly build bug report windows periodic test gpu profile job red due to timeouts failed jenkins job information the following tests failed automatedtesting editorlevelloadingperftests periodic test run failed automatedtesting editorlevelloadingperftests vulkan periodic test run failed automatedtesting gem pythontests performance testsuite periodic py testautomation time editorlevelloading failed test aborted after not completing within seconds automatedtesting gem pythontests performance testsuite periodic vulkan py testautomation time editorlevelloading failed test aborted after not completing within seconds attachments ,1 139658,18853735307.0,IssuesEvent,2021-11-12 01:37:21,sesong11/example,https://api.github.com/repos/sesong11/example,opened,CVE-2019-12814 (Medium) detected in jackson-databind-2.9.9.jar,security vulnerability,"## CVE-2019-12814 - Medium Severity Vulnerability
Vulnerable Library - jackson-databind-2.9.9.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /example/quartz-jdbc/pom.xml

Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar

Dependency Hierarchy: - spring-boot-starter-web-2.1.6.RELEASE.jar (Root Library) - spring-boot-starter-json-2.1.6.RELEASE.jar - :x: **jackson-databind-2.9.9.jar** (Vulnerable Library)

Vulnerability Details

A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x through 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has JDOM 1.x or 2.x jar in the classpath, an attacker can send a specifically crafted JSON message that allows them to read arbitrary local files on the server.

Publish Date: 2019-06-19

URL: CVE-2019-12814

CVSS 3 Score Details (5.9)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/FasterXML/jackson-databind/issues/2341

Release Date: 2019-06-19

Fix Resolution: 2.7.9.6, 2.8.11.4, 2.9.9.1, 2.10.0

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2019-12814 (Medium) detected in jackson-databind-2.9.9.jar - ## CVE-2019-12814 - Medium Severity Vulnerability
Vulnerable Library - jackson-databind-2.9.9.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /example/quartz-jdbc/pom.xml

Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar

Dependency Hierarchy: - spring-boot-starter-web-2.1.6.RELEASE.jar (Root Library) - spring-boot-starter-json-2.1.6.RELEASE.jar - :x: **jackson-databind-2.9.9.jar** (Vulnerable Library)

Vulnerability Details

A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x through 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has JDOM 1.x or 2.x jar in the classpath, an attacker can send a specifically crafted JSON message that allows them to read arbitrary local files on the server.

Publish Date: 2019-06-19

URL: CVE-2019-12814

CVSS 3 Score Details (5.9)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/FasterXML/jackson-databind/issues/2341

Release Date: 2019-06-19

Fix Resolution: 2.7.9.6, 2.8.11.4, 2.9.9.1, 2.10.0

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in jackson databind jar cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file example quartz jdbc pom xml path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library spring boot starter json release jar x jackson databind jar vulnerable library vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind x through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has jdom x or x jar in the classpath an attacker can send a specifically crafted json message that allows them to read arbitrary local files on the server publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0 4713,17316359917.0,IssuesEvent,2021-07-27 06:46:10,rancher-sandbox/cOS-toolkit,https://api.github.com/repos/rancher-sandbox/cOS-toolkit,opened,Add AMI images id list to releases artifact,automation enhancement release,"**Is your feature request related to a problem? Please describe.** Having a clear list of the published AMIs helps users to find the image rather have to dig with the aws-cli **Describe the solution you'd like** An artifact which is uploaded during releasing which contains the AMI IDs ",1.0,"Add AMI images id list to releases artifact - **Is your feature request related to a problem? Please describe.** Having a clear list of the published AMIs helps users to find the image rather have to dig with the aws-cli **Describe the solution you'd like** An artifact which is uploaded during releasing which contains the AMI IDs ",1,add ami images id list to releases artifact is your feature request related to a problem please describe having a clear list of the published amis helps users to find the image rather have to dig with the aws cli describe the solution you d like an artifact which is uploaded during releasing which contains the ami ids ,1 5672,20733453704.0,IssuesEvent,2022-03-14 11:35:23,SuperOfficeDocs/superoffice-docs,https://api.github.com/repos/SuperOfficeDocs/superoffice-docs,closed,Feedback for Enum values for ScreenChooserType,doc-enhancement crmscript automation," This list does not contain all enums, should be updated to include all Trigger events from https://docs.superoffice.com/automation/trigger/reference/CRMScript.Event.Trigger.html --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.superOffice.com ➟ Docs Team processing.* * Content Source: [enum-screenchoosertype](https://github.com/SuperOfficeDocs/superoffice-docs/blob/main/docs/database/tables/enums/screenchoosertype.md/#L1)",1.0,"Feedback for Enum values for ScreenChooserType - This list does not contain all enums, should be updated to include all Trigger events from https://docs.superoffice.com/automation/trigger/reference/CRMScript.Event.Trigger.html --- #### Document Details ⚠ *Do not edit this section. It is required for docs.superOffice.com ➟ Docs Team processing.* * Content Source: [enum-screenchoosertype](https://github.com/SuperOfficeDocs/superoffice-docs/blob/main/docs/database/tables/enums/screenchoosertype.md/#L1)",1,feedback for enum values for screenchoosertype this list does not contain all enums should be updated to include all trigger events from document details ⚠ do not edit this section it is required for docs superoffice com ➟ docs team processing content source ,1 26310,19984842410.0,IssuesEvent,2022-01-30 13:54:57,yt-project/yt,https://api.github.com/repos/yt-project/yt,closed,Reduce size of pep8speaks config file,new contributor friendly infrastructure,"After https://github.com/OrkoHunter/pep8speaks/pull/106 has been merged, it looks like we can reduce the config file presence -- and reduce duplication -- for pep8speaks. I believe it would be sufficient to remove our .pep8speaks.yml file, but we should investigate if we can remove the ""ignore"" and ""exclude"" sections and leave the bits where we define how the bot should talk.",1.0,"Reduce size of pep8speaks config file - After https://github.com/OrkoHunter/pep8speaks/pull/106 has been merged, it looks like we can reduce the config file presence -- and reduce duplication -- for pep8speaks. I believe it would be sufficient to remove our .pep8speaks.yml file, but we should investigate if we can remove the ""ignore"" and ""exclude"" sections and leave the bits where we define how the bot should talk.",0,reduce size of config file after has been merged it looks like we can reduce the config file presence and reduce duplication for i believe it would be sufficient to remove our yml file but we should investigate if we can remove the ignore and exclude sections and leave the bits where we define how the bot should talk ,0 4883,17933343541.0,IssuesEvent,2021-09-10 12:23:37,CDCgov/prime-reportstream,https://api.github.com/repos/CDCgov/prime-reportstream,closed,add Greenlight Urgent Care to the list of senders in RS,sender-automation,what is required to add a new sender and allow them to start submitting data?,1.0,add Greenlight Urgent Care to the list of senders in RS - what is required to add a new sender and allow them to start submitting data?,1,add greenlight urgent care to the list of senders in rs what is required to add a new sender and allow them to start submitting data ,1 3209,13186175957.0,IssuesEvent,2020-08-12 23:18:57,bkthomps/Containers,https://api.github.com/repos/bkthomps/Containers,closed,Update CI/CD,automation,"Right now, it builds with coverage, then sends to codecov for analysis. Additionally, there is another tool to check code quality. Valgrind has to be run manually. Instead, replace this with: build without coverage, run clang-tidy and valgrind with -Werror. Then, build with coverage and send to codecov. This requires a container with valgrind and codecov installed. 
However, this means we don't need the code quality tool (only allowed 10 code quality checks per day with that tool).",1.0,"Update CI/CD - Right now, it builds with coverage, then sends to codecov for analysis. Additionally, there is another tool to check code quality. Valgrind has to be run manually. Instead, replace this with: build without coverage, run clang-tidy and valgrind with -Werror. Then, build with coverage and send to codecov. This requires a container with valgrind and codecov installed. However, this means we don't need the code quality tool (only allowed 10 code quality checks per day with that tool).",1,update ci cd right now it builds with coverage then sends to codecov for analysis additionally there is another tool to check code quality valgrind has to be run manually instead replace this with build without coverage run clang tidy and valgrind with werror then build with coverage and send to codecov this requires a container with valgrind and codecov installed however this means we don t need the code quality tool only allowed code quality checks per day with that tool ,1 2190,11542783297.0,IssuesEvent,2020-02-18 08:17:22,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,opened,a8n: Support Bitbucket build status webhooks,automation,"We need to support Bitbucket build status webhooks once it has been added to the plugin: https://github.com/sourcegraph/sourcegraph/issues/8386 This issue was extracted from this larger one: https://github.com/sourcegraph/sourcegraph/issues/7093",1.0,"a8n: Support Bitbucket build status webhooks - We need to support Bitbucket build status webhooks once it has been added to the plugin: https://github.com/sourcegraph/sourcegraph/issues/8386 This issue was extracted from this larger one: https://github.com/sourcegraph/sourcegraph/issues/7093",1, support bitbucket build status webhooks we need to support bitbucket build status webhooks once it has been added to the plugin this issue was extracted from this larger one ,1 9327,28010813531.0,IssuesEvent,2023-03-27 18:29:05,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,closed,[Backup Restore] Restore failed for dropped colocated database ,kind/bug area/docdb priority/medium qa_automation,"Jira Link: [DB-5918](https://yugabyte.atlassian.net/browse/DB-5918) ### Description Steps: 1. Take Backup of a Colocated DB 2. DROP the DB 3. 
Restore the Backup Observed Restore failed with below error: `2023-03-22 11:19:01,918 test_base.py:178 ERROR testysqltabletsplittingwithrpc-aws-rf3 ITEST FAILED testysqltabletsplittingwithrpc-aws-rf3 : RuntimeError('wait_for_task: Failed task with errors in 30.35816502571106s:\nFailed to execute task {""platformVersion"":""2.17.3.0-b16"",""sleepAfterMasterRestartMillis"":180000,""sleepAfterTServerRestartMillis"":180000,""nodeExporterUser"":""prometheus"",""universeUUID"":""feddcbfa-8379-4a3e-8ba7-8c9af9788fe9"",""enableYbc"":false,""installYbc"":false,""ybcInstalled"":false,""encryptionAtRestConfig"":{""encryptionAtRestEnabled"":false,""opType"":""UNDEFINED"",""type"":""DATA_KEY""},""communicationPorts"":{""masterHttpPort"":7000,""masterRpcPort"":7100,""tserverHttpPort"":9000,""tserverRpcPort"":9100,""ybControllerHttpPort"":14000,""yb..., hit error:\n\nTask id 7901e4e9-f195-4897-95d6-8402d158cac0_PGSQL_TABLE_TYPE_colocated_db status: Failed with error COMMAND_FAILED.')` ``` YW 2023-03-22T11:40:46.430Z [ERROR] c8c6cf68-4a31-4885-97b5-7909b9427f25 from TaskExecutor in TaskPool-1 - Failed to execute task type RestoreBackup UUID 16226ab6-9715-44bf-a2c0-a4bb11eba31b details {""platformVersion"":""2.17.2.0-b216"",""sleepAfterMasterRestartMillis"":180000,""sleepAfterTServerRestartMillis"":180000,""nodeExporterUser"":""prometheus"",""universeUUID"":""fa1fd3d9-cbc9-4b52-bd61-2c0b14c66063"",""enableYbc"":false,""installYbc"":false,""ybcInstalled"":false,""encryptionAtRestConfig"":{""encryptionAtRestEnabled"":false,""opType"":""UNDEFINED"",""type"":""DATA_KEY""},""communicationPorts"":{""masterHttpPort"":7000,""masterRpcPort"":7100,""tserverHttpPort"":9000,""tserverRpcPort"":9100,""ybControllerHttpPort"":14000,""ybControllerrRpcPort"":18018,""redisServerHttpPort"":11000,""redisServerRpcPort"":6379,""yqlServerHttpPort"":12000,""yqlServerRpcPort"":9042,""ysqlServerHttpPort"":13000,""ysqlServerRpcPort"":5433,""nodeExporterPort"":9300},""extraDependencies"":{""installNodeExporter"":true},""firstTry"":true,""customerUUID"":""c9c4e6e2-a640-43f7-a522-29d2f2ededbd"",""actionType"":""RESTORE"",""category"":""YB_CONTROLLER"",""backupStorageInfoList"":[{""backupType"":""PGSQL_TABLE_TYPE"",""storageLocation"":""gs://itest-backup/univ-fa1fd3d9-cbc9-4b52-bd61-2c0b14c66063/ybc_backup-2023-03-22T11:37:39-1662548782/multi-table-colocated_db"",""keyspace"":""colocated_db"",""sse"":false,""oldOwner"":""postgres""}],""prefixUUID"":""1a61a87a-ced3-48ff-a2cf-d21f56e5910b"",""currentIdx"":0,""currentYbcTaskId"":""1a61a87a-ced3-48ff-a2cf-d21f56e5910b_PGSQL_TABLE_TYPE_colocated_db"",""enableVerboseLogs"":false,""storageConfigUUID"":""0386d3e5-52f5-4b4a-b4ca-005d622e349e"",""alterLoadBalancer"":true,""disableChecksum"":false,""useTablespaces"":false,""disableMultipart"":false,""parallelism"":8,""targetXClusterConfigs"":[],""sourceXClusterConfigs"":[]}, hit error. java.lang.RuntimeException: RestoreBackupYbc : completed 1 out of 1 tasks. failed. 
at com.yugabyte.yw.commissioner.TaskExecutor$RunnableTask.runSubTasks(TaskExecutor.java:1110) at com.yugabyte.yw.commissioner.tasks.RestoreBackup.run(RestoreBackup.java:65) at com.yugabyte.yw.commissioner.TaskExecutor$AbstractRunnableTask.run(TaskExecutor.java:796) at com.yugabyte.yw.commissioner.TaskExecutor$RunnableTask.run(TaskExecutor.java:1005) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at com.yugabyte.yw.common.logging.MDCAwareRunnable.run(MDCAwareRunnable.java:46) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Caused by: com.yugabyte.yw.common.PlatformServiceException: Task id 1a61a87a-ced3-48ff-a2cf-d21f56e5910b_PGSQL_TABLE_TYPE_colocated_db status: Failed with error COMMAND_FAILED at com.yugabyte.yw.commissioner.YbcTaskBase.handleTaskCompleteStage(YbcTaskBase.java:95) at com.yugabyte.yw.commissioner.YbcTaskBase.pollTaskProgress(YbcTaskBase.java:66) at com.yugabyte.yw.commissioner.tasks.subtasks.RestoreBackupYbc.run(RestoreBackupYbc.java:179) at com.yugabyte.yw.commissioner.TaskExecutor$AbstractRunnableTask.run(TaskExecutor.java:796) at com.yugabyte.yw.commissioner.TaskExecutor$RunnableSubTask.run(TaskExecutor.java:1180) ... 6 common frames omitted ``` Further details in Slack Thread accessible internal to YB - https://yugabyte.slack.com/archives/C8QDREM0R/p1679490341873789 cc: @renjith-yb @kripasreenivasan ### Warning: Please confirm that this issue does not contain any sensitive information - [X] I confirm this issue does not contain any sensitive information. [DB-5918]: https://yugabyte.atlassian.net/browse/DB-5918?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ",1.0,"[Backup Restore] Restore failed for dropped colocated database - Jira Link: [DB-5918](https://yugabyte.atlassian.net/browse/DB-5918) ### Description Steps: 1. Take Backup of a Colocated DB 2. DROP the DB 3. 
Restore the Backup Observed Restore failed with below error: `2023-03-22 11:19:01,918 test_base.py:178 ERROR testysqltabletsplittingwithrpc-aws-rf3 ITEST FAILED testysqltabletsplittingwithrpc-aws-rf3 : RuntimeError('wait_for_task: Failed task with errors in 30.35816502571106s:\nFailed to execute task {""platformVersion"":""2.17.3.0-b16"",""sleepAfterMasterRestartMillis"":180000,""sleepAfterTServerRestartMillis"":180000,""nodeExporterUser"":""prometheus"",""universeUUID"":""feddcbfa-8379-4a3e-8ba7-8c9af9788fe9"",""enableYbc"":false,""installYbc"":false,""ybcInstalled"":false,""encryptionAtRestConfig"":{""encryptionAtRestEnabled"":false,""opType"":""UNDEFINED"",""type"":""DATA_KEY""},""communicationPorts"":{""masterHttpPort"":7000,""masterRpcPort"":7100,""tserverHttpPort"":9000,""tserverRpcPort"":9100,""ybControllerHttpPort"":14000,""yb..., hit error:\n\nTask id 7901e4e9-f195-4897-95d6-8402d158cac0_PGSQL_TABLE_TYPE_colocated_db status: Failed with error COMMAND_FAILED.')` ``` YW 2023-03-22T11:40:46.430Z [ERROR] c8c6cf68-4a31-4885-97b5-7909b9427f25 from TaskExecutor in TaskPool-1 - Failed to execute task type RestoreBackup UUID 16226ab6-9715-44bf-a2c0-a4bb11eba31b details {""platformVersion"":""2.17.2.0-b216"",""sleepAfterMasterRestartMillis"":180000,""sleepAfterTServerRestartMillis"":180000,""nodeExporterUser"":""prometheus"",""universeUUID"":""fa1fd3d9-cbc9-4b52-bd61-2c0b14c66063"",""enableYbc"":false,""installYbc"":false,""ybcInstalled"":false,""encryptionAtRestConfig"":{""encryptionAtRestEnabled"":false,""opType"":""UNDEFINED"",""type"":""DATA_KEY""},""communicationPorts"":{""masterHttpPort"":7000,""masterRpcPort"":7100,""tserverHttpPort"":9000,""tserverRpcPort"":9100,""ybControllerHttpPort"":14000,""ybControllerrRpcPort"":18018,""redisServerHttpPort"":11000,""redisServerRpcPort"":6379,""yqlServerHttpPort"":12000,""yqlServerRpcPort"":9042,""ysqlServerHttpPort"":13000,""ysqlServerRpcPort"":5433,""nodeExporterPort"":9300},""extraDependencies"":{""installNodeExporter"":true},""firstTry"":true,""customerUUID"":""c9c4e6e2-a640-43f7-a522-29d2f2ededbd"",""actionType"":""RESTORE"",""category"":""YB_CONTROLLER"",""backupStorageInfoList"":[{""backupType"":""PGSQL_TABLE_TYPE"",""storageLocation"":""gs://itest-backup/univ-fa1fd3d9-cbc9-4b52-bd61-2c0b14c66063/ybc_backup-2023-03-22T11:37:39-1662548782/multi-table-colocated_db"",""keyspace"":""colocated_db"",""sse"":false,""oldOwner"":""postgres""}],""prefixUUID"":""1a61a87a-ced3-48ff-a2cf-d21f56e5910b"",""currentIdx"":0,""currentYbcTaskId"":""1a61a87a-ced3-48ff-a2cf-d21f56e5910b_PGSQL_TABLE_TYPE_colocated_db"",""enableVerboseLogs"":false,""storageConfigUUID"":""0386d3e5-52f5-4b4a-b4ca-005d622e349e"",""alterLoadBalancer"":true,""disableChecksum"":false,""useTablespaces"":false,""disableMultipart"":false,""parallelism"":8,""targetXClusterConfigs"":[],""sourceXClusterConfigs"":[]}, hit error. java.lang.RuntimeException: RestoreBackupYbc : completed 1 out of 1 tasks. failed. 
at com.yugabyte.yw.commissioner.TaskExecutor$RunnableTask.runSubTasks(TaskExecutor.java:1110) at com.yugabyte.yw.commissioner.tasks.RestoreBackup.run(RestoreBackup.java:65) at com.yugabyte.yw.commissioner.TaskExecutor$AbstractRunnableTask.run(TaskExecutor.java:796) at com.yugabyte.yw.commissioner.TaskExecutor$RunnableTask.run(TaskExecutor.java:1005) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at com.yugabyte.yw.common.logging.MDCAwareRunnable.run(MDCAwareRunnable.java:46) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Caused by: com.yugabyte.yw.common.PlatformServiceException: Task id 1a61a87a-ced3-48ff-a2cf-d21f56e5910b_PGSQL_TABLE_TYPE_colocated_db status: Failed with error COMMAND_FAILED at com.yugabyte.yw.commissioner.YbcTaskBase.handleTaskCompleteStage(YbcTaskBase.java:95) at com.yugabyte.yw.commissioner.YbcTaskBase.pollTaskProgress(YbcTaskBase.java:66) at com.yugabyte.yw.commissioner.tasks.subtasks.RestoreBackupYbc.run(RestoreBackupYbc.java:179) at com.yugabyte.yw.commissioner.TaskExecutor$AbstractRunnableTask.run(TaskExecutor.java:796) at com.yugabyte.yw.commissioner.TaskExecutor$RunnableSubTask.run(TaskExecutor.java:1180) ... 6 common frames omitted ``` Further details in Slack Thread accessible internal to YB - https://yugabyte.slack.com/archives/C8QDREM0R/p1679490341873789 cc: @renjith-yb @kripasreenivasan ### Warning: Please confirm that this issue does not contain any sensitive information - [X] I confirm this issue does not contain any sensitive information. [DB-5918]: https://yugabyte.atlassian.net/browse/DB-5918?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ",1, restore failed for dropped colocated database jira link description steps take backup of a colocated db drop the db restore the backup observed restore failed with below error test base py error testysqltabletsplittingwithrpc aws itest failed testysqltabletsplittingwithrpc aws runtimeerror wait for task failed task with errors in nfailed to execute task platformversion sleepaftermasterrestartmillis sleepaftertserverrestartmillis nodeexporteruser prometheus universeuuid feddcbfa enableybc false installybc false ybcinstalled false encryptionatrestconfig encryptionatrestenabled false optype undefined type data key communicationports masterhttpport masterrpcport tserverhttpport tserverrpcport ybcontrollerhttpport yb hit error n ntask id pgsql table type colocated db status failed with error command failed yw from taskexecutor in taskpool failed to execute task type restorebackup uuid details platformversion sleepaftermasterrestartmillis sleepaftertserverrestartmillis nodeexporteruser prometheus universeuuid enableybc false installybc false ybcinstalled false encryptionatrestconfig encryptionatrestenabled false optype undefined type data key communicationports masterhttpport masterrpcport tserverhttpport tserverrpcport ybcontrollerhttpport ybcontrollerrrpcport redisserverhttpport redisserverrpcport yqlserverhttpport yqlserverrpcport ysqlserverhttpport ysqlserverrpcport nodeexporterport extradependencies installnodeexporter true firsttry true customeruuid actiontype restore category yb controller backupstorageinfolist prefixuuid currentidx currentybctaskid pgsql table type colocated db enableverboselogs false storageconfiguuid 
alterloadbalancer true disablechecksum false usetablespaces false disablemultipart false parallelism targetxclusterconfigs sourcexclusterconfigs hit error java lang runtimeexception restorebackupybc completed out of tasks failed at com yugabyte yw commissioner taskexecutor runnabletask runsubtasks taskexecutor java at com yugabyte yw commissioner tasks restorebackup run restorebackup java at com yugabyte yw commissioner taskexecutor abstractrunnabletask run taskexecutor java at com yugabyte yw commissioner taskexecutor runnabletask run taskexecutor java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at com yugabyte yw common logging mdcawarerunnable run mdcawarerunnable java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by com yugabyte yw common platformserviceexception task id pgsql table type colocated db status failed with error command failed at com yugabyte yw commissioner ybctaskbase handletaskcompletestage ybctaskbase java at com yugabyte yw commissioner ybctaskbase polltaskprogress ybctaskbase java at com yugabyte yw commissioner tasks subtasks restorebackupybc run restorebackupybc java at com yugabyte yw commissioner taskexecutor abstractrunnabletask run taskexecutor java at com yugabyte yw commissioner taskexecutor runnablesubtask run taskexecutor java common frames omitted further details in slack thread accessible internal to yb cc renjith yb kripasreenivasan warning please confirm that this issue does not contain any sensitive information i confirm this issue does not contain any sensitive information ,1 4059,15304067613.0,IssuesEvent,2021-02-24 16:29:27,rstudio/rstudio,https://api.github.com/repos/rstudio/rstudio,opened,Add automation locator ids for the Environment tab,automation," I could use some ids under that environment tab for the values or other things that might show in this area. ![Screen Shot 2021-02-24 at 9 26 40 AM](https://user-images.githubusercontent.com/1482677/109032128-9b675a80-7682-11eb-94ab-3c3ef862a84d.png) ",1.0,"Add automation locator ids for the Environment tab - I could use some ids under that environment tab for the values or other things that might show in this area. ![Screen Shot 2021-02-24 at 9 26 40 AM](https://user-images.githubusercontent.com/1482677/109032128-9b675a80-7682-11eb-94ab-3c3ef862a84d.png) ",1,add automation locator ids for the environment tab thanks for taking the time to file a feature request please take the time to search for an existing feature request to avoid creating duplicate requests if you find an existing feature request please give it a thumbs up reaction as we ll use these reactions to help prioritize the implementation of these features in the future if the feature has not yet been filed then please describe the feature you d like to see become a part of rstudio see for a guide on how to write good feature requests i could use some ids under that environment tab for the values or other things that might show in this area ,1 8668,27172062820.0,IssuesEvent,2023-02-17 20:25:19,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,"[7.2] File Picker for JS: TypeError ""tbody.rows.count is not a function"" (action: download)",type:bug area:Picker automation:Closed,"The OneDrive file picker for JavaScript opens, authenticates and displays my files just fine. 
It also lets me select a file and closes the dialog when I click “Open”. But the opener window does not receive proper file info – instead, I see these messages in the browser console: ``` [OneDriveSDK] calling xhr failure callback, status: EXCEPTION TypeError message: ""tbody.rows.count is not a function"" stack: ""startAjaxRequest@https://example.com/picker:56:4699 XMLHttpRequest.prototype.open@example.com/picker:56:13291 [33]
Vulnerable Library - jackson-databind-2.9.2.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: zaproxy/buildSrc/build.gradle.kts

Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.2/1d8d8cb7cf26920ba57fb61fa56da88cc123b21f/jackson-databind-2.9.2.jar

Dependency Hierarchy: - github-api-1.95.jar (Root Library) - :x: **jackson-databind-2.9.2.jar** (Vulnerable Library)

Found in HEAD commit: 13d0feb89469fcd0caba70e9b151d20ad1849e95

Found in base branch: develop

Vulnerability Details

A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x before 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint, the service has the mysql-connector-java jar (8.0.14 or earlier) in the classpath, and an attacker can host a crafted MySQL server reachable by the victim, an attacker can send a crafted JSON message that allows them to read arbitrary local files on the server. This occurs because of missing com.mysql.cj.jdbc.admin.MiniAdmin validation.

Publish Date: 2019-05-17

URL: CVE-2019-12086

CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12086

Release Date: 2019-05-17

Fix Resolution: 2.9.9

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2019-12086 (High) detected in jackson-databind-2.9.2.jar - ## CVE-2019-12086 - High Severity Vulnerability
Vulnerable Library - jackson-databind-2.9.2.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: zaproxy/buildSrc/build.gradle.kts

Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.2/1d8d8cb7cf26920ba57fb61fa56da88cc123b21f/jackson-databind-2.9.2.jar

Dependency Hierarchy: - github-api-1.95.jar (Root Library) - :x: **jackson-databind-2.9.2.jar** (Vulnerable Library)

Found in HEAD commit: 13d0feb89469fcd0caba70e9b151d20ad1849e95

Found in base branch: develop

Vulnerability Details

A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x before 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint, the service has the mysql-connector-java jar (8.0.14 or earlier) in the classpath, and an attacker can host a crafted MySQL server reachable by the victim, an attacker can send a crafted JSON message that allows them to read arbitrary local files on the server. This occurs because of missing com.mysql.cj.jdbc.admin.MiniAdmin validation.

Publish Date: 2019-05-17

URL: CVE-2019-12086

CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12086

Release Date: 2019-05-17

Fix Resolution: 2.9.9

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file zaproxy buildsrc build gradle kts path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy github api jar root library x jackson databind jar vulnerable library found in head commit a href found in base branch develop vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind x before when default typing is enabled either globally or for a specific property for an externally exposed json endpoint the service has the mysql connector java jar or earlier in the classpath and an attacker can host a crafted mysql server reachable by the victim an attacker can send a crafted json message that allows them to read arbitrary local files on the server this occurs because of missing com mysql cj jdbc admin miniadmin validation publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0 323672,27745213580.0,IssuesEvent,2023-03-15 16:35:36,delph-in/srg,https://api.github.com/repos/delph-in/srg,opened,"""Casi todos los perros ladran""",mrs testsuite,"Item 901 in the MRS test suite: the only reading available seems to be that ""all things that are almost dogs"" bark. But ""Casi"" should be linked to ""todos"", not to perros.",1.0,"""Casi todos los perros ladran"" - Item 901 in the MRS test suite: the only reading available seems to be that ""all things that are almost dogs"" bark. But ""Casi"" should be linked to ""todos"", not to perros.",0, casi todos los perros ladran item in the mrs test suite the only reading available seems to be that all things that are almost dogs bark but casi should be linked to todos not to perros ,0 3604,14121075591.0,IssuesEvent,2020-11-09 00:43:53,surge-synthesizer/surge,https://api.github.com/repos/surge-synthesizer/surge,closed,Maybe a VST3 CC issue?,Bug Report Host Automation VST3,"As @JackyLigon and I discussed on slack: (edited for clarity) Jacky 5:25 PM @baconpaul Probably need to dig a little deeper, but was attempting to map an Assignable Controller to a synth parameter using CC17, and I can see it - curiously - changing FX1 Return slider, which I have in no way assigned to anything. Need to look more closely though. baconpaul 5:27 PM VST2 or VST3? Jacky 5:27 PM VST3 baconpaul 5:27 PM And latest nightly? Jacky 5:28 PM Surge-NIGHTLY-2020-02-12-48b6528-Setup baconpaul 5:29 PM OK there’s some hairy mapping which goes on in the VST3 to get CCs working properly. 
I had tested that pretty closely but the FX return is in the collection of params which is kinda in the ‘painful and too complicated’ range so if you find a clear example that would be very very very useful Jacky 5:30 PM As a step in trying to diagnose the MIDI mapping issue above, I've been using Reaper's JS MIDI Examiner to see CCs and values coming from the keyboard controller, just confirming CC17... baconpaul 5:30 PM CC17 and which channel would be useful too! Jacky 5:34 PM Happens actually that it was transmitting CC17 on CH 3 in this case. baconpaul 5:34 PM @Jacky OK I will poke at it. Lemme open an issue so I don’t forget",1.0,"Maybe a VST3 CC issue? - As @JackyLigon and I discussed on slack: (edited for clarity) Jacky 5:25 PM @baconpaul Probably need to dig a little deeper, but was attempting to map an Assignable Controller to a synth parameter using CC17, and I can see it - curiously - changing FX1 Return slider, which I have in no way assigned to anything. Need to look more closely though. baconpaul 5:27 PM VST2 or VST3? Jacky 5:27 PM VST3 baconpaul 5:27 PM And latest nightly? Jacky 5:28 PM Surge-NIGHTLY-2020-02-12-48b6528-Setup baconpaul 5:29 PM OK there’s some hairy mapping which goes on in the VST3 to get CCs working properly. I had tested that pretty closely but the FX return is in the collection of params which is kinda in the ‘painful and too complicated’ range so if you find a clear example that would be very very very useful Jacky 5:30 PM As a step in trying to diagnose the MIDI mapping issue above, I've been using Reaper's JS MIDI Examiner to see CCs and values coming from the keyboard controller, just confirming CC17... baconpaul 5:30 PM CC17 and which channel would be useful too! Jacky 5:34 PM Happens actually that it was transmitting CC17 on CH 3 in this case. baconpaul 5:34 PM @Jacky OK I will poke at it. Lemme open an issue so I don’t forget",1,maybe a cc issue as jackyligon and i discussed on slack edited for clarity jacky pm baconpaul probably need to dig a little deeper but was attempting to map an assignable controller to a synth parameter using and i can see it curiously changing return slider which i have in no way assigned to anything need to look more closely though baconpaul pm or jacky pm baconpaul pm and latest nightly jacky pm surge nightly setup baconpaul pm ok there’s some hairy mapping which goes on in the to get ccs working properly i had tested that pretty closely but the fx return is in the collection of params which is kinda in the ‘painful and too complicated’ range so if you find a clear example that would be very very very useful jacky pm as a step in trying to diagnose the midi mapping issue above i ve been using reaper s js midi examiner to see ccs and values coming from the keyboard controller just confirming baconpaul pm and which channel would be useful too jacky pm happens actually that it was transmitting on ch in this case baconpaul pm jacky ok i will poke at it lemme open an issue so i don’t forget,1 8625,2875502790.0,IssuesEvent,2015-06-09 08:36:29,nilmtk/nilmtk,https://api.github.com/repos/nilmtk/nilmtk,opened,Consider keeping cache in separate file,DataStore and format conversion design Statistics and correlations,"At the moment, we modify the main dataset HDF5 file to store cached statistics. This has the advantage that the cache is kept with the data. But it has several disadvantages: * Sometimes the HDF5 can become corrupted (e.g. 
in #328) * It slightly complicates our unit tests (because we need to replace modified test files with originals) So perhaps we should consider keeping the cache in a separate file (maybe even keeping it in the OS temporary directory)?",1.0,"Consider keeping cache in separate file - At the moment, we modify the main dataset HDF5 file to store cached statistics. This has the advantage that the cache is kept with the data. But it has several disadvantages: * Sometimes the HDF5 can become corrupted (e.g. in #328) * It slightly complicates our unit tests (because we need to replace modified test files with originals) So perhaps we should consider keeping the cache in a separate file (maybe even keeping it in the OS temporary directory)?",0,consider keeping cache in separate file at the moment we modify the main dataset file to store cached statistics this has the advantage that the cache is kept with the data but it has several disadvantages sometimes the can become corrupted e g in it slightly complicates our unit tests because we need to replace modified test files with originals so perhaps we should consider keeping the cache in a separate file maybe even keeping it in the os temporary directory ,0 1932,11135065933.0,IssuesEvent,2019-12-20 13:30:30,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,closed,a8n: Gracefully handle changeset deletion in code hosts,automation,"Some code hosts allow changesets to be deleted, others don't. For instance, GitHub pull requests can't be deleted (only closed or merged), but Bitbucket's PRs can. To be reliable in the face of upstream deletions of changesets, the `a8n.Syncer` must be changed to handle this scenario gracefully. My intuition is that if the changeset is deleted on the code host, we should delete it on Sourcegraph, as we do for repositories. But that may be weird if we were the ones creating the changeset upstream and someone deleted manually. @sourcegraph/automation, @sqs: Thoughts on the desirable user experience here?",1.0,"a8n: Gracefully handle changeset deletion in code hosts - Some code hosts allow changesets to be deleted, others don't. For instance, GitHub pull requests can't be deleted (only closed or merged), but Bitbucket's PRs can. To be reliable in the face of upstream deletions of changesets, the `a8n.Syncer` must be changed to handle this scenario gracefully. My intuition is that if the changeset is deleted on the code host, we should delete it on Sourcegraph, as we do for repositories. But that may be weird if we were the ones creating the changeset upstream and someone deleted manually. 
@sourcegraph/automation, @sqs: Thoughts on the desirable user experience here?",1, gracefully handle changeset deletion in code hosts some code hosts allow changesets to be deleted others don t for instance github pull requests can t be deleted only closed or merged but bitbucket s prs can to be reliable in the face of upstream deletions of changesets the syncer must be changed to handle this scenario gracefully my intuition is that if the changeset is deleted on the code host we should delete it on sourcegraph as we do for repositories but that may be weird if we were the ones creating the changeset upstream and someone deleted manually sourcegraph automation sqs thoughts on the desirable user experience here ,1 871,8488840804.0,IssuesEvent,2018-10-26 17:56:38,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Outbound connectivity--to what?,assigned-to-author automation/svc doc-enhancement triaged,"States 'Outbound connectivity from the VM is required to return the results of the script'. Connectivity to what? Need IP/protocol/port info. Also, ideally there should be an NSG Service Tag to make it easy to configure the correct connectivity. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: b3ff94e6-3c1f-b9e1-1a46-b75fe2ffca1b * Version Independent ID: 8f3ca735-ddd4-f4a9-4ee0-189604018784 * Content: [Run PowerShell scripts in an Windows VM in Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/run-command#powershell) * Content Source: [articles/virtual-machines/windows/run-command.md](https://github.com/Microsoft/azure-docs/blob/master/articles/virtual-machines/windows/run-command.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1.0,"Outbound connectivity--to what? - States 'Outbound connectivity from the VM is required to return the results of the script'. Connectivity to what? Need IP/protocol/port info. Also, ideally there should be an NSG Service Tag to make it easy to configure the correct connectivity. --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: b3ff94e6-3c1f-b9e1-1a46-b75fe2ffca1b * Version Independent ID: 8f3ca735-ddd4-f4a9-4ee0-189604018784 * Content: [Run PowerShell scripts in an Windows VM in Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/run-command#powershell) * Content Source: [articles/virtual-machines/windows/run-command.md](https://github.com/Microsoft/azure-docs/blob/master/articles/virtual-machines/windows/run-command.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1,outbound connectivity to what states outbound connectivity from the vm is required to return the results of the script connectivity to what need ip protocol port info also ideally there should be an nsg service tag to make it easy to configure the correct connectivity document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login georgewallace microsoft alias gwallace ,1 23653,6461440723.0,IssuesEvent,2017-08-16 08:09:02,q2g/q2g-ext-selector,https://api.github.com/repos/q2g/q2g-ext-selector,closed,css naming issue extensionHeader.less / .css,code quality,"please fix the css name for: q2g-ext-selector/src/lib/daVinci.js/src/directives/extensionHeader.less ",1.0,"css naming issue extensionHeader.less / .css - please fix the css name for: q2g-ext-selector/src/lib/daVinci.js/src/directives/extensionHeader.less ",0,css naming issue extensionheader less css please fix the css name for ext selector src lib davinci js src directives extensionheader less ,0 135769,19663102182.0,IssuesEvent,2022-01-10 19:10:28,department-of-veterans-affairs/va.gov-team,https://api.github.com/repos/department-of-veterans-affairs/va.gov-team,closed,Check in with Oscar (SAHG),design vsa vsa-ebenefits SAHG,"## Background We need to talk to Oscar ## Considerations ## Tasks - [x] Tues - Wed: email reach out - [ ] ## Acceptance Criteria - [x] Oscar has been reached out to",1.0,"Check in with Oscar (SAHG) - ## Background We need to talk to Oscar ## Considerations ## Tasks - [x] Tues - Wed: email reach out - [ ] ## Acceptance Criteria - [x] Oscar has been reached out to",0,check in with oscar sahg background we need to talk to oscar considerations tasks tues wed email reach out acceptance criteria oscar has been reached out to,0 580,7314797087.0,IssuesEvent,2018-03-01 08:50:04,snowplow/iglu-central,https://api.github.com/repos/snowplow/iglu-central,opened,Add authorization SQL to DDLs generated by igluctl,automation,"Data modeling jobs are often run by a special datamodeling user in Redshift, which has limited permissions on `atomic`. When clients update a table, say because they are introducing a new version of the schema, which is already used in a datamodeling job, the datamodeling user loses its permissions, which causes the job to fail. It would be good if schema DDLs generated with igluctl have some additional SQL at the end, along the lines of: ``` GRANT SELECT ON {{atomic}}.{{table}} TO {{datamodeling}} ```",1.0,"Add authorization SQL to DDLs generated by igluctl - Data modeling jobs are often run by a special datamodeling user in Redshift, which has limited permissions on `atomic`. When clients update a table, say because they are introducing a new version of the schema, which is already used in a datamodeling job, the datamodeling user loses its permissions, which causes the job to fail. 
It would be good if schema DDLs generated with igluctl have some additional SQL at the end, along the lines of: ``` GRANT SELECT ON {{atomic}}.{{table}} TO {{datamodeling}} ```",1,add authorization sql to ddls generated by igluctl data modeling jobs are often run by a special datamodeling user in redshift which has limited permissions on atomic when clients update a table say because they are introducing a new version of the schema which is already used in a datamodeling job the datamodeling user loses its permissions which causes the job to fail it would be good if schema ddls generated with igluctl have some additional sql at the end along the lines of grant select on atomic table to datamodeling ,1 2525,12221737832.0,IssuesEvent,2020-05-02 09:37:01,krsiakdaniel/movies,https://api.github.com/repos/krsiakdaniel/movies,closed,GitHub - Apps + Actions,automation,"## actions https://github.com/krsiakdaniel/movies/actions/new - [x] greetings: https://github.com/krsiakdaniel/movies/pull/86 - [x] `stale.yml` config: https://github.com/krsiakdaniel/movies/commit/768aa0c8520ab1161d29f8fb80680a40a209203e ## apps https://github.com/marketplace?type=apps Installed: - [x] [StaleBot](https://github.com/marketplace/stale) - [x] imgBot: https://github.com/krsiakdaniel/movies/pull/80 - [x] Dependabot : https://github.com/krsiakdaniel/movies/pull/81",1.0,"GitHub - Apps + Actions - ## actions https://github.com/krsiakdaniel/movies/actions/new - [x] greetings: https://github.com/krsiakdaniel/movies/pull/86 - [x] `stale.yml` config: https://github.com/krsiakdaniel/movies/commit/768aa0c8520ab1161d29f8fb80680a40a209203e ## apps https://github.com/marketplace?type=apps Installed: - [x] [StaleBot](https://github.com/marketplace/stale) - [x] imgBot: https://github.com/krsiakdaniel/movies/pull/80 - [x] Dependabot : https://github.com/krsiakdaniel/movies/pull/81",1,github apps actions actions greetings stale yml config apps installed imgbot dependabot ,1 289032,8854296746.0,IssuesEvent,2019-01-09 00:41:33,visit-dav/issues-test,https://api.github.com/repos/visit-dav/issues-test,closed,Remove special handling for the gremlin system at LLNL since it is now a chaos 5 OS.,bug crash likelihood medium priority reviewed severity high wrong results,"Gremlin used to have a special build because it was running chaos 4, whereas all the other LLNL clusters were running chaos 5. A special build is also no longer needed for it, so its config site file should be removed and it should be removed from visitbuildclosed and visitinstallclosed, since its executables will be built on the inca system. -----------------------REDMINE MIGRATION----------------------- This ticket was migrated from Redmine. As such, not all information was able to be captured in the transition. Below is a complete record of the original redmine ticket. Ticket number: 1390 Status: Resolved Project: VisIt Tracker: Bug Priority: Urgent Subject: Remove special handling for the gremlin system at LLNL since it is now a chaos 5 OS. Assigned to: Eric Brugger Category: - Target version: 2.6.2 Author: Eric Brugger Start: 03/20/2013 Due date: % Done: 100% Estimated time: 1.00 hour Created: 03/20/2013 02:38 pm Updated: 03/20/2013 03:24 pm Likelihood: 3 - Occasional Severity: 4 - Crash / Wrong Results Found in version: 2.6.1 Impact: Expected Use: OS: All Support Group: Any Description: Gremlin used to have a special build because it was running chaos 4, whereas all the other LLNL clusters were running chaos 5. 
A special build is also no longer needed for it, so its config site file should be removed and it should be removed from visitbuildclosed and visitinstallclosed, since its executables will be built on the inca system. -----------------------REDMINE MIGRATION----------------------- This ticket was migrated from Redmine. As such, not all information was able to be captured in the transition. Below is a complete record of the original redmine ticket. Ticket number: 1390 Status: Resolved Project: VisIt Tracker: Bug Priority: Urgent Subject: Remove special handling for the gremlin system at LLNL since it is now a chaos 5 OS. Assigned to: Eric Brugger Category: - Target version: 2.6.2 Author: Eric Brugger Start: 03/20/2013 Due date: % Done: 100% Estimated time: 1.00 hour Created: 03/20/2013 02:38 pm Updated: 03/20/2013 03:24 pm Likelihood: 3 - Occasional Severity: 4 - Crash / Wrong Results Found in version: 2.6.1 Impact: Expected Use: OS: All Support Group: Any Description: Gremlin used to have a special build because it was running chaos 4, whereas all the other LLNL clusters were running chaos 5. A special build is also no longer needed for it, so its config site file should be removed and it should be removed from visitbuildclosed and visitinstallclosed, since its executables will be built on the inca system. Comments: I committed revisions 20579 and 20581 to the 2.6 RC and trunk with the following change: 1) I removed the config site file for gremlin and removed it from both visitbuildclosed and visitinstallclosed since gremlin's operating system no longer differs from other LLNL clusters. I also removed the custom coding for gremlin from the custom launcher. 
This resolves #1390.D configsite/gremlin3.cmakeM resources/hosts/llnl_closed/customlauncherM svn_bin/visitbuildclosedM svn_bin/visitinstall-closed ",0,remove special handling for the gremlin system at llnl since it is now a chaos os gremlin used to have a special build because it was running chaos whereas all the other llnl clusters were running chaos a special build is also no longer needed for it so its config site file should be removed and it should be removed from visitbuildclosed and visitinstallclosed since its executables will be built on the inca system redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority urgent subject remove special handling for the gremlin system at llnl since it is now a chaos os assigned to eric brugger category target version author eric brugger start due date done estimated time hour created pm updated pm likelihood occasional severity crash wrong results found in version impact expected use os all support group any description gremlin used to have a special build because it was running chaos whereas all the other llnl clusters were running chaos a special build is also no longer needed for it so its config site file should be removed and it should be removed from visitbuildclosed and visitinstallclosed since its executables will be built on the inca system comments i committed revisions and to the rc and trunk with thefollowing change i removed the config site file for gremlin and removed it from both visitbuildclosed and visitinstallclosed since gremlin s operating system no longer differs from other llnl clusters i also removed the custom coding for gremlin from the custom launcher this resolves d configsite cmakem resources hosts llnl closed customlauncherm svn bin visitbuildclosedm svn bin visitinstall closed ,0 8836,27172312544.0,IssuesEvent,2023-02-17 20:39:51,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Detecting file rename kind of scenario using delta query,Needs: Attention :wave: automation:Closed," #### Category - [x] Question - [ ] Documentation issue - [ ] Bug What is the right way to detect whether contents of a file have been modified or file has just been renamed? In the delta query response, both are returned with etag and ctag modified. [ ]: http://aka.ms/onedrive-api-issues [x]: http://aka.ms/onedrive-api-issues",1.0,"Detecting file rename kind of scenario using delta query - #### Category - [x] Question - [ ] Documentation issue - [ ] Bug What is the right way to detect whether contents of a file have been modified or file has just been renamed? In the delta query response, both are returned with etag and ctag modified. [ ]: http://aka.ms/onedrive-api-issues [x]: http://aka.ms/onedrive-api-issues",1,detecting file rename kind of scenario using delta query category question documentation issue bug what is the right way to detect whether contents of a file have been modified or file has just been renamed in the delta query response both are returned with etag and ctag modified ,1 8205,26453349575.0,IssuesEvent,2023-01-16 12:52:02,rancher/elemental,https://api.github.com/repos/rancher/elemental,closed,Research - Use autoscaling for self-hosted runners in public cloud,area/automation kind/QA,"Right now, we have self hosted runners in the public cloud, the machines is started and stopped on demand. 
It is a good first step but when we add more tests, we will need something better. We can think about autoscaling the runners, it is possible with the Github API and there are some resources on how to achieve it. For instance: https://www.dev-eth0.de/2021/03/09/autoscaling-gitlab-runner-instances-on-google-cloud-platform/ https://medium.com/philips-technology-blog/scaling-github-action-runners-a4a45f7c67a6 https://github.blog/changelog/2021-09-20-github-actions-ephemeral-self-hosted-runners-new-webhooks-for-auto-scaling/ But it's low priority at the moment.",1.0,"Research - Use autoscaling for self-hosted runners in public cloud - Right now, we have self hosted runners in the public cloud, the machines are started and stopped on demand. It is a good first step but when we add more tests, we will need something better. We can think about autoscaling the runners, it is possible with the Github API and there are some resources on how to achieve it. For instance: https://www.dev-eth0.de/2021/03/09/autoscaling-gitlab-runner-instances-on-google-cloud-platform/ https://medium.com/philips-technology-blog/scaling-github-action-runners-a4a45f7c67a6 https://github.blog/changelog/2021-09-20-github-actions-ephemeral-self-hosted-runners-new-webhooks-for-auto-scaling/ But it's low priority at the moment.",1,research use autoscaling for self hosted runners in public cloud right now we have self hosted runners in the public cloud the machines are started and stopped on demand it is a good first step but when we add more tests we will need something better we can think about autoscaling the runners it is possible with the github api and there are some resources on how to achieve it for instance but it s low priority at the moment ,1 9340,28018599256.0,IssuesEvent,2023-03-28 02:12:43,nephio-project/nephio,https://api.github.com/repos/nephio-project/nephio,opened,Implement IPAM controller ,area/package-specialization sig/automation,Implement and package IPAM controller based on the workshop prototype. ,1.0,Implement IPAM controller - Implement and package IPAM controller based on the workshop prototype. 
,1,implement ipam controller implement and package ipam controller based on the workshop prototype ,1 7970,25950388333.0,IssuesEvent,2022-12-17 14:14:15,awslabs/aws-lambda-powertools-typescript,https://api.github.com/repos/awslabs/aws-lambda-powertools-typescript,closed,Maintenance: remove httpbin.org request from integration tests,area/automation type/internal status/confirmed,"### Summary To test Tracer's capture http requests feature the integration tests have a few requests to a 3rd party service called `httpbin.org`. At the moment the service appears to be unreachable (504). While I expect the service to come back online, it's a good moment to remove it and use another one that we have more control over. ### Why is this needed? To remove the dependency from the 3rd party service and continue to be able to run integration tests successfully. ### Which area does this relate to? Tests ### Solution Change the host to which the request is made and add a timeout so that if the host is unreachable the tests will fail fast. ### Acknowledgment - [X] This request meets [Lambda Powertools Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets) - [ ] Should this be considered in other Lambda Powertools languages? i.e. [Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/)",1.0,"Maintenance: remove httpbin.org request from integration tests - ### Summary To test Tracer's capture http requests feature the integration tests have a few requests to a 3rd party service called `httpbin.org`. At the moment the service appears to be unreachable (504). While I expect the service to come back online, it's a good moment to remove it and use another one that we have more control over. ### Why is this needed? To remove the dependency from the 3rd party service and continue to be able to run integration tests successfully. ### Which area does this relate to? Tests ### Solution Change the host to which the request is made and add a timeout so that if the host is unreachable the tests will fail fast. ### Acknowledgment - [X] This request meets [Lambda Powertools Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets) - [ ] Should this be considered in other Lambda Powertools languages? i.e. 
[Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/)",1,maintenance remove httpbin org request from integration tests summary to test tracer s capture http requests feature the integration tests have a few requests to a party service called httpbin org at the moment the service appears to be unreachable while i expect the service to come back online it s a good moment to remove it and use another one that we have more control over why is this needed to remove the dependency from the party service and continue to be able to run integration tests successfully which area does this relate to tests solution change the host to which the request is made and add a timeout so that if the host is unreachable the tests will fail fast acknowledgment this request meets should this be considered in other lambda powertools languages i e ,1 45055,13102425000.0,IssuesEvent,2020-08-04 06:40:54,kubesphere/kubesphere,https://api.github.com/repos/kubesphere/kubesphere,closed,Feature: add security section when creating workloads,area/security frozen,"Besides advance section, KubeSphere should add an isolated section named ‘security’, and place related options to clients, for example, ‘run as non-root’, ‘disabling mounting folders like /etc or /root’ https://mp.weixin.qq.com/s/jtDlMe5SprpZfIfXryAjzg Actually all these options could be under advance section, but use specified ‘security’ section, could highlight it and make client understand its importance. ",True,"Feature: add security section when creating workloads - Besides advance section, KubeSphere should add an isolated section named ‘security’, and place related options to clients, for example, ‘run as non-root’, ‘disabling mounting folders like /etc or /root’ https://mp.weixin.qq.com/s/jtDlMe5SprpZfIfXryAjzg Actually all these options could be under advance section, but use specified ‘security’ section, could highlight it and make client understand its importance. ",0,feature add security section when creating workloads besides advance section kubesphere should add an isolated section named ‘security’ and place related options to clients for example ‘run as non root’ ‘disabling mounting folders like etc or root’ actually all these options could be under advance section but use specified ‘security’ section could highlight it and make client understand its importance ,0 3077,13055138982.0,IssuesEvent,2020-07-30 00:44:32,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,is it Cluster aware?,Pri2 automation/svc cxp product-question triaged update-management/subsvc,"Does this feature has a way to know if my VMs are part of a Windows Cluster and somehow coordinate the update (kind of Cluster Aware Updating)? I am thinking of scenarios where I have SQL FCIs or Availability Groups on Azure VMs, or scenarios where I have other type of Windows Clusters (HyperV or File Server) in Non Azure VMs --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: db47901d-1664-058d-9407-47e932dc9661 * Version Independent ID: e90ec9ee-e7da-4f19-248f-4c825aaa8b9f * Content: [Azure Automation Update Management overview](https://docs.microsoft.com/en-us/azure/automation/automation-update-management) * Content Source: [articles/automation/automation-update-management.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-update-management.md) * Service: **automation** * Sub-service: **update-management** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**",1.0,"is it Cluster aware? - Does this feature has a way to know if my VMs are part of a Windows Cluster and somehow coordinate the update (kind of Cluster Aware Updating)? I am thinking of scenarios where I have SQL FCIs or Availability Groups on Azure VMs, or scenarios where I have other type of Windows Clusters (HyperV or File Server) in Non Azure VMs --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: db47901d-1664-058d-9407-47e932dc9661 * Version Independent ID: e90ec9ee-e7da-4f19-248f-4c825aaa8b9f * Content: [Azure Automation Update Management overview](https://docs.microsoft.com/en-us/azure/automation/automation-update-management) * Content Source: [articles/automation/automation-update-management.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-update-management.md) * Service: **automation** * Sub-service: **update-management** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**",1,is it cluster aware does this feature has a way to know if my vms are part of a windows cluster and somehow coordinate the update kind of cluster aware updating i am thinking of scenarios where i have sql fcis or availability groups on azure vms or scenarios where i have other type of windows clusters hyperv or file server in non azure vms document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service update management github login mgoedtel microsoft alias magoedte ,1 342692,30634563166.0,IssuesEvent,2023-07-24 16:44:39,hyperledger/cacti,https://api.github.com/repos/hyperledger/cacti,opened,chore: fix package.json#name properties to have scope and project name,bug good-first-issue Developer_Experience Significant_Change Hacktoberfest good-first-issue-400-expert Tests P1,"There are package.json manifest files in the project that have some generic name with a high probability of future collisions. For example in `examples/cactus-example-electricity-trade/tools/periodicExecuter/package.json` the `name` property is set to `""periodicExecuter""` instead of `@hyperledger/cactus-example-electricity-trade-periodic-executer` ",1.0,"chore: fix package.json#name properties to have scope and project name - There are package.json manifest files in the project that have some generic name with a high probability of future collisions. 
For example in `examples/cactus-example-electricity-trade/tools/periodicExecuter/package.json` the `name` property is set to `""periodicExecuter""` instead of `@hyperledger/cactus-example-electricity-trade-periodic-executer` ",0,chore fix package json name properties to have scope and project name there are package json manifest files in the project that have some generic name with a high probability of future collisions for example in examples cactus example electricity trade tools periodicexecuter package json the name property is set to periodicexecuter instead of hyperledger cactus example electricity trade periodic executer ,0 69965,9366551101.0,IssuesEvent,2019-04-03 01:19:56,edgedb/edgedb,https://api.github.com/repos/edgedb/edgedb,closed,improve eschema documentation,documentation,"Missing documentation and (mostly) syntax tests for: - [x] attributes - [x] final concepts/atoms - [x] abstract and delegated constraints - [x] document `ON` for constraints and indexes (in particular that it requires parens)",1.0,"improve eschema documentation - Missing documentation and (mostly) syntax tests for: - [x] attributes - [x] final concepts/atoms - [x] abstract and delegated constraints - [x] document `ON` for constraints and indexes (in particular that it requires parens)",0,improve eschema documentation missing documentation and mostly syntax tests for attributes final concepts atoms abstract and delegated constraints document on for constraints and indexes in particular that it requires parens ,0 7081,24204950798.0,IssuesEvent,2022-09-25 04:31:23,kanidm/kanidm,https://api.github.com/repos/kanidm/kanidm,opened,Debian package build failure in actions - multiple rust versions appearing,bug automation,"### I did this Pushed to master ### I expected the following Debian packages to build ### What happened The github actions automation failed - somehow after installing rustc 1.64 it ran make with `1.63` - https://github.com/kanidm/kanidm/actions/runs/3120455346/jobs/5061045245#step:6:1041 ``` info: default toolchain set to 'stable-x86_64-unknown-linux-gnu' stable-x86_64-unknown-linux-gnu installed - rustc 1.64.0 (a55dd71d5 2022-09-19) Rust is installed now. Great! To get started you may need to restart your current shell. This would reload your PATH environment variable to include Cargo's bin directory ($HOME/.cargo/bin). ``` Nek minnit ``` make[3]: Entering directory '/home/runner/build/kanidm' cargo build -p daemon --bin kanidmd --release error: failed to load manifest for workspace member `/home/runner/build/kanidm/kanidm_client` Caused by: failed to parse manifest at `/home/runner/build/kanidm/kanidm_client/Cargo.toml` Caused by: feature `workspace-inheritance` is required The package requires the Cargo feature called `workspace-inheritance`, but that feature is not stabilized in this version of Cargo (1.63.0 (fd9c4297c 2022-07-01)). ``` ",1.0,"Debian package build failure in actions - multiple rust versions appearing - ### I did this Pushed to master ### I expected the following Debian packages to build ### What happened The github actions automation failed - somehow after installing rustc 1.64 it ran make with `1.63` - https://github.com/kanidm/kanidm/actions/runs/3120455346/jobs/5061045245#step:6:1041 ``` info: default toolchain set to 'stable-x86_64-unknown-linux-gnu' stable-x86_64-unknown-linux-gnu installed - rustc 1.64.0 (a55dd71d5 2022-09-19) Rust is installed now. Great! To get started you may need to restart your current shell. 
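The kanidm build log above (it continues below) boils down to a Cargo version mismatch: `workspace-inheritance` syntax of the following shape compiles only with Cargo 1.64+, so a runner that resolves an older `cargo` after installing rustc 1.64 fails exactly as shown. A minimal sketch of the feature, with illustrative field values:

```toml
# workspace root Cargo.toml
[workspace]
members = ["kanidm_client"]

[workspace.package]
version = "1.1.0"   # illustrative
edition = "2021"

# member kanidm_client/Cargo.toml
[package]
name = "kanidm_client"
version.workspace = true   # inherited from the workspace; needs Cargo >= 1.64
edition.workspace = true
```

Pinning the toolchain with a `rust-toolchain.toml` containing `[toolchain] channel = "1.64.0"` is one way to keep rustup and make from drifting apart in CI.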
This would reload your PATH environment variable to include Cargo's bin directory ($HOME/.cargo/bin). ``` Nek minnit ``` make[3]: Entering directory '/home/runner/build/kanidm' cargo build -p daemon --bin kanidmd --release error: failed to load manifest for workspace member `/home/runner/build/kanidm/kanidm_client` Caused by: failed to parse manifest at `/home/runner/build/kanidm/kanidm_client/Cargo.toml` Caused by: feature `workspace-inheritance` is required The package requires the Cargo feature called `workspace-inheritance`, but that feature is not stabilized in this version of Cargo (1.63.0 (fd9c4297c 2022-07-01)). ``` ",1,debian package build failure in actions multiple rust versions appearing i did this pushed to master i expected the following debian packages to build what happened the github actions automation failed somehow after installing rustc it ran make with info default toolchain set to stable unknown linux gnu stable unknown linux gnu installed rustc rust is installed now great to get started you may need to restart your current shell this would reload your path environment variable to include cargo s bin directory home cargo bin nek minnit make entering directory home runner build kanidm cargo build p daemon bin kanidmd release error failed to load manifest for workspace member home runner build kanidm kanidm client caused by failed to parse manifest at home runner build kanidm kanidm client cargo toml caused by feature workspace inheritance is required the package requires the cargo feature called workspace inheritance but that feature is not stabilized in this version of cargo ,1 8552,27125453371.0,IssuesEvent,2023-02-16 04:39:21,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,opened,[YSQL][PITR][Tablegroups]: PITR timing out when trying to restore ~100 table creation inside tablegroup,area/ysql priority/high QA status/awaiting-triage pitr qa_automation blocks_automation,"### Description PITR of table creation (~100 tables) in tablegroup is failing with following error: `ERR=Error running restore_snapshot_schedule: Timed out (yb/rpc/outbound_call.cc:488): Failed to restore snapshot from schedule: c43048b7-85cd-4ba2-af73-0e3a6c9ce363: RestoreSnapshotSchedule RPC (request call id 4) to 10.9.193.37:7100 timed out after 60.000s` Observed FATALs when tried to repro manually. Steps followed: ``` 1. create universe with packed rows enabled 2. create database 3. create snapshot schedule on created database 4. create tablegroup 5. create 100 tables inside tablegroup 6. restore to time 2 ``` FATAL details: ``` F20230215 18:40:59 ../../src/yb/master/state_with_tablets.cc:49] Invalid value of enum SysSnapshotEntryPB::State (full enum type: yb::master::SysSnapshotEntryPB_State, expression: state): RESTORED (8). 
@ 0x56282cd59e27 google::LogMessage::SendToLog() @ 0x56282cd5ad6d google::LogMessage::Flush() @ 0x56282cd5b3e9 google::LogMessageFatal::~LogMessageFatal() @ 0x56282e27aea1 yb::FatalInvalidEnumValueInternal() @ 0x56282d7f1224 yb::master::(anonymous namespace)::InitialStateToTerminalState() @ 0x56282d7f0e81 yb::master::StateWithTablets::AggregatedState() @ 0x56282d7e9c83 yb::master::RestorationState::ToEntryPB() @ 0x56282d7e9c36 yb::master::RestorationState::ToPB() @ 0x56282d5349ba yb::master::enterprise::CatalogManager::ListSnapshotRestorations() @ 0x56282d5b5751 yb::master::MasterBackupServiceImpl::ListSnapshotRestorations() @ 0x56282d8d626a std::__1::__function::__func<>::operator()() @ 0x56282d8d7cef yb::master::MasterBackupIf::Handle() @ 0x56282dc7d8ce yb::rpc::ServicePoolImpl::Handle() @ 0x56282dbbec8f yb::rpc::InboundCall::InboundCallTask::Run() @ 0x56282dc8c243 yb::rpc::(anonymous namespace)::Worker::Execute() @ 0x56282e305c7f yb::Thread::SuperviseThread() @ 0x7fc19e5d4694 start_thread @ 0x7fc19ead641d __clone ``` The FATALs issue wasn't reproable and might be intermittent but I was able to reproduce timeout issue, 8/10 times. Version: 2.17.2.0-b109 Logs from automation: [2.17.2.0_testpitrwithtablegroups-aws-rf3_20230215_143100.zip](https://github.com/yugabyte/yugabyte-db/files/10752201/2.17.2.0_testpitrwithtablegroups-aws-rf3_20230215_143100.zip) I can share the sql file used to create tables, it used complex datatypes.",2.0,"[YSQL][PITR][Tablegroups]: PITR timing out when trying to restore ~100 table creation inside tablegroup - ### Description PITR of table creation (~100 tables) in tablegroup is failing with following error: `ERR=Error running restore_snapshot_schedule: Timed out (yb/rpc/outbound_call.cc:488): Failed to restore snapshot from schedule: c43048b7-85cd-4ba2-af73-0e3a6c9ce363: RestoreSnapshotSchedule RPC (request call id 4) to 10.9.193.37:7100 timed out after 60.000s` Observed FATALs when tried to repro manually. Steps followed: ``` 1. create universe with packed rows enabled 2. create database 3. create snapshot schedule on created database 4. create tablegroup 5. create 100 tables inside tablegroup 6. restore to time 2 ``` FATAL details: ``` F20230215 18:40:59 ../../src/yb/master/state_with_tablets.cc:49] Invalid value of enum SysSnapshotEntryPB::State (full enum type: yb::master::SysSnapshotEntryPB_State, expression: state): RESTORED (8). 
@ 0x56282cd59e27 google::LogMessage::SendToLog() @ 0x56282cd5ad6d google::LogMessage::Flush() @ 0x56282cd5b3e9 google::LogMessageFatal::~LogMessageFatal() @ 0x56282e27aea1 yb::FatalInvalidEnumValueInternal() @ 0x56282d7f1224 yb::master::(anonymous namespace)::InitialStateToTerminalState() @ 0x56282d7f0e81 yb::master::StateWithTablets::AggregatedState() @ 0x56282d7e9c83 yb::master::RestorationState::ToEntryPB() @ 0x56282d7e9c36 yb::master::RestorationState::ToPB() @ 0x56282d5349ba yb::master::enterprise::CatalogManager::ListSnapshotRestorations() @ 0x56282d5b5751 yb::master::MasterBackupServiceImpl::ListSnapshotRestorations() @ 0x56282d8d626a std::__1::__function::__func<>::operator()() @ 0x56282d8d7cef yb::master::MasterBackupIf::Handle() @ 0x56282dc7d8ce yb::rpc::ServicePoolImpl::Handle() @ 0x56282dbbec8f yb::rpc::InboundCall::InboundCallTask::Run() @ 0x56282dc8c243 yb::rpc::(anonymous namespace)::Worker::Execute() @ 0x56282e305c7f yb::Thread::SuperviseThread() @ 0x7fc19e5d4694 start_thread @ 0x7fc19ead641d __clone ``` The FATALs issue wasn't reproable and might be intermittent but I was able to reproduce timeout issue, 8/10 times. Version: 2.17.2.0-b109 Logs from automation: [2.17.2.0_testpitrwithtablegroups-aws-rf3_20230215_143100.zip](https://github.com/yugabyte/yugabyte-db/files/10752201/2.17.2.0_testpitrwithtablegroups-aws-rf3_20230215_143100.zip) I can share the sql file used to create tables, it used complex datatypes.",1, pitr timing out when trying to restore table creation inside tablegroup description pitr of table creation tables in tablegroup is failing with following error err error running restore snapshot schedule timed out yb rpc outbound call cc failed to restore snapshot from schedule restoresnapshotschedule rpc request call id to timed out after observed fatals when tried to repro manually steps followed create universe with packed rows enabled create database create snapshot schedule on created database create tablegroup create tables inside tablegroup restore to time fatal details src yb master state with tablets cc invalid value of enum syssnapshotentrypb state full enum type yb master syssnapshotentrypb state expression state restored google logmessage sendtolog google logmessage flush google logmessagefatal logmessagefatal yb fatalinvalidenumvalueinternal yb master anonymous namespace initialstatetoterminalstate yb master statewithtablets aggregatedstate yb master restorationstate toentrypb yb master restorationstate topb yb master enterprise catalogmanager listsnapshotrestorations yb master masterbackupserviceimpl listsnapshotrestorations std function func operator yb master masterbackupif handle yb rpc servicepoolimpl handle yb rpc inboundcall inboundcalltask run yb rpc anonymous namespace worker execute yb thread supervisethread start thread clone the fatals issue wasn t reproable and might be intermittent but i was able to reproduce timeout issue times version logs from automation i can share the sql file used to create tables it used complex datatypes ,1 296623,9124447293.0,IssuesEvent,2019-02-24 03:18:40,satvikpendem/Artemis,https://api.github.com/repos/satvikpendem/Artemis,opened,Create landing page animations,Platform: Landing Priority: Medium Type: Enhancement,"Currently the landing page has a video that shows the application in use, but it is outdated to the current visual style of the app. Moreover, video may be slow on certain connections. 
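For reference, the tablegroup repro steps in the YugabyteDB record above correspond roughly to the following YSQL statements and yb-admin calls (names, addresses, and intervals are illustrative; snapshot schedules are managed through yb-admin rather than SQL):

```sql
-- inside ysqlsh, on the database covered by the snapshot schedule
CREATE TABLEGROUP tg1;
CREATE TABLE t1 (k INT PRIMARY KEY, v TEXT) TABLEGROUP tg1;
-- ...repeat for ~100 tables...
```

```
yb-admin --master_addresses <master:7100> create_snapshot_schedule 60 600 ysql.testdb
yb-admin --master_addresses <master:7100> restore_snapshot_schedule <schedule-id> <restore-time>
```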
Create and recreate such app elements in pure SVG and CSS (with minimal JS, and only for programmatic features such as dark mode toggling #17 or user sign up / login) in order to make a fast website like [Stripe](https://stripe.com). Create all future elements via SVG and CSS as well.",1.0,"Create landing page animations - Currently the landing page has a video that shows the application in use, but it is outdated to the current visual style of the app. Moreover, video may be slow on certain connections. Create and recreate such app elements in pure SVG and CSS (with minimal JS, and only for programmatic features such as dark mode toggling #17 or user sign up / login) in order to make a fast website like [Stripe](https://stripe.com). Create all future elements via SVG and CSS as well.",0,create landing page animations currently the landing page has a video that shows the application in use but it is outdated to the current visual style of the app moreover video may be slow on certain connections create and recreate such app elements in pure svg and css with minimal js and only for programmatic features such as dark mode toggling or user sign up login in order to make a fast website like create all future elements via svg and css as well ,0 3416,13734444641.0,IssuesEvent,2020-10-05 08:42:32,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,Text not entered in to Stripe iFrame with Safari,BROWSER: Safari FREQUENCY: level 2 SYSTEM: automations TYPE: bug," ### What is your Test Scenario? I'm trying to run our test code on multiple browsers, specifically entering in CreditCard details on our payment pages. ### What is the Current behavior? It looks like text isn't being properly entered in to the Stripe payment iframes, on Safari. The same steps are successful in Chrome, Firefox and Edge. The main concern here is that TestCafe continues on, believing it has actually entered text in to these iframes. ### What is the Expected behavior? Text should be entered in to Stripe iFrames, regardless of browser. OR, at the least, if TestCafe is unable to locate/interact with the iFrame... it should fail at that point instead of continuing on. ### What is your web application and your TestCafe test code?
Your website URL (or attach your complete example): You can target our live site, since the issue relates to being unable to enter data in the payment forms: https://www.change.org/
Your complete test code (or attach your test files): ```js import { t, Selector } from 'testcafe'; fixture('Join Membership with credit card').page('https://www.change.org/'); test('Guest user clicks contribute directly', async () => { await t .navigateTo('s/member') .expect(Selector('[data-testid=""member_landing_page_inline_contribute_button""]').visible) .ok(); await t .click(Selector('[data-testid=""member_landing_page_inline_contribute_button""]'), { speed: 0.3 }) .expect(Selector('[data-testid=""member_payment_form""]').visible) .ok(); await t .click(Selector('[data-testid=""payment-option-button-creditCard""]')) .expect(Selector('.iframe-form-element').visible) .ok(); const emailAddressInput = Selector('[data-testid=""input_email""]').filterVisible(); const confirmationEmailInput = Selector('[data-testid=""input_confirmation_email""]').filterVisible(); const firstNameInput = Selector('[data-testid=""input_first_name""]'); const lastNameInput = Selector('[data-testid=""input_last_name""]'); await t .typeText(emailAddressInput, 'email@email.com') .typeText(confirmationEmailInput, 'email@email.com') .typeText(firstNameInput, 'Your') .typeText(lastNameInput, 'Name'); await t .switchToIframe(Selector('[data-testid=""credit-card-number""] iframe')) .typeText(Selector('input[name=""cardnumber""]'), '1234123412341234', { replace: true }) .expect(Selector('input[name=""cardnumber""]').value) .eql('1234 1234 1234 1234') .switchToMainWindow(); }); ```
Your complete configuration file (if any): ``` N/A ```
Your complete test report: ``` Guest user clicks contribute directly 1) AssertionError: expected '' to deeply equal '1234 1234 1234 1234' Browser: Safari 13.0.5 / macOS 10.15.3 31 | 32 | await t 33 | .switchToIframe(Selector('[data-testid=""credit-card-number""] iframe')) 34 | .typeText(Selector('input[name=""cardnumber""]'), '1234123412341234', { replace: true }) 35 | .expect(Selector('input[name=""cardnumber""]').value) > 36 | .eql('1234 1234 1234 1234') 37 | .switchToMainWindow(); 38 |}); 39 | at (/Users/rcooper/work/github.com/change/regression-qaa/tests/users/demo.js:36:6) 1/1 failed (23s) ```
Screenshots: ``` N/A ```
### Steps to Reproduce: 1. Go to my website ... 3. Execute this command... 4. See the error... ### Your Environment details: * testcafe version: 1.8.2 * node.js version: 12.14.1 * command-line arguments: testcafe safari * browser name and version: Safari 13.0.5 * platform and version: macOS 10.15.3 ",1.0,"Text not entered in to Stripe iFrame with Safari - ### What is your Test Scenario? I'm trying to run our test code on multiple browsers, specifically entering in CreditCard details on our payment pages. ### What is the Current behavior? It looks like text isn't being properly entered in to the Stripe payment iframes, on Safari. The same steps are successful in Chrome, Firefox and Edge. The main concern here is that TestCafe continues on, believing it has actually entered text in to these iframes. ### What is the Expected behavior? Text should be entered in to Stripe iFrames, regardless of browser. OR, at the least, if TestCafe is unable to locate/interact with the iFrame... it should fail at that point instead of continuing on. ### What is your web application and your TestCafe test code?
Your website URL (or attach your complete example): You can target our live site, since the issue relates to being unable to enter data in the payment forms: https://www.change.org/
Your complete test code (or attach your test files): ```js import { t, Selector } from 'testcafe'; fixture('Join Membership with credit card').page('https://www.change.org/'); test('Guest user clicks contribute directly', async () => { await t .navigateTo('s/member') .expect(Selector('[data-testid=""member_landing_page_inline_contribute_button""]').visible) .ok(); await t .click(Selector('[data-testid=""member_landing_page_inline_contribute_button""]'), { speed: 0.3 }) .expect(Selector('[data-testid=""member_payment_form""]').visible) .ok(); await t .click(Selector('[data-testid=""payment-option-button-creditCard""]')) .expect(Selector('.iframe-form-element').visible) .ok(); const emailAddressInput = Selector('[data-testid=""input_email""]').filterVisible(); const confirmationEmailInput = Selector('[data-testid=""input_confirmation_email""]').filterVisible(); const firstNameInput = Selector('[data-testid=""input_first_name""]'); const lastNameInput = Selector('[data-testid=""input_last_name""]'); await t .typeText(emailAddressInput, 'email@email.com') .typeText(confirmationEmailInput, 'email@email.com') .typeText(firstNameInput, 'Your') .typeText(lastNameInput, 'Name'); await t .switchToIframe(Selector('[data-testid=""credit-card-number""] iframe')) .typeText(Selector('input[name=""cardnumber""]'), '1234123412341234', { replace: true }) .expect(Selector('input[name=""cardnumber""]').value) .eql('1234 1234 1234 1234') .switchToMainWindow(); }); ```
Your complete configuration file (if any): ``` N/A ```
Your complete test report: ``` Guest user clicks contribute directly 1) AssertionError: expected '' to deeply equal '1234 1234 1234 1234' Browser: Safari 13.0.5 / macOS 10.15.3 31 | 32 | await t 33 | .switchToIframe(Selector('[data-testid=""credit-card-number""] iframe')) 34 | .typeText(Selector('input[name=""cardnumber""]'), '1234123412341234', { replace: true }) 35 | .expect(Selector('input[name=""cardnumber""]').value) > 36 | .eql('1234 1234 1234 1234') 37 | .switchToMainWindow(); 38 |}); 39 | at (/Users/rcooper/work/github.com/change/regression-qaa/tests/users/demo.js:36:6) 1/1 failed (23s) ```
Screenshots: ``` N/A ```
### Steps to Reproduce: 1. Go to my website ... 3. Execute this command... 4. See the error... ### Your Environment details: * testcafe version: 1.8.2 * node.js version: 12.14.1 * command-line arguments: testcafe safari * browser name and version: Safari 13.0.5 * platform and version: macOS 10.15.3 ",1,text not entered in to stripe iframe with safari if you have all reproduction steps with a complete sample app please share as many details as possible in the sections below make sure that you tried using the latest testcafe version where this behavior might have been already addressed before submitting an issue please check contributing md and existing issues in this repository  in case a similar issue exists or was already addressed this may save your time and ours what is your test scenario i m trying to run our test code on multiple browsers specifically entering in creditcard details on our payment pages what is the current behavior it looks like text isn t being properly entered in to the stripe payment iframes on safari the same steps are successful in chrome firefox and edge the main concern here is that testcafe continues on believing it has actually entered text in to these iframes what is the expected behavior text should be entered in to stripe iframes regardless of browser or at the least if testcafe is unable to locate interact with the iframe it should fail at that point instead of continuing on what is your web application and your testcafe test code your website url or attach your complete example you can target our live site since the issue relates to being unable to enter data in the payment forms your complete test code or attach your test files js import t selector from testcafe fixture join membership with credit card page test guest user clicks contribute directly async await t navigateto s member expect selector visible ok await t click selector speed expect selector visible ok await t click selector data testid payment option button creditcard expect selector iframe form element visible ok const emailaddressinput selector filtervisible const confirmationemailinput selector filtervisible const firstnameinput selector const lastnameinput selector await t typetext emailaddressinput email email com typetext confirmationemailinput email email com typetext firstnameinput your typetext lastnameinput name await t switchtoiframe selector iframe typetext selector input replace true expect selector input value eql switchtomainwindow your complete configuration file if any n a your complete test report guest user clicks contribute directly assertionerror expected to deeply equal browser safari macos await t switchtoiframe selector iframe typetext selector input replace true expect selector input value eql switchtomainwindow at users rcooper work github com change regression qaa tests users demo js failed screenshots n a steps to reproduce go to my website execute this command see the error your environment details testcafe version node js version command line arguments testcafe safari browser name and version safari platform and version macos ,1 31856,6650285142.0,IssuesEvent,2017-09-28 15:48:53,fieldenms/tg,https://api.github.com/repos/fieldenms/tg,closed,Entity Master: saving defects during fast entry,Defect Entity master In progress P1 Property editor UI / UX,"### Description There are couple of significant deficiencies while entity master is quickly saved through the use of `CTRL+S` shortcut immediately after editing. 
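One way to get the fail-fast behaviour the TestCafe report above asks for is to assert immediately after typing into the iframe, so a silent no-op in Safari surfaces at the typing step rather than at a later assertion. A hedged sketch using only documented TestCafe calls (the helper name and message are ours, not TestCafe API):

```js
import { t } from 'testcafe';

// Type into an input inside an iframe and fail fast if the value did not stick.
async function typeIntoIframe(frame, input, text) {
    await t
        .switchToIframe(frame)
        .typeText(input, text, { replace: true })
        // Assert right here, so a browser that silently drops the keystrokes fails fast.
        .expect(input.value).notEql('', 'text was not entered into the iframe input')
        .switchToMainWindow();
}
```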
The nature of these deficiencies is more or less intermittent, however some examples are quite easy to reproduce. ---------------------------------------- a) `tg-air`: in WA's compound master for a new entity, choose `CAR` in the `Type` autocompleter, press `CTRL+S`; after that `Priority` becomes erroneous and focused; type `2` into `Priority` and press `CTRL+S` immediately. For a very brief period of time `Scheduled Start` becomes erroneous and focused, and then the focus is moved to the `Type` property and the `Scheduled Start` error disappears. b) `tg-air`: in Equipment's compound master, type several characters into `KEY` and press `CTRL+S` immediately; replay it many times (usually over ~20) and the following validation error appears: `This property has recently been changed by another user. Please either edit the value back to [HGFHGFHGFHGFHGFHGFFGHFGHHGFFGHHGGFGHFGHFHGFHGFSFGH] to resolve the conflict or cancel all of your changes.` c) `tg-air`: in Equipment's compound master, press and hold the `S` character in `KEY` and after some time press `CTRL`; a couple of client-side errors appear, making the entity master fully unusable: `SimultaneousSaveException {message: ""Simultaneous save exception: the save process has been already started before and not ended. Please, block UI until the save action completes.""}` ---------------------------------------- After initial investigation and discussion it appears that the saving process is started earlier and after that validation starts too. Such validation after completion replaces the results of saving, which causes situations a) and b). Situation c) is caused by over-restrictive client-side `Simultaneous save exception`: perhaps debouncing is a good idea here very similarly to validation debouncing. ### Expected outcome Reliable fast entry and saving in entity masters.",1.0,"Entity Master: saving defects during fast entry - ### Description There are a couple of significant deficiencies while the entity master is quickly saved through the use of the `CTRL+S` shortcut immediately after editing. The nature of these deficiencies is more or less intermittent, however some examples are quite easy to reproduce. ---------------------------------------- a) `tg-air`: in WA's compound master for a new entity, choose `CAR` in the `Type` autocompleter, press `CTRL+S`; after that `Priority` becomes erroneous and focused; type `2` into `Priority` and press `CTRL+S` immediately. For a very brief period of time `Scheduled Start` becomes erroneous and focused, and then the focus is moved to the `Type` property and the `Scheduled Start` error disappears. b) `tg-air`: in Equipment's compound master, type several characters into `KEY` and press `CTRL+S` immediately; replay it many times (usually over ~20) and the following validation error appears: `This property has recently been changed by another user. Please either edit the value back to [HGFHGFHGFHGFHGFHGFFGHFGHHGFFGHHGGFGHFGHFHGFHGFSFGH] to resolve the conflict or cancel all of your changes.` c) `tg-air`: in Equipment's compound master, press and hold the `S` character in `KEY` and after some time press `CTRL`; a couple of client-side errors appear, making the entity master fully unusable: `SimultaneousSaveException {message: ""Simultaneous save exception: the save process has been already started before and not ended. Please, block UI until the save action completes.""}` ---------------------------------------- After initial investigation and discussion it appears that the saving process is started earlier and after that validation starts too.
Such validation after completion replaces the results of saving, which causes situations a) and b). Situation c) is caused by over-restrictive client-side `Simultaneous save exception`: perhaps debouncing is a good idea here very similarly to validation debouncing. ### Expected outcome Reliable fast entry and saving in entity masters.",0,entity master saving defects during fast entry description there are couple of significant deficiencies while entity master is quickly saved through the use of ctrl s shortcut immediately after editing the nature of these deficiencies is more or less intermittent however some examples are quite easy to reproduce a tg air in wa s compound master for new entity choose car in type autocompleter press ctrl s after that priority becomes erroneous and focused type into priority and press ctrl s immediately for the very brief period of time scheduled start becomes erroneous and focused and then the focus is moved to type property and scheduled start error disappears b tg air in equipment s compound master type several characters into key and press ctrl s immediately replay it many times usually over and following validation error appears this property has recently been changed by another user please either edit the value back to to resolve the conflict or cancel all of your changes c tg air in equipment s compound master press and hold s character into key and after some time press ctrl a couple of client side errors appears making entity master fully unusable simultaneoussaveexception message simultaneous save exception the save process has been already started before and not ended please block ui until the save action completes after initial investigation and discussion it appears that saving process is started earlier and after that validation starts too such validation after completion replaces the results of saving which causes situations a and b situation c is caused by over restrictive client side simultaneous save exception perhaps debouncing is a good idea here very similarly to validation debouncing expected outcome reliable fast entry and saving in entity masters ,0 320642,23817400173.0,IssuesEvent,2022-09-05 08:08:28,equinor/energyvision,https://api.github.com/repos/equinor/energyvision,closed,Documentation for Editors,📄 documentation,"This document should work as the one that exists for AEM, with screenshots and explanation of components / how to use them in Sanity. AEM example: [https://statoilsrm.sharepoint.com/sites/EditordocumentationforAEMequinorcom/_layouts/15/Doc.aspx?sourcedoc={8e53650c-64b2-41de-bd94-96204ff172a7}&action=edit&wd=target%28General.one%7Cc6904292-2ef8-408e-802e-26da4408bd77%2FUse%20Google%20Chrome%7Cc3775a1b-1816-4410-a236-fa40febda329%2F%29&wdorigin=NavigationUrl](https://statoilsrm.sharepoint.com/sites/EditordocumentationforAEMequinorcom/_layouts/15/Doc.aspx?sourcedoc=%7B8e53650c-64b2-41de-bd94-96204ff172a7%7D&action=edit&wd=target%28General.one%7Cc6904292-2ef8-408e-802e-26da4408bd77%2FUse%20Google%20Chrome%7Cc3775a1b-1816-4410-a236-fa40febda329%2F%29&wdorigin=NavigationUrl)",1.0,"Documentation for Editors - This document should work as the one that exists for AEM, with screenshots and explanation of components / how to use them in Sanity. 
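The debouncing idea floated at the end of the entity-master record above is the standard fix for this kind of save/validation race: collapse rapid CTRL+S presses into one save, fired only after the burst settles. A minimal JavaScript sketch, assuming a generic async save function (all names illustrative):

```js
// Return a wrapped fn that runs only after delayMs of quiet.
function debounce(fn, delayMs) {
    let timer = null;
    return (...args) => {
        clearTimeout(timer);
        timer = setTimeout(() => fn(...args), delayMs);
    };
}

const saveEntity = debounce(() => console.log('saving entity...'), 300);
saveEntity(); saveEntity(); saveEntity(); // logs once, ~300 ms after the last call
```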
AEM example: [https://statoilsrm.sharepoint.com/sites/EditordocumentationforAEMequinorcom/_layouts/15/Doc.aspx?sourcedoc={8e53650c-64b2-41de-bd94-96204ff172a7}&action=edit&wd=target%28General.one%7Cc6904292-2ef8-408e-802e-26da4408bd77%2FUse%20Google%20Chrome%7Cc3775a1b-1816-4410-a236-fa40febda329%2F%29&wdorigin=NavigationUrl](https://statoilsrm.sharepoint.com/sites/EditordocumentationforAEMequinorcom/_layouts/15/Doc.aspx?sourcedoc=%7B8e53650c-64b2-41de-bd94-96204ff172a7%7D&action=edit&wd=target%28General.one%7Cc6904292-2ef8-408e-802e-26da4408bd77%2FUse%20Google%20Chrome%7Cc3775a1b-1816-4410-a236-fa40febda329%2F%29&wdorigin=NavigationUrl)",0,documentation for editors this document should work as the one that exists for aem with screenshots and explanation of components how to use them in sanity aem example ,0 2650,12399316701.0,IssuesEvent,2020-05-21 04:49:25,IBM/ibm-spectrum-scale-csi,https://api.github.com/repos/IBM/ibm-spectrum-scale-csi,closed,Daemonset descibe needed in snapshot tool,Component: Automation Phase: Field Severity: 3 Target: Driver Target: Operator Type: Enhancement good first issue,"Today the snapshot tool is missing a describe of the Daemon Set ```kubectl describe ds ibm-spectrum-scale-csi -n ibm-spectrum-scale-csi-driver``` **Describe the solution you'd like** Add the describe to the snapshot tool for the ```kubectl describe ds ibm-spectrum-scale-csi -n ibm-spectrum-scale-csi-driver``` ",1.0,"Daemonset descibe needed in snapshot tool - Today the snapshot tool is missing a describe of the Daemon Set ```kubectl describe ds ibm-spectrum-scale-csi -n ibm-spectrum-scale-csi-driver``` **Describe the solution you'd like** Add the describe to the snapshot tool for the ```kubectl describe ds ibm-spectrum-scale-csi -n ibm-spectrum-scale-csi-driver``` ",1,daemonset descibe needed in snapshot tool today the snapshot tool is missing a describe of the daemon set kubectl describe ds ibm spectrum scale csi n ibm spectrum scale csi driver describe the solution you d like add the describe to the snapshot tool for the kubectl describe ds ibm spectrum scale csi n ibm spectrum scale csi driver ,1 698659,23988150658.0,IssuesEvent,2022-09-13 21:10:12,CDCgov/prime-reportstream,https://api.github.com/repos/CDCgov/prime-reportstream,closed,PII Leakage to HHS Protect,onboarding-ops High Priority,"HHS Protect has informed us to that both Abbott and BD Veritor are putting the patient's first and last name into the `sending_facility_namespace_id` field, so they are receiving PII. I don't see that field mapped for HHS Protect, so we will need to see what field they are referring to, and then mask that field. Looking at a file for HHS Protect I do see there are names that appear in the ordering provider first and last name field sometimes, so perhaps that's the field? We need to reach out to Kim Del Guerico and get more details. This is high priority.",1.0,"PII Leakage to HHS Protect - HHS Protect has informed us to that both Abbott and BD Veritor are putting the patient's first and last name into the `sending_facility_namespace_id` field, so they are receiving PII. I don't see that field mapped for HHS Protect, so we will need to see what field they are referring to, and then mask that field. Looking at a file for HHS Protect I do see there are names that appear in the ordering provider first and last name field sometimes, so perhaps that's the field? We need to reach out to Kim Del Guerico and get more details. 
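Whichever field turns out to carry the patient name, the masking step the ReportStream issue above calls for is mechanically simple; an illustrative sketch (the field name comes from the issue, while the helper is hypothetical and not ReportStream's actual transform layer):

```js
// Blank out a PII-carrying field before a record is sent downstream.
const maskField = (record, field) => ({ ...record, [field]: '' });

const outbound = maskField(
    { sending_facility_namespace_id: 'DOE JOHN' }, // illustrative record
    'sending_facility_namespace_id'
);
console.log(outbound); // { sending_facility_namespace_id: '' }
```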
This is high priority.",0,pii leakage to hhs protect hhs protect has informed us to that both abbott and bd veritor are putting the patient s first and last name into the sending facility namespace id field so they are receiving pii i don t see that field mapped for hhs protect so we will need to see what field they are referring to and then mask that field looking at a file for hhs protect i do see there are names that appear in the ordering provider first and last name field sometimes so perhaps that s the field we need to reach out to kim del guerico and get more details this is high priority ,0 6034,21920365227.0,IssuesEvent,2022-05-22 13:28:36,surge-synthesizer/surge,https://api.github.com/repos/surge-synthesizer/surge,closed,FX Unit Streaming Not Stable under Changing of Int Param Bounds (esp AW type),Host Automation Bug Report FX Plugin,"**Vospi — Today at 6:56 AM** hey guys! when using Surge FX XT, adding airwindows effects to the list on your side (which I love) breaks previous presets of the plugin and thus breaks projects. I suppose that's because currently it uses a knob position for it. I can clearly see that it's supposed to be ToTape6 in the preset, but it's Infinity for me now. Thus, unless you definitely know what did you do back then, you break your legacy projects by updating. While AIrwindows-in-a-package is a fantastic selling point for me personally, that's a huge problem for any production in my eyes. (installed today's Nigthly) **Robbert — Today at 7:02 AM** I just noticed automation gestures don't seem to be working correctly. I haven't checked if they work correctly when actively dragging parameters around, but clicking/holding down on a parameter doesn't send the gesture start, and hosts thus also won't highlight the parameter in their generic UIs or automation lanes. Filter Type you mean saving presets host-side? yeah probably, AW type is normalized to 0.0...1.0 range so yeah every time we add something it's gonna end up mangled which means we should add built in patch save/load just like we have in Surge XT **baconpaul — Today at 7:27 AM** Ahh shoot Well I can fix that I bet thank you. Yes it’s all just 0…1 save but I can test if you are an int at stream time. Automation will be hard tho **EvilDragon — Today at 7:28 AM** I wouldn't worry about automating that param tbh that's Asking For Trouble (TM) **baconpaul — Today at 7:28 AM** Yeah no this is just at get set state time That I would fix",1.0,"FX Unit Streaming Not Stable under Changing of Int Param Bounds (esp AW type) - **Vospi — Today at 6:56 AM** hey guys! when using Surge FX XT, adding airwindows effects to the list on your side (which I love) breaks previous presets of the plugin and thus breaks projects. I suppose that's because currently it uses a knob position for it. I can clearly see that it's supposed to be ToTape6 in the preset, but it's Infinity for me now. Thus, unless you definitely know what did you do back then, you break your legacy projects by updating. While AIrwindows-in-a-package is a fantastic selling point for me personally, that's a huge problem for any production in my eyes. (installed today's Nigthly) **Robbert — Today at 7:02 AM** I just noticed automation gestures don't seem to be working correctly. I haven't checked if they work correctly when actively dragging parameters around, but clicking/holding down on a parameter doesn't send the gesture start, and hosts thus also won't highlight the parameter in their generic UIs or automation lanes. 
Filter Type you mean saving presets host-side? yeah probably, AW type is normalized to 0.0...1.0 range so yeah every time we add something it's gonna end up mangled which means we should add built in patch save/load just like we have in Surge XT **baconpaul — Today at 7:27 AM** Ahh shoot Well I can fix that I bet thank you. Yes it’s all just 0…1 save but I can test if you are an int at stream time. Automation will be hard tho **EvilDragon — Today at 7:28 AM** I wouldn't worry about automating that param tbh that's Asking For Trouble (TM) **baconpaul — Today at 7:28 AM** Yeah no this is just at get set state time That I would fix",1,fx unit streaming not stable under changing of int param bounds esp aw type vospi — today at am hey guys when using surge fx xt adding airwindows effects to the list on your side which i love breaks previous presets of the plugin and thus breaks projects i suppose that s because currently it uses a knob position for it i can clearly see that it s supposed to be in the preset but it s infinity for me now thus unless you definitely know what did you do back then you break your legacy projects by updating while airwindows in a package is a fantastic selling point for me personally that s a huge problem for any production in my eyes installed today s nigthly robbert — today at am i just noticed automation gestures don t seem to be working correctly i haven t checked if they work correctly when actively dragging parameters around but clicking holding down on a parameter doesn t send the gesture start and hosts thus also won t highlight the parameter in their generic uis or automation lanes filter type you mean saving presets host side yeah probably aw type is normalized to range so yeah every time we add something it s gonna end up mangled which means we should add built in patch save load just like we have in surge xt baconpaul — today at am ahh shoot well i can fix that i bet thank you yes it’s all just … save but i can test if you are an int at stream time automation will be hard tho evildragon — today at am i wouldn t worry about automating that param tbh that s asking for trouble tm baconpaul — today at am yeah no this is just at get set state time that i would fix,1 3831,14664769398.0,IssuesEvent,2020-12-29 12:47:12,modi-w/AutoVersionsDB,https://api.github.com/repos/modi-w/AutoVersionsDB,opened,"When error occure when running the file ""publish.cmd"", the error doesn't seen in the process log file.",area-automation,"**Describe the bug** When running the file ""publish.cmd"" and an error occurred, the error doesn't see in the process log file. **To Reproduce** Steps to reproduce the behavior: 1. Create an error for the publish process (for example: create a compilation exception (syntax error) on the console app). 2. Rn the file ""publish.cmd"" (on the root folder) 3. check the new log created log file at the ""\automationLogs"" folder **Action Items:** 1. 2. 3. **Updates** 1. ",1.0,"When error occure when running the file ""publish.cmd"", the error doesn't seen in the process log file. - **Describe the bug** When running the file ""publish.cmd"" and an error occurred, the error doesn't see in the process log file. **To Reproduce** Steps to reproduce the behavior: 1. Create an error for the publish process (for example: create a compilation exception (syntax error) on the console app). 2. Rn the file ""publish.cmd"" (on the root folder) 3. check the new log created log file at the ""\automationLogs"" folder **Action Items:** 1. 2. 3. **Updates** 1. 
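The Surge thread above hinges on one arithmetic fact: a discrete parameter stored as a normalized 0…1 knob position decodes to a different index once the enum grows. A small sketch of the failure mode (the counts are illustrative, not Surge's actual FX list):

```js
// Map a discrete index to a 0..1 knob position and back.
const normalize = (index, count) => index / (count - 1);
const denormalize = (norm, count) => Math.round(norm * (count - 1));

const stored = normalize(10, 20);     // saved while 20 entries existed
console.log(denormalize(stored, 25)); // 13, not 10, after 5 entries were added
```

Streaming the integer index (or a stable identifier) and normalizing only at the host boundary is the kind of int-aware streaming fix baconpaul describes testing for at get/set-state time.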
",1,when error occure when running the file publish cmd the error doesn t seen in the process log file describe the bug when running the file publish cmd and an error occurred the error doesn t see in the process log file to reproduce steps to reproduce the behavior create an error for the publish process for example create a compilation exception syntax error on the console app rn the file publish cmd on the root folder check the new log created log file at the automationlogs folder action items updates ,1 416031,28064475899.0,IssuesEvent,2023-03-29 14:34:40,schreiberx/sweet,https://api.github.com/repos/schreiberx/sweet,closed,Installing SWEET on macOS: miniconda not actually required for compilation,documentation,We should update [INSTALL.md](https://github.com/schreiberx/sweet/blob/master/INSTALL.md)/[INSTALL_MACOS.md](https://github.com/schreiberx/sweet/blob/master/INSTALL_MACOSX.md) to make clear that miniconda does not have to be installed when setting up SWEET (Install file for macOS recommends to install Python packages via Homebrew),1.0,Installing SWEET on macOS: miniconda not actually required for compilation - We should update [INSTALL.md](https://github.com/schreiberx/sweet/blob/master/INSTALL.md)/[INSTALL_MACOS.md](https://github.com/schreiberx/sweet/blob/master/INSTALL_MACOSX.md) to make clear that miniconda does not have to be installed when setting up SWEET (Install file for macOS recommends to install Python packages via Homebrew),0,installing sweet on macos miniconda not actually required for compilation we should update to make clear that miniconda does not have to be installed when setting up sweet install file for macos recommends to install python packages via homebrew ,0 242183,20203357821.0,IssuesEvent,2022-02-11 17:24:53,open-metadata/OpenMetadata,https://api.github.com/repos/open-metadata/OpenMetadata,closed,Revamping selenium test cases for Entity Details Page,P1 E2E-testing,"**Is your feature request related to a problem? Please describe.** Revamping all the current test cases. Currently, there is duplication of code and many variables are repeated for all classes. **Describe the solution you'd like** - Reduce duplication of code. - Add variable to config for better control over test cases. **Describe alternatives you've considered** NA **Additional context** Entity Details Page involves the following: - Table Details Page - Dashboard Details Page - Pipeline Details Page - Topic Details Page",1.0,"Revamping selenium test cases for Entity Details Page - **Is your feature request related to a problem? Please describe.** Revamping all the current test cases. Currently, there is duplication of code and many variables are repeated for all classes. **Describe the solution you'd like** - Reduce duplication of code. - Add variable to config for better control over test cases. 
**Describe alternatives you've considered** NA **Additional context** Entity Details Page involves the following: - Table Details Page - Dashboard Details Page - Pipeline Details Page - Topic Details Page",0,revamping selenium test cases for entity details page is your feature request related to a problem please describe revamping all the current test cases currently there is duplication of code and many variables are repeated for all classes describe the solution you d like reduce duplication of code add variable to config for better control over test cases describe alternatives you ve considered na additional context entity details page involves the following table details page dashboard details page pipeline details page topic details page,0 142628,11488089257.0,IssuesEvent,2020-02-11 13:17:21,joeyfrog/hooktest,https://api.github.com/repos/joeyfrog/hooktest,closed,[XRAY] Vulnerability in artifact: kkkkkkkkkkkkkkkk,test xray,"This is an automated issue made via XRAY Github webhook. The deployed artifact(s) **['jjjjjjjjjjjjjjjjjjj', 'kkkkkkkkkkkkkkkk']** contain the following vaulnerable dependencie(s): ['ant-1.9.4.jar', 'aopalliance-repackaged-2.4.0-b09.jar', 'sdfsffseefewefwwef.jar', 'vsdvsdfsfsdfsfsfsfsdf.jar'] Here is the sent JSON from XRAY: [ { ""created"": ""2018-03-12T19:12:06.702Z"", ""description"": ""custom-glassfish"", ""impacted_artifacts"": [ { ""depth"": 0, ""display_name"": ""test:6639"", ""infected_files"": [ { ""depth"": 0, ""display_name"": ""ant-1.9.4.jar"", ""name"": ""ant-1.9.4.jar"", ""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"", ""path"": """", ""pkg_type"": ""Generic"", ""sha256"": ""649ae0730251de07b8913f49286d46bba7b92d47c5f332610aa426c4f02161d8"" }, { ""depth"": 0, ""display_name"": ""org.glassfish.hk2.external:aopalliance-repackaged:2.4.0-b09"", ""name"": ""aopalliance-repackaged-2.4.0-b09.jar"", ""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"", ""path"": """", ""pkg_type"": ""Maven"", ""sha256"": ""a97667a617fa5d427c2e95ce6f3eab5cf2d21d00c69ad2a7524ff6d9a9144f58"" } ], ""name"": ""jjjjjjjjjjjjjjjjjjj"", ""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"", ""path"": ""artifactory-xray/builds/"", ""pkg_type"": ""Build"", ""sha1"": ""737145943754ac99a678d366269dcafc205233ba"", ""sha256"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"" } ], ""provider"": ""Custom"", ""severity"": ""Critical"", ""summary"": ""custom-glassfish"", ""type"": ""security"" }, { ""description"": ""Apache License 2.0"", ""impacted_artifacts"": [ { ""depth"": 0, ""display_name"": ""test:6639"", ""infected_files"": [ { ""depth"": 0, ""display_name"": ""ant-1.9.4.jar"", ""name"": ""sdfsffseefewefwwef.jar"", ""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"", ""path"": """", ""pkg_type"": ""Generic"", ""sha256"": ""649ae0730251de07b8913f49286d46bba7b92d47c5f332610aa426c4f02161d8"" }, { ""depth"": 0, ""display_name"": ""org.glassfish.hk2.external:aopalliance-repackaged:2.4.0-b09"", ""name"": ""vsdvsdfsfsdfsfsfsfsdf.jar"", ""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"", ""path"": """", ""pkg_type"": ""Maven"", ""sha256"": ""a97667a617fa5d427c2e95ce6f3eab5cf2d21d00c69ad2a7524ff6d9a9144f58"" } ], ""name"": ""kkkkkkkkkkkkkkkk"", ""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"", ""path"": ""artifactory-xray/builds/"", ""pkg_type"": ""Build"", ""sha1"": 
""737145943754ac99a678d366269dcafc205233ba"", ""sha256"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"" } ], ""severity"": ""Critical"", ""summary"": ""Apache-2.0"", ""type"": ""License"" } ]",1.0,"[XRAY] Vulnerability in artifact: kkkkkkkkkkkkkkkk - This is an automated issue made via XRAY Github webhook. The deployed artifact(s) **['jjjjjjjjjjjjjjjjjjj', 'kkkkkkkkkkkkkkkk']** contain the following vaulnerable dependencie(s): ['ant-1.9.4.jar', 'aopalliance-repackaged-2.4.0-b09.jar', 'sdfsffseefewefwwef.jar', 'vsdvsdfsfsdfsfsfsfsdf.jar'] Here is the sent JSON from XRAY: [ { ""created"": ""2018-03-12T19:12:06.702Z"", ""description"": ""custom-glassfish"", ""impacted_artifacts"": [ { ""depth"": 0, ""display_name"": ""test:6639"", ""infected_files"": [ { ""depth"": 0, ""display_name"": ""ant-1.9.4.jar"", ""name"": ""ant-1.9.4.jar"", ""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"", ""path"": """", ""pkg_type"": ""Generic"", ""sha256"": ""649ae0730251de07b8913f49286d46bba7b92d47c5f332610aa426c4f02161d8"" }, { ""depth"": 0, ""display_name"": ""org.glassfish.hk2.external:aopalliance-repackaged:2.4.0-b09"", ""name"": ""aopalliance-repackaged-2.4.0-b09.jar"", ""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"", ""path"": """", ""pkg_type"": ""Maven"", ""sha256"": ""a97667a617fa5d427c2e95ce6f3eab5cf2d21d00c69ad2a7524ff6d9a9144f58"" } ], ""name"": ""jjjjjjjjjjjjjjjjjjj"", ""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"", ""path"": ""artifactory-xray/builds/"", ""pkg_type"": ""Build"", ""sha1"": ""737145943754ac99a678d366269dcafc205233ba"", ""sha256"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"" } ], ""provider"": ""Custom"", ""severity"": ""Critical"", ""summary"": ""custom-glassfish"", ""type"": ""security"" }, { ""description"": ""Apache License 2.0"", ""impacted_artifacts"": [ { ""depth"": 0, ""display_name"": ""test:6639"", ""infected_files"": [ { ""depth"": 0, ""display_name"": ""ant-1.9.4.jar"", ""name"": ""sdfsffseefewefwwef.jar"", ""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"", ""path"": """", ""pkg_type"": ""Generic"", ""sha256"": ""649ae0730251de07b8913f49286d46bba7b92d47c5f332610aa426c4f02161d8"" }, { ""depth"": 0, ""display_name"": ""org.glassfish.hk2.external:aopalliance-repackaged:2.4.0-b09"", ""name"": ""vsdvsdfsfsdfsfsfsfsdf.jar"", ""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"", ""path"": """", ""pkg_type"": ""Maven"", ""sha256"": ""a97667a617fa5d427c2e95ce6f3eab5cf2d21d00c69ad2a7524ff6d9a9144f58"" } ], ""name"": ""kkkkkkkkkkkkkkkk"", ""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"", ""path"": ""artifactory-xray/builds/"", ""pkg_type"": ""Build"", ""sha1"": ""737145943754ac99a678d366269dcafc205233ba"", ""sha256"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"" } ], ""severity"": ""Critical"", ""summary"": ""Apache-2.0"", ""type"": ""License"" } ]",0, vulnerability in artifact kkkkkkkkkkkkkkkk this is an automated issue made via xray github webhook the deployed artifact s contain the following vaulnerable dependencie s here is the sent json from xray created description custom glassfish impacted artifacts depth display name test infected files depth display name ant jar name ant jar parent sha path pkg type generic depth display name org glassfish external aopalliance repackaged name aopalliance 
repackaged jar parent sha path pkg type maven name jjjjjjjjjjjjjjjjjjj parent sha path artifactory xray builds pkg type build provider custom severity critical summary custom glassfish type security description apache license impacted artifacts depth display name test infected files depth display name ant jar name sdfsffseefewefwwef jar parent sha path pkg type generic depth display name org glassfish external aopalliance repackaged name vsdvsdfsfsdfsfsfsfsdf jar parent sha path pkg type maven name kkkkkkkkkkkkkkkk parent sha path artifactory xray builds pkg type build severity critical summary apache type license ,0 170335,14256098894.0,IssuesEvent,2020-11-20 00:11:01,irods/irods,https://api.github.com/repos/irods/irods,closed,"""istream write"" ignores --no-trunc when --append is present",documentation,"- [x] master - [x] 4-2-stable --- ## Bug Report The following options are not mutually exclusive. The `else` keyword needs to be removed so that `--no-trunc` and `--append` can be used at the same time. See [istream.cpp lines 300-309](https://github.com/irods/irods_client_icommands/blob/7560dc7f9f5b50faacc7312447155e4672709bac/src/istream.cpp#L300-L309)",1.0,"""istream write"" ignores --no-trunc when --append is present - - [x] master - [x] 4-2-stable --- ## Bug Report The following options are not mutually exclusive. The `else` keyword needs to be removed so that `--no-trunc` and `--append` can be used at the same time. See [istream.cpp lines 300-309](https://github.com/irods/irods_client_icommands/blob/7560dc7f9f5b50faacc7312447155e4672709bac/src/istream.cpp#L300-L309)",0, istream write ignores no trunc when append is present master stable bug report the following options are not mutually exclusive the else keyword needs to be removed so that no trunc and append can be used at the same time see ,0 73417,15254331900.0,IssuesEvent,2021-02-20 11:33:10,NixOS/nixpkgs,https://api.github.com/repos/NixOS/nixpkgs,closed,Vulnerability roundup 84: go-1.14.2: 6 advisories,1.severity: security,"[search](https://search.nix.gsc.io/?q=go&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=go+in%3Apath&type=Code) * [ ] [CVE-2018-17075](https://nvd.nist.gov/vuln/detail/CVE-2018-17075) CVSSv3=7.5 (nixos-unstable) * [ ] [CVE-2018-17142](https://nvd.nist.gov/vuln/detail/CVE-2018-17142) CVSSv3=7.5 (nixos-unstable) * [ ] [CVE-2018-17143](https://nvd.nist.gov/vuln/detail/CVE-2018-17143) CVSSv3=7.5 (nixos-unstable) * [ ] [CVE-2018-17846](https://nvd.nist.gov/vuln/detail/CVE-2018-17846) CVSSv3=7.5 (nixos-unstable) * [ ] [CVE-2018-17847](https://nvd.nist.gov/vuln/detail/CVE-2018-17847) CVSSv3=7.5 (nixos-unstable) * [ ] [CVE-2018-17848](https://nvd.nist.gov/vuln/detail/CVE-2018-17848) CVSSv3=7.5 (nixos-unstable) Scanned versions: nixos-unstable: 0f5ce2fac0c. May contain false positives. 
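For consumers of the XRAY webhook shown a few records above, the payload is plain JSON; a hedged sketch of flattening it into per-artifact file lists (the field names are taken from the payload itself, the function is ours):

```js
// payload: the array the XRAY webhook POSTs (see the JSON above).
const vulnerableFiles = (payload) =>
    payload.flatMap(issue =>
        (issue.impacted_artifacts ?? []).flatMap(artifact =>
            artifact.infected_files.map(f => `${artifact.name}: ${f.name}`)));
```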
",True,"Vulnerability roundup 84: go-1.14.2: 6 advisories - [search](https://search.nix.gsc.io/?q=go&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=go+in%3Apath&type=Code) * [ ] [CVE-2018-17075](https://nvd.nist.gov/vuln/detail/CVE-2018-17075) CVSSv3=7.5 (nixos-unstable) * [ ] [CVE-2018-17142](https://nvd.nist.gov/vuln/detail/CVE-2018-17142) CVSSv3=7.5 (nixos-unstable) * [ ] [CVE-2018-17143](https://nvd.nist.gov/vuln/detail/CVE-2018-17143) CVSSv3=7.5 (nixos-unstable) * [ ] [CVE-2018-17846](https://nvd.nist.gov/vuln/detail/CVE-2018-17846) CVSSv3=7.5 (nixos-unstable) * [ ] [CVE-2018-17847](https://nvd.nist.gov/vuln/detail/CVE-2018-17847) CVSSv3=7.5 (nixos-unstable) * [ ] [CVE-2018-17848](https://nvd.nist.gov/vuln/detail/CVE-2018-17848) CVSSv3=7.5 (nixos-unstable) Scanned versions: nixos-unstable: 0f5ce2fac0c. May contain false positives. ",0,vulnerability roundup go advisories nixos unstable nixos unstable nixos unstable nixos unstable nixos unstable nixos unstable scanned versions nixos unstable may contain false positives ,0 2924,12823331628.0,IssuesEvent,2020-07-06 11:31:10,GoodDollar/GoodDAPP,https://api.github.com/repos/GoodDollar/GoodDAPP,closed,Add options for quick re-login,automation,"@YuryAnanyev this ticket is connected to the Profile edit issue 1. implement a click on back arrow function, to navigate back, so it doesnt require a fresh login which takes time. 2. login by setting localStorage variables localStorage.setItem('GD_mnemonic',mnemonic) or setItem('GD_masterSeed',torus user private key) and setItem('GD_isLoggedIn',true)",1.0,"Add options for quick re-login - @YuryAnanyev this ticket is connected to the Profile edit issue 1. implement a click on back arrow function, to navigate back, so it doesnt require a fresh login which takes time. 2. login by setting localStorage variables localStorage.setItem('GD_mnemonic',mnemonic) or setItem('GD_masterSeed',torus user private key) and setItem('GD_isLoggedIn',true)",1,add options for quick re login yuryananyev this ticket is connected to the profile edit issue implement a click on back arrow function to navigate back so it doesnt require a fresh login which takes time login by setting localstorage variables localstorage setitem gd mnemonic mnemonic or setitem gd masterseed torus user private key and setitem gd isloggedin true ,1 817733,30652193665.0,IssuesEvent,2023-07-25 09:38:24,enzliguor/PokemonAO,https://api.github.com/repos/enzliguor/PokemonAO,closed,Containerizzare l'app PokemonAO,medium priority deploy feature,Scrivere un dockerfile che usi il JAR di PokemonAO per generare un container in cui girerà la nostra app,1.0,Containerizzare l'app PokemonAO - Scrivere un dockerfile che usi il JAR di PokemonAO per generare un container in cui girerà la nostra app,0,containerizzare l app pokemonao scrivere un dockerfile che usi il jar di pokemonao per generare un container in cui girerà la nostra app,0 286055,8783370558.0,IssuesEvent,2018-12-20 05:33:56,servinglynk/hslynk-open-source-docs,https://api.github.com/repos/servinglynk/hslynk-open-source-docs,closed,automated view syncing to survey edits,enhancement next priority reporting feature waiting on external resource,"Old/superceded/deleted answered versions are not removed, but marked ""question_name-old-v1"", ""question_name-old-v2"", etc.. 
@logicsandeep: how many hours do you think this will take to complete?",1.0,"automated view syncing to survey edits - Old/superceded/deleted answered versions are not removed, but marked ""question_name-old-v1"", ""question_name-old-v2"", etc.. @logicsandeep: how many hours do you think this will take to complete?",0,automated view syncing to survey edits old superceded deleted answered versions are not removed but marked question name old question name old etc logicsandeep how many hours do you think this will take to complete ,0 116082,11900206865.0,IssuesEvent,2020-03-30 10:15:03,Barbelot/Physarum3D,https://api.github.com/repos/Barbelot/Physarum3D,closed,Some missing steps in the instructions,documentation,"I'm gradually working through and getting to the stage where I have a working example but there is a simpler way - post a working project instead of a partial project + instructions. As someone that downloads an unusually high number of Unity Github projects to try them out (it's how I learn best) can I share a pattern I've observer? Repos that contain an entire Unity project nearly always work. Repos that don't do this have a much higher failure rate - especially as more time passes and more bitrot sets in! I'll post my project once I've got it working if that helps.",1.0,"Some missing steps in the instructions - I'm gradually working through and getting to the stage where I have a working example but there is a simpler way - post a working project instead of a partial project + instructions. As someone that downloads an unusually high number of Unity Github projects to try them out (it's how I learn best) can I share a pattern I've observer? Repos that contain an entire Unity project nearly always work. Repos that don't do this have a much higher failure rate - especially as more time passes and more bitrot sets in! I'll post my project once I've got it working if that helps.",0,some missing steps in the instructions i m gradually working through and getting to the stage where i have a working example but there is a simpler way post a working project instead of a partial project instructions as someone that downloads an unusually high number of unity github projects to try them out it s how i learn best can i share a pattern i ve observer repos that contain an entire unity project nearly always work repos that don t do this have a much higher failure rate especially as more time passes and more bitrot sets in i ll post my project once i ve got it working if that helps ,0 1656,10540413684.0,IssuesEvent,2019-10-02 08:21:21,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,"Updated script using ""Az"" module",Pri1 automation/svc cxp product-issue shared-capabilities/subsvc triaged,"There is some script that use new ""az"" module instead of ""AzureRM"" module? I tried to replace all ""AzureRM"" module cmdlets by ""Az"" module cmdlets and got two errors: Import-Module : The specified module 'Az.Profile' was not loaded because no valid module file was found in any module directory. At #FILEPATH#\New-RunAsAccount.ps1:92 char:1 + Import-Module Az.Profile + ~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : ResourceUnavailable: (Az.Profile:String) [Import-Module], FileNotFoundException + FullyQualifiedErrorId : Modules_ModuleNotFound,Microsoft.PowerShell.Commands.ImportModuleCommand And: #FILEPATH#\New-RunAsAccount.ps1 : Please install the latest Azure PowerShell and retry. 
Relevant doc url : https://docs.microsoft.com/powershell/azureps-cmdlets-docs/ At line:1 char:1 + .\New-RunAsAccount.ps1 -ResourceGroup $RGAutomationName -AutomationAc ... + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,New-RunAsAccount.ps1 Has anyone faced this difficulty? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 56e2500f-e1f5-bc87-6e5c-f41b59265049 * Version Independent ID: d212be48-7d05-847d-3045-cea82e6ba603 * Content: [Manage Azure Automation Run As accounts](https://docs.microsoft.com/en-us/azure/automation/manage-runas-account#feedback) * Content Source: [articles/automation/manage-runas-account.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/manage-runas-account.md) * Service: **automation** * Sub-service: **shared-capabilities** * GitHub Login: @bobbytreed * Microsoft Alias: **robreed**",1.0,"Updated script using ""Az"" module - There is some script that use new ""az"" module instead of ""AzureRM"" module? I tried to replace all ""AzureRM"" module cmdlets by ""Az"" module cmdlets and got two errors: Import-Module : The specified module 'Az.Profile' was not loaded because no valid module file was found in any module directory. At #FILEPATH#\New-RunAsAccount.ps1:92 char:1 + Import-Module Az.Profile + ~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : ResourceUnavailable: (Az.Profile:String) [Import-Module], FileNotFoundException + FullyQualifiedErrorId : Modules_ModuleNotFound,Microsoft.PowerShell.Commands.ImportModuleCommand And: #FILEPATH#\New-RunAsAccount.ps1 : Please install the latest Azure PowerShell and retry. Relevant doc url : https://docs.microsoft.com/powershell/azureps-cmdlets-docs/ At line:1 char:1 + .\New-RunAsAccount.ps1 -ResourceGroup $RGAutomationName -AutomationAc ... + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,New-RunAsAccount.ps1 Has anyone faced this difficulty? --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 56e2500f-e1f5-bc87-6e5c-f41b59265049 * Version Independent ID: d212be48-7d05-847d-3045-cea82e6ba603 * Content: [Manage Azure Automation Run As accounts](https://docs.microsoft.com/en-us/azure/automation/manage-runas-account#feedback) * Content Source: [articles/automation/manage-runas-account.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/manage-runas-account.md) * Service: **automation** * Sub-service: **shared-capabilities** * GitHub Login: @bobbytreed * Microsoft Alias: **robreed**",1,updated script using az module there is some script that use new az module instead of azurerm module i tried to replace all azurerm module cmdlets by az module cmdlets and got two errors import module the specified module az profile was not loaded because no valid module file was found in any module directory at filepath new runasaccount char import module az profile categoryinfo resourceunavailable az profile string filenotfoundexception fullyqualifiederrorid modules modulenotfound microsoft powershell commands importmodulecommand and filepath new runasaccount please install the latest azure powershell and retry relevant doc url at line char new runasaccount resourcegroup rgautomationname automationac categoryinfo notspecified writeerrorexception fullyqualifiederrorid microsoft powershell commands writeerrorexception new runasaccount has anyone faced this difficulty document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service shared capabilities github login bobbytreed microsoft alias robreed ,1 440512,12700953023.0,IssuesEvent,2020-06-22 17:14:37,ansible/awx,https://api.github.com/repos/ansible/awx,opened,Add edit button to Organization -> Teams list rows,component:ui_next priority:medium state:needs_devel type:enhancement,"##### ISSUE TYPE - Feature Idea ##### SUMMARY If the user has the proper permissions to edit a team we should provide them with an edit button on this list: which would redirect them to `/#/teams/:id/edit`. This would be consistent with list behavior throughout the app. ",1.0,"Add edit button to Organization -> Teams list rows - ##### ISSUE TYPE - Feature Idea ##### SUMMARY If the user has the proper permissions to edit a team we should provide them with an edit button on this list: which would redirect them to `/#/teams/:id/edit`. This would be consistent with list behavior throughout the app. ",0,add edit button to organization teams list rows issue type feature idea summary if the user has the proper permissions to edit a team we should provide them with an edit button on this list img width alt screen shot at pm src which would redirect them to teams id edit this would be consistent with list behavior throughout the app ,0 34064,9257080833.0,IssuesEvent,2019-03-17 01:47:56,SHPEUCF/shpeucfapp,https://api.github.com/repos/SHPEUCF/shpeucfapp,closed,Build Leaderboard scene,Build,"This scene must: - Display number of points for all users/members. - Show is descending order (most points on top) - Display bar with annotations like: user name, #points - Eventually, we can also allow users to tap on the bar and see which events the earned the points from, like GBM 5 points, attending conference 5 points, etc. 
The functionality must be: - User gets points as they check in to events through the calendar event - Those points and its corresponding data get stored in the database under users data. - Leaderboard must keep watching changes/get update when user gets point and render the updated data on the Leaderboard scene",1.0,"Build Leaderboard scene - This scene must: - Display number of points for all users/members. - Show is descending order (most points on top) - Display bar with annotations like: user name, #points - Eventually, we can also allow users to tap on the bar and see which events the earned the points from, like GBM 5 points, attending conference 5 points, etc. The functionality must be: - User gets points as they check in to events through the calendar event - Those points and its corresponding data get stored in the database under users data. - Leaderboard must keep watching changes/get update when user gets point and render the updated data on the Leaderboard scene",0,build leaderboard scene this scene must display number of points for all users members show is descending order most points on top display bar with annotations like user name points eventually we can also allow users to tap on the bar and see which events the earned the points from like gbm points attending conference points etc the functionality must be user gets points as they check in to events through the calendar event those points and its corresponding data get stored in the database under users data leaderboard must keep watching changes get update when user gets point and render the updated data on the leaderboard scene,0 558899,16544227072.0,IssuesEvent,2021-05-27 21:10:40,returntocorp/semgrep,https://api.github.com/repos/returntocorp/semgrep,closed,Add support for typed metavariables in Javascript,enhancement lang:javascript pattern:types priority:low stale,"We support typed metavariables for statically typed languages like Java and Go. Javascript is more difficult, but it would be nice to have!",1.0,"Add support for typed metavariables in Javascript - We support typed metavariables for statically typed languages like Java and Go. Javascript is more difficult, but it would be nice to have!",0,add support for typed metavariables in javascript we support typed metavariables for statically typed languages like java and go javascript is more difficult but it would be nice to have ,0 5054,18403131306.0,IssuesEvent,2021-10-12 18:40:44,CDCgov/prime-field-teams,https://api.github.com/repos/CDCgov/prime-field-teams,opened,Solution Imp. - Plan & Schedule GO Live,sender-automation,"**Main Objectives & Tasks:** - [ ] Contact RS Pipeline Team to turn on HHSProtect reporting. - [ ] Determine if there is a backlog of results. If so, contact the SPHD to determine what results are needed. Example, in AL, if a Sender has only reported positives only through the ADPH Report Card, then ADPH wants all COVID (positive & negative) since the Sender started testing. - [ ] Coordinate and schedule a start date for daily results and backlog results between the SPHD and the Sender. ",1.0,"Solution Imp. - Plan & Schedule GO Live - **Main Objectives & Tasks:** - [ ] Contact RS Pipeline Team to turn on HHSProtect reporting. - [ ] Determine if there is a backlog of results. If so, contact the SPHD to determine what results are needed. Example, in AL, if a Sender has only reported positives only through the ADPH Report Card, then ADPH wants all COVID (positive & negative) since the Sender started testing. 
- [ ] Coordinate and schedule a start date for daily results and backlog results between the SPHD and the Sender. ",1,solution imp plan schedule go live main objectives tasks contact rs pipeline team to turn on hhsprotect reporting determine if there is a backlog of results if so contact the sphd to determine what results are needed example in al if a sender has only reported positives only through the adph report card then adph wants all covid positive negative since the sender started testing coordinate and schedule a start date for daily results and backlog results between the sphd and the sender ,1 219826,17114104765.0,IssuesEvent,2021-07-11 00:42:26,backend-br/vagas,https://api.github.com/repos/backend-br/vagas,closed,[REMOTO] Java Backend Engineer Specialist na AgileProcess,AWS CI CLT Docker Especialista Git Java MySQL Remoto Scrum Stale Testes Unitários startup,"## Nossa empresa A **AgileProcess** é uma startup criada para simplificar o processo logístico, tornando-o muito mais eficiente do início ao fim. Junto com pessoas que buscam tornar a logística cada vez mais digital e otimizada, a empresa utiliza as melhores tecnologias do mercado em busca de entregar sempre mais e melhor. Fundada em 2014 e com sede em Florianópolis, a **AgileProcess** está em constante crescimento e busca por criatividade, conhecimento, pessoas apaixonadas por inovação e tecnologia e o desejo de tornar a logística 100% digital em uma realidade. O QUE FAZEMOS? Utilizando as melhores tecnologias do mercado, o sistema **AgileProces**s otimiza o uso da frota, propõe as melhores rotas e sequenciamento de entregas e coletas, auxilia cada motorista, mostrando o percurso com apoio de GPS e faz a comprovação de entregas no exato momento em que forem realizadas. Hoje, mais de 9 milhões de entregas e coletas passam no software da **AgileProcess** por mês, presente em mais de 4.600 cidades pelo Brasil. ## Descrição da vaga Buscamos um(a) **Backend Engineer Specialist** que será responsável, junto ao nossos squads de desenvolvimento, por prover a melhor experiência para nossos clientes através de nossas soluções. RESPONSABILIDADES E ATRIBUIÇÕES - Desafiar o status quo e desenvolver soluções inovadoras para problemas complexos; - Desenvolver e manter nossos Microserviços de forma ágil, aplicando boas práticas de Engenharia de Software; - Contribuir com o desenvolvimento e arquitetura da plataforma, preparando-a para um crescimento acelerado; - Construir uma base sólida para o desenvolvimento de novos produtos; - Desenvolver sistemas escaláveis, sustentáveis e orientados ao usuário; - Trabalhar em um ambiente que estimula e valoriza a autonomia e a transparência; - Ajudar o crescimento do time de tecnologia e engenharia. ## Local 100% remoto. Estamos localizados em Florianópolis - Santa Catarina. ## Requisitos - Experiência e conhecimento profundo com desenvolvimento Java 8; - Experiência e conhecimento em GitFlow; - Experiência e conhecimento profundo em Docker; - Ter atuado na construção de testes unitários e integrados; - Ter atuado na construção de testes de comportamento (BDD); - Conhecimentos em Design Patterns, arquitetura e engenharia de software; - Conhecimentos em GitLab CI; - Conhecimentos em metodologias ágeis de desenvolvimento (Scrum, Kanban). Nosso Stack: - Java; - MySQL; - AWS; - Git (GitLab). ## Benefícios - Onboarding de boas-vindas! 
- “All Hands”: nosso encontro semanal com o CEO; - Dress Code: seja você mesmo(a); - Flexibilidade de horário; - VR/VA Flex: R$ 550,00 (mês); - Plano de saúde (para você e quem você ama); - Plano odontológico; - TotalPass; - Clube de Descontos - NewValue; - PLR; - Parceria com ZenKlub; e muito mais! ## Contratação CLT. Salário: R$ 12.500,00 - R$ 14.000,00 Nível: Especialista ## Como se candidatar Por favor envie um email para ana.felauto@agileprocess.com.br OU; Candidate-se pela nossa página de carreiras - https://agileprocess.gupy.io/jobs/864763?jobBoardSource=gupy_public_page OU; Me chame no whatsapp: +554898835995 ## Tempo médio de feedbacks Costumamos enviar feedbacks em até 03 dias após cada processo. E-mail para contato em caso de não haver resposta: ana.felauto@agileprocess.com.br ## Labels #### Alocação - Remoto #### Regime - CLT #### Nível - Especialista ",1.0,"[REMOTO] Java Backend Engineer Specialist na AgileProcess - ## Nossa empresa A **AgileProcess** é uma startup criada para simplificar o processo logístico, tornando-o muito mais eficiente do início ao fim. Junto com pessoas que buscam tornar a logística cada vez mais digital e otimizada, a empresa utiliza as melhores tecnologias do mercado em busca de entregar sempre mais e melhor. Fundada em 2014 e com sede em Florianópolis, a **AgileProcess** está em constante crescimento e busca por criatividade, conhecimento, pessoas apaixonadas por inovação e tecnologia e o desejo de tornar a logística 100% digital em uma realidade. O QUE FAZEMOS? Utilizando as melhores tecnologias do mercado, o sistema **AgileProces**s otimiza o uso da frota, propõe as melhores rotas e sequenciamento de entregas e coletas, auxilia cada motorista, mostrando o percurso com apoio de GPS e faz a comprovação de entregas no exato momento em que forem realizadas. Hoje, mais de 9 milhões de entregas e coletas passam no software da **AgileProcess** por mês, presente em mais de 4.600 cidades pelo Brasil. ## Descrição da vaga Buscamos um(a) **Backend Engineer Specialist** que será responsável, junto ao nossos squads de desenvolvimento, por prover a melhor experiência para nossos clientes através de nossas soluções. RESPONSABILIDADES E ATRIBUIÇÕES - Desafiar o status quo e desenvolver soluções inovadoras para problemas complexos; - Desenvolver e manter nossos Microserviços de forma ágil, aplicando boas práticas de Engenharia de Software; - Contribuir com o desenvolvimento e arquitetura da plataforma, preparando-a para um crescimento acelerado; - Construir uma base sólida para o desenvolvimento de novos produtos; - Desenvolver sistemas escaláveis, sustentáveis e orientados ao usuário; - Trabalhar em um ambiente que estimula e valoriza a autonomia e a transparência; - Ajudar o crescimento do time de tecnologia e engenharia. ## Local 100% remoto. Estamos localizados em Florianópolis - Santa Catarina. ## Requisitos - Experiência e conhecimento profundo com desenvolvimento Java 8; - Experiência e conhecimento em GitFlow; - Experiência e conhecimento profundo em Docker; - Ter atuado na construção de testes unitários e integrados; - Ter atuado na construção de testes de comportamento (BDD); - Conhecimentos em Design Patterns, arquitetura e engenharia de software; - Conhecimentos em GitLab CI; - Conhecimentos em metodologias ágeis de desenvolvimento (Scrum, Kanban). Nosso Stack: - Java; - MySQL; - AWS; - Git (GitLab). ## Benefícios - Onboarding de boas-vindas! 
- “All Hands”: nosso encontro semanal com o CEO; - Dress Code: seja você mesmo(a); - Flexibilidade de horário; - VR/VA Flex: R$ 550,00 (mês); - Plano de saúde (para você e quem você ama); - Plano odontológico; - TotalPass; - Clube de Descontos - NewValue; - PLR; - Parceria com ZenKlub; e muito mais! ## Contratação CLT. Salário: R$ 12.500,00 - R$ 14.000,00 Nível: Especialista ## Como se candidatar Por favor envie um email para ana.felauto@agileprocess.com.br OU; Candidate-se pela nossa página de carreiras - https://agileprocess.gupy.io/jobs/864763?jobBoardSource=gupy_public_page OU; Me chame no whatsapp: +554898835995 ## Tempo médio de feedbacks Costumamos enviar feedbacks em até 03 dias após cada processo. E-mail para contato em caso de não haver resposta: ana.felauto@agileprocess.com.br ## Labels #### Alocação - Remoto #### Regime - CLT #### Nível - Especialista ",0, java backend engineer specialist na agileprocess nossa empresa a agileprocess é uma startup criada para simplificar o processo logístico tornando o muito mais eficiente do início ao fim junto com pessoas que buscam tornar a logística cada vez mais digital e otimizada a empresa utiliza as melhores tecnologias do mercado em busca de entregar sempre mais e melhor fundada em e com sede em florianópolis a agileprocess está em constante crescimento e busca por criatividade conhecimento pessoas apaixonadas por inovação e tecnologia e o desejo de tornar a logística digital em uma realidade o que fazemos utilizando as melhores tecnologias do mercado o sistema agileproces s otimiza o uso da frota propõe as melhores rotas e sequenciamento de entregas e coletas auxilia cada motorista mostrando o percurso com apoio de gps e faz a comprovação de entregas no exato momento em que forem realizadas hoje mais de milhões de entregas e coletas passam no software da agileprocess por mês presente em mais de cidades pelo brasil descrição da vaga buscamos um a backend engineer specialist que será responsável junto ao nossos squads de desenvolvimento por prover a melhor experiência para nossos clientes através de nossas soluções responsabilidades e atribuições desafiar o status quo e desenvolver soluções inovadoras para problemas complexos desenvolver e manter nossos microserviços de forma ágil aplicando boas práticas de engenharia de software contribuir com o desenvolvimento e arquitetura da plataforma preparando a para um crescimento acelerado construir uma base sólida para o desenvolvimento de novos produtos desenvolver sistemas escaláveis sustentáveis e orientados ao usuário trabalhar em um ambiente que estimula e valoriza a autonomia e a transparência ajudar o crescimento do time de tecnologia e engenharia local remoto estamos localizados em florianópolis santa catarina requisitos experiência e conhecimento profundo com desenvolvimento java experiência e conhecimento em gitflow experiência e conhecimento profundo em docker ter atuado na construção de testes unitários e integrados ter atuado na construção de testes de comportamento bdd conhecimentos em design patterns arquitetura e engenharia de software conhecimentos em gitlab ci conhecimentos em metodologias ágeis de desenvolvimento scrum kanban nosso stack java mysql aws git gitlab benefícios onboarding de boas vindas “all hands” nosso encontro semanal com o ceo dress code seja você mesmo a flexibilidade de horário vr va flex r mês plano de saúde para você e quem você ama plano odontológico totalpass clube de descontos newvalue plr parceria com zenklub e muito mais contratação clt salário r r 
nível especialista como se candidatar por favor envie um email para ana felauto agileprocess com br ou candidate se pela nossa página de carreiras ou me chame no whatsapp tempo médio de feedbacks costumamos enviar feedbacks em até dias após cada processo e mail para contato em caso de não haver resposta ana felauto agileprocess com br labels alocação remoto regime clt nível especialista ,0 5665,20677434731.0,IssuesEvent,2022-03-10 10:37:03,wazuh/wazuh-qa,https://api.github.com/repos/wazuh/wazuh-qa,opened,Automate the execution of all integration tests in Jenkins,team/qa subteam/qa-thunder type/jenkins-automation,"We want to automate the process of launching all the integration tests and obtain results formatted in a table. So far we have a pipeline with which we can launch a run to test a specific module or component of Wazuh (FIM, remoted, agentd ...), and it is necessary to launch `n` builds with different parameters to get a complete view. The idea is to create a new pipeline that automatically launches all the necessary builds, and formats the output of each of them to create an html report with a table showing the results obtained. This pipeline will be useful for testing releases, as well as testing PR changes before they are merged into stable branches.",1.0,"Automate the execution of all integration tests in Jenkins - We want to automate the process of launching all the integration tests and obtain results formatted in a table. So far we have a pipeline with which we can launch a run to test a specific module or component of Wazuh (FIM, remoted, agentd ...), and it is necessary to launch `n` builds with different parameters to get a complete view. The idea is to create a new pipeline that automatically launches all the necessary builds, and formats the output of each of them to create an html report with a table showing the results obtained. This pipeline will be useful for testing releases, as well as testing PR changes before they are merged into stable branches.",1,automate the execution of all integration tests in jenkins we want to automate the process of launching all the integration tests and obtain results formatted in a table so far we have a pipeline with which we can launch a run to test a specific module or component of wazuh fim remoted agentd and it is necessary to launch n builds with different parameters to get a complete view the idea is to create a new pipeline that automatically launches all the necessary builds and formats the output of each of them to create an html report with a table showing the results obtained this pipeline will be useful for testing releases as well as testing pr changes before they are merged into stable branches ,1 3648,14242672802.0,IssuesEvent,2020-11-19 02:24:26,PastVu/pastvu,https://api.github.com/repos/PastVu/pastvu,closed,Automerge with tagging,Automation CI/CD Priority: Major,"Задача модифицировать скрипт https://github.com/PastVu/pastvu/blob/master/.github/workflows/en-automerge.yml таким образом, чтобы при наличии текущем коммите мастера тега v.A.B.C в ветке `en` создавался тег `vA.B.C-en`. ",1.0,"Automerge with tagging - Задача модифицировать скрипт https://github.com/PastVu/pastvu/blob/master/.github/workflows/en-automerge.yml таким образом, чтобы при наличии текущем коммите мастера тега v.A.B.C в ветке `en` создавался тег `vA.B.C-en`. 
",1,automerge with tagging задача модифицировать скрипт таким образом чтобы при наличии текущем коммите мастера тега v a b c в ветке en создавался тег va b c en img width alt image src ,1 739940,25729571306.0,IssuesEvent,2022-12-07 19:10:28,BIDMCDigitalPsychiatry/LAMP-platform,https://api.github.com/repos/BIDMCDigitalPsychiatry/LAMP-platform,closed,Data portal visualization error,bug 1day frontend priority HIGH,"It appears that for any researcher, visualizations in the data portal produce an error. Bug originally reported here: https://mindlamp.discourse.group/t/cortex-visualizations-show-react-error-310/726 **To Reproduce** Example steps to reproduce error: 1. Enter LAMP dashboard 2. Enter data portal 3. Enter GUI mode 4. Select researcher 5. Select ""data quality tags"" Output for error: ![image](https://user-images.githubusercontent.com/103652751/205092613-a3938cc3-0885-4781-97d7-467f3e3375c1.png) ",1.0,"Data portal visualization error - It appears that for any researcher, visualizations in the data portal produce an error. Bug originally reported here: https://mindlamp.discourse.group/t/cortex-visualizations-show-react-error-310/726 **To Reproduce** Example steps to reproduce error: 1. Enter LAMP dashboard 2. Enter data portal 3. Enter GUI mode 4. Select researcher 5. Select ""data quality tags"" Output for error: ![image](https://user-images.githubusercontent.com/103652751/205092613-a3938cc3-0885-4781-97d7-467f3e3375c1.png) ",0,data portal visualization error it appears that for any researcher visualizations in the data portal produce an error bug originally reported here to reproduce example steps to reproduce error enter lamp dashboard enter data portal enter gui mode select researcher select data quality tags output for error ,0 3054,13037836403.0,IssuesEvent,2020-07-28 14:22:56,prisma/language-tools,https://api.github.com/repos/prisma/language-tools,closed,Test formatting / binary execution fails,kind/improvement topic: automation,"This might need a reproducible crash of the binary for testing. _Originally posted by @janpio in https://github.com/prisma/vscode/issues/84#issuecomment-618607640_ An error is shown in the output and a window notification is given including the error details.",1.0,"Test formatting / binary execution fails - This might need a reproducible crash of the binary for testing. 
_Originally posted by @janpio in https://github.com/prisma/vscode/issues/84#issuecomment-618607640_ An error is shown in the output and a window notification is given including the error details.",1,test formatting binary execution fails this might need a reproducible crash of the binary for testing originally posted by janpio in an error is shown in the output and a window notification is given including the error details ,1 429169,30028127833.0,IssuesEvent,2023-06-27 07:48:28,Pecneb/computer_vision_research,https://api.github.com/repos/Pecneb/computer_vision_research,closed,Run detection on all bellevue datasets in hourly resolution,documentation,"- [x] Bellevue Newport - [x] Bellevue Eastgate - [x] Bellevue NE - [x] Bellevue SE",1.0,"Run detection on all bellevue datasets in hourly resolution - - [x] Bellevue Newport - [x] Bellevue Eastgate - [x] Bellevue NE - [x] Bellevue SE",0,run detection on all bellevue datasets in hourly resolution bellevue newport bellevue eastgate bellevue ne bellevue se,0 6992,24099219382.0,IssuesEvent,2022-09-19 22:00:15,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,closed,[YSQL] Failed while visiting tablets in sys catalog: Cannot add a table to a colocation group for tablet: place is taken by a table,kind/bug duplicate area/ysql priority/medium status/awaiting-triage qa_automation,"Jira Link: [DB-3212](https://yugabyte.atlassian.net/browse/DB-3212) ### Description Issue occured during stress testing TABLEGROUPS with 3/3 runs. Scenario - spawn 2xlarge VMs 3RF cluster, create 5 tablegroups, spawn 5*10*100 (batches) workload that will load data to tables. At some point FATAL occurs ``` F20220816 07:22:59 ../../src/yb/master/catalog_manager.cc:1001] T 00000000000000000000000000000000 P c158b26d1df94b5595435d1103627cbb: Failed to load sys catalog: Corruption (yb/master/catalog_loaders.cc:265): Failed while visiting tablets in sys catalog: Cannot add a table 000033e6000030008000000000004048 (ColocationId: 1838765072) to a colocation group for tablet 6a398db1629e43ff9a9c81084514fe59: place is taken by a table 000033e6000030008000000000004048 @ 0x7fc14e985c1b google::LogMessage::SendToLog() @ 0x7fc14e986cd8 google::LogMessage::Flush() @ 0x7fc14e98715f google::LogMessageFatal::~LogMessageFatal() @ 0x7fc151fc3e7e yb::master::CatalogManager::LoadSysCatalogDataTask() @ 0x7fc14ec29e5c yb::ThreadPool::DispatchThread() @ 0x7fc14ec252fb yb::Thread::SuperviseThread() @ 0x7fc14cf9a694 start_thread @ 0x7fc14c6d741d __clone ``` ",1.0,"[YSQL] Failed while visiting tablets in sys catalog: Cannot add a table to a colocation group for tablet: place is taken by a table - Jira Link: [DB-3212](https://yugabyte.atlassian.net/browse/DB-3212) ### Description Issue occured during stress testing TABLEGROUPS with 3/3 runs. Scenario - spawn 2xlarge VMs 3RF cluster, create 5 tablegroups, spawn 5*10*100 (batches) workload that will load data to tables. 
At some point FATAL occurs ``` F20220816 07:22:59 ../../src/yb/master/catalog_manager.cc:1001] T 00000000000000000000000000000000 P c158b26d1df94b5595435d1103627cbb: Failed to load sys catalog: Corruption (yb/master/catalog_loaders.cc:265): Failed while visiting tablets in sys catalog: Cannot add a table 000033e6000030008000000000004048 (ColocationId: 1838765072) to a colocation group for tablet 6a398db1629e43ff9a9c81084514fe59: place is taken by a table 000033e6000030008000000000004048 @ 0x7fc14e985c1b google::LogMessage::SendToLog() @ 0x7fc14e986cd8 google::LogMessage::Flush() @ 0x7fc14e98715f google::LogMessageFatal::~LogMessageFatal() @ 0x7fc151fc3e7e yb::master::CatalogManager::LoadSysCatalogDataTask() @ 0x7fc14ec29e5c yb::ThreadPool::DispatchThread() @ 0x7fc14ec252fb yb::Thread::SuperviseThread() @ 0x7fc14cf9a694 start_thread @ 0x7fc14c6d741d __clone ``` ",1, failed while visiting tablets in sys catalog cannot add a table to a colocation group for tablet place is taken by a table jira link description issue occured during stress testing tablegroups with runs scenario spawn vms cluster create tablegroups spawn batches workload that will load data to tables at some point fatal occurs src yb master catalog manager cc t p failed to load sys catalog corruption yb master catalog loaders cc failed while visiting tablets in sys catalog cannot add a table colocationid to a colocation group for tablet place is taken by a table google logmessage sendtolog google logmessage flush google logmessagefatal logmessagefatal yb master catalogmanager loadsyscatalogdatatask yb threadpool dispatchthread yb thread supervisethread start thread clone ,1 179422,6625028926.0,IssuesEvent,2017-09-22 14:03:26,mercadopago/px-ios,https://api.github.com/repos/mercadopago/px-ios,opened,No se muestran los mensajes correctos en exclusiones,Priority: Medium,"### Comportamiento Esperado Cuando elijo un tipo de tarjeta (crédito, débito o prepaga), si para ese tipo de tarjeta solo hay 1 medio de pago deberia mostrar el disclaimer de que solo se acepta ese medio de pago, si hay mas de uno, e intento completar con un medio de pago que no es el elegido o que no es soportado al tocar el botón de ""Mas Info"" deberia listar los medios de pagos soportados dentro de la categoria de tarjeta elegida ",1.0,"No se muestran los mensajes correctos en exclusiones - ### Comportamiento Esperado Cuando elijo un tipo de tarjeta (crédito, débito o prepaga), si para ese tipo de tarjeta solo hay 1 medio de pago deberia mostrar el disclaimer de que solo se acepta ese medio de pago, si hay mas de uno, e intento completar con un medio de pago que no es el elegido o que no es soportado al tocar el botón de ""Mas Info"" deberia listar los medios de pagos soportados dentro de la categoria de tarjeta elegida ",0,no se muestran los mensajes correctos en exclusiones comportamiento esperado cuando elijo un tipo de tarjeta crédito débito o prepaga si para ese tipo de tarjeta solo hay medio de pago deberia mostrar el disclaimer de que solo se acepta ese medio de pago si hay mas de uno e intento completar con un medio de pago que no es el elegido o que no es soportado al tocar el botón de mas info deberia listar los medios de pagos soportados dentro de la categoria de tarjeta elegida ,0 328160,9990349744.0,IssuesEvent,2019-07-11 08:38:25,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,smallbusiness.chron.com - see bug description,browser-firefox-mobile engine-gecko priority-important," **URL**: 
https://smallbusiness.chron.com/rename-dual-boot-windows-start-up-63870.html **Browser / Version**: Firefox Mobile 67.0 **Operating System**: Android **Tested Another Browser**: No **Problem type**: Something else **Description**: Video autoplays with autoplay blocked **Steps to Reproduce**: Immediately upon loading the page to read an article, an unrelated video started autoplaying without any prompt. I don't want this video wasting my battery, my bandwidth, or my time. [![Screenshot Description](https://webcompat.com/uploads/2019/7/787ef557-678e-4acc-8a17-7de999cbcb7e-thumb.jpeg)](https://webcompat.com/uploads/2019/7/787ef557-678e-4acc-8a17-7de999cbcb7e.jpeg)
Browser Configuration
  • mixed active content blocked: false
  • image.mem.shared: true
  • buildID: 20190622041859
  • tracking content blocked: false
  • gfx.webrender.blob-images: true
  • hasTouchScreen: true
  • mixed passive content blocked: false
  • gfx.webrender.enabled: false
  • gfx.webrender.all: false
  • channel: default

Console Messages:

[u'[JavaScript Warning: ""The resource at https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js was blocked because content blocking is enabled."" {file: ""https://smallbusiness.chron.com/rename-dual-boot-windows-start-up-63870.html"" line: 0}]', u'[JavaScript Warning: ""The resource at https://cdn.taboola.com/libtrc/hearstlocalnews-chronmobile/loader.js was blocked because content blocking is enabled."" {file: ""https://smallbusiness.chron.com/rename-dual-boot-windows-start-up-63870.html"" line: 0}]', u'[JavaScript Warning: ""The resource at https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js was blocked because content blocking is enabled."" {file: ""https://smallbusiness.chron.com/rename-dual-boot-windows-start-up-63870.html"" line: 0}]', u'[JavaScript Warning: ""The resource at https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js was blocked because content blocking is enabled."" {file: ""https://smallbusiness.chron.com/rename-dual-boot-windows-start-up-63870.html"" line: 0}]', u'[JavaScript Warning: ""The resource at https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js was blocked because content blocking is enabled."" {file: ""https://smallbusiness.chron.com/rename-dual-boot-windows-start-up-63870.html"" line: 0}]', u'[JavaScript Warning: ""The resource at https://nexus.ensighten.com/hearst/news-3p/Bootstrap.js was blocked because content blocking is enabled."" {file: ""https://smallbusiness.chron.com/rename-dual-boot-windows-start-up-63870.html"" line: 0}]', u'[JavaScript Warning: ""Loading failed for the 
### Are you requesting a feature or reporting a bug?

bug

### What is the current behavior?

Role doesn't work after first-time initialization. `Cookie`, `localStorage` and `sessionStorage` should be restored when a preserved page is loaded but the page changes only the hash and it isn't reloaded after navigating.

### What is the expected behavior?

Page must be reloaded after the `useRole` function call.

### How would you reproduce the current behavior (if this is a bug)?

Node server:

```js
const http = require('http');

http
    .createServer((req, res) => {
        if (req.url === '/') {
            res.writeHead(200, { 'content-type': 'text/html' });
            res.end(`
                log in
            `);
        }
        else
            res.end();
    })
    .listen(4100);
```

#### Provide the test code and the tested page URL (if applicable)

Test code

```js
import { Role, ClientFunction, Selector } from 'testcafe';

fixture `Test authentication`
    .page `http://localhost:4100/`;

const role = Role(`http://localhost:4100/#login`, async t => await t.click('input'), { preserveUrl: true });

test('first login', async t => {
    await t
        .wait(3000)
        .useRole(role)
        .expect(Selector('h1').innerText).eql('Authorized');
});

test('second login', async t => {
    await t
        .wait(3000)
        .useRole(role)
        .expect(Selector('h1').innerText).eql('Authorized');
});
```
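The fixture above can be run with the `testcafe` CLI or programmatically. A minimal runner sketch using TestCafe's documented programmatic API — the file name `role-test.js` is an assumption, not from the original report, and the repro server is assumed to already be listening on port 4100:

```js
const createTestCafe = require('testcafe');

(async () => {
    const testcafe = await createTestCafe('localhost');

    try {
        const failedCount = await testcafe
            .createRunner()
            .src('role-test.js')   // hypothetical file containing the fixture above
            .browsers('chrome')
            .run();

        console.log(`Failed tests: ${failedCount}`);
    }
    finally {
        await testcafe.close();
    }
})();
```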

### Workaround

```js
import { Role, ClientFunction, Selector } from 'testcafe';

fixture `Test authentication`
    .page `http://localhost:4100/`;

const role = Role(`http://localhost:4100/#login`, async t => await t.click('input'), { preserveUrl: true });
const reloadPage = new ClientFunction(() => location.reload(true));
const fixedUseRole = async (t, role) => {
	await t.useRole(role);
	await reloadPage();
};

test('first login', async t => {
    await t.wait(3000)
    await fixedUseRole(t, role);
    await t.expect(Selector('h1').innerText).eql('Authorized');
});

test('second login', async t => {
    await t.wait(3000)
    await fixedUseRole(t, role);
    await t.expect(Selector('h1').innerText).eql('Authorized');
});
```
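If the workaround has to be applied in many tests, the reload-after-`useRole` step can also be hoisted into a `beforeEach` hook so that each test body stays clean. This is a sketch under the same assumptions as the repro above (server on port 4100, a single login input), not part of the original report:

```js
import { Role, ClientFunction, Selector } from 'testcafe';

const role = Role(`http://localhost:4100/#login`, async t => await t.click('input'), { preserveUrl: true });
const reloadPage = ClientFunction(() => location.reload(true));

fixture `Test authentication`
    .page `http://localhost:4100/`
    .beforeEach(async t => {
        // Log in once per test, then force the reload that hash-only navigation skips.
        await t.useRole(role);
        await reloadPage();
    });

test('first login', async t => {
    await t.expect(Selector('h1').innerText).eql('Authorized');
});
```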

### Specify your

* testcafe version: 0.19.0",1.0,"Role doesn't work when page navigation doesn't trigger page reloading - ### Are you requesting a feature or reporting a bug?

169943,20841989809.0,IssuesEvent,2022-03-21 02:02:07,michaeldotson/mini-capstone-vue-app,https://api.github.com/repos/michaeldotson/mini-capstone-vue-app,opened,CVE-2022-24772 (High) detected in node-forge-0.7.5.tgz,security vulnerability,"## CVE-2022-24772 - High Severity Vulnerability
Vulnerable Library - node-forge-0.7.5.tgz

JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.

Library home page: https://registry.npmjs.org/node-forge/-/node-forge-0.7.5.tgz

Path to dependency file: /mini-capstone-vue-app/package.json

Path to vulnerable library: /node_modules/node-forge/package.json

Dependency Hierarchy:
- cli-service-3.5.1.tgz (Root Library)
  - webpack-dev-server-3.2.1.tgz
    - selfsigned-1.10.4.tgz
      - :x: **node-forge-0.7.5.tgz** (Vulnerable Library)

Vulnerability Details

Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, RSA PKCS#1 v1.5 signature verification code does not check for trailing garbage bytes after decoding a `DigestInfo` ASN.1 structure. This can allow padding bytes to be removed and garbage data added to forge a signature when a low public exponent is being used. The issue has been addressed in `node-forge` version 1.3.0. There are currently no known workarounds.
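The underlying bug class is a parser that silently accepts unconsumed input; the usual defence is to reject any buffer with bytes left over after decoding. A generic illustration of that check — `decode` is a hypothetical parser callback, and this is not node-forge's actual code:

```js
// Reject input that leaves trailing bytes after a structure is decoded.
// `decode` is assumed to return the parsed value and the bytes consumed.
function decodeExactly(buffer, decode) {
    const { value, bytesRead } = decode(buffer);

    if (bytesRead !== buffer.length) {
        throw new Error(`${buffer.length - bytesRead} unexpected trailing byte(s)`);
    }
    return value;
}
```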

Publish Date: 2022-03-18

URL: CVE-2022-24772

CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: High
  - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24772

Release Date: 2022-03-18

Fix Resolution: node-forge - 1.3.0
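Because `node-forge` arrives transitively here (cli-service → webpack-dev-server → selfsigned), upgrading the direct dependencies may be enough, but the resolution can also be forced from the root `package.json`. A minimal sketch, assuming npm 8.3+ (which supports `overrides`); Yarn users would use the analogous `resolutions` field:

```json
{
  "overrides": {
    "node-forge": "^1.3.0"
  }
}
```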

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
1622,10469215037.0,IssuesEvent,2019-09-22 19:08:08,a-t-0/Productivity-phone,https://api.github.com/repos/a-t-0/Productivity-phone,opened,Sync multiple calenders with davdroid at once,Automation,"Currently, setting up the syncing with the (google) calendars is performed poorly automated by emulating human touch programatically on the phone from a pc. Davdroid has (random) request for donating, which disables the control/click flow, which leads to a async between commands given from pc and input required on phone. 
To solve, modify the davdroid so that it just asks the entire groups containing lists of calendar urls at once (copy pastable with single press, or enterable via api), and asking the username and password only once per group (or reading the username from the input).",1,sync multiple calenders with davdroid at once currently setting up the syncing with the google calendars is performed poorly automated by emulating human touch programatically on the phone from a pc davdroid has random request for donating which disables the control click flow which leads to a async between commands given from pc and input required on phone to solve modify the davdroid so that it just asks the entire groups containing lists of calendar urls at once copy pastable with single press or enterable via api and asking the username and password only once per group or reading the username from the input ,1 8151,26282565131.0,IssuesEvent,2023-01-07 13:18:52,ita-social-projects/TeachUA,https://api.github.com/repos/ita-social-projects/TeachUA,closed,[Advanced search] Different spelling of center title 'Школа мистецтв імені Миколи Дмитровича Леонтовича',bug Backend Priority: Low Automation,"**Environment:** Windows 11, Google Chrome Version 107.0.5304.107 (Official Build) (64-bit). **Reproducible:** always. **Build found:** last commit [7652f37](https://github.com/ita-social-projects/TeachUA/commit/7652f37a2d6de58fe02b06fb38c91acef4b623c7) **Preconditions** 1. Go to the webpage: https://speak-ukrainian.org.ua/dev/ 2. Go to 'Гуртки' tab. 3. Click on 'Розширений пошук' button. **Steps to reproduce** 1. Click on 'Центр' radio button. 2. Make sure that 'Київ' city is selected (if not, select it). 3. Set 'Район міста' as 'Деснянський'. 4. Click on the center with title 'Школа мистецтв імені Миколи Дмитровича Леонтовича'. 5. Pay attention to the spelling of that title. 6. Go to a database. 7. Execute the following query: SELECT DISTINCT c.name FROM centers as c INNER JOIN locations as l ON c.id=l.center_id INNER JOIN cities as ct ON l.city_id=ct.id INNER JOIN districts as ds ON l.district_id=ds.id WHERE ct.name = 'Київ' AND ds.name = 'Деснянський'; 8. Double-click on the center title 'Школа мистецтв імені Миколи Дмитровича Леонтовича'. **Actual result** There are two spaces between the words 'Дмитровича' and 'Леонтовича' on DB. UI: ![image](https://user-images.githubusercontent.com/82941067/201613727-2abb27af-2e61-45b6-86f3-d79427209f7f.png) DB: ![image](https://user-images.githubusercontent.com/82941067/201613885-f9b510d8-d1e1-49da-bd16-3b8241722298.png) **Expected result** Center with title 'Школа мистецтв імені Миколи Дмитровича Леонтовича' should have spelled the same on UI and DB (with one space between the words 'Дмитровича' and 'Леонтовича'). **User story and test case links** User story #274 [Test case](https://jira.softserve.academy/browse/TUA-455) **Labels to be added** ""Bug"", Priority (""pri: ""). ",1.0,"[Advanced search] Different spelling of center title 'Школа мистецтв імені Миколи Дмитровича Леонтовича' - **Environment:** Windows 11, Google Chrome Version 107.0.5304.107 (Official Build) (64-bit). **Reproducible:** always. **Build found:** last commit [7652f37](https://github.com/ita-social-projects/TeachUA/commit/7652f37a2d6de58fe02b06fb38c91acef4b623c7) **Preconditions** 1. Go to the webpage: https://speak-ukrainian.org.ua/dev/ 2. Go to 'Гуртки' tab. 3. Click on 'Розширений пошук' button. **Steps to reproduce** 1. Click on 'Центр' radio button. 2. Make sure that 'Київ' city is selected (if not, select it). 3. 
Set 'Район міста' as 'Деснянський'. 4. Click on the center with title 'Школа мистецтв імені Миколи Дмитровича Леонтовича'. 5. Pay attention to the spelling of that title. 6. Go to a database. 7. Execute the following query: SELECT DISTINCT c.name FROM centers as c INNER JOIN locations as l ON c.id=l.center_id INNER JOIN cities as ct ON l.city_id=ct.id INNER JOIN districts as ds ON l.district_id=ds.id WHERE ct.name = 'Київ' AND ds.name = 'Деснянський'; 8. Double-click on the center title 'Школа мистецтв імені Миколи Дмитровича Леонтовича'. **Actual result** There are two spaces between the words 'Дмитровича' and 'Леонтовича' on DB. UI: ![image](https://user-images.githubusercontent.com/82941067/201613727-2abb27af-2e61-45b6-86f3-d79427209f7f.png) DB: ![image](https://user-images.githubusercontent.com/82941067/201613885-f9b510d8-d1e1-49da-bd16-3b8241722298.png) **Expected result** Center with title 'Школа мистецтв імені Миколи Дмитровича Леонтовича' should have spelled the same on UI and DB (with one space between the words 'Дмитровича' and 'Леонтовича'). **User story and test case links** User story #274 [Test case](https://jira.softserve.academy/browse/TUA-455) **Labels to be added** ""Bug"", Priority (""pri: ""). ",1, different spelling of center title школа мистецтв імені миколи дмитровича леонтовича environment windows google chrome version official build bit reproducible always build found last commit preconditions go to the webpage go to гуртки tab click on розширений пошук button steps to reproduce click on центр radio button make sure that київ city is selected if not select it set район міста as деснянський click on the center with title школа мистецтв імені миколи дмитровича леонтовича pay attention to the spelling of that title go to a database execute the following query select distinct c name from centers as c inner join locations as l on c id l center id inner join cities as ct on l city id ct id inner join districts as ds on l district id ds id where ct name київ and ds name деснянський double click on the center title школа мистецтв імені миколи дмитровича леонтовича actual result there are two spaces between the words дмитровича and леонтовича on db ui db expected result center with title школа мистецтв імені миколи дмитровича леонтовича should have spelled the same on ui and db with one space between the words дмитровича and леонтовича user story and test case links user story labels to be added bug priority pri ,1 1926,11103549389.0,IssuesEvent,2019-12-17 04:21:03,bandprotocol/d3n,https://api.github.com/repos/bandprotocol/d3n,closed,Run EVM bridge automated test for every PR / commit,automation bridge chore,Setup CI to run tests for every push that affects `bridge/evm` directory.,1.0,Run EVM bridge automated test for every PR / commit - Setup CI to run tests for every push that affects `bridge/evm` directory.,1,run evm bridge automated test for every pr commit setup ci to run tests for every push that affects bridge evm directory ,1 214628,16568902759.0,IssuesEvent,2021-05-30 01:43:21,SHOPFIFTEEN/FIFTEEN_FRONT,https://api.github.com/repos/SHOPFIFTEEN/FIFTEEN_FRONT,opened,1-34. 상품관리-등록 관한 Issue,bug documentation,"오류를 재연하기 위해 필요한 조치 (즉, 어떻게 하여 오류를 발견하였나) 관리자 화면에서 상품 등록 시 배송비, 할인율 등 숫자로 입력해야 하는 부분을 글자로 등록 시도 예상했던 동작이나 결과 해당 부분을 어떤 방식으로 수정해야 한다는 경고창 표시 실제 나타난 동작이나 결과 경고창 표시 없이 등록만 불가(어느 부분이 오류인지 확인 불가) 가능한 경우 오류 수정을 위한 제안. 어느 부분이 오류인지 경고창으로 표시할 수 있도록 수정 필요",1.0,"1-34. 
Issue regarding product management/registration - Steps needed to reproduce the error (i.e., how the error was found): on the admin screen, when registering a product, attempt to enter text into fields that must be entered as numbers, such as the shipping fee and discount rate. Expected behavior or result: a warning dialog telling the user how the field should be corrected. Actual behavior or result: registration simply fails without any warning dialog (there is no way to tell which field is wrong). Suggestion for fixing the error, if possible: show a warning dialog indicating which field contains the error.",0, issue regarding product management registration steps needed to reproduce the error i e how the error was found on the admin screen when registering a product attempt to enter text into fields that must be entered as numbers such as the shipping fee and discount rate expected behavior or result a warning dialog telling the user how the field should be corrected actual behavior or result registration simply fails without any warning dialog there is no way to tell which field is wrong suggestion for fixing the error if possible show a warning dialog indicating which field contains the error,0 7469,24946537777.0,IssuesEvent,2022-11-01 01:04:36,dannytsang/homeassistant-config,https://api.github.com/repos/dannytsang/homeassistant-config,opened, Change automations to ⌛timers,automations integration: smartthings,"Similar to #63 but for all other automations that use the ""for"" parameter in automation triggers. Checklist: - [ ] Bedroom heated blankets - [ ] Fans - [ ] Switches",1.0," Change automations to ⌛timers - Similar to #63 but for all other automations that use the ""for"" parameter in automation triggers. Checklist: - [ ] Bedroom heated blankets - [ ] Fans - [ ] Switches",1, change automations to ⌛timers similar to but for all other automations that use the for parameter in automation triggers checklist bedroom heated blankets fans switches,1 404,6229997195.0,IssuesEvent,2017-07-11 06:35:23,VP-Technologies/assistant-server,https://api.github.com/repos/VP-Technologies/assistant-server,opened,Implement Creation of Devices DB,automation,"Follow the spec from the doc, and create a test script with fake devices.",1.0,"Implement Creation of Devices DB - Follow the spec from the doc, and create a test script with fake devices.",1,implement creation of devices db follow the spec from the doc and create a test script with fake devices ,1 5978,21781161857.0,IssuesEvent,2022-05-13 19:05:29,dotnet/arcade,https://api.github.com/repos/dotnet/arcade,closed,CG work for dotnet-helix-machines,First Responder Detected By - Automation Helix-Machines Operations,"To drive our CG alert to zero, please address the following items. https://dnceng.visualstudio.com/internal/_componentGovernance/dotnet-helix-machines?_a=alerts&typeId=6377838&alerts-view-option=active We need to address anything with a medium (or higher) priority.",1.0,"CG work for dotnet-helix-machines - To drive our CG alert to zero, please address the following items. https://dnceng.visualstudio.com/internal/_componentGovernance/dotnet-helix-machines?_a=alerts&typeId=6377838&alerts-view-option=active We need to address anything with a medium (or higher) priority.",1,cg work for dotnet helix machines to drive our cg alert to zero please address the following items we need to address anything with a medium or higher priority,1 4588,16961498009.0,IssuesEvent,2021-06-29 04:59:42,ecotiya/wicum,https://api.github.com/repos/ecotiya/wicum,opened,Introduce automated builds and automated tests,automation,"# [Functional requirements] - Introduce automated builds and automated tests. - For now, do this after the AWS deployment is complete, if there is time to spare. # [Tasks] - [ ] task1 - [ ] task2 - [ ] task3 # [Items to investigate] ",1.0,"Introduce automated builds and automated tests - # [Functional requirements] - Introduce automated builds and automated tests. - For now, do this after the AWS deployment is complete, if there is time to spare. # [Tasks] - [ ] task1 - [ ] task2 - [ ] task3 # [Items to investigate] ",1,introduce automated builds and automated tests functional requirements introduce automated builds and automated tests for now do this after the aws deployment is complete if there is time to spare tasks items to investigate ,1 2392,11862563563.0,IssuesEvent,2020-03-25 18:09:16,elastic/metricbeat-tests-poc,https://api.github.com/repos/elastic/metricbeat-tests-poc,closed,Validate Helm charts,automation,"Let's use a BDD approach to validate the official Helm charts for elastic. 
Something like this: ```gherkin @helm @k8s @metricbeat Feature: The Helm chart is following product recommended configuration for Kubernetes Scenario: The Metricbeat chart will create recommended K8S resources Given a cluster is running When the ""metricbeat"" Elastic's helm chart is installed Then a pod will be deployed on each node of the cluster by a DaemonSet And a ""Deployment"" will manage additional pods for metricsets querying internal services And a ""kube-state-metrics"" chart will retrieve specific Kubernetes metrics And a ""ConfigMap"" resource contains the ""metricbeat.yml"" content And a ""ConfigMap"" resource contains the ""kube-state-metrics-metricbeat.yml"" content And a ""ServiceAccount"" resource manages RBAC And a ""ClusterRole"" resource manages RBAC And a ""ClusterRoleBinding"" resource manages RBAC ```",1.0,"Validate Helm charts - Let's use a BDD approach to validate the official Helm charts for elastic. Something like this: ```gherkin @helm @k8s @metricbeat Feature: The Helm chart is following product recommended configuration for Kubernetes Scenario: The Metricbeat chart will create recommended K8S resources Given a cluster is running When the ""metricbeat"" Elastic's helm chart is installed Then a pod will be deployed on each node of the cluster by a DaemonSet And a ""Deployment"" will manage additional pods for metricsets querying internal services And a ""kube-state-metrics"" chart will retrieve specific Kubernetes metrics And a ""ConfigMap"" resource contains the ""metricbeat.yml"" content And a ""ConfigMap"" resource contains the ""kube-state-metrics-metricbeat.yml"" content And a ""ServiceAccount"" resource manages RBAC And a ""ClusterRole"" resource manages RBAC And a ""ClusterRoleBinding"" resource manages RBAC ```",1,validate helm charts let s use a bdd approach to validate the official helm charts for elastic something like this gherkin helm metricbeat feature the helm chart is following product recommended configuration for kubernetes scenario the metricbeat chart will create recommended resources given a cluster is running when the metricbeat elastic s helm chart is installed then a pod will be deployed on each node of the cluster by a daemonset and a deployment will manage additional pods for metricsets querying internal services and a kube state metrics chart will retrieve specific kubernetes metrics and a configmap resource contains the metricbeat yml content and a configmap resource contains the kube state metrics metricbeat yml content and a serviceaccount resource manages rbac and a clusterrole resource manages rbac and a clusterrolebinding resource manages rbac ,1 62496,6798459654.0,IssuesEvent,2017-11-02 05:45:28,minishift/minishift,https://api.github.com/repos/minishift/minishift,closed,make integration failed for cmd-openshift feature,component/integration-test kind/bug priority/major status/needs-info,"``` $ make integration GODOG_OPTS=""-tags cmd-openshift -format pretty"" go install -pkgdir=/home/amit/go/src/github.com/minishift/minishift/out/bindata -ldflags=""-X github.com/minishift/minishift/pkg/version.minishiftVersion=1.5.0 -X github.com/minishift/minishift/pkg/version.b2dIsoVersion=v1.1.0 -X github.com/minishift/minishift/pkg/version.centOsIsoVersion=v1.1.0 -X github.com/minishift/minishift/pkg/version.openshiftVersion=v3.6.0 -X github.com/minishift/minishift/pkg/version.commitSha=0e75c4ec"" ./cmd/minishift mkdir -p /home/amit/go/src/github.com/minishift/minishift/out/integration-test go test -timeout 3600s 
github.com/minishift/minishift/test/integration --tags=integration -v -args --test-dir /home/amit/go/src/github.com/minishift/minishift/out/integration-test --binary /home/amit/go/bin/minishift -tags cmd-openshift -format pretty Test run using Boot2Docker iso image. Keeping Minishift cache directory '/home/amit/go/src/github.com/minishift/minishift/out/integration-test/cache' for test run. Log successfully started, logging into: /home/amit/go/src/github.com/minishift/minishift/out/integration-test/integration.log Running Integration test in: /home/amit/go/src/github.com/minishift/minishift/out/integration-test Using binary: /home/amit/go/bin/minishift Feature: Basic As a user I can perform basic operations of Minishift and OpenShift Feature: Openshift commands Commands ""minishift openshift [sub-command]"" are used for interaction with Openshift cluster in VM provided by Minishift. . . . Scenario: Getting existing service without route # features/cmd-openshift.feature:67 When executing ""minishift openshift service nodejs-ex"" succeeds # integration_test.go:652 -> github.com/minishift/minishift/test/integration.executingMinishiftCommandSucceedsOrFails Then stdout should contain ""nodejs-ex"" # integration_test.go:594 -> github.com/minishift/minishift/test/integration.commandReturnShouldContain Output did not match. Expected: 'nodejs-ex', Actual: '|-----------|------|----------|-----------|--------| | NAMESPACE | NAME | NODEPORT | ROUTE-URL | WEIGHT | |-----------|------|----------|-----------|--------| |-----------|------|----------|-----------|--------| ' And stdout should not match """""" ^http:\/\/nodejs-ex-myproject\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.nip\.io """""" Scenario: Getting existing service with route # features/cmd-openshift.feature:100 When executing ""minishift openshift service nodejs-ex"" succeeds # integration_test.go:652 -> github.com/minishift/minishift/test/integration.executingMinishiftCommandSucceedsOrFails Then stdout should contain ""nodejs-ex"" # integration_test.go:594 -> github.com/minishift/minishift/test/integration.commandReturnShouldContain Output did not match. Expected: 'nodejs-ex', Actual: '|-----------|------|----------|-----------|--------| | NAMESPACE | NAME | NODEPORT | ROUTE-URL | WEIGHT | |-----------|------|----------|-----------|--------| |-----------|------|----------|-----------|--------| ' And stdout should match """""" http:\/\/nodejs-ex-myproject\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.nip\.io """""" . . . 
--- Failed scenarios: features/cmd-openshift.feature:39 features/cmd-openshift.feature:69 features/cmd-openshift.feature:93 features/cmd-openshift.feature:102 features/cmd-openshift.feature:110 18 scenarios (13 passed, 5 failed) 48 steps (38 passed, 5 failed, 5 skipped) 2m22.19907923s testing: warning: no tests to run PASS exit status 1 FAIL github.com/minishift/minishift/test/integration 142.214s make: *** [Makefile:176: integration] Error 1 18:17 $ ```",1.0,"make integration failed for cmd-openshift feature - ``` $ make integration GODOG_OPTS=""-tags cmd-openshift -format pretty"" go install -pkgdir=/home/amit/go/src/github.com/minishift/minishift/out/bindata -ldflags=""-X github.com/minishift/minishift/pkg/version.minishiftVersion=1.5.0 -X github.com/minishift/minishift/pkg/version.b2dIsoVersion=v1.1.0 -X github.com/minishift/minishift/pkg/version.centOsIsoVersion=v1.1.0 -X github.com/minishift/minishift/pkg/version.openshiftVersion=v3.6.0 -X github.com/minishift/minishift/pkg/version.commitSha=0e75c4ec"" ./cmd/minishift mkdir -p /home/amit/go/src/github.com/minishift/minishift/out/integration-test go test -timeout 3600s github.com/minishift/minishift/test/integration --tags=integration -v -args --test-dir /home/amit/go/src/github.com/minishift/minishift/out/integration-test --binary /home/amit/go/bin/minishift -tags cmd-openshift -format pretty Test run using Boot2Docker iso image. Keeping Minishift cache directory '/home/amit/go/src/github.com/minishift/minishift/out/integration-test/cache' for test run. Log successfully started, logging into: /home/amit/go/src/github.com/minishift/minishift/out/integration-test/integration.log Running Integration test in: /home/amit/go/src/github.com/minishift/minishift/out/integration-test Using binary: /home/amit/go/bin/minishift Feature: Basic As a user I can perform basic operations of Minishift and OpenShift Feature: Openshift commands Commands ""minishift openshift [sub-command]"" are used for interaction with Openshift cluster in VM provided by Minishift. . . . Scenario: Getting existing service without route # features/cmd-openshift.feature:67 When executing ""minishift openshift service nodejs-ex"" succeeds # integration_test.go:652 -> github.com/minishift/minishift/test/integration.executingMinishiftCommandSucceedsOrFails Then stdout should contain ""nodejs-ex"" # integration_test.go:594 -> github.com/minishift/minishift/test/integration.commandReturnShouldContain Output did not match. Expected: 'nodejs-ex', Actual: '|-----------|------|----------|-----------|--------| | NAMESPACE | NAME | NODEPORT | ROUTE-URL | WEIGHT | |-----------|------|----------|-----------|--------| |-----------|------|----------|-----------|--------| ' And stdout should not match """""" ^http:\/\/nodejs-ex-myproject\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.nip\.io """""" Scenario: Getting existing service with route # features/cmd-openshift.feature:100 When executing ""minishift openshift service nodejs-ex"" succeeds # integration_test.go:652 -> github.com/minishift/minishift/test/integration.executingMinishiftCommandSucceedsOrFails Then stdout should contain ""nodejs-ex"" # integration_test.go:594 -> github.com/minishift/minishift/test/integration.commandReturnShouldContain Output did not match. 
Expected: 'nodejs-ex', Actual: '|-----------|------|----------|-----------|--------| | NAMESPACE | NAME | NODEPORT | ROUTE-URL | WEIGHT | |-----------|------|----------|-----------|--------| |-----------|------|----------|-----------|--------| ' And stdout should match """""" http:\/\/nodejs-ex-myproject\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.nip\.io """""" . . . --- Failed scenarios: features/cmd-openshift.feature:39 features/cmd-openshift.feature:69 features/cmd-openshift.feature:93 features/cmd-openshift.feature:102 features/cmd-openshift.feature:110 18 scenarios (13 passed, 5 failed) 48 steps (38 passed, 5 failed, 5 skipped) 2m22.19907923s testing: warning: no tests to run PASS exit status 1 FAIL github.com/minishift/minishift/test/integration 142.214s make: *** [Makefile:176: integration] Error 1 18:17 $ ```",0,make integration failed for cmd openshift feature make integration godog opts tags cmd openshift format pretty go install pkgdir home amit go src github com minishift minishift out bindata ldflags x github com minishift minishift pkg version minishiftversion x github com minishift minishift pkg version x github com minishift minishift pkg version centosisoversion x github com minishift minishift pkg version openshiftversion x github com minishift minishift pkg version commitsha cmd minishift mkdir p home amit go src github com minishift minishift out integration test go test timeout github com minishift minishift test integration tags integration v args test dir home amit go src github com minishift minishift out integration test binary home amit go bin minishift tags cmd openshift format pretty test run using iso image keeping minishift cache directory home amit go src github com minishift minishift out integration test cache for test run log successfully started logging into home amit go src github com minishift minishift out integration test integration log running integration test in home amit go src github com minishift minishift out integration test using binary home amit go bin minishift feature basic as a user i can perform basic operations of minishift and openshift feature openshift commands commands minishift openshift are used for interaction with openshift cluster in vm provided by minishift scenario getting existing service without route features cmd openshift feature when executing minishift openshift service nodejs ex succeeds integration test go github com minishift minishift test integration executingminishiftcommandsucceedsorfails then stdout should contain nodejs ex integration test go github com minishift minishift test integration commandreturnshouldcontain output did not match expected nodejs ex actual namespace name nodeport route url weight and stdout should not match http nodejs ex myproject nip io scenario getting existing service with route features cmd openshift feature when executing minishift openshift service nodejs ex succeeds integration test go github com minishift minishift test integration executingminishiftcommandsucceedsorfails then stdout should contain nodejs ex integration test go github com minishift minishift test integration commandreturnshouldcontain output did not match expected nodejs ex actual namespace name nodeport route url weight and stdout should match http nodejs ex myproject nip io failed scenarios features cmd openshift feature features cmd openshift feature features cmd openshift feature features cmd openshift feature features cmd openshift feature scenarios passed failed steps passed failed skipped testing 
warning no tests to run pass exit status fail github com minishift minishift test integration make error ,0 6040,21940581337.0,IssuesEvent,2022-05-23 17:39:12,pharmaverse/admiral,https://api.github.com/repos/pharmaverse/admiral,closed,Create workflow to automatically create man files,automation,"The workflow should be triggered whenever something is pushed to `devel` or `master`, run `devtools::document()` and commited any updated file in the `man` folder.",1.0,"Create workflow to automatically create man files - The workflow should be triggered whenever something is pushed to `devel` or `master`, run `devtools::document()` and commited any updated file in the `man` folder.",1,create workflow to automatically create man files the workflow should be triggered whenever something is pushed to devel or master run devtools document and commited any updated file in the man folder ,1 137052,11097825807.0,IssuesEvent,2019-12-16 14:07:13,zeebe-io/zeebe,https://api.github.com/repos/zeebe-io/zeebe,closed,LogStreamTest.shouldCloseLogStream unstabled,Status: Needs Review Type: Maintenance Type: Unstable Test,"**Description** Failed sometimes in the CI. ``` [ERROR] Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.074 s <<< FAILURE! - in io.zeebe.logstreams.log.LogStreamTest [ERROR] io.zeebe.logstreams.log.LogStreamTest.shouldCloseLogStream Time elapsed: 0.683 s <<< FAILURE! java.lang.AssertionError: Expecting code to raise a throwable. at io.zeebe.logstreams.log.LogStreamTest.shouldCloseLogStream(LogStreamTest.java:91) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75) at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) ``` Output: ``` 12:13:49.119 [] [main] INFO io.zeebe.test - Test finished: shouldCreateNewLogStreamBatchWriter(io.zeebe.logstreams.log.LogStreamTest) 12:13:49.120 [] [main] INFO io.zeebe.test - Test started: shouldCloseLogStream(io.zeebe.logstreams.log.LogStreamTest) 12:13:49.313 [io.zeebe.logstreams.impl.LogStreamBuilder$1] [-zb-actors-3] WARN io.zeebe.logstreams - Unexpected non-empty log failed to read the last block 12:13:49.318 [io.zeebe.logstreams.impl.log.LogStreamImpl] [-zb-actors-3] WARN io.zeebe.logstreams - Unexpected non-empty log failed to read the last block 12:13:49.533 [io.zeebe.logstreams.impl.log.LogStreamImpl] [-zb-actors-1] DEBUG io.zeebe.logstreams - Configured log appender back pressure at partition 0 as AppenderVegasCfg{initialLimit=1024, maxConcurrency=32768, alphaLimit=0.7, betaLimit=0.95}. Window limiting is disabled 12:13:49.600 [io.zeebe.logstreams.impl.log.LogStreamImpl] [-zb-actors-3] INFO io.zeebe.logstreams - Close appender for log stream 0 12:13:49.601 [0-write-buffer] [-zb-actors-3] DEBUG io.zeebe.dispatcher - Dispatcher closed 12:13:49.602 [io.zeebe.logstreams.impl.log.LogStreamImpl] [-zb-actors-1] INFO io.zeebe.logstreams - On closing logstream 0 close 1 readers 12:13:49.603 [io.zeebe.logstreams.impl.log.LogStreamImpl] [-zb-actors-1] INFO io.zeebe.logstreams - Close log storage with name 0 ```",1.0,"LogStreamTest.shouldCloseLogStream unstabled - **Description** Failed sometimes in the CI. ``` [ERROR] Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.074 s <<< FAILURE! - in io.zeebe.logstreams.log.LogStreamTest [ERROR] io.zeebe.logstreams.log.LogStreamTest.shouldCloseLogStream Time elapsed: 0.683 s <<< FAILURE! java.lang.AssertionError: Expecting code to raise a throwable. 
at io.zeebe.logstreams.log.LogStreamTest.shouldCloseLogStream(LogStreamTest.java:91) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75) at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) ``` Output: ``` 12:13:49.119 [] [main] INFO io.zeebe.test - Test finished: shouldCreateNewLogStreamBatchWriter(io.zeebe.logstreams.log.LogStreamTest) 12:13:49.120 [] [main] INFO io.zeebe.test - Test started: shouldCloseLogStream(io.zeebe.logstreams.log.LogStreamTest) 12:13:49.313 [io.zeebe.logstreams.impl.LogStreamBuilder$1] [-zb-actors-3] WARN io.zeebe.logstreams - Unexpected non-empty log failed to read the last block 12:13:49.318 [io.zeebe.logstreams.impl.log.LogStreamImpl] 
[-zb-actors-3] WARN io.zeebe.logstreams - Unexpected non-empty log failed to read the last block 12:13:49.533 [io.zeebe.logstreams.impl.log.LogStreamImpl] [-zb-actors-1] DEBUG io.zeebe.logstreams - Configured log appender back pressure at partition 0 as AppenderVegasCfg{initialLimit=1024, maxConcurrency=32768, alphaLimit=0.7, betaLimit=0.95}. Window limiting is disabled 12:13:49.600 [io.zeebe.logstreams.impl.log.LogStreamImpl] [-zb-actors-3] INFO io.zeebe.logstreams - Close appender for log stream 0 12:13:49.601 [0-write-buffer] [-zb-actors-3] DEBUG io.zeebe.dispatcher - Dispatcher closed 12:13:49.602 [io.zeebe.logstreams.impl.log.LogStreamImpl] [-zb-actors-1] INFO io.zeebe.logstreams - On closing logstream 0 close 1 readers 12:13:49.603 [io.zeebe.logstreams.impl.log.LogStreamImpl] [-zb-actors-1] INFO io.zeebe.logstreams - Close log storage with name 0 ```",0,logstreamtest shouldcloselogstream unstabled description failed sometimes in the ci tests run failures errors skipped time elapsed s failure in io zeebe logstreams log logstreamtest io zeebe logstreams log logstreamtest shouldcloselogstream time elapsed s failure java lang assertionerror expecting code to raise a throwable at io zeebe logstreams log logstreamtest shouldcloselogstream logstreamtest java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit internal runners statements runbefores evaluate runbefores java at org junit rules externalresource evaluate externalresource java at org junit rules externalresource evaluate externalresource java at org junit rules runrules evaluate runrules java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org junit runners suite runchild suite java at org junit runners suite runchild suite java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire junitcore junitcore run junitcore java at org apache maven surefire junitcore junitcorewrapper createrequestandrun junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper executelazy junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute 
junitcorewrapper java at org apache maven surefire junitcore junitcoreprovider invoke junitcoreprovider java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java output info io zeebe test test finished shouldcreatenewlogstreambatchwriter io zeebe logstreams log logstreamtest info io zeebe test test started shouldcloselogstream io zeebe logstreams log logstreamtest warn io zeebe logstreams unexpected non empty log failed to read the last block warn io zeebe logstreams unexpected non empty log failed to read the last block debug io zeebe logstreams configured log appender back pressure at partition as appendervegascfg initiallimit maxconcurrency alphalimit betalimit window limiting is disabled info io zeebe logstreams close appender for log stream debug io zeebe dispatcher dispatcher closed info io zeebe logstreams on closing logstream close readers info io zeebe logstreams close log storage with name ,0 281243,30888436302.0,IssuesEvent,2023-08-04 01:19:37,hshivhare67/kernel_v4.1.15,https://api.github.com/repos/hshivhare67/kernel_v4.1.15,reopened,CVE-2017-12762 (Critical) detected in linuxlinux-4.6,Mend: dependency security vulnerability,"## CVE-2017-12762 - Critical Severity Vulnerability
Vulnerable Library - linuxlinux-4.6

The Linux Kernel

Library home page: https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux

Found in base branch: master

Vulnerable Source Files (2)

/drivers/isdn/i4l/isdn_common.c /drivers/isdn/i4l/isdn_common.c

Vulnerability Details

In /drivers/isdn/i4l/isdn_net.c: A user-controlled buffer is copied into a local buffer of constant size using strcpy without a length check, which can cause a buffer overflow. This affects the Linux kernel 4.9-stable tree, 4.12-stable tree, 3.18-stable tree, and 4.4-stable tree.
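
For orientation, the flaw class described above is the classic unbounded-`strcpy` pattern in C. The sketch below is a generic illustration only, not the kernel's actual code, and every name in it is hypothetical; it contrasts the vulnerable copy with a bounded one:

```c
#include <stdio.h>
#include <string.h>

#define LOCAL_BUF_SIZE 32               /* local buffer of constant size */

/* Vulnerable pattern: no length check before the copy. */
static void copy_unbounded(const char *user_input) {
    char local[LOCAL_BUF_SIZE];
    strcpy(local, user_input);          /* overflows once input reaches 32 bytes */
    printf("copied: %s\n", local);
}

/* Bounded pattern: truncate to the destination size and terminate. */
static void copy_bounded(const char *user_input) {
    char local[LOCAL_BUF_SIZE];
    strncpy(local, user_input, sizeof(local) - 1);
    local[sizeof(local) - 1] = '\0';    /* strncpy does not always NUL-terminate */
    printf("copied: %s\n", local);
}

int main(void) {
    (void)copy_unbounded;               /* shown for contrast; deliberately not called */
    copy_bounded("an-input-string-comfortably-longer-than-thirty-two-bytes");
    return 0;
}
```

Fixes for this class of bug generally take the bounded form: validate or limit the user-supplied length before copying.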

Publish Date: 2017-08-09

URL: CVE-2017-12762

CVSS 3 Score Details (9.8)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High
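
As a sanity check, the 9.8 above can be reproduced from these metrics with the CVSS v3.1 base-score formula. The snippet below is a rough sketch, not Mend's scoring code; the weight constants are the published CVSS v3.1 values for the vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H:

```c
#include <math.h>
#include <stdio.h>

/* CVSS v3.x "Roundup": round up to one decimal place. */
static double roundup1(double x) { return ceil(x * 10.0) / 10.0; }

int main(void) {
    double av = 0.85, ac = 0.77, pr = 0.85, ui = 0.85;   /* AV:N AC:L PR:N UI:N */
    double c = 0.56, i = 0.56, a = 0.56;                 /* C:H  I:H  A:H      */

    double iss = 1.0 - (1.0 - c) * (1.0 - i) * (1.0 - a);
    double impact = 6.42 * iss;                          /* scope unchanged */
    double exploitability = 8.22 * av * ac * pr * ui;
    double base = impact <= 0.0
                ? 0.0
                : roundup1(fmin(impact + exploitability, 10.0));

    printf("base score: %.1f\n", base);                  /* prints 9.8 */
    return 0;
}
```

(Compile with `-lm` for the math library.)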

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Release Date: 2017-08-09

Fix Resolution: 3.18.64,v4.13-rc4,4.12.5,4.4.80,4.9.41
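
For code that must build against a range of kernels, a compile-time guard along these lines can flag targets that may predate the fixed releases (a sketch assuming a kernel build environment; the 4.4.80 cutoff is taken from the fix-resolution list above, and only the 4.4 series is checked here):

```c
#include <linux/version.h>

/* Emit a build-time warning when targeting a pre-fix 4.4-series kernel. */
#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 4, 80)
#warning "Target kernel may predate the CVE-2017-12762 fix (4.4.80 on the 4.4 stable tree)"
#endif
```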

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2017-12762 (Critical) detected in linuxlinux-4.6 - ## CVE-2017-12762 - Critical Severity Vulnerability
Vulnerable Library - linuxlinux-4.6

The Linux Kernel

Library home page: https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux

Found in base branch: master

Vulnerable Source Files (2)

/drivers/isdn/i4l/isdn_common.c /drivers/isdn/i4l/isdn_common.c

Vulnerability Details

In /drivers/isdn/i4l/isdn_net.c: A user-controlled buffer is copied into a local buffer of constant size using strcpy without a length check, which can cause a buffer overflow. This affects the Linux kernel 4.9-stable tree, 4.12-stable tree, 3.18-stable tree, and 4.4-stable tree.

Publish Date: 2017-08-09

URL: CVE-2017-12762

CVSS 3 Score Details (9.8)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Release Date: 2017-08-09

Fix Resolution: 3.18.64,v4.13-rc4,4.12.5,4.4.80,4.9.41

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve critical detected in linuxlinux cve critical severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch master vulnerable source files drivers isdn isdn common c drivers isdn isdn common c vulnerability details in drivers isdn isdn net c a user controlled buffer is copied into a local buffer of constant size using strcpy without a length check which can cause a buffer overflow this affects the linux kernel stable tree stable tree stable tree and stable tree publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend ,0 10156,31814696081.0,IssuesEvent,2023-09-13 19:30:56,figuren-theater/ft-platform,https://api.github.com/repos/figuren-theater/ft-platform,closed,Establish quality standards,automation,"```[tasklist] ### Repository Standards - [x] Has nice [README.md](https://github.com/figuren-theater/new-ft-module/blob/main/README.md) - [x] Add [`.github/workflows/ft-issue-gardening.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/ft-issue-gardening.yml) file (if not exists) - [x] Add [`.github/workflows/release-drafter.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/release-drafter.yml) file - [x] Delete [`.github/workflows/update-changelog.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/update-changelog.yml) file - [x] Add [`.github/workflows/prerelease-changelog.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/prerelease-changelog.yml) file - [x] Add [`.editorconfig`](https://github.com/figuren-theater/new-ft-module/blob/main/.editorconfig) file - [x] Add [`.phpcs.xml`](https://github.com/figuren-theater/new-ft-module/blob/main/.phpcs.xml) file - [x] Check that `.phpcs.xml` file is not present in `.gitignore` - [x] Add [`CHANGELOG.md`](https://github.com/figuren-theater/new-ft-module/blob/main/CHANGELOG.md) file with an *Unreleased-Heading* - [x] Add [`phpstan.neon`](https://github.com/figuren-theater/new-ft-module/blob/main/phpstan.neon) file - [x] Run `composer require --dev figuren-theater/code-quality` - [x] Run `composer normalize` - [x] Run `vendor/bin/phpstan analyze .` - [x] Run `vendor/bin/phpcs .` - [x] Fix all errors ;) - [x] commit, PR & merge all (additional) changes - [x] Has branch protection enabled - [x] Add [`.github/workflows/build-test-measure.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/build-test-measure.yml) file - [x] Enable repo for required **Build, test & measure** status checks via [Repo Settings](/settings/actions) - [x] Add **Build, test & measure** badge to the [code-quality](https://github.com/figuren-theater/code-quality) README - [x] Submit repo to [packagist.org](https://packagist.org/packages/figuren-theater/) - [x] Remove explicit `repositories` entry from [ft-platform](https://github.com/figuren-theater/ft-platform)s `composer.json` - [x] Update `README.md` to see all workflows running - [x] Publish the new drafted Release 
as Prerelease to trigger auto-updating versions in CHANGELOG.md and plugin.php --> THIS WILL TRIGGER A DEPLOY !!! - [ ] https://github.com/figuren-theater/ft-platform/issues/14 ``` ",1.0,"Establish quality standards - ```[tasklist] ### Repository Standards - [x] Has nice [README.md](https://github.com/figuren-theater/new-ft-module/blob/main/README.md) - [x] Add [`.github/workflows/ft-issue-gardening.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/ft-issue-gardening.yml) file (if not exists) - [x] Add [`.github/workflows/release-drafter.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/release-drafter.yml) file - [x] Delete [`.github/workflows/update-changelog.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/update-changelog.yml) file - [x] Add [`.github/workflows/prerelease-changelog.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/prerelease-changelog.yml) file - [x] Add [`.editorconfig`](https://github.com/figuren-theater/new-ft-module/blob/main/.editorconfig) file - [x] Add [`.phpcs.xml`](https://github.com/figuren-theater/new-ft-module/blob/main/.phpcs.xml) file - [x] Check that `.phpcs.xml` file is not present in `.gitignore` - [x] Add [`CHANGELOG.md`](https://github.com/figuren-theater/new-ft-module/blob/main/CHANGELOG.md) file with an *Unreleased-Heading* - [x] Add [`phpstan.neon`](https://github.com/figuren-theater/new-ft-module/blob/main/phpstan.neon) file - [x] Run `composer require --dev figuren-theater/code-quality` - [x] Run `composer normalize` - [x] Run `vendor/bin/phpstan analyze .` - [x] Run `vendor/bin/phpcs .` - [x] Fix all errors ;) - [x] commit, PR & merge all (additional) changes - [x] Has branch protection enabled - [x] Add [`.github/workflows/build-test-measure.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/build-test-measure.yml) file - [x] Enable repo for required **Build, test & measure** status checks via [Repo Settings](/settings/actions) - [x] Add **Build, test & measure** badge to the [code-quality](https://github.com/figuren-theater/code-quality) README - [x] Submit repo to [packagist.org](https://packagist.org/packages/figuren-theater/) - [x] Remove explicit `repositories` entry from [ft-platform](https://github.com/figuren-theater/ft-platform)s `composer.json` - [x] Update `README.md` to see all workflows running - [x] Publish the new drafted Release as Prerelease to trigger auto-updating versions in CHANGELOG.md and plugin.php --> THIS WILL TRIGGER A DEPLOY !!! 
- [ ] https://github.com/figuren-theater/ft-platform/issues/14 ``` ",1,establish quality standards repository standards has nice add file if not exists add file delete file add file add file add file check that phpcs xml file is not present in gitignore add file with an unreleased heading add file run composer require dev figuren theater code quality run composer normalize run vendor bin phpstan analyze run vendor bin phpcs fix all errors commit pr merge all additional changes has branch protection enabled add file enable repo for required build test measure status checks via settings actions add build test measure badge to the readme submit repo to remove explicit repositories entry from composer json update readme md to see all workflows running publish the new drafted release as prerelease to trigger auto updating versions in changelog md and plugin php this will trigger a deploy ,1 8798,27172261107.0,IssuesEvent,2023-02-17 20:36:31,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Files inheriting permissions do not show in delta query,status:investigating Needs: Triage :mag: automation:Closed,"#### Category - [x] Question - [x] Documentation issue - [ ] Bug #### Expected or Desired Behavior According to the [docs](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/concepts/scan-guidance?view=odsp-graph-online#scanning-permissions-hierarchies): > By default, the delta query response will include sharing information for all items in the query that changed even if they inherit their permissions from their parent and did not have direct sharing changes themselves For a directory `dir1` containing `fileA`, changing the permission on `dir1` should show `fileA` in the GET /delta response. #### Observed Behavior Changing permission on a directory does not produce items in the GET /delta query response for files inheriting the directory permission. #### Steps to Reproduce 1. Create directory `dir1` 1. Create `fileA` in `dir1` 1. Get the latest delta token via `GET /users/{userId}/drive/root/delta?token=latest` 1. Create a shareable link on `dir1` giving everyone with the link access. 1. Get the latest changes via `GET /users/{userId}/drive/root/delta?token=` 1. 
`fileA` is not included in the response ``` > GET /users/{userId}/drive/root/delta?token= { ""@odata.context"": ""https://graph.microsoft.com/v1.0/$metadata#Collection(driveItem)"", ""@odata.deltaLink"": ""..."", ""value"": [ { ""@odata.type"": ""#microsoft.graph.driveItem"", ""name"": ""root"", ... }, { ""@odata.type"": ""#microsoft.graph.driveItem"", ""name"": ""dir1"", ... } ] } ```",1.0,"Files inheriting permissions do not show in delta query - #### Category - [x] Question - [x] Documentation issue - [ ] Bug #### Expected or Desired Behavior According to the [docs](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/concepts/scan-guidance?view=odsp-graph-online#scanning-permissions-hierarchies): > By default, the delta query response will include sharing information for all items in the query that changed even if they inherit their permissions from their parent and did not have direct sharing changes themselves For a directory `dir1` containing `fileA`, changing the permission on `dir1` should show `fileA` in the GET /delta response. #### Observed Behavior Changing permission on a directory does not produce items in the GET /delta query response for files inheriting the directory permission. #### Steps to Reproduce 1. Create directory `dir1` 1. Create `fileA` in `dir1` 1. Get the latest delta token via `GET /users/{userId}/drive/root/delta?token=latest` 1. Create a shareable link on `dir1` giving everyone with the link access. 1. Get the latest changes via `GET /users/{userId}/drive/root/delta?token=` 1. `fileA` is not included in the response ``` > GET /users/{userId}/drive/root/delta?token= { ""@odata.context"": ""https://graph.microsoft.com/v1.0/$metadata#Collection(driveItem)"", ""@odata.deltaLink"": ""..."", ""value"": [ { ""@odata.type"": ""#microsoft.graph.driveItem"", ""name"": ""root"", ... }, { ""@odata.type"": ""#microsoft.graph.driveItem"", ""name"": ""dir1"", ... } ] } ```",1,files inheriting permissions do not show in delta query category question documentation issue bug expected or desired behavior according to the by default the delta query response will include sharing information for all items in the query that changed even if they inherit their permissions from their parent and did not have direct sharing changes themselves for a directory containing filea changing the permission on should show filea in the get delta response observed behavior changing permission on a directory does not produce items in the get delta query response for files inheriting the directory permission steps to reproduce create directory create filea in get the latest delta token via get users userid drive root delta token latest create a shareable link on giving everyone with the link access get the latest changes via get users userid drive root delta token filea is not included in the response get users userid drive root delta token odata context odata deltalink value odata type microsoft graph driveitem name root odata type microsoft graph driveitem name ,1 4325,16087338687.0,IssuesEvent,2021-04-26 12:55:42,wazuh/wazuh-qa,https://api.github.com/repos/wazuh/wazuh-qa,opened,Integration tests for Vulnerability Detector: Amazon Linux support,automation core/vuln detector qa,"### Description This issue is part of wazuh/wazuh#6734. We will include support for the Amazon Linux OS, including two new feeds, one for Amazon Linux and another one for Amazon Linux 2. 
The following tests should cover the new supported OS: - Feed tests: These tests have to cover the addition of the new feeds for Amazon Linux - Provider tests: Here we will test the settings for the new provider - Scan results tests: Here we have to add the tests that ensure Amazon Linux agents are scanned properly",1.0,"Integration tests for Vulnerability Detector: Amazon Linux support - ### Description This issue is part of wazuh/wazuh#6734. We will include support for the Amazon Linux OS, including two new feeds, one for Amazon Linux and another one for Amazon Linux 2. The following tests should cover the new supported OS: - Feed tests: These tests have to cover the addition of the new feeds for Amazon Linux - Provider tests: Here we will test the settings for the new provider - Scan results tests: Here we have to add the tests that ensure Amazon Linux agents are scanned properly",1,integration tests for vulnerability detector amazon linux support description this issue is part of wazuh wazuh we will include support for the amazon linux os including two new feeds one for amazon linux and another one for amazon linux the following tests should cover the new supported os feed tests these tests have to cover the addition of the new feeds for amazon linux provider tests here we will test the settings for the new provider scan results tests here we have to add the tests that ensure amazon linux agents are scanned properly,1 1193,9666047446.0,IssuesEvent,2019-05-21 09:52:45,research-software-reactor/cyclecloud,https://api.github.com/repos/research-software-reactor/cyclecloud,opened,Create automated deployment template using ARM,step2setup-automation,Probably excluding identity as this is being worked on separately. ,1.0,Create automated deployment template using ARM - Probably excluding identity as this is being worked on separately. ,1,create automated deployment template using arm probably excluding identity as this is being worked on separately ,1 61056,8484539202.0,IssuesEvent,2018-10-26 03:10:26,NicholasThrom/typesafe-json,https://api.github.com/repos/NicholasThrom/typesafe-json,closed,Add updating changelog to PR template,documentation,"Some cards need the changelog to be updated when they are merged, but I know I'm going to forget to do that. 
A checkbox on PRs would be helpful.",1.0,"Add updating changelog to PR template - Some cards need the changelog to be updated when they are merged, but I know I'm going to forget to do that. A checkbox on PRs would be helpful.",0,add updating changelog to pr template some cards need the changelog to be updated when they are merged but i know i m going to forget to do that a checkbox on prs would be helpful ,0 114814,4646676741.0,IssuesEvent,2016-10-01 02:11:23,bbengfort/cloudscope,https://api.github.com/repos/bbengfort/cloudscope,closed,Federated Backpressure,in progress priority: high type: feature,"1. Version numbers get an additional component that can only be incremented by the Raft leader. 2. When Raft commits a write, it increments that counter 3. Because versions are compared starting from the Raft number first, this has the effect of making a committed write the most recent write in the system (e.g. +200 version number). 4. Dependencies are all tracked by the original version number 5. On eventual receipt of a higher version that is already in the log: - find all dependencies of that version, and make their ""raft marker version"" equal to the parent. - continue until ""latest local"" - then set that one to latest to write to and push in gossip. I believe this system will eventually converge. In Eventual only: the raft number is always 0, so eventual just works the same way as always. In Raft only: The ""raft number"" will be a monotonically increasing commit sequence but will have no other effect. In Federated: Given the following scenario: ``` A.1.0 / \ A.2.0 A.3.0 | | A.4.0 A.5.0 ``` If A.2.0 goes to Raft, Raft will make it A.2.1 and will reject A.3.0; The replica that performed Anti-Entropy with Raft will make A.4.0 --> A.4.1 and when A.5.0 comes in via anti-entropy, A.4.1 > A.5.0 so A.5.0 will be tossed out.",1.0,"Federated Backpressure - 1. Version numbers get an additional component that can only be incremented by the Raft leader. 2. When Raft commits a write, it increments that counter 3. Because versions are compared starting from the Raft number first, this has the effect of making a committed write the most recent write in the system (e.g. +200 version number). 4. Dependencies are all tracked by the original version number 5. On eventual receipt of a higher version that is already in the log: - find all dependencies of that version, and make their ""raft marker version"" equal to the parent. - continue until ""latest local"" - then set that one to latest to write to and push in gossip. I believe this system will eventually converge. In Eventual only: the raft number is always 0, so eventual just works the same way as always. In Raft only: The ""raft number"" will be a monotonically increasing commit sequence but will have no other effect. 
In Federated: Given the following scenario: ``` A.1.0 / \ A.2.0 A.3.0 | | A.4.0 A.5.0 ``` If A.2.0 goes to Raft, Raft will make it A.2.1 and will reject A.3.0; The replica that performed Anti-Entropy with Raft will make A.4.0 --> A.4.1 and when A.5.0 comes in via anti-entropy, A.4.1 > A.5.0 so A.5.0 will be tossed out.",0,federated backpressure version numbers get an additional component that can only be incremented by the raft leader when raft commits a write it increments that counter because versions are compared starting from the raft number first this has the effect of making a committed write the most recent write in the system e g version number dependencies are all tracked by the original version number on eventual receipt of a higher version that is already in the log find all dependencies of that version and make their raft marker version equal to the parent continue until latest local then set that one to latest to write to and push in gossip i believe this system will eventually converge in eventual only the raft number is always so eventual just works the same way as always in raft only the raft number will be a monotonically increasing commit sequence but will have no other effect in federated given the following scenario a a a a a if a goes to raft raft will make it a and will reject a the replica that performed anti entropy with raft will make a a and when a comes in via anti entropy a a so a will be tossed out ,0 2714,12467889645.0,IssuesEvent,2020-05-28 17:52:03,dotnet/interactive,https://api.github.com/repos/dotnet/interactive,opened,Command line method for converting between .ipynb and .dib,Area-Automation Area-Jupyter enhancement,"Currently, the VS Code extension contains logic for converting a `.ipynb` into a `.dib` and vice versa. In order to support automation scenarios, this functionality should be made available through a subcommand on the `dotnet-interactive` CLI and as API methods within `Microsoft.DotNet.Interactive.Jupyter`, e.g.: ```console > dotnet interactive ipynb-to-dib /path/to/existing.ipynb /path/to/created.dib ``` Related: #467. ",1.0,"
",1,command line method for converting between ipynb and dib currently the vs code extension contains logic for converting a ipynb into a dib and vice versa in order to support automation scenarios this functionality should be made available through a subcommand on the dotnet interactive cli and as api methods within microsoft dotnet interactive jupyter e g console dotnet interactive ipynb to dib path to existing ipynb path to created dib related ,1 392770,26957958685.0,IssuesEvent,2023-02-08 16:06:54,hyperledger/firefly,https://api.github.com/repos/hyperledger/firefly,closed,Need documentation on the updated blockchain operation structure,documentation,See https://miro.com/app/board/uXjVOWHk_6s=/?moveToWidget=3458764544594470770&cot=14 for the new `history` and `historySummary` fields.,1.0,Need documentation on the updated blockchain operation structure - See https://miro.com/app/board/uXjVOWHk_6s=/?moveToWidget=3458764544594470770&cot=14 for the new `history` and `historySummary` fields.,0,need documentation on the updated blockchain operation structure see for the new history and historysummary fields ,0 3748,14491739545.0,IssuesEvent,2020-12-11 05:24:52,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,opened,[BUG]Engine Image fail to reach `ready` state if there are tainted worker node,area/manager bug priority/2 require/automation-e2e,"**Describe the bug** Engine Image fail to reach `ready` state if there are tainted worker node in the cluster and Longhorn doesn't tolerate the taint. **To Reproduce** Steps to reproduce the behavior: 1. Install Longhorn 2. Taint one of the node, then delete the engine image daemonset to allow recreation 3. Volume operations (attach/detach) will fail due to engine image is not ready. **Expected behavior** Volume operations should still work. **Log** ``` 2020-12-09T13:34:19.849639616-05:00 time=""2020-12-09T18:34:19Z"" level=error msg=""Error in request: unable to attach volume pvc-xxx to xxx: cannot attach volume pvc-xxx7 with image longhornio/longhorn-engine:v1.0.2: engine image ei-ee18f965 (longhornio/longhorn-engine:v1.0.2) is not ready, it's deploying"" ``` ``` typemeta: kind: """" apiversion: """" objectmeta: name: engine-image-ei-ee18f965 ... status: currentnumberscheduled: 6 numbermisscheduled: 1 desirednumberscheduled: 6 numberready: 6 observedgeneration: 1 updatednumberscheduled: 6 numberavailable: 6 numberunavailable: 0 collisioncount: null conditions: [] ``` **Environment:** - Longhorn version: v1.0.2 - Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: k3s v1.18.6 - Node config - OS type and version: RHEL - CPU per node: 4 - Memory per node: 8 - Disk type(e.g. SSD/NVMe): SSD - Network bandwidth and latency between the nodes: 10GB - Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal): VMWare **Additional context** It's due to the tainted node prevent the daemonset to reach the full availability, result in engine image object failed to reach ready state. ",1.0,"[BUG]Engine Image fail to reach `ready` state if there are tainted worker node - **Describe the bug** Engine Image fail to reach `ready` state if there are tainted worker node in the cluster and Longhorn doesn't tolerate the taint. **To Reproduce** Steps to reproduce the behavior: 1. Install Longhorn 2. Taint one of the node, then delete the engine image daemonset to allow recreation 3. Volume operations (attach/detach) will fail due to engine image is not ready. **Expected behavior** Volume operations should still work. 
**Log** ``` 2020-12-09T13:34:19.849639616-05:00 time=""2020-12-09T18:34:19Z"" level=error msg=""Error in request: unable to attach volume pvc-xxx to xxx: cannot attach volume pvc-xxx7 with image longhornio/longhorn-engine:v1.0.2: engine image ei-ee18f965 (longhornio/longhorn-engine:v1.0.2) is not ready, it's deploying"" ``` ``` typemeta: kind: """" apiversion: """" objectmeta: name: engine-image-ei-ee18f965 ... status: currentnumberscheduled: 6 numbermisscheduled: 1 desirednumberscheduled: 6 numberready: 6 observedgeneration: 1 updatednumberscheduled: 6 numberavailable: 6 numberunavailable: 0 collisioncount: null conditions: [] ``` **Environment:** - Longhorn version: v1.0.2 - Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: k3s v1.18.6 - Node config - OS type and version: RHEL - CPU per node: 4 - Memory per node: 8 - Disk type(e.g. SSD/NVMe): SSD - Network bandwidth and latency between the nodes: 10GB - Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal): VMWare **Additional context** It's due to the tainted node prevent the daemonset to reach the full availability, result in engine image object failed to reach ready state. ",1, engine image fail to reach ready state if there are tainted worker node describe the bug engine image fail to reach ready state if there are tainted worker node in the cluster and longhorn doesn t tolerate the taint to reproduce steps to reproduce the behavior install longhorn taint one of the node then delete the engine image daemonset to allow recreation volume operations attach detach will fail due to engine image is not ready expected behavior volume operations should still work log time level error msg error in request unable to attach volume pvc xxx to xxx cannot attach volume pvc with image longhornio longhorn engine engine image ei longhornio longhorn engine is not ready it s deploying typemeta kind apiversion objectmeta name engine image ei status currentnumberscheduled numbermisscheduled desirednumberscheduled numberready observedgeneration updatednumberscheduled numberavailable numberunavailable collisioncount null conditions environment longhorn version kubernetes distro e g rke eks openshift and version node config os type and version rhel cpu per node memory per node disk type e g ssd nvme ssd network bandwidth and latency between the nodes underlying infrastructure e g on aws gce eks gke vmware kvm baremetal vmware additional context it s due to the tainted node prevent the daemonset to reach the full availability result in engine image object failed to reach ready state ,1 110941,17009632871.0,IssuesEvent,2021-07-02 00:58:39,Chiencc/angular,https://api.github.com/repos/Chiencc/angular,opened,"CVE-2018-20677 (Medium) detected in bootstrap-3.3.7.tgz, bootstrap-3.1.1.tgz",security vulnerability,"## CVE-2018-20677 - Medium Severity Vulnerability
Vulnerable Libraries - bootstrap-3.3.7.tgz, bootstrap-3.1.1.tgz

bootstrap-3.3.7.tgz

The most popular front-end framework for developing responsive, mobile first projects on the web.

Library home page: https://registry.npmjs.org/bootstrap/-/bootstrap-3.3.7.tgz

Path to dependency file: angular/package.json

Path to vulnerable library: angular/node_modules/bootstrap

Dependency Hierarchy:
- angular-benchpress-0.2.2.tgz (Root Library)
  - :x: **bootstrap-3.3.7.tgz** (Vulnerable Library)

bootstrap-3.1.1.tgz

Sleek, intuitive, and powerful front-end framework for faster and easier web development.

Library home page: https://registry.npmjs.org/bootstrap/-/bootstrap-3.1.1.tgz

Path to dependency file: angular/package.json

Path to vulnerable library: angular/node_modules/bootstrap

Dependency Hierarchy: - :x: **bootstrap-3.1.1.tgz** (Vulnerable Library)

Found in HEAD commit: 72f7fdba608ab5ba7ff145a21673661d5316ebaa

Found in base branch: master

Vulnerability Details

In Bootstrap before 3.4.0, XSS is possible in the affix configuration target property.
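A minimal sketch of the vulnerable pattern (illustrative only; the element IDs and the attacker-controlled string are assumptions, not a reproduction from this repository):

```typescript
declare const $: any; // jQuery, assumed loaded alongside Bootstrap < 3.4.0

// Before 3.4.0 the affix plugin passed its `target` option to $() without
// sanitization, so an attacker-influenced "selector" containing HTML could
// create live elements and execute script.
const untrusted = '<img src=x onerror=alert(document.domain)>';
$('#sidebar').affix({ target: untrusted }); // payload runs in affected versions
```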

Publish Date: 2019-01-09

URL: CVE-2018-20677

CVSS 3 Score Details (6.1)

Base Score Metrics:
- Exploitability Metrics: Attack Vector: Network; Attack Complexity: Low; Privileges Required: None; User Interaction: Required; Scope: Changed
- Impact Metrics: Confidentiality Impact: Low; Integrity Impact: Low; Availability Impact: None


Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20677

Release Date: 2019-01-09

Fix Resolution: Bootstrap - v3.4.0; NorDroN.AngularTemplate - 0.1.6; Dynamic.NET.Express.ProjectTemplates - 0.8.0; dotnetng.template - 1.0.0.4; ZNxtApp.Core.Module.Theme - 1.0.9-Beta; JMeter - 5.0.0
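For the npm-managed copy listed above (angular/package.json), the fix resolution amounts to a dependency bump along these lines (a sketch, not the repository's actual manifest):

```json
{
  "dependencies": {
    "bootstrap": "^3.4.0"
  }
}
```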

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2018-20677 (Medium) detected in bootstrap-3.3.7.tgz, bootstrap-3.1.1.tgz - ## CVE-2018-20677 - Medium Severity Vulnerability
Vulnerable Libraries - bootstrap-3.3.7.tgz, bootstrap-3.1.1.tgz

bootstrap-3.3.7.tgz

The most popular front-end framework for developing responsive, mobile first projects on the web.

Library home page: https://registry.npmjs.org/bootstrap/-/bootstrap-3.3.7.tgz

Path to dependency file: angular/package.json

Path to vulnerable library: angular/node_modules/bootstrap

Dependency Hierarchy:
- angular-benchpress-0.2.2.tgz (Root Library)
  - :x: **bootstrap-3.3.7.tgz** (Vulnerable Library)

bootstrap-3.1.1.tgz

Sleek, intuitive, and powerful front-end framework for faster and easier web development.

Library home page: https://registry.npmjs.org/bootstrap/-/bootstrap-3.1.1.tgz

Path to dependency file: angular/package.json

Path to vulnerable library: angular/node_modules/bootstrap

Dependency Hierarchy: - :x: **bootstrap-3.1.1.tgz** (Vulnerable Library)

Found in HEAD commit: 72f7fdba608ab5ba7ff145a21673661d5316ebaa

Found in base branch: master

Vulnerability Details

In Bootstrap before 3.4.0, XSS is possible in the affix configuration target property.

Publish Date: 2019-01-09

URL: CVE-2018-20677

CVSS 3 Score Details (6.1)

Base Score Metrics:
- Exploitability Metrics: Attack Vector: Network; Attack Complexity: Low; Privileges Required: None; User Interaction: Required; Scope: Changed
- Impact Metrics: Confidentiality Impact: Low; Integrity Impact: Low; Availability Impact: None


Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20677

Release Date: 2019-01-09

Fix Resolution: Bootstrap - v3.4.0; NorDroN.AngularTemplate - 0.1.6; Dynamic.NET.Express.ProjectTemplates - 0.8.0; dotnetng.template - 1.0.0.4; ZNxtApp.Core.Module.Theme - 1.0.9-Beta; JMeter - 5.0.0

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in bootstrap tgz bootstrap tgz cve medium severity vulnerability vulnerable libraries bootstrap tgz bootstrap tgz bootstrap tgz the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to dependency file angular package json path to vulnerable library angular node modules bootstrap dependency hierarchy angular benchpress tgz root library x bootstrap tgz vulnerable library bootstrap tgz sleek intuitive and powerful front end framework for faster and easier web development library home page a href path to dependency file angular package json path to vulnerable library angular node modules bootstrap dependency hierarchy x bootstrap tgz vulnerable library found in head commit a href found in base branch master vulnerability details in bootstrap before xss is possible in the affix configuration target property publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bootstrap nordron angulartemplate dynamic net express projecttemplates dotnetng template znxtapp core module theme beta jmeter step up your open source security game with whitesource ,0 487104,14019329923.0,IssuesEvent,2020-10-29 18:01:34,Elle624/FitLit-Refactor,https://api.github.com/repos/Elle624/FitLit-Refactor,closed,Refactor hydration card in scripts,Priority 1 html js refactor,"We want to refactor the hydration related code in the scripts file into functions that determine the change of information that is displayed upon the specific events - [x] Group innerText - [x] Group innerHTML - [x] Wrap into functions if necessary",1.0,"Refactor hydration card in scripts - We want to refactor the hydration related code in the scripts file into functions that determine the change of information that is displayed upon the specific events - [x] Group innerText - [x] Group innerHTML - [x] Wrap into functions if necessary",0,refactor hydration card in scripts we want to refactor the hydration related code in the scripts file into functions that determine the change of information that is displayed upon the specific events group innertext group innerhtml wrap into functions if necessary,0 9308,27957236710.0,IssuesEvent,2023-03-24 13:17:27,gchq/gaffer-docker,https://api.github.com/repos/gchq/gaffer-docker,closed,Improve log output of publish_images.sh,automation,"It should be clear what image and tag is being pushed and whether it was successful or not. As well as this, it would be much clearer if the docker build output wasn't there. ",1.0,"Improve log output of publish_images.sh - It should be clear what image and tag is being pushed and whether it was successful or not. As well as this, it would be much clearer if the docker build output wasn't there. 
",1,improve log output of publish images sh it should be clear what image and tag is being pushed and whether it was successful or not as well as this it would be much clearer if the docker build output wasn t there ,1 271958,29794962629.0,IssuesEvent,2023-06-16 01:00:16,billmcchesney1/hadoop,https://api.github.com/repos/billmcchesney1/hadoop,closed,CVE-2016-10744 (Medium) detected in select2-4.0.0.tgz - autoclosed,Mend: dependency security vulnerability,"## CVE-2016-10744 - Medium Severity Vulnerability
Vulnerable Library - select2-4.0.0.tgz

Select2 is a jQuery based replacement for select boxes. It supports searching, remote data sets, and infinite scrolling of results.

Library home page: https://registry.npmjs.org/select2/-/select2-4.0.0.tgz

Path to dependency file: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json

Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/select2/package.json

Dependency Hierarchy: - :x: **select2-4.0.0.tgz** (Vulnerable Library)

Found in base branch: trunk

Vulnerability Details

In Select2 through 4.0.5, as used in Snipe-IT and other products, rich selectlists allow XSS. This affects use cases with Ajax remote data loading when HTML templates are used to display listbox data.

Publish Date: 2019-03-27

URL: CVE-2016-10744

CVSS 3 Score Details (6.1)

Base Score Metrics:
- Exploitability Metrics: Attack Vector: Network; Attack Complexity: Low; Privileges Required: None; User Interaction: Required; Scope: Changed
- Impact Metrics: Confidentiality Impact: Low; Integrity Impact: Low; Availability Impact: None


Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10744

Release Date: 2019-03-27

Fix Resolution: 4.0.8
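Assuming the library is npm-managed, as its registry home page suggests, the upgrade to the fixed release could be applied with:

```console
> npm install select2@4.0.8 --save
```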

*** - [ ] Check this box to open an automated fix PR ",True,"CVE-2016-10744 (Medium) detected in select2-4.0.0.tgz - autoclosed - ## CVE-2016-10744 - Medium Severity Vulnerability
Vulnerable Library - select2-4.0.0.tgz

Select2 is a jQuery based replacement for select boxes. It supports searching, remote data sets, and infinite scrolling of results.

Library home page: https://registry.npmjs.org/select2/-/select2-4.0.0.tgz

Path to dependency file: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json

Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/select2/package.json

Dependency Hierarchy: - :x: **select2-4.0.0.tgz** (Vulnerable Library)

Found in base branch: trunk

Vulnerability Details

In Select2 through 4.0.5, as used in Snipe-IT and other products, rich selectlists allow XSS. This affects use cases with Ajax remote data loading when HTML templates are used to display listbox data.

Publish Date: 2019-03-27

URL: CVE-2016-10744

CVSS 3 Score Details (6.1)

Base Score Metrics:
- Exploitability Metrics: Attack Vector: Network; Attack Complexity: Low; Privileges Required: None; User Interaction: Required; Scope: Changed
- Impact Metrics: Confidentiality Impact: Low; Integrity Impact: Low; Availability Impact: None


Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10744

Release Date: 2019-03-27

Fix Resolution: 4.0.8

*** - [ ] Check this box to open an automated fix PR ",0,cve medium detected in tgz autoclosed cve medium severity vulnerability vulnerable library tgz is a jquery based replacement for select boxes it supports searching remote data sets and infinite scrolling of results library home page a href path to dependency file hadoop yarn project hadoop yarn hadoop yarn ui src main webapp package json path to vulnerable library hadoop yarn project hadoop yarn hadoop yarn ui src main webapp node modules package json dependency hierarchy x tgz vulnerable library found in base branch trunk vulnerability details in through as used in snipe it and other products rich selectlists allow xss this affects use cases with ajax remote data loading when html templates are used to display listbox data publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution check this box to open an automated fix pr ,0 468,6554443771.0,IssuesEvent,2017-09-06 05:55:56,inboundnow/inbound-pro,https://api.github.com/repos/inboundnow/inbound-pro,opened,[automation/email] Make sure DISABLE_WP_CRON is not set to true and notify user when it is,Automation Mailer UX Enhancement,This will help to prevent customer support requests and UX issues with automation/mailer when users have this flagged to true and are not aware of it. ,1.0,[automation/email] Make sure DISABLE_WP_CRON is not set to true and notify user when it is - This will help to prevent customer support requests and UX issues with automation/mailer when users have this flagged to true and are not aware of it. 
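One way to verify the DISABLE_WP_CRON flag before relying on cron-driven automation, assuming WP-CLI is available (a sketch, not the plugin's actual check; the command prints the constant if it is set in wp-config.php):

```console
> wp config get DISABLE_WP_CRON
```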
,1, make sure disable wp cron is not set to true and notify user when it is this will help to prevent customer support requests and ux issues with automation mailer when users have this flagged to true and are not aware of it ,1 7076,24190043675.0,IssuesEvent,2022-09-23 16:35:37,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,FAILED: Automated Tests(114),automation,"Stats: { ""suites"": 44, ""tests"": 322, ""passes"": 208, ""pending"": 0, ""failures"": 114, ""start"": ""2022-09-21T06:13:43.771Z"", ""end"": ""2022-09-21T06:57:39.746Z"", ""duration"": 945206, ""testsRegistered"": 322, ""passPercent"": 64.59627329192547, ""pendingPercent"": 0, ""other"": 0, ""hasOther"": false, ""skipped"": 0, ""hasSkipped"": false } Failed Tests: ""activate the service for Test environment"" ""activate the service for Dev environment"" ""grant namespace access to Mark (access manager)"" ""Grant CredentialIssuer.Admin permission to Janis (API Owner)"" ""authenticates Mark (Access-Manager)"" ""authenticates Mark (Access-Manager)"" ""verify the request details"" ""Add group labels in request details window"" ""approves an access request"" ""Verify that API is accessible with the generated API Key"" ""authenticates Mark (Access-Manager)"" ""Navigate to Consumer page and filter the product"" ""Click on the first consumer"" ""Click on Grant Access button"" ""Grant Access to Test environment"" ""Verify that API is accessible with the generated API Key for Test environment"" ""authenticates Mark (Access Manager)"" ""Navigate to Consumer page and filter the product"" ""Select the consumer from the list"" ""set IP address that is not accessible in the network as allowed IP and set Route as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""set IP address that is not accessible in the network as allowed IP and set service as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""set IP address that is accessible in the network as allowed IP and set route as scope"" ""set IP address that is accessible in the network as allowed IP and set service as scope"" ""Navigate to Consumer page and filter the product"" ""set api ip-restriction to global service level"" ""set IP address that is not accessible in the network as allowed IP and set service as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""Navigate to Consumer page and filter the product"" ""set api ip-restriction to global service level"" ""set IP address that is not accessible in the network as allowed IP and set service as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""authenticates Mark (Access Manager)"" ""Navigate to Consumer page and filter the product"" ""Select the consumer from the list "" ""set api rate limit as per the test config, Local Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit as per the test config, Local Policy and Scope as Route"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit as per the test config, Redis Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit as per the test config, Redis Policy and Scope as Route"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit to global service level"" ""Verify that Rate limiting is set at global service level"" ""set api rate 
limit as per the test config, Redis Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit to global service level"" ""Verify that Rate limiting is set at global service level"" ""set api rate limit as per the test config, Redis Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""authenticates Mark (Access-Manager)"" ""verify the request details"" ""Add group labels in request details window"" ""approves an access request"" ""authenticates Mark (Access-Manager)"" ""verify that consumers are filters as per given parameter"" ""authenticates Mark (Access-Manager)"" ""Navigate to Consumer page and filter the product"" ""Click on the first consumer"" ""Verify that labels can be deleted"" ""Verify that labels can be updated"" ""Verify that labels can be added"" ""Grant namespace access to access manager(Mark)"" ""Grant CredentialIssuer.Admin permission to credential issuer(Wendy)"" ""Select the namespace created for client credential "" ""Creates authorization profile for Client ID/Secret"" ""Creates authorization profile for JWT - Generated Key Pair"" ""Creates authorization profile for JWKS URL"" ""Adds environment with Client ID/Secret authenticator to product"" ""Adds environment with JWT - Generated Key Pair authenticator to product"" ""Adds environment with JWT - JWKS URL authenticator to product"" ""Applies authorization plugin to service published to Kong Gateway"" ""activate the service for Test environment"" ""Creates an access request"" ""Access Manager logs in"" ""approves an access request"" ""Get access token using client ID and secret; make API request"" ""Creates an access request"" ""Access Manager logs in"" ""approves an access request"" ""Get access token using JWT key pair; make API request"" ""Creates an access request"" ""Access Manager logs in"" ""approves an access request"" ""Get access token using JWT key pair; make API request"" ""Get current API Key"" ""Verify that only one API key(new key) is set to the consumer in Kong gateway"" ""Verify that API is not accessible with the old API Key"" ""Regenrate credential client ID and Secret"" ""Make sure that the old client ID and Secret is disabled"" ""grant namespace access to Mark (access manager)"" ""Grant permission to Janis (API Owner)"" ""Grant permission to Wendy"" ""Grant \""Access.Manager\"" access to Mark (access manager)"" ""Authenticates Mark (Access-Manager)"" ""Verify that the option to approve request is displayed"" ""Grant only \""Namespace.Manage\"" permission to Wendy"" ""Authenticates Wendy (Credential-Issuer)"" ""Verify that all the namespace options and activities are displayed"" ""Grant only \""CredentialIssuer.Admin\"" access to Wendy (access manager)"" ""Authenticates Wendy (Credential-Issuer)"" ""Verify that only Authorization Profile option is displayed in Namespace page"" ""Verify that authorization profile for Client ID/Secret is generated"" ""Grant only \""Namespace.View\"" permission to Mark"" ""authenticates Mark"" ""Verify that service accounts are not created"" ""Grant \""GatewayConfig.Publish\"" and \""Namespace.View\"" access to Wendy (access manager)"" ""Authenticates Wendy (Credential-Issuer)"" ""Verify that GWA API allows user to publish the API to Kong gateway"" ""Delete the product environment and verify the success code in the response"" ""Get the resource and verify that product environment is deleted"" ""Force delete the namespace and verify the success code in the response"" Run 
Link: https://github.com/bcgov/api-services-portal/actions/runs/3095486922",1.0,"FAILED: Automated Tests(114) - Stats: { ""suites"": 44, ""tests"": 322, ""passes"": 208, ""pending"": 0, ""failures"": 114, ""start"": ""2022-09-21T06:13:43.771Z"", ""end"": ""2022-09-21T06:57:39.746Z"", ""duration"": 945206, ""testsRegistered"": 322, ""passPercent"": 64.59627329192547, ""pendingPercent"": 0, ""other"": 0, ""hasOther"": false, ""skipped"": 0, ""hasSkipped"": false } Failed Tests: ""activate the service for Test environment"" ""activate the service for Dev environment"" ""grant namespace access to Mark (access manager)"" ""Grant CredentialIssuer.Admin permission to Janis (API Owner)"" ""authenticates Mark (Access-Manager)"" ""authenticates Mark (Access-Manager)"" ""verify the request details"" ""Add group labels in request details window"" ""approves an access request"" ""Verify that API is accessible with the generated API Key"" ""authenticates Mark (Access-Manager)"" ""Navigate to Consumer page and filter the product"" ""Click on the first consumer"" ""Click on Grant Access button"" ""Grant Access to Test environment"" ""Verify that API is accessible with the generated API Key for Test environment"" ""authenticates Mark (Access Manager)"" ""Navigate to Consumer page and filter the product"" ""Select the consumer from the list"" ""set IP address that is not accessible in the network as allowed IP and set Route as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""set IP address that is not accessible in the network as allowed IP and set service as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""set IP address that is accessible in the network as allowed IP and set route as scope"" ""set IP address that is accessible in the network as allowed IP and set service as scope"" ""Navigate to Consumer page and filter the product"" ""set api ip-restriction to global service level"" ""set IP address that is not accessible in the network as allowed IP and set service as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""Navigate to Consumer page and filter the product"" ""set api ip-restriction to global service level"" ""set IP address that is not accessible in the network as allowed IP and set service as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""authenticates Mark (Access Manager)"" ""Navigate to Consumer page and filter the product"" ""Select the consumer from the list "" ""set api rate limit as per the test config, Local Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit as per the test config, Local Policy and Scope as Route"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit as per the test config, Redis Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit as per the test config, Redis Policy and Scope as Route"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit to global service level"" ""Verify that Rate limiting is set at global service level"" ""set api rate limit as per the test config, Redis Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit to global service level"" ""Verify that Rate limiting is set at global service level"" ""set api rate limit as per the test config, Redis Policy 
and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""authenticates Mark (Access-Manager)"" ""verify the request details"" ""Add group labels in request details window"" ""approves an access request"" ""authenticates Mark (Access-Manager)"" ""verify that consumers are filters as per given parameter"" ""authenticates Mark (Access-Manager)"" ""Navigate to Consumer page and filter the product"" ""Click on the first consumer"" ""Verify that labels can be deleted"" ""Verify that labels can be updated"" ""Verify that labels can be added"" ""Grant namespace access to access manager(Mark)"" ""Grant CredentialIssuer.Admin permission to credential issuer(Wendy)"" ""Select the namespace created for client credential "" ""Creates authorization profile for Client ID/Secret"" ""Creates authorization profile for JWT - Generated Key Pair"" ""Creates authorization profile for JWKS URL"" ""Adds environment with Client ID/Secret authenticator to product"" ""Adds environment with JWT - Generated Key Pair authenticator to product"" ""Adds environment with JWT - JWKS URL authenticator to product"" ""Applies authorization plugin to service published to Kong Gateway"" ""activate the service for Test environment"" ""Creates an access request"" ""Access Manager logs in"" ""approves an access request"" ""Get access token using client ID and secret; make API request"" ""Creates an access request"" ""Access Manager logs in"" ""approves an access request"" ""Get access token using JWT key pair; make API request"" ""Creates an access request"" ""Access Manager logs in"" ""approves an access request"" ""Get access token using JWT key pair; make API request"" ""Get current API Key"" ""Verify that only one API key(new key) is set to the consumer in Kong gateway"" ""Verify that API is not accessible with the old API Key"" ""Regenrate credential client ID and Secret"" ""Make sure that the old client ID and Secret is disabled"" ""grant namespace access to Mark (access manager)"" ""Grant permission to Janis (API Owner)"" ""Grant permission to Wendy"" ""Grant \""Access.Manager\"" access to Mark (access manager)"" ""Authenticates Mark (Access-Manager)"" ""Verify that the option to approve request is displayed"" ""Grant only \""Namespace.Manage\"" permission to Wendy"" ""Authenticates Wendy (Credential-Issuer)"" ""Verify that all the namespace options and activities are displayed"" ""Grant only \""CredentialIssuer.Admin\"" access to Wendy (access manager)"" ""Authenticates Wendy (Credential-Issuer)"" ""Verify that only Authorization Profile option is displayed in Namespace page"" ""Verify that authorization profile for Client ID/Secret is generated"" ""Grant only \""Namespace.View\"" permission to Mark"" ""authenticates Mark"" ""Verify that service accounts are not created"" ""Grant \""GatewayConfig.Publish\"" and \""Namespace.View\"" access to Wendy (access manager)"" ""Authenticates Wendy (Credential-Issuer)"" ""Verify that GWA API allows user to publish the API to Kong gateway"" ""Delete the product environment and verify the success code in the response"" ""Get the resource and verify that product environment is deleted"" ""Force delete the namespace and verify the success code in the response"" Run Link: https://github.com/bcgov/api-services-portal/actions/runs/3095486922",1,failed automated tests stats suites tests passes pending failures start end duration testsregistered passpercent pendingpercent other hasother false skipped hasskipped false failed tests activate the service for test 
environment activate the service for dev environment grant namespace access to mark access manager grant credentialissuer admin permission to janis api owner authenticates mark access manager authenticates mark access manager verify the request details add group labels in request details window approves an access request verify that api is accessible with the generated api key authenticates mark access manager navigate to consumer page and filter the product click on the first consumer click on grant access button grant access to test environment verify that api is accessible with the generated api key for test environment authenticates mark access manager navigate to consumer page and filter the product select the consumer from the list set ip address that is not accessible in the network as allowed ip and set route as scope verify ip restriction error when the api calls other than the allowed ip set ip address that is not accessible in the network as allowed ip and set service as scope verify ip restriction error when the api calls other than the allowed ip set ip address that is accessible in the network as allowed ip and set route as scope set ip address that is accessible in the network as allowed ip and set service as scope navigate to consumer page and filter the product set api ip restriction to global service level set ip address that is not accessible in the network as allowed ip and set service as scope verify ip restriction error when the api calls other than the allowed ip navigate to consumer page and filter the product set api ip restriction to global service level set ip address that is not accessible in the network as allowed ip and set service as scope verify ip restriction error when the api calls other than the allowed ip authenticates mark access manager navigate to consumer page and filter the product select the consumer from the list set api rate limit as per the test config local policy and scope as service verify rate limit error when the api calls beyond the limit set api rate limit as per the test config local policy and scope as route verify rate limit error when the api calls beyond the limit set api rate limit as per the test config redis policy and scope as service verify rate limit error when the api calls beyond the limit set api rate limit as per the test config redis policy and scope as route verify rate limit error when the api calls beyond the limit set api rate limit to global service level verify that rate limiting is set at global service level set api rate limit as per the test config redis policy and scope as service verify rate limit error when the api calls beyond the limit set api rate limit to global service level verify that rate limiting is set at global service level set api rate limit as per the test config redis policy and scope as service verify rate limit error when the api calls beyond the limit authenticates mark access manager verify the request details add group labels in request details window approves an access request authenticates mark access manager verify that consumers are filters as per given parameter authenticates mark access manager navigate to consumer page and filter the product click on the first consumer verify that labels can be deleted verify that labels can be updated verify that labels can be added grant namespace access to access manager mark grant credentialissuer admin permission to credential issuer wendy select the namespace created for client credential creates authorization profile for client id secret 
creates authorization profile for jwt generated key pair creates authorization profile for jwks url adds environment with client id secret authenticator to product adds environment with jwt generated key pair authenticator to product adds environment with jwt jwks url authenticator to product applies authorization plugin to service published to kong gateway activate the service for test environment creates an access request access manager logs in approves an access request get access token using client id and secret make api request creates an access request access manager logs in approves an access request get access token using jwt key pair make api request creates an access request access manager logs in approves an access request get access token using jwt key pair make api request get current api key verify that only one api key new key is set to the consumer in kong gateway verify that api is not accessible with the old api key regenrate credential client id and secret make sure that the old client id and secret is disabled grant namespace access to mark access manager grant permission to janis api owner grant permission to wendy grant access manager access to mark access manager authenticates mark access manager verify that the option to approve request is displayed grant only namespace manage permission to wendy authenticates wendy credential issuer verify that all the namespace options and activities are displayed grant only credentialissuer admin access to wendy access manager authenticates wendy credential issuer verify that only authorization profile option is displayed in namespace page verify that authorization profile for client id secret is generated grant only namespace view permission to mark authenticates mark verify that service accounts are not created grant gatewayconfig publish and namespace view access to wendy access manager authenticates wendy credential issuer verify that gwa api allows user to publish the api to kong gateway delete the product environment and verify the success code in the response get the resource and verify that product environment is deleted force delete the namespace and verify the success code in the response run link ,1 4823,17651662898.0,IssuesEvent,2021-08-20 13:59:22,CDCgov/prime-field-teams,https://api.github.com/repos/CDCgov/prime-field-teams,closed,Solution Imp - Plan & Schedule Go Live,State-Louisiana sender-automation,"**Temporary New Daily Process:** - iPatientcare will produce a daily CSV file by running their stored procedure and copying the results with Headers to a CSV file stored in the /out/directory. - A6 will: - From the /Public Health/VBScript directory, A6 will run ""Reddy-Send.bat"" - A6 will review the .REP and .TXT files for errors. Note, a new Issue will be created when we begin development of the 100% automation of the Reddy process. **Potential Final Daily Process:** - iPatientcare will automate the execution and creation of the CSV file. - A6 will create a scheduled task to execute ""Reddy-Send.bat"" daily. - Dr. Reddy's office will be responsible for reviewing the error files.",1.0,"Solution Imp - Plan & Schedule Go Live - **Temporary New Daily Process:** - iPatientcare will produce a daily CSV file by running their stored procedure and copying the results with Headers to a CSV file stored in the /out/directory. - A6 will: - From the /Public Health/VBScript directory, A6 will run ""Reddy-Send.bat"" - A6 will review the .REP and .TXT files for errors. 
Note, a new Issue will be created when we begin development of the 100% automation of the Reddy process. **Potential Final Daily Process:** - iPatientcare will automate the execution and creation of the CSV file. - A6 will create a scheduled task to execute ""Reddy-Send.bat"" daily. - Dr. Reddy's office will be responsible for reviewing the error files.",1,solution imp plan schedule go live temporary new daily process ipatientcare will produce a daily csv file by running their stored procedure and copying the results with headers to a csv file stored in the out directory will from the public health vbscript directory will run reddy send bat will review the rep and txt files for errors note a new issue will be created when we begin development of the automation of the reddy process potential final daily process ipatientcare will automate the execution and creation of the csv file will create a scheduled task to execute reddy send bat daily dr reddy s office will be responsible for reviewing the error files ,1 25763,19102186016.0,IssuesEvent,2021-11-30 00:28:17,stylelint/stylelint,https://api.github.com/repos/stylelint/stylelint,closed,Use codecov for coverage,status: ready to implement type: infrastructure,"## What is a problem Builds on Coveralls have not worked for a while: https://coveralls.io/github/stylelint/stylelint Even when seeing the log of GitHub Actions, we cannot know anything... 🤷🏼 https://github.com/stylelint/stylelint/runs/4336137130?check_suite_focus=true#step:8:8 I've resynced with the GitHub repository on the Coveralls repo settings, but errors still occur: https://github.com/stylelint/stylelint/pull/5743#issuecomment-980070312 ## How to solve So, I think there are the following options: 1. Stop using Coveralls 2. Migrate to another service For example, [Codecov](https://codecov.io/) is an option. When I've tried my forked repository, it works: - https://codecov.io/gh/ybiquitous/stylelint/tree/401071d44695a610413a926fef873916ab66c102 - https://github.com/ybiquitous/stylelint/runs/4346303358?check_suite_focus=true#step:8:32 - https://github.com/ybiquitous/stylelint/pull/2 Does anyone have thoughts? ",1.0,"Use codecov for coverage - ## What is a problem Builds on Coveralls have not worked for a while: https://coveralls.io/github/stylelint/stylelint Even when seeing the log of GitHub Actions, we cannot know anything... 🤷🏼 https://github.com/stylelint/stylelint/runs/4336137130?check_suite_focus=true#step:8:8 I've resynced with the GitHub repository on the Coveralls repo settings, but errors still occur: https://github.com/stylelint/stylelint/pull/5743#issuecomment-980070312 ## How to solve So, I think there are the following options: 1. Stop using Coveralls 2. Migrate to another service For example, [Codecov](https://codecov.io/) is an option. When I've tried my forked repository, it works: - https://codecov.io/gh/ybiquitous/stylelint/tree/401071d44695a610413a926fef873916ab66c102 - https://github.com/ybiquitous/stylelint/runs/4346303358?check_suite_focus=true#step:8:32 - https://github.com/ybiquitous/stylelint/pull/2 Does anyone have thoughts? 
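The scheduled-task step in the final daily process could be registered with schtasks along these lines (a sketch; the task name and the 06:00 start time are assumptions):

```console
> schtasks /Create /SC DAILY /ST 06:00 /TN "Reddy-Send" /TR "\"C:\Public Health\VBScript\Reddy-Send.bat\""
```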
",0,use codecov for coverage what is a problem builds on coveralls have not worked for a while even when seeing the log of github actions we cannot know anything 🤷🏼 i ve resynced with the github repository on the coveralls repo settings but errors still occur how to solve so i think there are the following options stop using coveralls migrate to another service for example is an option when i ve tried my forked repository it works does anyone have thoughts ,0 134252,29935643034.0,IssuesEvent,2023-06-22 12:35:31,max-kamps/jpd-breader,https://api.github.com/repos/max-kamps/jpd-breader,opened,Switch to Manifest V3,code enhancement blocked,"Current blockers: - No support for background service workers in Firefox - Possible workaround: Continue using background pages on Firefox (would require two separate manifests) - No support for `DOMParser` in background service workers - Required because no JPDB api support for FORQing and reviewing ",1.0,"Switch to Manifest V3 - Current blockers: - No support for background service workers in Firefox - Possible workaround: Continue using background pages on Firefox (would require two separate manifests) - No support for `DOMParser` in background service workers - Required because no JPDB api support for FORQing and reviewing ",0,switch to manifest current blockers no support for background service workers in firefox possible workaround continue using background pages on firefox would require two separate manifests no support for domparser in background service workers required because no jpdb api support for forqing and reviewing ,0 2757,12541183540.0,IssuesEvent,2020-06-05 11:50:58,input-output-hk/cardano-node,https://api.github.com/repos/input-output-hk/cardano-node,opened,[QA] - Create complex transactions,e2e automation,"- 1 input, 5 outputs - 15 inputs, 35 outputs - 100 inputs, 200 outputs !? ",1.0,"[QA] - Create complex transactions - - 1 input, 5 outputs - 15 inputs, 35 outputs - 100 inputs, 200 outputs !? ",1, create complex transactions input outputs inputs outputs inputs outputs ,1 372281,25992702231.0,IssuesEvent,2022-12-20 09:03:27,zkBob/zeropool-relayer,https://api.github.com/repos/zkBob/zeropool-relayer,closed,List of relayer endpoints in README outdated,documentation,Consider either to update the list of endpoint supported by the relayer or remove it from README.md file at all.,1.0,List of relayer endpoints in README outdated - Consider either to update the list of endpoint supported by the relayer or remove it from README.md file at all.,0,list of relayer endpoints in readme outdated consider either to update the list of endpoint supported by the relayer or remove it from readme md file at all ,0 162956,13906512739.0,IssuesEvent,2020-10-20 11:22:45,Dirnei/node-red-contrib-zigbee2mqtt-devices,https://api.github.com/repos/Dirnei/node-red-contrib-zigbee2mqtt-devices,opened,Node help text missing or unclear for some nodes,documentation,"# Override nodes Override-state, override-brightness, override-temperature, override color could use a better help text. Currently, it's only a one-liner. I don't fully understand what they do - so it does not make sense for me to write it. ## Ideas What it does: Overrides the state of a payload...? How to use it: Button:ON ----> override:OFF ---> Lamp:GoesToOFF # Other nodes **scene-selector:** Description is not enough. **climate-sensor:** Description is not enough. **generic-lamp:** Description is not enough. I don't understand what the node is for. **button-switch:** Description is not enough. 
**device-status:** What's the message. Is it really every message? So it just listens to all the messages for lets say Lamp X? What's the generic MQTT Device sitch for?",1.0,"Node help text missing or unclear for some nodes - # Override nodes Override-state, override-brightness, override-temperature, override color could use a better help text. Currently, it's only a one-liner. I don't fully understand what they do - so it does not make sense for me to write it. ## Ideas What it does: Overrides the state of a payload...? How to use it: Button:ON ----> override:OFF ---> Lamp:GoesToOFF # Other nodes **scene-selector:** Description is not enough. **climate-sensor:** Description is not enough. **generic-lamp:** Description is not enough. I don't understand what the node is for. **button-switch:** Description is not enough. **device-status:** What's the message. Is it really every message? So it just listens to all the messages for lets say Lamp X? What's the generic MQTT Device sitch for?",0,node help text missing or unclear for some nodes override nodes override state override brightness override temperature override color could use a better help text currently it s only a one liner i don t fully understand what they do so it does not make sense for me to write it ideas what it does overrides the state of a payload how to use it button on override off lamp goestooff other nodes scene selector description is not enough climate sensor description is not enough generic lamp description is not enough i don t understand what the node is for button switch description is not enough device status what s the message is it really every message so it just listens to all the messages for lets say lamp x what s the generic mqtt device sitch for ,0 1487,10194933281.0,IssuesEvent,2019-08-12 16:51:01,DoESLiverpool/somebody-should,https://api.github.com/repos/DoESLiverpool/somebody-should,opened,Give mqtt.local a static IP,2 - Should DoES System: Automation System: Network,With devices like the the [prototype room occupancy sensor/display](https://github.com/DoESLiverpool/somebody-should/issues/1185) talking to the IP address (they don't have mDNS support to talk to `mqtt.local`) of the MQTT broker it would be good if it was on a static IP so that a reboot doesn't take some of the services accidentally offliine.,1.0,Give mqtt.local a static IP - With devices like the the [prototype room occupancy sensor/display](https://github.com/DoESLiverpool/somebody-should/issues/1185) talking to the IP address (they don't have mDNS support to talk to `mqtt.local`) of the MQTT broker it would be good if it was on a static IP so that a reboot doesn't take some of the services accidentally offliine.,1,give mqtt local a static ip with devices like the the talking to the ip address they don t have mdns support to talk to mqtt local of the mqtt broker it would be good if it was on a static ip so that a reboot doesn t take some of the services accidentally offliine ,1 4772,17400932549.0,IssuesEvent,2021-08-02 19:33:07,improvecountry/ideas,https://api.github.com/repos/improvecountry/ideas,opened,Mobile notifications about food and medicines withdrawals,English Poland automation foreigners,"### Problem description Lack of mobile notifications about food or medicines withdrawals. These news are being posted accordingly on [GIS website](https://www.gov.pl/web/gis/ostrzezenia) and [RDG website](https://rdg.ezdrowie.gov.pl/) or in social media like Facebook or Twitter. 
### Solution proposal It’s worth to consider to use [RSO mobile app](https://www.gov.pl/web/mswia/regionalny-system-ostrzegania) as an additional medium to inform citizens opportunely about possible risks for their health/lives. These notifications should be send in Polish and in English. ### Alternative solutions Workaround: GOV PL Info ([project description in English](https://krawczyk.in/gpi), [project description in Polish](https://krawczyk.in/gpi-pl)): - [GPI GIS (unofficial Telegram Channel)](https://t.me/s/gpi_gis) - [GPI GIF (unofficial Telegram Channel)](https://t.me/s/gpi_gif) ### Example solutions _No response_ ### Consent I'm consent to list my GitHub username and link to my GitHub profile on the List of Ideators., I'm consent to list my name and surname on the List of Ideators. ### Regulations - [X] I agree to follow Regulations of Improve Country",1.0,"Mobile notifications about food and medicines withdrawals - ### Problem description Lack of mobile notifications about food or medicines withdrawals. These news are being posted accordingly on [GIS website](https://www.gov.pl/web/gis/ostrzezenia) and [RDG website](https://rdg.ezdrowie.gov.pl/) or in social media like Facebook or Twitter. ### Solution proposal It’s worth to consider to use [RSO mobile app](https://www.gov.pl/web/mswia/regionalny-system-ostrzegania) as an additional medium to inform citizens opportunely about possible risks for their health/lives. These notifications should be send in Polish and in English. ### Alternative solutions Workaround: GOV PL Info ([project description in English](https://krawczyk.in/gpi), [project description in Polish](https://krawczyk.in/gpi-pl)): - [GPI GIS (unofficial Telegram Channel)](https://t.me/s/gpi_gis) - [GPI GIF (unofficial Telegram Channel)](https://t.me/s/gpi_gif) ### Example solutions _No response_ ### Consent I'm consent to list my GitHub username and link to my GitHub profile on the List of Ideators., I'm consent to list my name and surname on the List of Ideators. ### Regulations - [X] I agree to follow Regulations of Improve Country",1,mobile notifications about food and medicines withdrawals problem description lack of mobile notifications about food or medicines withdrawals these news are being posted accordingly on and or in social media like facebook or twitter solution proposal it’s worth to consider to use as an additional medium to inform citizens opportunely about possible risks for their health lives  these notifications should be send in polish and in english alternative solutions workaround gov pl info example solutions no response consent i m consent to list my github username and link to my github profile on the list of ideators i m consent to list my name and surname on the list of ideators regulations i agree to follow regulations of improve country,1 6949,24056799067.0,IssuesEvent,2022-09-16 17:45:47,mlcommons/ck,https://api.github.com/repos/mlcommons/ck,closed,[CM scripts] download Git repos without history,enhancement cm-script-automation cm-mlperf,"As we discussed today, the total size of MLPerf inference repo is 440MB. However, without history, it's only 5MB. The idea is to add an ENV (maybe CM_SKIP_GIT_HISTORY) to skip history in scripts when pulling such repos? It can be used in ""cm pull repo"" and in ""cm run script"" ... Can discuss it further ... ",1.0,"[CM scripts] download Git repos without history - As we discussed today, the total size of MLPerf inference repo is 440MB. However, without history, it's only 5MB. 
The idea is to add an ENV (maybe CM_SKIP_GIT_HISTORY) to skip history in scripts when pulling such repos? It can be used in ""cm pull repo"" and in ""cm run script"" ... Can discuss it further ... ",1, download git repos without history as we discussed today the total size of mlperf inference repo is however without history it s only the idea is to add an env maybe cm skip git history to skip history in scripts when pulling such repos it can be used in cm pull repo and in cm run script can discuss it further ,1 3568,13995460085.0,IssuesEvent,2020-10-28 03:20:24,domoticafacilconjota/capitulos,https://api.github.com/repos/domoticafacilconjota/capitulos,opened,[AtoNodeRED]Control de Luz de la salita,Automation a Node RED,"**Código de la automatización** ``` - id: '1602283635834' alias: Encender / Apagar Manual description: Desactiva las automatizaciones por movimientos y pasa el sistema a manual. trigger: - platform: device domain: mqtt device_id: d0c248fc0a7311eb8ac4bd3e4b327c07 type: action subtype: single discovery_id: 0x00158d000450b798 action_single condition: [] action: - type: toggle device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light domain: light - service: automation.turn_off data: {} entity_id: automation.centinela - service: automation.turn_off data: {} entity_id: automation.turn_the_light_on_when_motion_is_detected mode: single - id: '1602285296839' alias: Enciende al atardecer sin hay movimiento description: Enciende la luz por la tarde cuando hay movimiento trigger: - platform: device domain: binary_sensor entity_id: binary_sensor.movimiento_occupancy device_id: a72089e20a7311eb9994e3b6f09595af type: motion for: hours: 0 minutes: 0 seconds: 0 condition: - condition: time after: '16:30:00' before: '23:30:00' - condition: and conditions: - condition: sun after: sunset action: - type: turn_on device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light domain: light flash: short brightness_pct: 100 mode: single - id: '1602302233727' alias: Centinela description: Observa si hay movimiento. trigger: - platform: time_pattern minutes: /3 condition: - type: is_no_motion condition: device device_id: a72089e20a7311eb9994e3b6f09595af entity_id: binary_sensor.movimiento_occupancy domain: binary_sensor - condition: and conditions: - condition: device type: is_on device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light domain: light action: - service: light.turn_off data: {} entity_id: light.lampara_light mode: single - id: '1602305727370' alias: Desactiva y Activa description: apaga la luz y desactiva las actualizaciones y las activa para iniciar el ciclo. trigger: - platform: time at: '23:45:00' - platform: time at: '16:00:00' condition: [] action: - type: turn_off device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light domain: light - service: automation.toggle data: {} entity_id: automation.centinela - service: automation.toggle data: {} entity_id: automation.turn_the_light_on_when_motion_is_detected mode: single ``` **Explicación de lo que hace actualmente la automatización** La automiatizacion Enciende la luz de la sala , si el sol se entra y hay movimiento en la sala entre las 16 y las 23:45, la automatizacion Centinela monitorea cada 10 minutos si hay moviemto en la sala, de no haberlo apaga la luz. La luz se enciende nuevamente si hay movimiento. Estos ciclos pueden dejar de ser automaticos y pasarlos a manual al dar clip a un interruptor inalambrico para ser controlado de manera manual. 
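A history-free download like the one proposed above is typically a shallow clone; a sketch of what a CM_SKIP_GIT_HISTORY flag might map to (the repository URL is illustrative):

```console
> git clone --depth 1 https://github.com/mlcommons/inference.git
```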
De no usar la funcion manual, al termino del horario 23:45 se ejecuta la automatizacion Activa/desactiva apgando la luz de estar encendida y desactivando las automatizaciones de encendido por movimiento y la centinela, para activarlas nuevamente a las 16 hrs, para iniciar nuevamente el ciclo. **Notas del autor** El hardware usado es u sensor de movimiento aqara modelo RTCGQ11LM, un interuptor inalambrico aqara modelo WXKG11LM y una lampara xiomi modelo ZNLDP12LM todas ellas con protocolo zigbee.",1.0,"[AtoNodeRED]Control de Luz de la salita - **Código de la automatización** ``` - id: '1602283635834' alias: Encender / Apagar Manual description: Desactiva las automatizaciones por movimientos y pasa el sistema a manual. trigger: - platform: device domain: mqtt device_id: d0c248fc0a7311eb8ac4bd3e4b327c07 type: action subtype: single discovery_id: 0x00158d000450b798 action_single condition: [] action: - type: toggle device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light domain: light - service: automation.turn_off data: {} entity_id: automation.centinela - service: automation.turn_off data: {} entity_id: automation.turn_the_light_on_when_motion_is_detected mode: single - id: '1602285296839' alias: Enciende al atardecer sin hay movimiento description: Enciende la luz por la tarde cuando hay movimiento trigger: - platform: device domain: binary_sensor entity_id: binary_sensor.movimiento_occupancy device_id: a72089e20a7311eb9994e3b6f09595af type: motion for: hours: 0 minutes: 0 seconds: 0 condition: - condition: time after: '16:30:00' before: '23:30:00' - condition: and conditions: - condition: sun after: sunset action: - type: turn_on device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light domain: light flash: short brightness_pct: 100 mode: single - id: '1602302233727' alias: Centinela description: Observa si hay movimiento. trigger: - platform: time_pattern minutes: /3 condition: - type: is_no_motion condition: device device_id: a72089e20a7311eb9994e3b6f09595af entity_id: binary_sensor.movimiento_occupancy domain: binary_sensor - condition: and conditions: - condition: device type: is_on device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light domain: light action: - service: light.turn_off data: {} entity_id: light.lampara_light mode: single - id: '1602305727370' alias: Desactiva y Activa description: apaga la luz y desactiva las actualizaciones y las activa para iniciar el ciclo. trigger: - platform: time at: '23:45:00' - platform: time at: '16:00:00' condition: [] action: - type: turn_off device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light domain: light - service: automation.toggle data: {} entity_id: automation.centinela - service: automation.toggle data: {} entity_id: automation.turn_the_light_on_when_motion_is_detected mode: single ``` **Explicación de lo que hace actualmente la automatización** La automiatizacion Enciende la luz de la sala , si el sol se entra y hay movimiento en la sala entre las 16 y las 23:45, la automatizacion Centinela monitorea cada 10 minutos si hay moviemto en la sala, de no haberlo apaga la luz. La luz se enciende nuevamente si hay movimiento. Estos ciclos pueden dejar de ser automaticos y pasarlos a manual al dar clip a un interruptor inalambrico para ser controlado de manera manual. 
If manual mode is not used, at the end of the schedule (23:45) the Desactiva y Activa automation runs, turning the light off if it is on and disabling the motion-triggered and sentinel automations, then re-enabling them at 16:00 to start the cycle again. **Author's notes** The hardware used is an Aqara motion sensor (model RTCGQ11LM), an Aqara wireless switch (model WXKG11LM) and a Xiaomi lamp (model ZNLDP12LM), all using the Zigbee protocol.",1, control de luz de la salita código de la automatización id alias encender apagar manual description desactiva las automatizaciones por movimientos y pasa el sistema a manual trigger platform device domain mqtt device id type action subtype single discovery id action single condition action type toggle device id entity id light lampara light domain light service automation turn off data entity id automation centinela service automation turn off data entity id automation turn the light on when motion is detected mode single id alias enciende al atardecer sin hay movimiento description enciende la luz por la tarde cuando hay movimiento trigger platform device domain binary sensor entity id binary sensor movimiento occupancy device id type motion for hours minutes seconds condition condition time after before condition and conditions condition sun after sunset action type turn on device id entity id light lampara light domain light flash short brightness pct mode single id alias centinela description observa si hay movimiento trigger platform time pattern minutes condition type is no motion condition device device id entity id binary sensor movimiento occupancy domain binary sensor condition and conditions condition device type is on device id entity id light lampara light domain light action service light turn off data entity id light lampara light mode single id alias desactiva y activa description apaga la luz y desactiva las actualizaciones y las activa para iniciar el ciclo trigger platform time at platform time at condition action type turn off device id entity id light lampara light domain light service automation toggle data entity id automation centinela service automation toggle data entity id automation turn the light on when motion is detected mode single explicación de lo que hace actualmente la automatización la automiatizacion enciende la luz de la sala si el sol se entra y hay movimiento en la sala entre las y las la automatizacion centinela monitorea cada minutos si hay moviemto en la sala de no haberlo apaga la luz la luz se enciende nuevamente si hay movimiento estos ciclos pueden dejar de ser automaticos y pasarlos a manual al dar clip a un interruptor inalambrico para ser controlado de manera manual de no usar la funcion manual al termino del horario se ejecuta la automatizacion activa desactiva apgando la luz de estar encendida y desactivando las automatizaciones de encendido por movimiento y la centinela para activarlas nuevamente a las hrs para iniciar nuevamente el ciclo notas del autor el hardware usado es u sensor de movimiento aqara modelo un interuptor inalambrico aqara modelo y una lampara xiomi modelo todas ellas con protocolo zigbee ,1 9105,27557787512.0,IssuesEvent,2023-03-07 19:21:11,PauloGasparSv/TestingHooks,https://api.github.com/repos/PauloGasparSv/TestingHooks,closed,New Test,Upwork Automation,Now testing with my handle outside of the teams but inside the `MARGELO_GH_USERNAMES` array,1.0,New Test - Now testing with my handle outside of the teams but inside the 
`MARGELO_GH_USERNAMES` array,1,new test now testing with my handle outside of the teams but inside the margelo gh usernames array,1 2570,12299505943.0,IssuesEvent,2020-05-11 12:30:32,wazuh/wazuh-qa,https://api.github.com/repos/wazuh/wazuh-qa,closed,Test update_interval parameter from vuln detector providers,automation core/vuln detector,"Hello team, We want to test the attribute `update_interval` of Vulnerability detector feeds. This test is related to #661 Regards.",1.0,"Test update_interval parameter from vuln detector providers - Hello team, We want to test the attribute `update_interval` of Vulnerability detector feeds. This test is related to #661 Regards.",1,test update interval parameter from vuln detector providers hello team we want to test the attribute update interval of vulnerability detector feeds this test is related to regards ,1 119400,15528674243.0,IssuesEvent,2021-03-13 12:01:32,racklet/racklet,https://api.github.com/repos/racklet/racklet,opened,RFC 2: TUFBoot,kind/design priority/important-longterm subproject/tufboot,"Describe in the second RFC how we'd like to handle the boot process of the bare metal compute using TUF, libgitops, LinuxBoot, and related technologies.",1.0,"RFC 2: TUFBoot - Describe in the second RFC how we'd like to handle the boot process of the bare metal compute using TUF, libgitops, LinuxBoot, and related technologies.",0,rfc tufboot describe in the second rfc how we d like to handle the boot process of the bare metal compute using tuf libgitops linuxboot and related technologies ,0 120522,15774057663.0,IssuesEvent,2021-04-01 00:22:01,hackforla/HomeUniteUs,https://api.github.com/repos/hackforla/HomeUniteUs,closed,Audit Calendaring: Case Worker Hi Fid Prototype 2.0,Design System UI/UX,"### Overview We need to audit 'Calendaring: Case Worker Hi Fid Prototype 2.0' in order to create a coherent design system for HUU. ### Action Items - [x] Meet with Julia to go over audit process - [x] Audit in progress - [x] Review existing screens - [x] Take notes of current styles (typography + color) - [x] Take notes of current styles (components - buttons & icons) - [x] List out proposed changes - [x] Audit ready for review - [x] Review with team - [x] Document feedback from team - [x] Iterate based on feedback - [x] Ready for design system ### Resources Figma: https://www.figma.com/file/BNWqZk8SHKbtN1nw8BB7VM/Current-HUU-Everything-Figma?node-id=631%3A9689",1.0,"Audit Calendaring: Case Worker Hi Fid Prototype 2.0 - ### Overview We need to audit 'Calendaring: Case Worker Hi Fid Prototype 2.0' in order to create a coherent design system for HUU. 
### Action Items - [x] Meet with Julia to go over audit process - [x] Audit in progress - [x] Review existing screens - [x] Take notes of current styles (typography + color) - [x] Take notes of current styles (components - buttons & icons) - [x] List out proposed changes - [x] Audit ready for review - [x] Review with team - [x] Document feedback from team - [x] Iterate based on feedback - [x] Ready for design system ### Resources Figma: https://www.figma.com/file/BNWqZk8SHKbtN1nw8BB7VM/Current-HUU-Everything-Figma?node-id=631%3A9689",0,audit calendaring case worker hi fid prototype overview we need to audit calendaring case worker hi fid prototype in order to create a coherent design system for huu action items meet with julia to over audit process audit in progress review existing screens take notes of current styles typography color take notes of current styles components buttons icons list out proposed changes audit ready for review review with team document feedbacks from team iterate based on feedback ready for design system resources figma ,0 66378,8920200871.0,IssuesEvent,2019-01-21 05:31:22,golang/go,https://api.github.com/repos/golang/go,closed,"flag: PrintDefaults claims configurability, but doesn't say how",Documentation NeedsFix,"https://golang.org/pkg/flag/#PrintDefaults says > PrintDefaults prints, to standard error unless configured otherwise, ... I couldn't see anywhere how to send its output anywhere else. I dug through the source code and eventually made my way to `(*FlagSet).SetOutput`, but that was not at all obvious. ",1.0,"flag: PrintDefaults claims configurability, but doesn't say how - https://golang.org/pkg/flag/#PrintDefaults says > PrintDefaults prints, to standard error unless configured otherwise, ... I couldn't see anywhere how to send its output anywhere else. I dug through the source code and eventually made my way to `(*FlagSet).SetOutput`, but that was not at all obvious. ",0,flag printdefaults claims configurability but doesn t say how says printdefaults prints to standard error unless configured otherwise i couldn t see anywhere how to send its output anywhere else i dug through the source code and eventually made my way to flagset setoutput but that was not at all obvious ,0 268461,20324532659.0,IssuesEvent,2022-02-18 03:40:15,frc2609/rapid-react-robot-code-2022,https://api.github.com/repos/frc2609/rapid-react-robot-code-2022,closed,Lay out steps for climbing and document this,documentation,"We need a concrete list of steps on how to achieve climbing, and we need this documented. 
Discuss things such as how we want the robot to climb up, whether we will use swinging or not, whether we want to use the base of the robot as a counterweight, etc.",1.0,"Lay out steps for climbing and document this - We need a concrete list of steps on how to achieve climbing, and we need this documented. Discuss things such as how we want the robot to climb up, whether we will use swinging or not, whether we want to use the base of the robot as a counterweight, etc.",0,lay out steps for climbing and document this we need concrete list of steps on how to achieve climbing and we need this documented discuss things such as how we want the robot to climb up whether we will use swinging or not whether we want to use the base of the robot as a counter weight etc ,0 48207,2994700564.0,IssuesEvent,2015-07-22 13:23:48,CIS-412-Spring-2015/frontend,https://api.github.com/repos/CIS-412-Spring-2015/frontend,closed,Make the Cancel Button go to In Progress page,Backlog enhancement Low Priority,"As a web developer I want to be able to provide functionality to the Cancel button shown on the footer panel of the views, so that when the user clicks it they are warned and, if they proceed, it takes them to the In Progress page and cancels their work. - [x] Make a modal appear that warns the user - [x] The ""OK"" option takes them to the In Progress page - [x] The ""Cancel"" or ""x"" option leaves the user on the same page. ",1.0,"Make the Cancel Button go to In Progress page - As a web developer I want to be able to provide functionality to the Cancel button shown on the footer panel of the views, so that when the user clicks it they are warned and, if they proceed, it takes them to the In Progress page and cancels their work. - [x] Make a modal appear that warns the user - [x] The ""OK"" option takes them to the In Progress page - [x] The ""Cancel"" or ""x"" option leaves the user on the same page. ",0,make the cancel button go to in progress page as a web developer i want to be able to provide functionality to the cancel button represented on the footer panel of the views so that when the user clicks on it they are warned and then if they proceed it takes them to the in progress page and cancels their endeavors make a modal appear that warns user the ok option takes them to the in progress page the cancel or x option leaves the user on the same page ,0 237356,26084101207.0,IssuesEvent,2022-12-25 21:25:19,samqws-marketing/walmartlabs-concord,https://api.github.com/repos/samqws-marketing/walmartlabs-concord,opened,CVE-2022-46175 (High) detected in multiple libraries,security vulnerability,"## CVE-2022-46175 - High Severity Vulnerability
Vulnerable Libraries - json5-1.0.1.tgz, json5-2.2.0.tgz, json5-0.5.1.tgz

json5-1.0.1.tgz

JSON for humans.

Library home page: https://registry.npmjs.org/json5/-/json5-1.0.1.tgz

Path to dependency file: /console2/package.json

Path to vulnerable library: /console2/node_modules/html-webpack-plugin/node_modules/json5/package.json,/console2/node_modules/tsconfig-paths/node_modules/json5/package.json,/console2/node_modules/resolve-url-loader/node_modules/json5/package.json,/console2/node_modules/postcss-loader/node_modules/json5/package.json,/console2/node_modules/babel-loader/node_modules/json5/package.json,/console2/node_modules/mini-css-extract-plugin/node_modules/json5/package.json,/console2/node_modules/webpack/node_modules/json5/package.json

Dependency Hierarchy: - react-scripts-4.0.3.tgz (Root Library) - babel-loader-8.1.0.tgz - loader-utils-1.4.0.tgz - :x: **json5-1.0.1.tgz** (Vulnerable Library)

json5-2.2.0.tgz

JSON for humans.

Library home page: https://registry.npmjs.org/json5/-/json5-2.2.0.tgz

Path to dependency file: /console2/package.json

Path to vulnerable library: /console2/node_modules/json5/package.json

Dependency Hierarchy: - react-scripts-4.0.3.tgz (Root Library) - core-7.12.3.tgz - :x: **json5-2.2.0.tgz** (Vulnerable Library)

json5-0.5.1.tgz

JSON for the ES5 era.

Library home page: https://registry.npmjs.org/json5/-/json5-0.5.1.tgz

Path to dependency file: /console2/package.json

Path to vulnerable library: /console2/node_modules/babel-register/node_modules/json5/package.json,/console2/node_modules/babel-cli/node_modules/json5/package.json

Dependency Hierarchy: - babel-cli-6.26.0.tgz (Root Library) - babel-core-6.26.3.tgz - :x: **json5-0.5.1.tgz** (Vulnerable Library)

Found in HEAD commit: b9420f3b9e73a9d381266ece72f7afb756f35a76

Found in base branch: master

Vulnerability Details

JSON5 is an extension to the popular JSON file format that aims to be easier to write and maintain by hand (e.g. for config files). The `parse` method of the JSON5 library before and including version `2.2.1` does not restrict parsing of keys named `__proto__`, allowing specially crafted strings to pollute the prototype of the resulting object. This vulnerability pollutes the prototype of the object returned by `JSON5.parse` and not the global Object prototype, which is the commonly understood definition of Prototype Pollution. However, polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations. This vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from `JSON5.parse`. The actual impact will depend on how applications utilize the returned object and how they filter unwanted keys, but could include denial of service, cross-site scripting, elevation of privilege, and in extreme cases, remote code execution. `JSON5.parse` should restrict parsing of `__proto__` keys when parsing JSON strings to objects. As a point of reference, the `JSON.parse` method included in JavaScript ignores `__proto__` keys. Simply changing `JSON5.parse` to `JSON.parse` in the examples above mitigates this vulnerability. This vulnerability is patched in json5 version 2.2.2 and later.
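
A minimal sketch of the behaviour described above, assuming a vulnerable json5 release (2.2.1 or earlier) is installed; the `doc` string is illustrative:

```js
// Sketch against json5 <= 2.2.1; on patched releases (2.2.2 and later)
// the `__proto__` key is no longer assigned while parsing.
const JSON5 = require('json5');

const doc = '{""__proto__"": {""polluted"": true}}';

// On affected versions, assigning the `__proto__` key during parsing
// replaces the prototype of the object returned by JSON5.parse.
const unsafe = JSON5.parse(doc);
console.log(unsafe.polluted); // true, inherited via the polluted prototype

// JSON.parse creates `__proto__` as a plain own property instead, which is
// why swapping it in mitigates the issue, as noted above.
const safe = JSON.parse(doc);
console.log(safe.polluted); // undefined
```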

Publish Date: 2022-12-24

URL: CVE-2022-46175

CVSS 3 Score Details (7.1)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: Low - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://www.cve.org/CVERecord?id=CVE-2022-46175

Release Date: 2022-12-24

Fix Resolution: json5 - 2.2.2
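
One way to take this fix without waiting for the intermediate packages (react-scripts, babel-cli) to update their own dependency trees is to force the transitive json5 version from the root package.json. This is a sketch using npm's `overrides` field (available in npm 8.3 and later; yarn users would use `resolutions`), not a change taken from this repository:

```json
{
  ""overrides"": {
    ""json5"": ""^2.2.2""
  }
}
```

Since the 1.0.1 and 0.5.1 trees above expect older major versions, forcing 2.2.2 everywhere may break those consumers, so the build should be re-tested after applying an override like this.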

",True,"CVE-2022-46175 (High) detected in multiple libraries - ## CVE-2022-46175 - High Severity Vulnerability
Vulnerable Libraries - json5-1.0.1.tgz, json5-2.2.0.tgz, json5-0.5.1.tgz

json5-1.0.1.tgz

JSON for humans.

Library home page: https://registry.npmjs.org/json5/-/json5-1.0.1.tgz

Path to dependency file: /console2/package.json

Path to vulnerable library: /console2/node_modules/html-webpack-plugin/node_modules/json5/package.json,/console2/node_modules/tsconfig-paths/node_modules/json5/package.json,/console2/node_modules/resolve-url-loader/node_modules/json5/package.json,/console2/node_modules/postcss-loader/node_modules/json5/package.json,/console2/node_modules/babel-loader/node_modules/json5/package.json,/console2/node_modules/mini-css-extract-plugin/node_modules/json5/package.json,/console2/node_modules/webpack/node_modules/json5/package.json

Dependency Hierarchy: - react-scripts-4.0.3.tgz (Root Library) - babel-loader-8.1.0.tgz - loader-utils-1.4.0.tgz - :x: **json5-1.0.1.tgz** (Vulnerable Library)

json5-2.2.0.tgz

JSON for humans.

Library home page: https://registry.npmjs.org/json5/-/json5-2.2.0.tgz

Path to dependency file: /console2/package.json

Path to vulnerable library: /console2/node_modules/json5/package.json

Dependency Hierarchy: - react-scripts-4.0.3.tgz (Root Library) - core-7.12.3.tgz - :x: **json5-2.2.0.tgz** (Vulnerable Library)

json5-0.5.1.tgz

JSON for the ES5 era.

Library home page: https://registry.npmjs.org/json5/-/json5-0.5.1.tgz

Path to dependency file: /console2/package.json

Path to vulnerable library: /console2/node_modules/babel-register/node_modules/json5/package.json,/console2/node_modules/babel-cli/node_modules/json5/package.json

Dependency Hierarchy: - babel-cli-6.26.0.tgz (Root Library) - babel-core-6.26.3.tgz - :x: **json5-0.5.1.tgz** (Vulnerable Library)

Found in HEAD commit: b9420f3b9e73a9d381266ece72f7afb756f35a76

Found in base branch: master

Vulnerability Details

JSON5 is an extension to the popular JSON file format that aims to be easier to write and maintain by hand (e.g. for config files). The `parse` method of the JSON5 library before and including version `2.2.1` does not restrict parsing of keys named `__proto__`, allowing specially crafted strings to pollute the prototype of the resulting object. This vulnerability pollutes the prototype of the object returned by `JSON5.parse` and not the global Object prototype, which is the commonly understood definition of Prototype Pollution. However, polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations. This vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from `JSON5.parse`. The actual impact will depend on how applications utilize the returned object and how they filter unwanted keys, but could include denial of service, cross-site scripting, elevation of privilege, and in extreme cases, remote code execution. `JSON5.parse` should restrict parsing of `__proto__` keys when parsing JSON strings to objects. As a point of reference, the `JSON.parse` method included in JavaScript ignores `__proto__` keys. Simply changing `JSON5.parse` to `JSON.parse` in the examples above mitigates this vulnerability. This vulnerability is patched in json5 version 2.2.2 and later.

Publish Date: 2022-12-24

URL: CVE-2022-46175

CVSS 3 Score Details (7.1)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: Low - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://www.cve.org/CVERecord?id=CVE-2022-46175

Release Date: 2022-12-24

Fix Resolution: json5 - 2.2.2

",0,cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries tgz tgz tgz tgz json for humans library home page a href path to dependency file package json path to vulnerable library node modules html webpack plugin node modules package json node modules tsconfig paths node modules package json node modules resolve url loader node modules package json node modules postcss loader node modules package json node modules babel loader node modules package json node modules mini css extract plugin node modules package json node modules webpack node modules package json dependency hierarchy react scripts tgz root library babel loader tgz loader utils tgz x tgz vulnerable library tgz json for humans library home page a href path to dependency file package json path to vulnerable library node modules package json dependency hierarchy react scripts tgz root library core tgz x tgz vulnerable library tgz json for the era library home page a href path to dependency file package json path to vulnerable library node modules babel register node modules package json node modules babel cli node modules package json dependency hierarchy babel cli tgz root library babel core tgz x tgz vulnerable library found in head commit a href found in base branch master vulnerability details is an extension to the popular json file format that aims to be easier to write and maintain by hand e g for config files the parse method of the library before and including version does not restrict parsing of keys named proto allowing specially crafted strings to pollute the prototype of the resulting object this vulnerability pollutes the prototype of the object returned by parse and not the global object prototype which is the commonly understood definition of prototype pollution however polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations this vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from parse the actual impact will depend on how applications utilize the returned object and how they filter unwanted keys but could include denial of service cross site scripting elevation of privilege and in extreme cases remote code execution parse should restrict parsing of proto keys when parsing json strings to objects as a point of reference the json parse method included in javascript ignores proto keys simply changing parse to json parse in the examples above mitigates this vulnerability this vulnerability is patched in version and later publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact low availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ,0 141112,18945391067.0,IssuesEvent,2021-11-18 09:37:40,MicrosoftDocs/microsoft-365-docs,https://api.github.com/repos/MicrosoftDocs/microsoft-365-docs,closed,Run Tests: DKIM button lead nowhere useful,security Defender for Office 365 doc-enhancement writer-input-required,"There is a button on this document labelled ""Run Tests: DKIM"" which point to https://aka.ms/diagdkim when I click and log into my tenant It places ""Diag: DKIM"" in a field under ""How can we help?"" heading on a pane to the right and then returns ""no 
solutions found"" Button goes no-where useful. Would be better if it lead to an actual DKIM setup testing form. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 249c059f-c3ce-a382-b99e-795a0c1f646a * Version Independent ID: 5656c114-ae16-6194-cd9f-353b33f4a1e1 * Content: [How to use DKIM for email in your custom domain - Office 365](https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/use-dkim-to-validate-outbound-email?view=o365-worldwide) * Content Source: [microsoft-365/security/office-365-security/use-dkim-to-validate-outbound-email.md](https://github.com/MicrosoftDocs/microsoft-365-docs/blob/public/microsoft-365/security/office-365-security/use-dkim-to-validate-outbound-email.md) * Product: **m365-security** * Technology: **mdo** * GitHub Login: @MSFTTracyP * Microsoft Alias: **tracyp**",True,"Run Tests: DKIM button lead nowhere useful - There is a button on this document labelled ""Run Tests: DKIM"" which point to https://aka.ms/diagdkim when I click and log into my tenant It places ""Diag: DKIM"" in a field under ""How can we help?"" heading on a pane to the right and then returns ""no solutions found"" Button goes no-where useful. Would be better if it lead to an actual DKIM setup testing form. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 249c059f-c3ce-a382-b99e-795a0c1f646a * Version Independent ID: 5656c114-ae16-6194-cd9f-353b33f4a1e1 * Content: [How to use DKIM for email in your custom domain - Office 365](https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/use-dkim-to-validate-outbound-email?view=o365-worldwide) * Content Source: [microsoft-365/security/office-365-security/use-dkim-to-validate-outbound-email.md](https://github.com/MicrosoftDocs/microsoft-365-docs/blob/public/microsoft-365/security/office-365-security/use-dkim-to-validate-outbound-email.md) * Product: **m365-security** * Technology: **mdo** * GitHub Login: @MSFTTracyP * Microsoft Alias: **tracyp**",0,run tests dkim button lead nowhere useful there is a button on this document labelled run tests dkim which point to when i click and log into my tenant it places diag dkim in a field under how can we help heading on a pane to the right and then returns no solutions found button goes no where useful would be better if it lead to an actual dkim setup testing form document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product security technology mdo github login msfttracyp microsoft alias tracyp ,0 8779,27172240289.0,IssuesEvent,2023-02-17 20:35:11,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Beta pages API doesn't work if you have pages with custom content types,area:Pages Needs: Investigation automation:Closed,"#### Category - [ ] Question - [ ] Documentation issue - [x] Bug #### Expected or Desired Behavior The API call to succeed and return pages. #### Observed Behavior You get the following error. 
```JSON { ""error"": { ""code"": ""generalException"", ""message"": ""General exception while processing"", ""innerError"": { ""request-id"": ""6041c48f-732f-4a0b-9849-0294bd9e637b"", ""date"": ""2019-11-25T17:00:43"" } } } ``` Tenant mag013qa2tolead RID's 6041c48f-732f-4a0b-9849-0294bd9e637b (graph explorer) 3df05a8d-84e5-49f8-8809-1feff5127225 (dotnet sdk) #### Steps to Reproduce 1. create a team or communication site 1. add custom content types to the site pages library 1. add pages using that new content type 1. GET `https://graph.microsoft.com/beta//pages` Thanks for your help!",1.0,"Beta pages API doesn't work if you have pages with custom content types - #### Category - [ ] Question - [ ] Documentation issue - [x] Bug #### Expected or Desired Behavior The API call to succeed and return pages. #### Observed Behavior You get the following error. ```JSON { ""error"": { ""code"": ""generalException"", ""message"": ""General exception while processing"", ""innerError"": { ""request-id"": ""6041c48f-732f-4a0b-9849-0294bd9e637b"", ""date"": ""2019-11-25T17:00:43"" } } } ``` Tenant mag013qa2tolead RID's 6041c48f-732f-4a0b-9849-0294bd9e637b (graph explorer) 3df05a8d-84e5-49f8-8809-1feff5127225 (dotnet sdk) #### Steps to Reproduce 1. create a team or communication site 1. add custom content types to the site pages library 1. add pages using that new content type 1. GET `https://graph.microsoft.com/beta//pages` Thanks for your help!",1,beta pages api doesn t work if you have pages with custom content types category question documentation issue bug expected or desired behavior the api call to succeed and return pages observed behavior you get the following error json error code generalexception message general exception while processing innererror request id date tenant rid s graph explorer dotnet sdk steps to reproduce create a team or communication site add custom content types to the site pages library add pages using that new content type get thanks for your help ,1 35337,17027378683.0,IssuesEvent,2021-07-03 20:35:22,JuliaLang/julia,https://api.github.com/repos/JuliaLang/julia,closed,Improving broadcasting performance by working around recursion limits of inlining,broadcast performance,"Hi! I've discovered that many (runtime) performance issues with broadcasting are caused by inlining not working with the highly recursive broadcasting code. It turns out that defining more methods can actually help here. Here is a piece of code you can evaluate in REPL to see that: ```julia struct RecursiveInliningEnforcerA{T} makeargs::T end struct RecursiveInliningEnforcerB{TMT,TMH,TT,TH,TF} makeargs_tail::TMT makeargs_head::TMH headargs::TH tailargs::TT f::TF end for UB in [Any, RecursiveInliningEnforcerA] @eval @inline function (bb::RecursiveInliningEnforcerB{TMT,TMH,TT,TH,TF})(args::Vararg{Any,N}) where {N,TMT,TMH<:$UB,TT,TH,TF} args1 = bb.makeargs_head(args...) a = bb.headargs(args1...) b = bb.makeargs_tail(bb.tailargs(args1...)...) return (bb.f(a...), b...) end end for UB in [Any, RecursiveInliningEnforcerB] @eval @inline (a::RecursiveInliningEnforcerA{TTA})(head::TH, tail::Vararg{Any,N}) where {TTA<:$UB,TH,N} = (head, a.makeargs(tail...)...) end @inline function Broadcast.make_makeargs(makeargs_tail::TT, t::Tuple) where TT return RecursiveInliningEnforcerA(Broadcast.make_makeargs(makeargs_tail, Base.tail(t))) end function Broadcast.make_makeargs(makeargs_tail, t::Tuple{<:Broadcast.Broadcasted, Vararg{Any}}) bc = t[1] # c.f. the same expression in the function on leaf nodes above. 
Here # we recurse into siblings in the broadcast tree. let makeargs_tail = Broadcast.make_makeargs(makeargs_tail, Base.tail(t)), # Here we recurse into children. It would be valid to pass in makeargs_tail # here, and not use it below. However, in that case, our recursion is no # longer purely structural because we're building up one argument (the closure) # while destructuing another. makeargs_head = Broadcast.make_makeargs((args...)->args, bc.args), f = bc.f # Create two functions, one that splits of the first length(bc.args) # elements from the tuple and one that yields the remaining arguments. # N.B. We can't call headargs on `args...` directly because # args is flattened (i.e. our children have not been evaluated # yet). headargs, tailargs = Broadcast.make_headargs(bc.args), Broadcast.make_tailargs(bc.args) return RecursiveInliningEnforcerB(makeargs_tail, makeargs_head, headargs, tailargs, f) end end ``` This effectively duplicates these two functions: https://github.com/JuliaLang/julia/blob/abbb220b89ebcec87efd9fbf6c0ccae4f2a3ef4a/base/broadcast.jl#L380-L384 and https://github.com/JuliaLang/julia/blob/abbb220b89ebcec87efd9fbf6c0ccae4f2a3ef4a/base/broadcast.jl#L361 for different argument types. It turns out that it's sufficient to fix the following issues: https://github.com/JuliaArrays/StaticArrays.jl/issues/560 https://github.com/JuliaArrays/StaticArrays.jl/issues/682 https://github.com/JuliaArrays/StaticArrays.jl/issues/609 https://github.com/JuliaArrays/StaticArrays.jl/issues/797 What do you think about it?",True,"Improving broadcasting performance by working around recursion limits of inlining - Hi! I've discovered that many (runtime) performance issues with broadcasting are caused by inlining not working with the highly recursive broadcasting code. It turns out that defining more methods can actually help here. Here is a piece of code you can evaluate in REPL to see that: ```julia struct RecursiveInliningEnforcerA{T} makeargs::T end struct RecursiveInliningEnforcerB{TMT,TMH,TT,TH,TF} makeargs_tail::TMT makeargs_head::TMH headargs::TH tailargs::TT f::TF end for UB in [Any, RecursiveInliningEnforcerA] @eval @inline function (bb::RecursiveInliningEnforcerB{TMT,TMH,TT,TH,TF})(args::Vararg{Any,N}) where {N,TMT,TMH<:$UB,TT,TH,TF} args1 = bb.makeargs_head(args...) a = bb.headargs(args1...) b = bb.makeargs_tail(bb.tailargs(args1...)...) return (bb.f(a...), b...) end end for UB in [Any, RecursiveInliningEnforcerB] @eval @inline (a::RecursiveInliningEnforcerA{TTA})(head::TH, tail::Vararg{Any,N}) where {TTA<:$UB,TH,N} = (head, a.makeargs(tail...)...) end @inline function Broadcast.make_makeargs(makeargs_tail::TT, t::Tuple) where TT return RecursiveInliningEnforcerA(Broadcast.make_makeargs(makeargs_tail, Base.tail(t))) end function Broadcast.make_makeargs(makeargs_tail, t::Tuple{<:Broadcast.Broadcasted, Vararg{Any}}) bc = t[1] # c.f. the same expression in the function on leaf nodes above. Here # we recurse into siblings in the broadcast tree. let makeargs_tail = Broadcast.make_makeargs(makeargs_tail, Base.tail(t)), # Here we recurse into children. It would be valid to pass in makeargs_tail # here, and not use it below. However, in that case, our recursion is no # longer purely structural because we're building up one argument (the closure) # while destructuing another. makeargs_head = Broadcast.make_makeargs((args...)->args, bc.args), f = bc.f # Create two functions, one that splits of the first length(bc.args) # elements from the tuple and one that yields the remaining arguments. # N.B. 
We can't call headargs on `args...` directly because # args is flattened (i.e. our children have not been evaluated # yet). headargs, tailargs = Broadcast.make_headargs(bc.args), Broadcast.make_tailargs(bc.args) return RecursiveInliningEnforcerB(makeargs_tail, makeargs_head, headargs, tailargs, f) end end ``` This effectively duplicates these two functions: https://github.com/JuliaLang/julia/blob/abbb220b89ebcec87efd9fbf6c0ccae4f2a3ef4a/base/broadcast.jl#L380-L384 and https://github.com/JuliaLang/julia/blob/abbb220b89ebcec87efd9fbf6c0ccae4f2a3ef4a/base/broadcast.jl#L361 for different argument types. It turns out that it's sufficient to fix the following issues: https://github.com/JuliaArrays/StaticArrays.jl/issues/560 https://github.com/JuliaArrays/StaticArrays.jl/issues/682 https://github.com/JuliaArrays/StaticArrays.jl/issues/609 https://github.com/JuliaArrays/StaticArrays.jl/issues/797 What do you think about it?",0,improving broadcasting performance by working around recursion limits of inlining hi i ve discovered that many runtime performance issues with broadcasting are caused by inlining not working with the highly recursive broadcasting code it turns out that defining more methods can actually help here here is a piece of code you can evaluate in repl to see that julia struct recursiveinliningenforcera t makeargs t end struct recursiveinliningenforcerb tmt tmh tt th tf makeargs tail tmt makeargs head tmh headargs th tailargs tt f tf end for ub in eval inline function bb recursiveinliningenforcerb tmt tmh tt th tf args vararg any n where n tmt tmh ub tt th tf bb makeargs head args a bb headargs b bb makeargs tail bb tailargs return bb f a b end end for ub in eval inline a recursiveinliningenforcera tta head th tail vararg any n where tta ub th n head a makeargs tail end inline function broadcast make makeargs makeargs tail tt t tuple where tt return recursiveinliningenforcera broadcast make makeargs makeargs tail base tail t end function broadcast make makeargs makeargs tail t tuple broadcast broadcasted vararg any bc t c f the same expression in the function on leaf nodes above here we recurse into siblings in the broadcast tree let makeargs tail broadcast make makeargs makeargs tail base tail t here we recurse into children it would be valid to pass in makeargs tail here and not use it below however in that case our recursion is no longer purely structural because we re building up one argument the closure while destructuing another makeargs head broadcast make makeargs args args bc args f bc f create two functions one that splits of the first length bc args elements from the tuple and one that yields the remaining arguments n b we can t call headargs on args directly because args is flattened i e our children have not been evaluated yet headargs tailargs broadcast make headargs bc args broadcast make tailargs bc args return recursiveinliningenforcerb makeargs tail makeargs head headargs tailargs f end end this effectively duplicates these two functions and for different argument types it turns out that it s sufficient to fix the following issues what do you think about it ,0 173392,13399601685.0,IssuesEvent,2020-09-03 14:42:11,ultimate-pa/ultimate,https://api.github.com/repos/ultimate-pa/ultimate,opened,Wrong specification in rtinconsistency_test36,ReqAnalyzer test framework,"Test specification should be as follows ... `// #TestSpec: rt-inconsistent:; vacuous:req1; inconsistent:; results:-1` Maybe move the file into vacuity folder. 
![image](https://user-images.githubusercontent.com/13038993/92129511-e86d2100-ee03-11ea-8293-2a84768d20f0.png) ",1.0,"Wrong specification in rtinconsistency_test36 - Test specification should be as follows ... `// #TestSpec: rt-inconsistent:; vacuous:req1; inconsistent:; results:-1` Maybe move the file into vacuity folder. ![image](https://user-images.githubusercontent.com/13038993/92129511-e86d2100-ee03-11ea-8293-2a84768d20f0.png) ",0,wrong specification in rtinconsistency test specification should be as follows testspec rt inconsistent vacuous inconsistent results maybe move the file into vacuity folder ,0 800,8149134483.0,IssuesEvent,2018-08-22 08:40:37,PierreRambaud/PrestaShop,https://api.github.com/repos/PierreRambaud/PrestaShop,closed,[BOOM-3849] css & js should use CDN when activated,1.7.2.2 1.7.4.0 Bug QA_automation Standard To Do Topwatchers," > This issue has been migrated from this Forge ticket [http://forge.prestashop.com/browse/BOOM-3849](http://forge.prestashop.com/browse/BOOM-3849) - _**Reporter:**_ jmawad@bobbies.com - _**Created at:**_ Fri, 15 Sep 2017 12:28:15 +0200

Apparently, even if you use a media server, the cached CSS and JS are not served through the CDN feature, as the images are.

You should write an .htaccess rule to handle /themes/[theme]/assets/cache, and use the media server URL in the header when calling the CSS.
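
A minimal sketch of the kind of rule meant here; the host media1.example.com is a placeholder for the shop's configured media server, and the rewrite flags are illustrative rather than PrestaShop's actual implementation:

```apache
# Hypothetical .htaccess sketch: send requests for cached theme assets
# to the media server host instead of serving them from the main domain.
# media1.example.com is a placeholder for the configured media server.
RewriteEngine On
RewriteCond %{HTTP_HOST} !^media1\.example\.com$ [NC]
RewriteRule ^themes/([^/]+)/assets/cache/(.+)$ http://media1.example.com/themes/$1/assets/cache/$2 [R=302,L]
```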

",1.0,"[BOOM-3849] css & js should use CDN when activated - > This issue has been migrated from this Forge ticket [http://forge.prestashop.com/browse/BOOM-3849](http://forge.prestashop.com/browse/BOOM-3849) - _**Reporter:**_ jmawad@bobbies.com - _**Created at:**_ Fri, 15 Sep 2017 12:28:15 +0200

Apparently, even if you use a media server, the cached CSS and JS are not served through the CDN feature, as the images are.

You should write an .htaccess rule to handle /themes/[theme]/assets/cache, and use the media server URL in the header when calling the CSS.

",1, css js should use cdn when activated this issue has been migrated from this forge ticket reporter jmawad bobbies com created at fri sep apparently even if you use mediaserver the css and js in cache are not using cdn feature as the images do you should write a htaccess rule to handle the themes theme assets cache and use media server url in header when calling the css ,1 40824,10583043295.0,IssuesEvent,2019-10-08 12:57:07,ocaml/opam,https://api.github.com/repos/ocaml/opam,closed,Solaris 10 patch command doesn't get file to patch,AREA: BUILD AREA: PORTABILITY,"After editing opam-full-1.2.2-rc2/src_ext/Makefile to remove suppression of recipe echoing: ... if [ -d patches/cmdliner ]; then \ cd cmdliner && \ for p in ../patches/cmdliner/*.patch; do \ patch -p1 < $p; \ done; \ fi Looks like a unified context diff. File to patch: That is, the patch command prompts the user. opam-full-1.2.2-rc2/src_ext/patches/cmdliner/backport_pre_4_00_0.patch diff -Naur cmdliner-0.9.7/src/cmdliner.ml cmdliner-0.9.7.patched/src/cmdliner.ml --- cmdliner-0.9.7/src/cmdliner.ml 2015-02-06 11:33:44.000000000 +0100 +++ cmdliner-0.9.7.patched/src/cmdliner.ml 2015-02-18 23:04:04.000000000 +0100 ... See the man page for the Solaris 10 patch command. http://docs.oracle.com/cd/E19253-01/816-5165/6mbb0m9n6/index.html In particular, we are interested in the ""File Name Determination"" section of that document. If no file operand is specified, patch performs the following steps to obtain a path name: If the patch contains the strings **\* and - - -, patch strips components from the beginning of each path name (depending on the presence or value of the -p option), then tests for the existence of both files in the current directory ... src/cmdliner.ml src/cmdliner.ml ""Both"" files exist. If both files exist, patch assumes that no path name can be obtained from this step ... If no path name can be obtained by applying the previous steps, ... patch will write a prompt to standard output and request a file name interactively from standard input. One possible solution is for the makefile to read the patch file, extracting the path name using the Linux patch command algorithm. Then feed that path name to the patch command explicitly. Alan Feldstein Cosmic Horizon http://www.alanfeldstein.com ",1.0,"Solaris 10 patch command doesn't get file to patch - After editing opam-full-1.2.2-rc2/src_ext/Makefile to remove suppression of recipe echoing: ... if [ -d patches/cmdliner ]; then \ cd cmdliner && \ for p in ../patches/cmdliner/*.patch; do \ patch -p1 < $p; \ done; \ fi Looks like a unified context diff. File to patch: That is, the patch command prompts the user. opam-full-1.2.2-rc2/src_ext/patches/cmdliner/backport_pre_4_00_0.patch diff -Naur cmdliner-0.9.7/src/cmdliner.ml cmdliner-0.9.7.patched/src/cmdliner.ml --- cmdliner-0.9.7/src/cmdliner.ml 2015-02-06 11:33:44.000000000 +0100 +++ cmdliner-0.9.7.patched/src/cmdliner.ml 2015-02-18 23:04:04.000000000 +0100 ... See the man page for the Solaris 10 patch command. http://docs.oracle.com/cd/E19253-01/816-5165/6mbb0m9n6/index.html In particular, we are interested in the ""File Name Determination"" section of that document. If no file operand is specified, patch performs the following steps to obtain a path name: If the patch contains the strings **\* and - - -, patch strips components from the beginning of each path name (depending on the presence or value of the -p option), then tests for the existence of both files in the current directory ... 
src/cmdliner.ml src/cmdliner.ml ""Both"" files exist. If both files exist, patch assumes that no path name can be obtained from this step ... If no path name can be obtained by applying the previous steps, ... patch will write a prompt to standard output and request a file name interactively from standard input. One possible solution is for the makefile to read the patch file, extracting the path name using the Linux patch command algorithm. Then feed that path name to the patch command explicitly. Alan Feldstein Cosmic Horizon http://www.alanfeldstein.com ",0,solaris patch command doesn t get file to patch after editing opam full src ext makefile to remove suppression of recipe echoing if then cd cmdliner for p in patches cmdliner patch do patch p done fi looks like a unified context diff file to patch that is the patch command prompts the user opam full src ext patches cmdliner backport pre patch diff naur cmdliner src cmdliner ml cmdliner patched src cmdliner ml cmdliner src cmdliner ml cmdliner patched src cmdliner ml see the man page for the solaris patch command in particular we are interested in the file name determination section of that document if no file operand is specified patch performs the following steps to obtain a path name if the patch contains the strings and patch strips components from the beginning of each path name depending on the presence or value of the p option then tests for the existence of both files in the current directory src cmdliner ml src cmdliner ml both files exist if both files exist patch assumes that no path name can be obtained from this step if no path name can be obtained by applying the previous steps patch will write a prompt to standard output and request a file name interactively from standard input one possible solution is for the makefile to read the patch file extracting the path name using the linux patch command algorithm then feed that path name to the patch command explicitly alan feldstein cosmic horizon ,0 2467,12059707107.0,IssuesEvent,2020-04-15 19:46:41,BCDevOps/OpenShift4-RollOut,https://api.github.com/repos/BCDevOps/OpenShift4-RollOut,opened,Upgrade Aporeto For Use with OCP4,tech/automation tech/networking,"Tasks: - [x] Upgrade Aporeto Playbook for current Aporeto Release - [x] Ensure Cordon/Evac Install and Upgrades work as expected - [ ] Upgrade BCGov Network Policy Operator to a supported Operator-SDK version and Test - [x] Write Host Protection Policies - [ ] Move any remaining policy, enforcer, or namespace related policy configuration to NS Management Playbook - [ ] Verify Host Protection is Enabled and Working properly with Assistance from DXC I'll continue adding to this with further detail as things progress. As of today (April 15th 2020) I'm testing the playbook and making some additional changes to ensure host protection works ",1.0,"Upgrade Aporeto For Use with OCP4 - Tasks: - [x] Upgrade Aporeto Playbook for current Aporeto Release - [x] Ensure Cordon/Evac Install and Upgrades work as expected - [ ] Upgrade BCGov Network Policy Operator to a supported Operator-SDK version and Test - [x] Write Host Protection Policies - [ ] Move any remaining policy, enforcer, or namespace related policy configuration to NS Management Playbook - [ ] Verify Host Protection is Enabled and Working properly with Assistance from DXC I'll continue adding to this with further detail as things progress. 
As of today (April 15th 2020) I'm testing the playbook and making some additional changes to ensure host protection works ",1,upgrade aporeto for use with tasks upgrade aporeto playbook for current aporeto release ensure cordon evac install and upgrades work as expected upgrade bcgov network policy operator to a supported operator sdk version and test write host protection policies move any remaining policy enforcer or namespace related policy configuration to ns management playbook verify host protection is enabled and working properly with assistance from dxc i ll continue adding to this with further detail as things progress as of today april i m testing the playbook and making some additional changes to ensure host protection works ,1 4221,15823876563.0,IssuesEvent,2021-04-06 01:53:41,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,Fix simulation of 'click' event in drag action,AREA: client FREQUENCY: level 1 STATE: Need improvement STATE: Stale SYSTEM: automations,"We should check that we simulate/skip simulation for `click` event in the end of `drag` action just like the browser does. Take into account the following points: 1) prevention of previous events (mouseup, pointerup, touchup, etc) 2) if drag action is executed for really draggable elements or not 3) touch emulation specificity 4) different browsers specificity",1.0,"Fix simulation of 'click' event in drag action - We should check that we simulate/skip simulation for `click` event in the end of `drag` action just like the browser does. Take into account the following points: 1) prevention of previous events (mouseup, pointerup, touchup, etc) 2) if drag action is executed for really draggable elements or not 3) touch emulation specificity 4) different browsers specificity",1,fix simulation of click event in drag action we should check that we simulate skip simulation for click event in the end of drag action just like the browser does take into account the following points prevention of previous events mouseup pointerup touchup etc if drag action is executed for really draggable elements or not touch emulation specificity different browsers specificity,1 45880,5756917367.0,IssuesEvent,2017-04-26 01:36:58,Microsoft/vscode,https://api.github.com/repos/Microsoft/vscode,opened,Test plan for Emmet features from extension using new APIs,testplan-item,"Test item for #21943 Complexity: 4 **Pre-req for testing** - Clone the repo for the extension https://github.com/Microsoft/vscode-emmet - Run `vsce package` to get the six - Side load the extension by `code-insiders --install-extension emmet-0.0.1.vsix` **Suggestion Items** - As you type an abbreviation, the suggestion list should show the expanded abbreviation. Can be turned off by disabling `emmet.suggestExpandedAbbreviation` - As you type an abbreviation, all possible abbreviations should show in the suggestion list. Can be turned off by disabling `emmet.suggestAbbreviations` **Commands to test:** - _Emmet 2.0: Expand abbreviation_ - The selected text or the text preceeding the cursor if no text is selected is taken as the abbreviation to expand. - Works with html, xml, jade, slim, haml files and css,scss,sass, less, stylus files - _Emmet 2.0: Wrap with abbreviation_ - The selected text or the current line if no text is selected, is wrapped with given abbreviation. - Works with html, xml, jade, slim, haml files - _Emmet 2.0: Remove Tag_ - The tag under the cursor is removed along with the corresponding opening/closing tag. Works with multiple cursors. 
Works only in html and xml files. - _Emmet 2.0: Update Tag_ - The tag under the cursor is updated to the given tag. Works with multiple cursors. Works only in html and xml files. - _Emmet 2.0: Go to Matching Pair_ - Cursor moves to the tag matching to the tag under cursor. Works with multiple cursors. Works only in html and xml files. ",1.0,"Test plan for Emmet features from extension using new APIs - Test item for #21943 Complexity: 4 **Pre-req for testing** - Clone the repo for the extension https://github.com/Microsoft/vscode-emmet - Run `vsce package` to get the six - Side load the extension by `code-insiders --install-extension emmet-0.0.1.vsix` **Suggestion Items** - As you type an abbreviation, the suggestion list should show the expanded abbreviation. Can be turned off by disabling `emmet.suggestExpandedAbbreviation` - As you type an abbreviation, all possible abbreviations should show in the suggestion list. Can be turned off by disabling `emmet.suggestAbbreviations` **Commands to test:** - _Emmet 2.0: Expand abbreviation_ - The selected text or the text preceeding the cursor if no text is selected is taken as the abbreviation to expand. - Works with html, xml, jade, slim, haml files and css,scss,sass, less, stylus files - _Emmet 2.0: Wrap with abbreviation_ - The selected text or the current line if no text is selected, is wrapped with given abbreviation. - Works with html, xml, jade, slim, haml files - _Emmet 2.0: Remove Tag_ - The tag under the cursor is removed along with the corresponding opening/closing tag. Works with multiple cursors. Works only in html and xml files. - _Emmet 2.0: Update Tag_ - The tag under the cursor is updated to the given tag. Works with multiple cursors. Works only in html and xml files. - _Emmet 2.0: Go to Matching Pair_ - Cursor moves to the tag matching to the tag under cursor. Works with multiple cursors. Works only in html and xml files. 
",0,test plan for emmet features from extension using new apis test item for complexity pre req for testing clone the repo for the extension run vsce package to get the six side load the extension by code insiders install extension emmet vsix suggestion items as you type an abbreviation the suggestion list should show the expanded abbreviation can be turned off by disabling emmet suggestexpandedabbreviation as you type an abbreviation all possible abbreviations should show in the suggestion list can be turned off by disabling emmet suggestabbreviations commands to test emmet expand abbreviation the selected text or the text preceeding the cursor if no text is selected is taken as the abbreviation to expand works with html xml jade slim haml files and css scss sass less stylus files emmet wrap with abbreviation the selected text or the current line if no text is selected is wrapped with given abbreviation works with html xml jade slim haml files emmet remove tag the tag under the cursor is removed along with the corresponding opening closing tag works with multiple cursors works only in html and xml files emmet update tag the tag under the cursor is updated to the given tag works with multiple cursors works only in html and xml files emmet go to matching pair cursor moves to the tag matching to the tag under cursor works with multiple cursors works only in html and xml files ,0 7500,24991880294.0,IssuesEvent,2022-11-02 19:33:19,GoogleCloudPlatform/pubsec-declarative-toolkit,https://api.github.com/repos/GoogleCloudPlatform/pubsec-declarative-toolkit,opened,look at checking if there is a way to get workload AR vulnerability checks alongside the infrastructure vulnerability tab results already in SCC-P,developer-experience automation,"Michael will look at checking if there is a way to get workload AR vulnerability checks alongside the infrastructure vulnerability tab results already in SCC-P Artifact Registry scanning of cloud build targeted container SCC (non-premium has the vulnerabilities tab - but not compliance or threats ",1.0,"look at checking if there is a way to get workload AR vulnerability checks alongside the infrastructure vulnerability tab results already in SCC-P - Michael will look at checking if there is a way to get workload AR vulnerability checks alongside the infrastructure vulnerability tab results already in SCC-P Artifact Registry scanning of cloud build targeted container SCC (non-premium has the vulnerabilities tab - but not compliance or threats ",1,look at checking if there is a way to get workload ar vulnerability checks alongside the infrastructure vulnerability tab results already in scc p michael will look at checking if there is a way to get workload ar vulnerability checks alongside the infrastructure vulnerability tab results already in scc p artifact registry scanning of cloud build targeted container img width alt screen shot at pm src scc non premium has the vulnerabilities tab but not compliance or threats img width alt screen shot at pm src ,1 70427,30666492315.0,IssuesEvent,2023-07-25 18:41:58,cityofaustin/atd-data-tech,https://api.github.com/repos/cityofaustin/atd-data-tech,closed,Dev Team Mapping Sync,Type: Meeting Workgroup: DTS Service: Data Science Project: TPW Data Ecosystem Map,"### Objective Meet with the expert dev team/owners to map our their knowledge base concerning data pipeline for our major applications and dashboards. 
### Participants Dev Team: John and Chai Data Science: Charlie, Rebekka and Kate ### Agenda Add agenda here or create agenda from [this template](https://docs.google.com/document/d/1d_49KW5C_vSz8Bs50v-cxyIJuTNJMwMrh7ypcxRHgZI/edit#) and add link. Working Miro Board - [Mapping Template](https://miro.com/app/board/uXjVM1BWe3I=/?share_link_id=270138594951) ------ - [x] Schedule meeting late July 2023 - [x] Optional: Schedule debrief - [x] Meet and take notes - [ ] Optional: Debrief with DTS team members - [ ] Create resulting issues ",1.0,"Dev Team Mapping Sync - ### Objective Meet with the expert dev team/owners to map out their knowledge base concerning the data pipeline for our major applications and dashboards. 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) ```",1.0,"[CDC] Data loss seen in snapshot + stream mode - Jira Link: [DB-4559](https://yugabyte.atlassian.net/browse/DB-4559) ### Description ![Profile (1)](https://user-images.githubusercontent.com/109518123/209770778-63fde45e-d8fd-4cb9-b7f5-59217d2e1223.png) All logs: [All_Logs.zip](https://github.com/yugabyte/yugabyte-db/files/10312649/All_Logs.zip) Few errors in connector logs: ``` org.yb.client.CDCErrorException: Server[0a46ce1e05b04b399bf2b97d325f3257] NETWORK_ERROR[code 8]: recvmsg error: Connection refused at org.yb.client.TabletClient.dispatchCDCErrorOrReturnException(TabletClient.java:506) at org.yb.client.TabletClient.decode(TabletClient.java:437) at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:510) at io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:366) at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) ```",1, data loss seen in snapshot stream mode jira link description all logs few errors in connector logs org yb client cdcerrorexception server network error recvmsg error connection refused at org yb client tabletclient dispatchcdcerrororreturnexception tabletclient java at org yb client tabletclient decode tabletclient java at io netty handler codec bytetomessagedecoder decoderemovalreentryprotection bytetomessagedecoder java at io netty handler codec replayingdecoder calldecode replayingdecoder java at io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty handler timeout idlestatehandler channelread idlestatehandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java at io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java at io netty channel nio nioeventloop processselectedkey nioeventloop java at io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java at io netty channel nio nioeventloop processselectedkeys nioeventloop java at io netty channel nio nioeventloop run nioeventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java at io netty util internal threadexecutormap run threadexecutormap java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread 
run thread java ,1 3505,13879209498.0,IssuesEvent,2020-10-17 13:25:36,Cludch/HomeAssistant,https://api.github.com/repos/Cludch/HomeAssistant,closed,Automate office closet light,area: office automation lighting,The light in the office should turn on automatically if we are working and it turns dark outside.,1.0,Automate office closet light - The light in the office should turn on automatically if we are working and it turns dark outside.,1,automate office closet light the light in the office should turn on automatically if we are working and it turns dark outside ,1 408,6255706643.0,IssuesEvent,2017-07-14 08:02:41,vmware/harbor,https://api.github.com/repos/vmware/harbor,closed,project can not created when click ok button immediately,area/ui kind/automation-found kind/bug,"v1.1.2-294-g6ee631d project can not created when click ok button immediately ",1.0,"project can not created when click ok button immediately - v1.1.2-294-g6ee631d project can not created when click ok button immediately ",1,project can not created when click ok button immediately project can not created when click ok button immediately ,1 824858,31233211346.0,IssuesEvent,2023-08-20 00:29:36,zephyrproject-rtos/zephyr,https://api.github.com/repos/zephyrproject-rtos/zephyr,closed,ESP32-C3 with BLE setting CONFIG_MAC_BB_PD enabled doesn't build,bug priority: low platform: ESP32 Stale area: Bluetooth HCI,"**Describe the bug** Zephyr fails to build when `CONFIG_MAC_BB_PD`, ""Power down MAC and baseband of Wi-Fi and Bluetooth when PHY is disabled"", is enabled. **To Reproduce** 1. Append `CONFIG_MAC_BB_PD=y` to `prj.conf` in any bluetooth sample, e.g. `samples/bluetooth/beacon`. 2. Build for an ESP32-C3 board, e.g. `west build -b esp32c3_devkitm` 3. Link failure when building. **Expected behavior** Builds. **Impact** Can't use power down PHY feature. **Logs and console output** Cause of build failure: ``` [97/220] Building C object zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c.obj zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c: In function 'esp_bt_controller_init': zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1022:13: warning: implicit declaration of function 'esp_register_mac_bb_pd_callback' [-> 1022 | if (esp_register_mac_bb_pd_callback(btdm_mac_bb_power_down_cb) != 0) { | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1027:13: warning: implicit declaration of function 'esp_register_mac_bb_pu_callback' [-> 1027 | if (esp_register_mac_bb_pu_callback(btdm_mac_bb_power_up_cb) != 0) { | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1189:9: warning: implicit declaration of function 'esp_unregister_mac_bb_pd_callback' [> 1189 | esp_unregister_mac_bb_pd_callback(btdm_mac_bb_power_down_cb); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1191:9: warning: implicit declaration of function 'esp_unregister_mac_bb_pu_callback' [> 1191 | esp_unregister_mac_bb_pu_callback(btdm_mac_bb_power_up_cb); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` And then a link failure from lacking the above undeclared function. 
``` riscv32-esp-elf/bin/ld.bfd: zephyr/libzephyr.a(esp_bt_adapter.c.obj): in function `esp_bt_power_domain_on': zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:385: undefined reference to `esp_register_mac_bb_pd_callback' riscv32-esp-elf/bin/ld.bfd: zephyr/libzephyr.a(esp_bt_adapter.c.obj): in function `esp_bt_controller_init': zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1022: undefined reference to `esp_register_mac_bb_pu_callback' riscv32-esp-elf/bin/ld.bfd: zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1168: undefined reference to `esp_unregister_mac_bb_pd_callback' riscv32-esp-elf/bin/ld.bfd: zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1168: undefined reference to `esp_unregister_mac_bb_pu_callback' ``` **Environment (please complete the following information):** - Linux - Crosstool-NG esp-12.2.0_20230208 - Zephyr v3.4.0-rc1-198-g0ae7812f6b",1.0,"ESP32-C3 with BLE setting CONFIG_MAC_BB_PD enabled doesn't build - **Describe the bug** Zephyr fails to build when `CONFIG_MAC_BB_PD`, ""Power down MAC and baseband of Wi-Fi and Bluetooth when PHY is disabled"", is enabled. **To Reproduce** 1. Append `CONFIG_MAC_BB_PD=y` to `prj.conf` in any bluetooth sample, e.g. `samples/bluetooth/beacon`. 2. Build for an ESP32-C3 board, e.g. `west build -b esp32c3_devkitm` 3. Link failure when building. **Expected behavior** Builds. **Impact** Can't use power down PHY feature. **Logs and console output** Cause of build failure: ``` [97/220] Building C object zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c.obj zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c: In function 'esp_bt_controller_init': zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1022:13: warning: implicit declaration of function 'esp_register_mac_bb_pd_callback' [-> 1022 | if (esp_register_mac_bb_pd_callback(btdm_mac_bb_power_down_cb) != 0) { | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1027:13: warning: implicit declaration of function 'esp_register_mac_bb_pu_callback' [-> 1027 | if (esp_register_mac_bb_pu_callback(btdm_mac_bb_power_up_cb) != 0) { | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1189:9: warning: implicit declaration of function 'esp_unregister_mac_bb_pd_callback' [> 1189 | esp_unregister_mac_bb_pd_callback(btdm_mac_bb_power_down_cb); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1191:9: warning: implicit declaration of function 'esp_unregister_mac_bb_pu_callback' [> 1191 | esp_unregister_mac_bb_pu_callback(btdm_mac_bb_power_up_cb); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` And then a link failure from lacking the above undeclared function. 
``` riscv32-esp-elf/bin/ld.bfd: zephyr/libzephyr.a(esp_bt_adapter.c.obj): in function `esp_bt_power_domain_on': zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:385: undefined reference to `esp_register_mac_bb_pd_callback' riscv32-esp-elf/bin/ld.bfd: zephyr/libzephyr.a(esp_bt_adapter.c.obj): in function `esp_bt_controller_init': zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1022: undefined reference to `esp_register_mac_bb_pu_callback' riscv32-esp-elf/bin/ld.bfd: zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1168: undefined reference to `esp_unregister_mac_bb_pd_callback' riscv32-esp-elf/bin/ld.bfd: zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1168: undefined reference to `esp_unregister_mac_bb_pu_callback' ``` **Environment (please complete the following information):** - Linux - Crosstool-NG esp-12.2.0_20230208 - Zephyr v3.4.0-rc1-198-g0ae7812f6b",0, with ble setting config mac bb pd enabled doesn t build describe the bug zephyr fails to build when config mac bb pd power down mac and baseband of wi fi and bluetooth when phy is disabled is enabled to reproduce append config mac bb pd y to prj conf in any bluetooth sample e g samples bluetooth beacon build for an board e g west build b devkitm link failure when building expected behavior builds impact can t use power down phy feature logs and console output cause of build failure building c object zephyr modules hal espressif zephyr src bt esp bt adapter c obj zephyr modules hal espressif zephyr src bt esp bt adapter c in function esp bt controller init zephyr modules hal espressif zephyr src bt esp bt adapter c warning implicit declaration of function esp register mac bb pd callback if esp register mac bb pd callback btdm mac bb power down cb zephyr modules hal espressif zephyr src bt esp bt adapter c warning implicit declaration of function esp register mac bb pu callback if esp register mac bb pu callback btdm mac bb power up cb zephyr modules hal espressif zephyr src bt esp bt adapter c warning implicit declaration of function esp unregister mac bb pd callback esp unregister mac bb pd callback btdm mac bb power down cb zephyr modules hal espressif zephyr src bt esp bt adapter c warning implicit declaration of function esp unregister mac bb pu callback esp unregister mac bb pu callback btdm mac bb power up cb and then a link failure from lacking the above undeclared function esp elf bin ld bfd zephyr libzephyr a esp bt adapter c obj in function esp bt power domain on zephyr modules hal espressif zephyr src bt esp bt adapter c undefined reference to esp register mac bb pd callback esp elf bin ld bfd zephyr libzephyr a esp bt adapter c obj in function esp bt controller init zephyr modules hal espressif zephyr src bt esp bt adapter c undefined reference to esp register mac bb pu callback esp elf bin ld bfd zephyr modules hal espressif zephyr src bt esp bt adapter c undefined reference to esp unregister mac bb pd callback esp elf bin ld bfd zephyr modules hal espressif zephyr src bt esp bt adapter c undefined reference to esp unregister mac bb pu callback environment please complete the following information linux crosstool ng esp zephyr ,0 623,7588996106.0,IssuesEvent,2018-04-26 05:04:13,rancher/rancher,https://api.github.com/repos/rancher/rancher,closed,"API returns ""404 - Not Found"" error when trying to fetch container logs.",kind/bug setup/automation,"Server version - Build from master on Oct 10. 
Following test cases which validates container logs fails with ""404 - Not Found"" error when trying to fetch container logs. Manually when I try to fetch the logs of the container , I am able to get the container logs: _______________________________ test_native_logs _______________________________ client = , socat_containers = None native_name = 'native-test-728364', pull_images = None ``` def test_native_logs(client, socat_containers, native_name, pull_images): docker_client = get_docker_client(host(client)) test_msg = 'LOGS_WORK' docker_container = docker_client. \ create_container(NATIVE_TEST_IMAGE, name=native_name, tty=True, stdin_open=True, detach=True, command=['/bin/bash', '-c', 'echo ' + test_msg]) rancher_container, _ = start_and_wait(client, docker_container, docker_client, native_name) ``` > ``` > found_msg = search_logs(rancher_container, test_msg) > ``` cattlevalidationtest/core/test_native_docker.py:223: --- cattlevalidationtest/core/test_native_docker.py:280: in search_logs logs = container.logs() .tox/py27/local/lib/python2.7/site-packages/gdapi.py:233: in _args, *_kw) .tox/py27/local/lib/python2.7/site-packages/gdapi.py:399: in action return self._post(url, data=self._to_dict(_args, *_kw)) .tox/py27/local/lib/python2.7/site-packages/gdapi.py:62: in wrapped return fn(_args, *_kw) .tox/py27/local/lib/python2.7/site-packages/gdapi.py:273: in _post self._error(r.text) --- self = text = '{""id"":""db6259a2-6bbf-4c2a-af01-cdcbd3dfa74c"",""type"":""error"",""links"":{},""actions"":{},""status"":404,""code"":""Not Found"",""message"":""Not Found"",""detail"":null}' ``` def _error(self, text): ``` > ``` > raise ApiError(self._unmarshall(text)) > ``` > > E ApiError: (ApiError(...), ""Not Found : Not Found\n{'id': u'db6259a2-6bbf-4c2a-af01-cdcbd3dfa74c', 'actions': {}, 'type': u'error', 'status': 404, 'links': {}, 'code': u'Not Found', 'message': u'Not Found', 'detail': None}"") ",1.0,"API returns ""404 - Not Found"" error when trying to fetch container logs. - Server version - Build from master on Oct 10. Following test cases which validates container logs fails with ""404 - Not Found"" error when trying to fetch container logs. Manually when I try to fetch the logs of the container , I am able to get the container logs: _______________________________ test_native_logs _______________________________ client = , socat_containers = None native_name = 'native-test-728364', pull_images = None ``` def test_native_logs(client, socat_containers, native_name, pull_images): docker_client = get_docker_client(host(client)) test_msg = 'LOGS_WORK' docker_container = docker_client. 
\ create_container(NATIVE_TEST_IMAGE, name=native_name, tty=True, stdin_open=True, detach=True, command=['/bin/bash', '-c', 'echo ' + test_msg]) rancher_container, _ = start_and_wait(client, docker_container, docker_client, native_name) ``` > ``` > found_msg = search_logs(rancher_container, test_msg) > ``` cattlevalidationtest/core/test_native_docker.py:223: --- cattlevalidationtest/core/test_native_docker.py:280: in search_logs logs = container.logs() .tox/py27/local/lib/python2.7/site-packages/gdapi.py:233: in _args, *_kw) .tox/py27/local/lib/python2.7/site-packages/gdapi.py:399: in action return self._post(url, data=self._to_dict(_args, *_kw)) .tox/py27/local/lib/python2.7/site-packages/gdapi.py:62: in wrapped return fn(_args, *_kw) .tox/py27/local/lib/python2.7/site-packages/gdapi.py:273: in _post self._error(r.text) --- self = text = '{""id"":""db6259a2-6bbf-4c2a-af01-cdcbd3dfa74c"",""type"":""error"",""links"":{},""actions"":{},""status"":404,""code"":""Not Found"",""message"":""Not Found"",""detail"":null}' ``` def _error(self, text): ``` > ``` > raise ApiError(self._unmarshall(text)) > ``` > > E ApiError: (ApiError(...), ""Not Found : Not Found\n{'id': u'db6259a2-6bbf-4c2a-af01-cdcbd3dfa74c', 'actions': {}, 'type': u'error', 'status': 404, 'links': {}, 'code': u'Not Found', 'message': u'Not Found', 'detail': None}"") ",1,api returns not found error when trying to fetch container logs server version build from master on oct following test cases which validates container logs fails with not found error when trying to fetch container logs manually when i try to fetch the logs of the container i am able to get the container logs test native logs client socat containers none native name native test pull images none def test native logs client socat containers native name pull images docker client get docker client host client test msg logs work docker container docker client create container native test image name native name tty true stdin open true detach true command rancher container start and wait client docker container docker client native name found msg search logs rancher container test msg cattlevalidationtest core test native docker py cattlevalidationtest core test native docker py in search logs logs container logs tox local lib site packages gdapi py in args kw tox local lib site packages gdapi py in action return self post url data self to dict args kw tox local lib site packages gdapi py in wrapped return fn args kw tox local lib site packages gdapi py in post self error r text self text id type error links actions status code not found message not found detail null def error self text raise apierror self unmarshall text e apierror apierror not found not found n id u actions type u error status links code u not found message u not found detail none ,1 98528,4028235472.0,IssuesEvent,2016-05-18 04:49:56,lale-help/lale-help,https://api.github.com/repos/lale-help/lale-help,opened,Add documents to projects,priority:medium,"Similar to #307 (see spec here), we need to allow documents on projects.",1.0,"Add documents to projects - Similar to #307 (see spec here), we need to allow documents on projects.",0,add documents to projects similar to see spec here we need to allow documents on projects ,0 7965,25938258543.0,IssuesEvent,2022-12-16 15:55:37,kagemomiji/hubot-matteruser,https://api.github.com/repos/kagemomiji/hubot-matteruser,closed,Update CI actions,automation,"### Purpose CI actions use old version's action and duplicated running. 
### Tasks - Only run at pull request - check at only node 16 - upgrade actions version",1.0,"Update CI actions - ### Purpose CI actions use old version's action and duplicated running. ### Tasks - Only run at pull request - check at only node 16 - upgrade actions version",1,update ci actions purpose ci actions use old version s action and duplicated running tasks only run at pull request check at only node upgrade actions version,1 4794,17539885065.0,IssuesEvent,2021-08-12 10:39:59,maxim-nazarenko/tf-module-update,https://api.github.com/repos/maxim-nazarenko/tf-module-update,closed,Add Github actions,automation,"Add actions to run tests. It would be a start point for further automation like docker image builds, releases, etc.",1.0,"Add Github actions - Add actions to run tests. It would be a start point for further automation like docker image builds, releases, etc.",1,add github actions add actions to run tests it would be a start point for further automation like docker image builds releases etc ,1 15731,10340902247.0,IssuesEvent,2019-09-03 23:46:54,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Incorrect Azure CLI compatibility,Pri2 cxp doc-enhancement service-fabric-mesh/svc triaged,"This is the error: Skipping 'mesh-0.10.6-py2.py3-none-any.whl' as not compatible with this version of the CLI. Extension compatibility result: is_compatible=False cli_core_version=2.0.45 min_required=2.0.67 max_required=None --- #### Document details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: a9790045-bddf-bfe7-1d13-2738f0a988d0 * Version Independent ID: 75b735d5-0a64-7214-417c-b34301c63c79 * Content: [Set up the Azure Service Fabric Mesh CLI](https://docs.microsoft.com/en-gb/azure/service-fabric-mesh/service-fabric-mesh-howto-setup-cli) * Content Source: [articles/service-fabric-mesh/service-fabric-mesh-howto-setup-cli.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-fabric-mesh/service-fabric-mesh-howto-setup-cli.md) * Service: **service-fabric-mesh** * GitHub Login: @dkkapur * Microsoft Alias: **dekapur**",1.0,"Incorrect Azure CLI compatibility - This is the error: Skipping 'mesh-0.10.6-py2.py3-none-any.whl' as not compatible with this version of the CLI. Extension compatibility result: is_compatible=False cli_core_version=2.0.45 min_required=2.0.67 max_required=None --- #### Document details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: a9790045-bddf-bfe7-1d13-2738f0a988d0 * Version Independent ID: 75b735d5-0a64-7214-417c-b34301c63c79 * Content: [Set up the Azure Service Fabric Mesh CLI](https://docs.microsoft.com/en-gb/azure/service-fabric-mesh/service-fabric-mesh-howto-setup-cli) * Content Source: [articles/service-fabric-mesh/service-fabric-mesh-howto-setup-cli.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-fabric-mesh/service-fabric-mesh-howto-setup-cli.md) * Service: **service-fabric-mesh** * GitHub Login: @dkkapur * Microsoft Alias: **dekapur**",0,incorrect azure cli compatibility this is the error skipping mesh none any whl as not compatible with this version of the cli extension compatibility result is compatible false cli core version min required max required none document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id bddf version independent id content content source service service fabric mesh github login dkkapur microsoft alias dekapur ,0 152804,12127046854.0,IssuesEvent,2020-04-22 18:02:34,cockroachdb/cockroach,https://api.github.com/repos/cockroachdb/cockroach,closed,roachtest: sqlsmith/setup=tpch-sf1/setting=no-mutations failed,C-test-failure O-roachtest O-robot branch-master release-blocker,"[(roachtest).sqlsmith/setup=tpch-sf1/setting=no-mutations failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1891353&tab=buildLog) on [master@056e32e84831f13b286fceb7681dd0cd2b00b4b4](https://github.com/cockroachdb/cockroach/commits/056e32e84831f13b286fceb7681dd0cd2b00b4b4): ``` The test failed on branch=master, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sqlsmith/setup=tpch-sf1/setting=no-mutations/run_1 sqlsmith.go:185,sqlsmith.go:199,test_runner.go:753: ping: dial tcp 34.67.155.147:26257: connect: connection refused previous sql: SELECT tab_111.ps_suppkey AS col_257 FROM defaultdb.public.partsupp@[0] AS tab_111, defaultdb.public.customer AS tab_112 JOIN defaultdb.public.customer AS tab_113 ON (tab_112.c_nationkey) = (tab_113.c_custkey), defaultdb.public.customer@primary AS tab_114, defaultdb.public.partsupp@[0] AS tab_115 WHERE st_intersects('0101000000000000000000F03F000000000000F03F':::GEOMETRY::GEOMETRY, '0101000000000000000000F03F000000000000F03F':::GEOMETRY::GEOMETRY)::BOOL LIMIT 23:::INT8; cluster.go:1481,context.go:135,cluster.go:1470,test_runner.go:825: dead node detection: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod monitor teamcity-1891353-1587540974-06-n4cpu4 --oneshot --ignore-empty-nodes: exit status 1 3: 6463 1: dead 2: 6316 4: 10941 Error: UNCLASSIFIED_PROBLEM: 1: dead (1) UNCLASSIFIED_PROBLEM Wraps: (2) 1: dead | main.glob..func13 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1129 | main.wrap.func1 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:272 | github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).execute | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:766 | github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).ExecuteC | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:852 | github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).Execute | 
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:800 | main.main | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1793 | runtime.main | /usr/local/go/src/runtime/proc.go:203 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1357 Error types: (1) errors.Unclassified (2) *errors.fundamental ```

Artifacts: [/sqlsmith/setup=tpch-sf1/setting=no-mutations](https://teamcity.cockroachdb.com/viewLog.html?buildId=1891353&tab=artifacts#/sqlsmith/setup=tpch-sf1/setting=no-mutations) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Dtpch-sf1%2Fsetting%3Dno-mutations.%2A&sort=title&restgroup=false&display=lastcommented+project) powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)

",2.0,"roachtest: sqlsmith/setup=tpch-sf1/setting=no-mutations failed - [(roachtest).sqlsmith/setup=tpch-sf1/setting=no-mutations failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1891353&tab=buildLog) on [master@056e32e84831f13b286fceb7681dd0cd2b00b4b4](https://github.com/cockroachdb/cockroach/commits/056e32e84831f13b286fceb7681dd0cd2b00b4b4): ``` The test failed on branch=master, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sqlsmith/setup=tpch-sf1/setting=no-mutations/run_1 sqlsmith.go:185,sqlsmith.go:199,test_runner.go:753: ping: dial tcp 34.67.155.147:26257: connect: connection refused previous sql: SELECT tab_111.ps_suppkey AS col_257 FROM defaultdb.public.partsupp@[0] AS tab_111, defaultdb.public.customer AS tab_112 JOIN defaultdb.public.customer AS tab_113 ON (tab_112.c_nationkey) = (tab_113.c_custkey), defaultdb.public.customer@primary AS tab_114, defaultdb.public.partsupp@[0] AS tab_115 WHERE st_intersects('0101000000000000000000F03F000000000000F03F':::GEOMETRY::GEOMETRY, '0101000000000000000000F03F000000000000F03F':::GEOMETRY::GEOMETRY)::BOOL LIMIT 23:::INT8; cluster.go:1481,context.go:135,cluster.go:1470,test_runner.go:825: dead node detection: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod monitor teamcity-1891353-1587540974-06-n4cpu4 --oneshot --ignore-empty-nodes: exit status 1 3: 6463 1: dead 2: 6316 4: 10941 Error: UNCLASSIFIED_PROBLEM: 1: dead (1) UNCLASSIFIED_PROBLEM Wraps: (2) 1: dead | main.glob..func13 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1129 | main.wrap.func1 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:272 | github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).execute | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:766 | github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).ExecuteC | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:852 | github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).Execute | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:800 | main.main | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1793 | runtime.main | /usr/local/go/src/runtime/proc.go:203 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1357 Error types: (1) errors.Unclassified (2) *errors.fundamental ```

Artifacts: [/sqlsmith/setup=tpch-sf1/setting=no-mutations](https://teamcity.cockroachdb.com/viewLog.html?buildId=1891353&tab=artifacts#/sqlsmith/setup=tpch-sf1/setting=no-mutations) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Dtpch-sf1%2Fsetting%3Dno-mutations.%2A&sort=title&restgroup=false&display=lastcommented+project) powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)

",0,roachtest sqlsmith setup tpch setting no mutations failed on the test failed on branch master cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts sqlsmith setup tpch setting no mutations run sqlsmith go sqlsmith go test runner go ping dial tcp connect connection refused previous sql select tab ps suppkey as col from defaultdb public partsupp as tab defaultdb public customer as tab join defaultdb public customer as tab on tab c nationkey tab c custkey defaultdb public customer primary as tab defaultdb public partsupp as tab where st intersects geometry geometry geometry geometry bool limit cluster go context go cluster go test runner go dead node detection home agent work go src github com cockroachdb cockroach bin roachprod monitor teamcity oneshot ignore empty nodes exit status dead error unclassified problem dead unclassified problem wraps dead main glob home agent work go src github com cockroachdb cockroach pkg cmd roachprod main go main wrap home agent work go src github com cockroachdb cockroach pkg cmd roachprod main go github com cockroachdb cockroach vendor github com cobra command execute home agent work go src github com cockroachdb cockroach vendor github com cobra command go github com cockroachdb cockroach vendor github com cobra command executec home agent work go src github com cockroachdb cockroach vendor github com cobra command go github com cockroachdb cockroach vendor github com cobra command execute home agent work go src github com cockroachdb cockroach vendor github com cobra command go main main home agent work go src github com cockroachdb cockroach pkg cmd roachprod main go runtime main usr local go src runtime proc go runtime goexit usr local go src runtime asm s error types errors unclassified errors fundamental more artifacts powered by ,0 6012,21883751495.0,IssuesEvent,2022-05-19 16:28:25,mozilla-mobile/focus-ios,https://api.github.com/repos/mozilla-mobile/focus-ios,closed,Ugrade to Xcode 13.3,eng:automation,The new Xcode 13.3 is already available in Bitrise. Let's see if the app works and then upgrade it,1.0,Ugrade to Xcode 13.3 - The new Xcode 13.3 is already available in Bitrise. Let's see if the app works and then upgrade it,1,ugrade to xcode the new xcode is already available in bitrise let s see if the app works and then upgrade it,1 6175,22359272942.0,IssuesEvent,2022-06-15 18:43:16,aws/aws-iot-device-sdk-cpp-v2,https://api.github.com/repos/aws/aws-iot-device-sdk-cpp-v2,closed,Apple Silicon support,feature-request automation-exempt,"Hi, Does the sdk supports Apple Silicon? If so, where can I found the build and test instructions? thanks",1.0,"Apple Silicon support - Hi, Does the sdk supports Apple Silicon? If so, where can I found the build and test instructions? thanks",1,apple silicon support hi does the sdk supports apple silicon if so where can i found the build and test instructions thanks,1 787488,27719281769.0,IssuesEvent,2023-03-14 19:14:04,MasterCruelty/robbot,https://api.github.com/repos/MasterCruelty/robbot,closed,[2.0] 📑 /atm timetables as photo,enhancement low priority refactor,"* [x] Try to convert subway timetable from the pdf web link to an image thus we can send it directly instead of clicking the link. * [x] The same thing in case the waiting time is None. ",1.0,"[2.0] 📑 /atm timetables as photo - * [x] Try to convert subway timetable from the pdf web link to an image thus we can send it directly instead of clicking the link. * [x] The same thing in case the waiting time is None. 
",0, 📑 atm timetables as photo try to convert subway timetable from the pdf web link to an image thus we can send it directly instead of clicking the link the same thing in case the waiting time is none ,0 220338,16938119534.0,IssuesEvent,2021-06-27 00:49:24,bevyengine/bevy,https://api.github.com/repos/bevyengine/bevy,opened,Add a transform and Quaternion example section. ,documentation needs-triage,"## How can Bevy's documentation be improved? Currently there is not really a specific place to add transformation examples other than 2d or 3d. I think this would encourage more examples to be created about this topic. Examples that could be included: - translation of object - rotation of objects - scaling of objects - translation, rotation, and scaling at the same time - show the difference between global transform and Transform - the 3d parenting example could be moved into here. - more examples showcasing the various methods on Transform and Quaternion. ",1.0,"Add a transform and Quaternion example section. - ## How can Bevy's documentation be improved? Currently there is not really a specific place to add transformation examples other than 2d or 3d. I think this would encourage more examples to be created about this topic. Examples that could be included: - translation of object - rotation of objects - scaling of objects - translation, rotation, and scaling at the same time - show the difference between global transform and Transform - the 3d parenting example could be moved into here. - more examples showcasing the various methods on Transform and Quaternion. ",0,add a transform and quaternion example section how can bevy s documentation be improved currently there is not really a specific place to add transformation examples other than or i think this would encourage more examples to be created about this topic examples that could be included translation of object rotation of objects scaling of objects translation rotation and scaling at the same time show the difference between global transform and transform the parenting example could be moved into here more examples showcasing the various methods on transform and quaternion ,0 4591,16964969647.0,IssuesEvent,2021-06-29 09:46:03,keptn/keptn,https://api.github.com/repos/keptn/keptn,opened,Unify automated PRs to have the pipeline logic in the target repo,area:core area:go-utils automation refactoring release-automation type:chore,"Right now, some of the automated PR are created from within the `keptn/keptn` repo but others are created from different repos like `keptn/go-utils` or `keptn/kubernetes-utils`. For an easier overview, this should be unified to that the dependency repos only send github events to the `keptn/keptn` repo and the update and PR creation logic should happen inside the main repo. An example for this pattern can be seen in the `keptn/spec` repo which send an event to `keptn/go-utils` and `keptn/keptn` when a new release tag is created. In the same manner `keptn/go-utils` and `keptn/kubernetes-utils` should send update events to `keptn/keptn`, which in turn updates its own dependencies accordingly. ",2.0,"Unify automated PRs to have the pipeline logic in the target repo - Right now, some of the automated PR are created from within the `keptn/keptn` repo but others are created from different repos like `keptn/go-utils` or `keptn/kubernetes-utils`. 
For an easier overview, this should be unified to that the dependency repos only send github events to the `keptn/keptn` repo and the update and PR creation logic should happen inside the main repo. An example for this pattern can be seen in the `keptn/spec` repo which send an event to `keptn/go-utils` and `keptn/keptn` when a new release tag is created. In the same manner `keptn/go-utils` and `keptn/kubernetes-utils` should send update events to `keptn/keptn`, which in turn updates its own dependencies accordingly. ",1,unify automated prs to have the pipeline logic in the target repo right now some of the automated pr are created from within the keptn keptn repo but others are created from different repos like keptn go utils or keptn kubernetes utils for an easier overview this should be unified to that the dependency repos only send github events to the keptn keptn repo and the update and pr creation logic should happen inside the main repo an example for this pattern can be seen in the keptn spec repo which send an event to keptn go utils and keptn keptn when a new release tag is created in the same manner keptn go utils and keptn kubernetes utils should send update events to keptn keptn which in turn updates its own dependencies accordingly ,1 8429,26964769121.0,IssuesEvent,2023-02-08 21:17:18,elastic/kibana,https://api.github.com/repos/elastic/kibana,closed,[Cloud Posture] Add functional tests for Findings Group By,automation Team:Cloud Security 8.8 candidate,"**Description** We have 3 findings tables (names taken from folders): - `latest_findings` - `findings_by_resource` - `resource_findings` our dropdown enables switching between `latest_findings` and `findings_by_resource`. the latter enables navigating to `resource_findings` the tests we add should verify all of that works :) **Definition of done** - Extend Findings service (page object) with action / assertion methods needed for Group By - Implement test cases defined in [testrail](https://elastic.testrail.io/index.php?/suites/view/1116&group_by=cases:section_id&group_order=asc&display_deleted_cases=0&group_id=35310) for Findings Group By **Out of scope** - TBD **Related tasks/epics** - https://github.com/elastic/kibana/issues/140484 (Initial setup) - https://github.com/elastic/kibana/issues/140490 **Checklist** Please follow the following checklist in the beginning of your work, please comment with a suggested of high level solution. It should include: - [ ] Comment describing high level implementation details - [ ] Include API and data models - [ ] Include assumptions being taken - [ ] Provide backward/forward compatibility when changing data model schemas and key constants - [ ] Mention relevant individuals with a reason (getting feedback, fyi etc) - [ ] Submit a PR for our [technical index](https://github.com/elastic/security-team/blob/main/docs/cloud-security-posture-team/Technical_Index.md) that includes breaking changes/ new features **Before closing this ticket** - [ ] Commit the [technical index](https://github.com/elastic/security-team/blob/main/docs/cloud-security-posture-team/Technical_Index.md) PR - [ ] Reference to tech-debts that shall be solved as we move forward ",1.0,"[Cloud Posture] Add functional tests for Findings Group By - **Description** We have 3 findings tables (names taken from folders): - `latest_findings` - `findings_by_resource` - `resource_findings` our dropdown enables switching between `latest_findings` and `findings_by_resource`. 
the latter enables navigating to `resource_findings` the tests we add should verify all of that works :) **Definition of done** - Extend Findings service (page object) with action / assertion methods needed for Group By - Implement test cases defined in [testrail](https://elastic.testrail.io/index.php?/suites/view/1116&group_by=cases:section_id&group_order=asc&display_deleted_cases=0&group_id=35310) for Findings Group By **Out of scope** - TBD **Related tasks/epics** - https://github.com/elastic/kibana/issues/140484 (Initial setup) - https://github.com/elastic/kibana/issues/140490 **Checklist** Please follow the following checklist in the beginning of your work, please comment with a suggested of high level solution. It should include: - [ ] Comment describing high level implementation details - [ ] Include API and data models - [ ] Include assumptions being taken - [ ] Provide backward/forward compatibility when changing data model schemas and key constants - [ ] Mention relevant individuals with a reason (getting feedback, fyi etc) - [ ] Submit a PR for our [technical index](https://github.com/elastic/security-team/blob/main/docs/cloud-security-posture-team/Technical_Index.md) that includes breaking changes/ new features **Before closing this ticket** - [ ] Commit the [technical index](https://github.com/elastic/security-team/blob/main/docs/cloud-security-posture-team/Technical_Index.md) PR - [ ] Reference to tech-debts that shall be solved as we move forward ",1, add functional tests for findings group by description we have findings tables names taken from folders latest findings findings by resource resource findings our dropdown enables switching between latest findings and findings by resource the latter enables navigating to resource findings the tests we add should verify all of that works definition of done extend findings service page object with action assertion methods needed for group by implement test cases defined in for findings group by out of scope tbd related tasks epics initial setup checklist please follow the following checklist in the beginning of your work please comment with a suggested of high level solution it should include comment describing high level implementation details include api and data models include assumptions being taken provide backward forward compatibility when changing data model schemas and key constants mention relevant individuals with a reason getting feedback fyi etc submit a pr for our that includes breaking changes new features before closing this ticket commit the pr reference to tech debts that shall be solved as we move forward ,1 38806,12601781437.0,IssuesEvent,2020-06-11 10:28:10,rammatzkvosky/1010-1,https://api.github.com/repos/rammatzkvosky/1010-1,opened,CVE-2020-10969 (High) detected in jackson-databind-2.8.8.jar,security vulnerability,"## CVE-2020-10969 - High Severity Vulnerability
Vulnerable Library - jackson-databind-2.8.8.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /tmp/ws-scm/1010-1/pom.xml

Path to vulnerable library: epository/com/fasterxml/jackson/core/jackson-databind/2.8.8/jackson-databind-2.8.8.jar

Dependency Hierarchy: - :x: **jackson-databind-2.8.8.jar** (Vulnerable Library)

Found in HEAD commit: e42bbbd0a667a61447797ee7b7cee2bbf8be7012

Vulnerability Details

FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to javax.swing.JEditorPane.
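The mishandling described above stems from Jackson's default polymorphic typing, which lets an attacker-supplied class name reach gadget types such as javax.swing.JEditorPane during deserialization. A minimal Kotlin sketch of the safer, allow-list-based configuration follows; it assumes jackson-databind 2.10+ (where `activateDefaultTyping` and `BasicPolymorphicTypeValidator` exist) and a hypothetical `com.example.model` package for the application's own types.

```kotlin
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.databind.jsontype.BasicPolymorphicTypeValidator

fun main() {
    // Deny-by-default validator: only subtypes under the (hypothetical)
    // com.example.model package may be named in incoming JSON.
    val ptv = BasicPolymorphicTypeValidator.builder()
        .allowIfSubType("com.example.model.")
        .build()

    val mapper = ObjectMapper()
    // Replaces the deprecated enableDefaultTyping(), which accepted any
    // class name and is what gadget-based CVEs of this family exploit.
    mapper.activateDefaultTyping(ptv, ObjectMapper.DefaultTyping.NON_FINAL)

    println("mapper configured with allow-list typing: $mapper")
}
```

Upgrading, as in the Suggested Fix below, remains the primary remediation; validator-based typing is defense in depth.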

Publish Date: 2020-03-26

URL: CVE-2020-10969

CVSS 3 Score Details (8.8)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High
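For readers who want to check the arithmetic, the 8.8 base score follows from these metrics under the standard CVSS v3.1 equations (numeric weights: AV:N = 0.85, AC:L = 0.77, PR:N = 0.85, UI:R = 0.62, C = I = A = High = 0.56):

$$
\begin{aligned}
\mathrm{ISS} &= 1 - (1 - 0.56)^3 = 0.9148 \\
\mathrm{Impact} &= 6.42 \times 0.9148 \approx 5.87 \\
\mathrm{Exploitability} &= 8.22 \times 0.85 \times 0.77 \times 0.85 \times 0.62 \approx 2.84 \\
\mathrm{Base} &= \operatorname{roundup}(\min(5.87 + 2.84,\, 10)) = 8.8
\end{aligned}
$$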

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10969

Release Date: 2020-03-26

Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.8.11.6;com.fasterxml.jackson.core:jackson-databind:2.7.9.7
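For builds that cannot move off the 2.8.x line immediately, the fixed coordinates above can be pinned centrally. The sketch below uses Gradle's Kotlin DSL purely as an illustration; this particular project declares the dependency in pom.xml (see the path above), where the equivalent is a `<dependencyManagement>` entry.

```kotlin
// build.gradle.kts -- illustrative only; version taken from the Fix Resolution above
dependencies {
    constraints {
        implementation("com.fasterxml.jackson.core:jackson-databind:2.8.11.6") {
            because("CVE-2020-10969: serialization gadget/typing fix")
        }
    }
}
```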

",True,"CVE-2020-10969 (High) detected in jackson-databind-2.8.8.jar - ## CVE-2020-10969 - High Severity Vulnerability
Vulnerable Library - jackson-databind-2.8.8.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /tmp/ws-scm/1010-1/pom.xml

Path to vulnerable library: epository/com/fasterxml/jackson/core/jackson-databind/2.8.8/jackson-databind-2.8.8.jar

Dependency Hierarchy: - :x: **jackson-databind-2.8.8.jar** (Vulnerable Library)

Found in HEAD commit: e42bbbd0a667a61447797ee7b7cee2bbf8be7012

Vulnerability Details

FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to javax.swing.JEditorPane.

Publish Date: 2020-03-26

URL: CVE-2020-10969

CVSS 3 Score Details (8.8)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10969

Release Date: 2020-03-26

Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.8.11.6;com.fasterxml.jackson.core:jackson-databind:2.7.9.7

",0,cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm pom xml path to vulnerable library epository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to javax swing jeditorpane publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind com fasterxml jackson core jackson databind isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to javax swing jeditorpane vulnerabilityurl ,0 324499,9904702201.0,IssuesEvent,2019-06-27 09:45:50,kubernetes-sigs/cluster-api-provider-gcp,https://api.github.com/repos/kubernetes-sigs/cluster-api-provider-gcp,closed, [FR] authentication with GCP,lifecycle/rotten priority/important-soon,Currently the authentication is done via cloud service account. Allow authentication similar to that in https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/gce/gce.go#L305,1.0, [FR] authentication with GCP - Currently the authentication is done via cloud service account. Allow authentication similar to that in https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/gce/gce.go#L305,0, authentication with gcp currently the authentication is done via cloud service account allow authentication similar to that in ,0 269199,20376113961.0,IssuesEvent,2022-02-21 15:46:46,Diversion2k22/Samarpan,https://api.github.com/repos/Diversion2k22/Samarpan,closed,Update README.md,documentation enhancement good first issue diversion-2k22,"### Update/ Improve README.md - Add the image/ thumbnail of the application inside the file - Add a section explaining the way of accessing the documents with screenshots ",1.0,"Update README.md - ### Update/ Improve README.md - Add the image/ thumbnail of the application inside the file - Add a section explaining the way of accessing the documents with screenshots ",0,update readme md update improve readme md add the image thumbnail of the application inside the file add a section explaining the way of accessing the documents with screenshots ,0 720,7887587826.0,IssuesEvent,2018-06-27 18:59:04,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Shouldn't Get-AutomationConnection be replaced by Get-AzureRmAutomationConnection ?,assigned-to-author automation/svc doc-enhancement triaged," --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 038d927f-2bcc-c62d-b3c3-f194513bced6 * Version Independent ID: 41adf2c5-3ab7-7387-e541-89e34aa6a6b1 * Content: [My first PowerShell runbook in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-first-runbook-textual-powershell) * Content Source: [articles/automation/automation-first-runbook-textual-powershell.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-first-runbook-textual-powershell.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1.0,"Shouldn't Get-AutomationConnection be replaced by Get-AzureRmAutomationConnection ? - --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 038d927f-2bcc-c62d-b3c3-f194513bced6 * Version Independent ID: 41adf2c5-3ab7-7387-e541-89e34aa6a6b1 * Content: [My first PowerShell runbook in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-first-runbook-textual-powershell) * Content Source: [articles/automation/automation-first-runbook-textual-powershell.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-first-runbook-textual-powershell.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1,shouldn t get automationconnection be replaced by get azurermautomationconnection document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login georgewallace microsoft alias gwallace ,1 6803,23936270938.0,IssuesEvent,2022-09-11 09:18:27,AdamXweb/awesome-aussie,https://api.github.com/repos/AdamXweb/awesome-aussie,opened,[ADDITION] Lumachain,Awaiting Review Added to Airtable Automation from Airtable,"### Category Logistics ### Software to be added Lumachain ### Supporting Material URL: https://lumachain.io/ Description: Lumachain is a supply chain platform designed for tracking the origin, condition, and location of items in the food supply chain. Size: HQ: Sydney LinkedIn: https://www.linkedin.com/company/lumachain/ #### See Record on Airtable: https://airtable.com/app0Ox7pXdrBUIn23/tblYbuZoILuVA0X3L/rec7kKLaBKEWojA1f",1.0,"[ADDITION] Lumachain - ### Category Logistics ### Software to be added Lumachain ### Supporting Material URL: https://lumachain.io/ Description: Lumachain is a supply chain platform designed for tracking the origin, condition, and location of items in the food supply chain. Size: HQ: Sydney LinkedIn: https://www.linkedin.com/company/lumachain/ #### See Record on Airtable: https://airtable.com/app0Ox7pXdrBUIn23/tblYbuZoILuVA0X3L/rec7kKLaBKEWojA1f",1, lumachain category logistics software to be added lumachain supporting material url description lumachain is a supply chain platform designed for tracking the origin condition and location of items in the food supply chain size hq sydney linkedin see record on airtable ,1 7967,25945852349.0,IssuesEvent,2022-12-17 00:49:01,influxdata/ui,https://api.github.com/repos/influxdata/ui,closed,Add function to New Script Editor toggle when it's turned off to trigger the VWO survey,team/ui team/automation,"In order to trigger the VWO survey after a user has turned off the New Script Editor, we need to include the logic in that specific slide toggle component to trigger the survey. 
Here is the code we need to add to onClick prop of the New Script Editor slide toggle: `executeTrigger();`",1.0,"Add function to New Script Editor toggle when it's turned off to trigger the VWO survey - In order to trigger the VWO survey after a user has turned off the New Script Editor, we need to include the logic in that specific slide toggle component to trigger the survey. Here is the code we need to add to onClick prop of the New Script Editor slide toggle: `executeTrigger();`",1,add function to new script editor toggle when it s turned off to trigger the vwo survey in order to trigger the vwo survey after a user has turned off the new script editor we need to include the logic in that specific slide toggle component to trigger the survey here is the code we need to add to onclick prop of the new script editor slide toggle executetrigger ,1 180974,14844376867.0,IssuesEvent,2021-01-17 01:15:39,Objective-Redux/Objective-Redux,https://api.github.com/repos/Objective-Redux/Objective-Redux,closed,Add documentation page about adding new take effects,documentation,"**Executive Summary** There should be a documentation topic that illustrates how to create new take effects creators. **Justification** Not every take effect/configuration can be supported by Objective-Redux. As such, the documentation should show how to create new ones without relying solely on what's provided. **Proposed Solution** A topic in the documentation for creating new take effects for Objective-Redux. The topic page should also link to the source code, which is a good example of how to create them. **Current workarounds** Describe what is currently being done to solve the problem in the absence of this feature. **Work involved** Include any research you've done into how the solution might be implemented.",1.0,"Add documentation page about adding new take effects - **Executive Summary** There should be a documentation topic that illustrates how to create new take effects creators. **Justification** Not every take effect/configuration can be supported by Objective-Redux. As such, the documentation should show how to create new ones without relying solely on what's provided. **Proposed Solution** A topic in the documentation for creating new take effects for Objective-Redux. The topic page should also link to the source code, which is a good example of how to create them. **Current workarounds** Describe what is currently being done to solve the problem in the absence of this feature. 
**Work involved** Include any research you've done into how the solution might be implemented.",0,add documentation page about adding new take effects executive summary there should be a documentation topic that illustrates how to create new take effects creators justification not every take effect configuration can be supported by objective redux as such the documentation should show how to create new ones without relying solely on what s provided proposed solution a topic in the documentation for creating new take effects for objective redux the topic page should also link to the source code which is a good example of how to create them current workarounds describe what is currently being done to solve the problem in the absence of this feature work involved include any research you ve done into how the solution might be implemented ,0 2903,12753824708.0,IssuesEvent,2020-06-28 01:02:07,turicas/covid19-br,https://api.github.com/repos/turicas/covid19-br,closed,Fix failure in the goodtables GH Action,automation,"The goodtables GitHub Action has been failing for a long time without being noticed. The error should be fixed, and as a way to give more visibility to the status of this Action, which is scheduled to run once a day, it would be worth adding a badge to `README.md`. https://github.com/turicas/covid19-br/actions?query=workflow%3Agoodtables",1.0,"Fix failure in the goodtables GH Action - The goodtables GitHub Action has been failing for a long time without being noticed. The error should be fixed, and as a way to give more visibility to the status of this Action, which is scheduled to run once a day, it would be worth adding a badge to `README.md`. https://github.com/turicas/covid19-br/actions?query=workflow%3Agoodtables",1,fix failure in the goodtables gh action the goodtables github action has been failing for a long time without being noticed the error should be fixed and as a way to give more visibility to the status of this action which is scheduled to run once a day it would be worth adding a badge to readme md ,1 9254,27801623364.0,IssuesEvent,2023-03-17 16:15:13,Automattic/sensei,https://api.github.com/repos/Automattic/sensei,opened,Update the next version script to exclude code behind a feature flag,[Type] Enhancement Release Automation," ### Is your feature request related to a problem? Please describe We need to figure out how to update `scripts/replace-next-version-tag.sh` so that it doesn't replace the `$$next-version$$` placeholders for code that is behind a feature flag and has not yet shipped. ### Goals - [ ] Communicate the proposed approach and ensure buy-in has been obtained before starting on implementation. This is especially important for any process changes that may be required.
- [ ] Implement a solution such that the `$$next-version$$` placeholders are **not** replaced for any code in `trunk` that is behind a feature flag.",1,update the next version script to exclude code behind a feature flag is your feature request related to a problem please describe we need to figure out how to update scripts replace next version tag sh so that it doesn t replace the next version placeholders for code that is behind a feature flag and has not yet shipped goals communicate the proposed approach and ensure buy in has been obtained before starting on implementation this is especially important for any process changes that may be required implement a solution such that the next version placeholders are not replaced for any code in trunk that is behind a feature flag ,1 42067,9126344826.0,IssuesEvent,2019-02-24 20:50:40,C0ZEN/ngx-store-test,https://api.github.com/repos/C0ZEN/ngx-store-test,closed,"Fix ""identical-code"" issue in src/app/views/todos/todos.component.ts",codeclimate,"Identical blocks of code found in 2 locations. Consider refactoring. https://codeclimate.com/github/C0ZEN/ngx-store-test/src/app/views/todos/todos.component.ts#issue_5c72f6e276cfa600010000ff",1.0,"Fix ""identical-code"" issue in src/app/views/todos/todos.component.ts - Identical blocks of code found in 2 locations. Consider refactoring. https://codeclimate.com/github/C0ZEN/ngx-store-test/src/app/views/todos/todos.component.ts#issue_5c72f6e276cfa600010000ff",0,fix identical code issue in src app views todos todos component ts identical blocks of code found in locations consider refactoring ,0 4163,15691963525.0,IssuesEvent,2021-03-25 18:31:15,ScottEgan/HomeAssistantConfig,https://api.github.com/repos/ScottEgan/HomeAssistantConfig,opened,Set bedroom pico to control all bedroom lights,automations,Use the off button to control bedroom light group,1.0,Set bedroom pico to control all bedroom lights - Use the off button to control bedroom light group,1,set bedroom pico to control all bedroom lights use the off button to control bedroom light group,1 435113,12531526153.0,IssuesEvent,2020-06-04 14:39:10,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,hanime.tv - see bug description,browser-focus-geckoview engine-gecko priority-normal," **URL**: https://hanime.tv/search **Browser / Version**: Firefox Mobile 76.0 **Operating System**: Android **Tested Another Browser**: No **Problem type**: Something else **Description**: doesn't block ads **Steps to Reproduce**: It doesn't block ads, desktop ad blockers do.
Browser Configuration
  • None
_From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"hanime.tv - see bug description - **URL**: https://hanime.tv/search **Browser / Version**: Firefox Mobile 76.0 **Operating System**: Android **Tested Another Browser**: No **Problem type**: Something else **Description**: doesn't block ads **Steps to Reproduce**: It doesn't block ads, desktop ad blockers do.
Browser Configuration
  • None
_From [webcompat.com](https://webcompat.com/) with ❤️_",0,hanime tv see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description doesn t block ads steps to reproduce it doesn t block ads desktop ad blockers do browser configuration none from with ❤️ ,0 472261,13621705345.0,IssuesEvent,2020-09-24 01:28:45,open-telemetry/opentelemetry-java,https://api.github.com/repos/open-telemetry/opentelemetry-java,closed,Update the attribute names for the OTel attributes for the zipkin exporter,good first issue help wanted priority:p2 release:required-for-ga,"See here: https://github.com/open-telemetry/opentelemetry-specification/pull/967 (although, we didn't match the previously spec'd names, either).",1.0,"Update the attribute names for the OTel attributes for the zipkin exporter - See here: https://github.com/open-telemetry/opentelemetry-specification/pull/967 (although, we didn't match the previously spec'd names, either).",0,update the attribute names for the otel attributes for the zipkin exporter see here although we didn t match the previously spec d names either ,0 32769,12149113902.0,IssuesEvent,2020-04-24 15:37:40,TreyM-WSS/terra-clinical,https://api.github.com/repos/TreyM-WSS/terra-clinical,opened,"CVE-2018-19797 (Medium) detected in node-sass-v4.13.1, CSS::Sass-v3.6.0",security vulnerability,"## CVE-2018-19797 - Medium Severity Vulnerability
Vulnerable Libraries - node-sass-v4.13.1, CSS::Sass-v3.6.0

Vulnerability Details

In LibSass 3.5.5, a NULL Pointer Dereference in the function Sass::Selector_List::populate_extends in SharedPtr.hpp (used by ast.cpp and ast_selectors.cpp) may cause a Denial of Service (application crash) via a crafted sass input file.

Publish Date: 2018-12-03

URL: CVE-2018-19797

CVSS 3 Score Details (6.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

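The 6.5 base score above can be checked against the listed metrics using the CVSS v3.1 base formulas (scope unchanged, only Availability impacted; constants AV:N = 0.85, AC:L = 0.77, PR:N = 0.85, UI:R = 0.62, A:H = 0.56):

$$\begin{aligned} \mathrm{ISS} &= 1-(1-C)(1-I)(1-A) = 1-(1-0)(1-0)(1-0.56) = 0.56\\ \mathrm{Impact} &= 6.42 \times \mathrm{ISS} = 3.595\\ \mathrm{Exploitability} &= 8.22 \times 0.85 \times 0.77 \times 0.85 \times 0.62 = 2.835\\ \mathrm{BaseScore} &= \mathrm{Roundup}\!\left(\min(\mathrm{Impact}+\mathrm{Exploitability},\,10)\right) = \mathrm{Roundup}(6.43) = 6.5 \end{aligned}$$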
For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19797

Release Date: 2019-09-01

Fix Resolution: LibSass - 3.6.0

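Since the suggested fix above means reaching LibSass 3.6.0, a quick hedged way to confirm which LibSass build a local node-sass install actually bundles is its documented `info` string (sketch; assumes node-sass is installed):

```ts
// Print node-sass's version banner, which includes the bundled LibSass
// version, and compare it against the fixed release (3.6.0) named above.
import sass = require('node-sass')

// Example shape of the output:
//   node-sass  4.13.1  (Wrapper)  [JavaScript]
//   libsass    3.5.5   (Sass Compiler)  [C/C++]
console.log(sass.info)
```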
",True,"CVE-2018-19797 (Medium) detected in node-sass-v4.13.1, CSS::Sass-v3.6.0 - ## CVE-2018-19797 - Medium Severity Vulnerability
Vulnerable Libraries - node-sass-v4.13.1, CSS::Sass-v3.6.0

Vulnerability Details

In LibSass 3.5.5, a NULL Pointer Dereference in the function Sass::Selector_List::populate_extends in SharedPtr.hpp (used by ast.cpp and ast_selectors.cpp) may cause a Denial of Service (application crash) via a crafted sass input file.

Publish Date: 2018-12-03

URL: CVE-2018-19797

CVSS 3 Score Details (6.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19797

Release Date: 2019-09-01

Fix Resolution: LibSass - 3.6.0

",0,cve medium detected in node sass css sass cve medium severity vulnerability vulnerable libraries vulnerability details in libsass a null pointer dereference in the function sass selector list populate extends in sharedptr hpp used by ast cpp and ast selectors cpp may cause a denial of service application crash via a crafted sass input file publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails in libsass a null pointer dereference in the function sass selector list populate extends in sharedptr hpp used by ast cpp and ast selectors cpp may cause a denial of service application crash via a crafted sass input file vulnerabilityurl ,0 515563,14965298284.0,IssuesEvent,2021-01-27 13:12:13,jbroutier/whatisflying-db,https://api.github.com/repos/jbroutier/whatisflying-db,opened,Missing Lockheed aircraft types pictures,Category: Aircraft type Priority: Normal,"Add pictures for the following aircraft types: - [ ] L-14 Super Electra - [ ] Ventura - [ ] YO-3 Quiet Star - [ ] T-33 Silver Star - [ ] U-2",1.0,"Missing Lockheed aircraft types pictures - Add pictures for the following aircraft types: - [ ] L-14 Super Electra - [ ] Ventura - [ ] YO-3 Quiet Star - [ ] T-33 Silver Star - [ ] U-2",0,missing lockheed aircraft types pictures add pictures for the following aircraft types l super electra ventura yo quiet star t silver star u ,0 6767,23874305766.0,IssuesEvent,2022-09-07 17:29:27,Journeyman-dev/FossSweeper,https://api.github.com/repos/Journeyman-dev/FossSweeper,closed,Verify Code Linting with Trunk on Push,automation,A GitHub Action should run on pushes that uses Trunk to verify if all files are linted correctly. This check will be a requirement for pull requests to be merged in order to ensure code quality.,1.0,Verify Code Linting with Trunk on Push - A GitHub Action should run on pushes that uses Trunk to verify if all files are linted correctly. This check will be a requirement for pull requests to be merged in order to ensure code quality.,1,verify code linting with trunk on push a github action should run on pushes that uses trunk to verify if all files are linted correctly this check will be a requirement for pull requests to be merged in order to ensure code quality ,1 35033,7887357875.0,IssuesEvent,2018-06-27 18:12:35,dotnet/coreclr,https://api.github.com/repos/dotnet/coreclr,closed,Remove sbyte overloads of Intel AES intrinsics,area-CodeGen enhancement,"Currently, each `Aes` intrinsic has `byte` and `sbyte` overloads, but the unsigned `byte` is probably sufficient for data encryption/decryption operations. 
```csharp public static class Aes { public static bool IsSupported { get => IsSupported; } public static Vector128 Decrypt(Vector128 value, Vector128 roundKey) => Decrypt(value, roundKey); public static Vector128 Decrypt(Vector128 value, Vector128 roundKey) => Decrypt(value, roundKey); public static Vector128 DecryptLast(Vector128 value, Vector128 roundKey) => DecryptLast(value, roundKey); public static Vector128 DecryptLast(Vector128 value, Vector128 roundKey) => DecryptLast(value, roundKey); public static Vector128 Encrypt(Vector128 value, Vector128 roundKey) => Encrypt(value, roundKey); public static Vector128 Encrypt(Vector128 value, Vector128 roundKey) => Encrypt(value, roundKey); public static Vector128 EncryptLast(Vector128 value, Vector128 roundKey) => EncryptLast(value, roundKey); public static Vector128 EncryptLast(Vector128 value, Vector128 roundKey) => EncryptLast(value, roundKey); public static Vector128 InvisibleMixColumn(Vector128 value) => InvisibleMixColumn(value); public static Vector128 InvisibleMixColumn(Vector128 value) => InvisibleMixColumn(value); public static Vector128 KeygenAssist(Vector128 value, byte control) => KeygenAssist(value, control); public static Vector128 KeygenAssist(Vector128 value, byte control) => KeygenAssist(value, control); } ```",1.0,"Remove sbyte overloads of Intel AES intrinsics - Currently, each `Aes` intrinsic has `byte` and `sbyte` overloads, but the unsigned `byte` is probably sufficient for data encryption/decryption operations. ```csharp public static class Aes { public static bool IsSupported { get => IsSupported; } public static Vector128 Decrypt(Vector128 value, Vector128 roundKey) => Decrypt(value, roundKey); public static Vector128 Decrypt(Vector128 value, Vector128 roundKey) => Decrypt(value, roundKey); public static Vector128 DecryptLast(Vector128 value, Vector128 roundKey) => DecryptLast(value, roundKey); public static Vector128 DecryptLast(Vector128 value, Vector128 roundKey) => DecryptLast(value, roundKey); public static Vector128 Encrypt(Vector128 value, Vector128 roundKey) => Encrypt(value, roundKey); public static Vector128 Encrypt(Vector128 value, Vector128 roundKey) => Encrypt(value, roundKey); public static Vector128 EncryptLast(Vector128 value, Vector128 roundKey) => EncryptLast(value, roundKey); public static Vector128 EncryptLast(Vector128 value, Vector128 roundKey) => EncryptLast(value, roundKey); public static Vector128 InvisibleMixColumn(Vector128 value) => InvisibleMixColumn(value); public static Vector128 InvisibleMixColumn(Vector128 value) => InvisibleMixColumn(value); public static Vector128 KeygenAssist(Vector128 value, byte control) => KeygenAssist(value, control); public static Vector128 KeygenAssist(Vector128 value, byte control) => KeygenAssist(value, control); } ```",0,remove sbyte overloads of intel aes intrinsics currently each aes intrinsic has byte and sbyte overloads but the unsigned byte is probably sufficient for data encryption decryption operations csharp public static class aes public static bool issupported get issupported public static decrypt value roundkey decrypt value roundkey public static decrypt value roundkey decrypt value roundkey public static decryptlast value roundkey decryptlast value roundkey public static decryptlast value roundkey decryptlast value roundkey public static encrypt value roundkey encrypt value roundkey public static encrypt value roundkey encrypt value roundkey public static encryptlast value roundkey encryptlast value roundkey public static encryptlast value 
roundkey encryptlast value roundkey public static invisiblemixcolumn value invisiblemixcolumn value public static invisiblemixcolumn value invisiblemixcolumn value public static keygenassist value byte control keygenassist value control public static keygenassist value byte control keygenassist value control ,0 284014,24576937942.0,IssuesEvent,2022-10-13 13:03:09,mozilla-mobile/focus-android,https://api.github.com/repos/mozilla-mobile/focus-android,closed,Intermittent UI test failure - < AddToHomescreenTest. noNameShortcutTest >,eng:ui-test eng:intermittent-test,"### Firebase Test Run: 💥 Failed 1x ❗ In one run it failed due to: [Firebase link](https://console.firebase.google.com/u/0/project/moz-focus-android/testlab/histories/bh.2189b040bbce6d5a/matrices/7859466701158352881/executions/bs.743fcfea29776606/testcases/1/test-cases) ### Stacktrace: java.lang.AssertionError at org.junit.Assert.fail(Assert.java:87) at org.junit.Assert.assertTrue(Assert.java:42) at org.junit.Assert.assertTrue(Assert.java:53) at org.mozilla.focus.activity.robots.SearchRobot.typeInSearchBar(SearchRobot.kt:23) at org.mozilla.focus.activity.robots.SearchRobot$Transition$loadPage$1.invoke(SearchRobot.kt:84) at org.mozilla.focus.activity.robots.SearchRobot$Transition$loadPage$1.invoke(SearchRobot.kt:84) at org.mozilla.focus.activity.robots.SearchRobotKt.searchScreen(SearchRobot.kt:122) at org.mozilla.focus.activity.robots.SearchRobot$Transition.loadPage(SearchRobot.kt:84) at org.mozilla.focus.activity.AddToHomescreenTest.noNameShortcutTest(AddToHomescreenTest.kt:77) ❗ In the other run it failed fur to the ANR [#7344](https://github.com/mozilla-mobile/focus-android/issues/7344#issuecomment-1249132497) ### Build:9/16 Main ",2.0,"Intermittent UI test failure - < AddToHomescreenTest. 
noNameShortcutTest > - ### Firebase Test Run: 💥 Failed 1x ❗ In one run it failed due to: [Firebase link](https://console.firebase.google.com/u/0/project/moz-focus-android/testlab/histories/bh.2189b040bbce6d5a/matrices/7859466701158352881/executions/bs.743fcfea29776606/testcases/1/test-cases) ### Stacktrace: java.lang.AssertionError at org.junit.Assert.fail(Assert.java:87) at org.junit.Assert.assertTrue(Assert.java:42) at org.junit.Assert.assertTrue(Assert.java:53) at org.mozilla.focus.activity.robots.SearchRobot.typeInSearchBar(SearchRobot.kt:23) at org.mozilla.focus.activity.robots.SearchRobot$Transition$loadPage$1.invoke(SearchRobot.kt:84) at org.mozilla.focus.activity.robots.SearchRobot$Transition$loadPage$1.invoke(SearchRobot.kt:84) at org.mozilla.focus.activity.robots.SearchRobotKt.searchScreen(SearchRobot.kt:122) at org.mozilla.focus.activity.robots.SearchRobot$Transition.loadPage(SearchRobot.kt:84) at org.mozilla.focus.activity.AddToHomescreenTest.noNameShortcutTest(AddToHomescreenTest.kt:77) ❗ In the other run it failed fur to the ANR [#7344](https://github.com/mozilla-mobile/focus-android/issues/7344#issuecomment-1249132497) ### Build:9/16 Main ",0,intermittent ui test failure firebase test run 💥 failed ❗ in one run it failed due to stacktrace java lang assertionerror at org junit assert fail assert java at org junit assert asserttrue assert java at org junit assert asserttrue assert java at org mozilla focus activity robots searchrobot typeinsearchbar searchrobot kt at org mozilla focus activity robots searchrobot transition loadpage invoke searchrobot kt at org mozilla focus activity robots searchrobot transition loadpage invoke searchrobot kt at org mozilla focus activity robots searchrobotkt searchscreen searchrobot kt at org mozilla focus activity robots searchrobot transition loadpage searchrobot kt at org mozilla focus activity addtohomescreentest nonameshortcuttest addtohomescreentest kt ❗ in the other run it failed fur to the anr build main ,0 8248,26568923854.0,IssuesEvent,2023-01-20 23:59:30,influxdata/ui,https://api.github.com/repos/influxdata/ui,closed,Schema composition: treatment of Tags versus Fields.,enhancement team/automation,"Per discussion with the team. We are looking to treat Tags as a clearly different filter than Fields. We are looking to do the following: * confirm that Tag use earlier in the query (before fields), improves the query performance. (e.g. large data sets) * in the UI: * have the schema browser be skewing the user to add tag filters earlier (rather than as the last thing in the schema browser list). * other UI decisions -- TBD. * in the flux-lsp: * Tag versus Filter injection (in schema composition): * Tag filters may be added multiple times in the query, in an append fashion. * inject still done using the schema composition * but the tag would be added as a new filter each time. * newly added Fields * extend in the existing Field filter. ",1.0,"Schema composition: treatment of Tags versus Fields. - Per discussion with the team. We are looking to treat Tags as a clearly different filter than Fields. We are looking to do the following: * confirm that Tag use earlier in the query (before fields), improves the query performance. (e.g. large data sets) * in the UI: * have the schema browser be skewing the user to add tag filters earlier (rather than as the last thing in the schema browser list). * other UI decisions -- TBD. 
* in the flux-lsp: * Tag versus Filter injection (in schema composition): * Tag filters may be added multiple times in the query, in an append fashion. * inject still done using the schema composition * but the tag would be added as a new filter each time. * newly added Fields * extend in the existing Field filter. ",1,schema composition treatment of tags versus fields per discussion with the team we are looking to treat tags as a clearly different filter than fields we are looking to do the following confirm that tag use earlier in the query before fields improves the query performance e g large data sets in the ui have the schema browser be skewing the user to add tag filters earlier rather than as the last thing in the schema browser list other ui decisions tbd in the flux lsp tag versus filter injection in schema composition tag filters may be added multiple times in the query in an append fashion inject still done using the schema composition but the tag would be added as a new filter each time newly added fields extend in the existing field filter ,1 221040,24590548906.0,IssuesEvent,2022-10-14 01:28:22,vincenzodistasio97/home-cloud,https://api.github.com/repos/vincenzodistasio97/home-cloud,opened,"CVE-2022-37601 (High) detected in loader-utils-1.4.0.tgz, loader-utils-1.2.3.tgz",security vulnerability,"## CVE-2022-37601 - High Severity Vulnerability
Vulnerable Libraries - loader-utils-1.4.0.tgz, loader-utils-1.2.3.tgz

loader-utils-1.4.0.tgz

utils for webpack loaders

Library home page: https://registry.npmjs.org/loader-utils/-/loader-utils-1.4.0.tgz

Path to dependency file: /client/package.json

Path to vulnerable library: /client/node_modules/loader-utils/package.json

Dependency Hierarchy: - react-scripts-3.4.1.tgz (Root Library) - css-loader-3.4.2.tgz - :x: **loader-utils-1.4.0.tgz** (Vulnerable Library)

loader-utils-1.2.3.tgz

utils for webpack loaders

Library home page: https://registry.npmjs.org/loader-utils/-/loader-utils-1.2.3.tgz

Path to dependency file: /client/package.json

Path to vulnerable library: /client/node_modules/adjust-sourcemap-loader/node_modules/loader-utils/package.json,/client/node_modules/resolve-url-loader/node_modules/loader-utils/package.json,/client/node_modules/react-dev-utils/node_modules/loader-utils/package.json

Dependency Hierarchy: - react-scripts-3.4.1.tgz (Root Library) - react-dev-utils-10.2.1.tgz - :x: **loader-utils-1.2.3.tgz** (Vulnerable Library)

Found in HEAD commit: 0eb270221557ac4df481974af8dfb9ea1288bc9b

Found in base branch: master

Vulnerability Details

Prototype pollution vulnerability in function parseQuery in parseQuery.js in webpack loader-utils 2.0.0 via the name variable in parseQuery.js.

Publish Date: 2022-10-12

URL: CVE-2022-37601

CVSS 3 Score Details (9.8)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

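As with the record earlier in this file, the 9.8 base score follows from the listed metrics under the CVSS v3.1 base formulas (scope unchanged; AV:N = 0.85, AC:L = 0.77, PR:N = 0.85, UI:N = 0.85, C/I/A all High = 0.56):

$$\begin{aligned} \mathrm{ISS} &= 1-(1-0.56)^3 = 0.915\\ \mathrm{Impact} &= 6.42 \times 0.915 = 5.873\\ \mathrm{Exploitability} &= 8.22 \times 0.85 \times 0.77 \times 0.85 \times 0.85 = 3.887\\ \mathrm{BaseScore} &= \mathrm{Roundup}(5.873 + 3.887) = \mathrm{Roundup}(9.76) = 9.8 \end{aligned}$$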
For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Release Date: 2022-10-12

Fix Resolution (loader-utils): 2.0.0

Direct dependency fix Resolution (react-scripts): 5.0.1

Fix Resolution (loader-utils): 2.0.0

Direct dependency fix Resolution (react-scripts): 5.0.1

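To make the prototype-pollution description above concrete, here is a minimal self-contained sketch of the bug class; the nested-key parser below is a hypothetical stand-in, not loader-utils' actual `parseQuery` implementation:

```ts
// Demo of prototype pollution: a parser that blindly walks
// attacker-controlled key paths can be steered into Object.prototype
// via a "__proto__" segment.
function setDeep(target: Record<string, unknown>, path: string[], value: string): void {
  let node: any = target
  for (const key of path.slice(0, -1)) {
    if (typeof node[key] !== 'object' || node[key] === null) {
      node[key] = {}
    }
    node = node[key] // a "__proto__" key steps into Object.prototype
  }
  node[path[path.length - 1]] = value
}

const maliciousQuery = '__proto__.polluted=yes'
const [rawPath, value] = maliciousQuery.split('=')
setDeep({}, rawPath.split('.'), value)

// Every plain object now appears to carry the attacker's property:
console.log(({} as any).polluted) // -> "yes"
```

Consistent with the suggested fix above, the remedy is upgrading to the patched release; parsers can additionally harden themselves by rejecting `__proto__`, `constructor`, and `prototype` as key segments.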
*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-37601 (High) detected in loader-utils-1.4.0.tgz, loader-utils-1.2.3.tgz - ## CVE-2022-37601 - High Severity Vulnerability
Vulnerable Libraries - loader-utils-1.4.0.tgz, loader-utils-1.2.3.tgz

loader-utils-1.4.0.tgz

utils for webpack loaders

Library home page: https://registry.npmjs.org/loader-utils/-/loader-utils-1.4.0.tgz

Path to dependency file: /client/package.json

Path to vulnerable library: /client/node_modules/loader-utils/package.json

Dependency Hierarchy: - react-scripts-3.4.1.tgz (Root Library) - css-loader-3.4.2.tgz - :x: **loader-utils-1.4.0.tgz** (Vulnerable Library)

loader-utils-1.2.3.tgz

utils for webpack loaders

Library home page: https://registry.npmjs.org/loader-utils/-/loader-utils-1.2.3.tgz

Path to dependency file: /client/package.json

Path to vulnerable library: /client/node_modules/adjust-sourcemap-loader/node_modules/loader-utils/package.json,/client/node_modules/resolve-url-loader/node_modules/loader-utils/package.json,/client/node_modules/react-dev-utils/node_modules/loader-utils/package.json

Dependency Hierarchy: - react-scripts-3.4.1.tgz (Root Library) - react-dev-utils-10.2.1.tgz - :x: **loader-utils-1.2.3.tgz** (Vulnerable Library)

Found in HEAD commit: 0eb270221557ac4df481974af8dfb9ea1288bc9b

Found in base branch: master

Vulnerability Details

Prototype pollution vulnerability in function parseQuery in parseQuery.js in webpack loader-utils 2.0.0 via the name variable in parseQuery.js.

Publish Date: 2022-10-12

URL: CVE-2022-37601

CVSS 3 Score Details (9.8)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Release Date: 2022-10-12

Fix Resolution (loader-utils): 2.0.0

Direct dependency fix Resolution (react-scripts): 5.0.1

Fix Resolution (loader-utils): 2.0.0

Direct dependency fix Resolution (react-scripts): 5.0.1

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in loader utils tgz loader utils tgz cve high severity vulnerability vulnerable libraries loader utils tgz loader utils tgz loader utils tgz utils for webpack loaders library home page a href path to dependency file client package json path to vulnerable library client node modules loader utils package json dependency hierarchy react scripts tgz root library css loader tgz x loader utils tgz vulnerable library loader utils tgz utils for webpack loaders library home page a href path to dependency file client package json path to vulnerable library client node modules adjust sourcemap loader node modules loader utils package json client node modules resolve url loader node modules loader utils package json client node modules react dev utils node modules loader utils package json dependency hierarchy react scripts tgz root library react dev utils tgz x loader utils tgz vulnerable library found in head commit a href found in base branch master vulnerability details prototype pollution vulnerability in function parsequery in parsequery js in webpack loader utils via the name variable in parsequery js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution loader utils direct dependency fix resolution react scripts fix resolution loader utils direct dependency fix resolution react scripts step up your open source security game with mend ,0 7738,25510343958.0,IssuesEvent,2022-11-28 12:39:23,red-hat-storage/ocs-ci,https://api.github.com/repos/red-hat-storage/ocs-ci,closed,PV encryption tests via UI does not skip for ODF > 4.8,ui_automation,"According to the test https://github.com/red-hat-storage/ocs-ci/blob/master/tests/ui/test_pv_encryption_ui.py, the test is expected to be skipped for ODF > 4.8. However the test is run in ODF 4.12 as seen from this run: ocs-ci results for OCS4-12-Downstream-OCP4-12-VSPHERE6-UPI-KMS-VAULT-V1-1AZ-RHCOS-VSAN-3M-3W-tier1 (BUILD ID: 4.12.0-91 RUN ID: 1668107034) ",1.0,"PV encryption tests via UI does not skip for ODF > 4.8 - According to the test https://github.com/red-hat-storage/ocs-ci/blob/master/tests/ui/test_pv_encryption_ui.py, the test is expected to be skipped for ODF > 4.8. 
However the test is run in ODF 4.12 as seen from this run: ocs-ci results for OCS4-12-Downstream-OCP4-12-VSPHERE6-UPI-KMS-VAULT-V1-1AZ-RHCOS-VSAN-3M-3W-tier1 (BUILD ID: 4.12.0-91 RUN ID: 1668107034) ",1,pv encryption tests via ui does not skip for odf according to the test the test is expected to be skipped for odf however the test is run in odf as seen from this run ocs ci results for downstream upi kms vault rhcos vsan build id run id ,1 59769,6662990658.0,IssuesEvent,2017-10-02 14:56:23,EasyRPG/Player,https://api.github.com/repos/EasyRPG/Player,closed,Message Stretch does not get restored when loading RPG_RT savegame,Patch available Savegames Testcase available,"#### Name of the game: Gromada ([download](http://tsukuru.pl/index.php?link=gra&title=2k-gromada)) #### Attach files (as a .zip archive or link them) - A savegame next to the problem: [savegames.zip](https://github.com/EasyRPG/Player/files/1185288/gromada.zip) (I guess it is Save15, not tested) #### Describe the issue in detail and how to reproduce it: > The game uses tiled system graphics. EasyRPG doesn't detect that when loading a save file created by RPG_RT. This doesn't happen with save files generated by EasyRPG. (issue reported by mail) ![screenshot_20170709-095122](https://user-images.githubusercontent.com/4691314/28748441-5ab33358-74b8-11e7-93a8-6d9eeeeb1ec5.png)",1.0,"Message Stretch does not get restored when loading RPG_RT savegame - #### Name of the game: Gromada ([download](http://tsukuru.pl/index.php?link=gra&title=2k-gromada)) #### Attach files (as a .zip archive or link them) - A savegame next to the problem: [savegames.zip](https://github.com/EasyRPG/Player/files/1185288/gromada.zip) (I guess it is Save15, not tested) #### Describe the issue in detail and how to reproduce it: > The game uses tiled system graphics. EasyRPG doesn't detect that when loading a save file created by RPG_RT. This doesn't happen with save files generated by EasyRPG. (issue reported by mail) ![screenshot_20170709-095122](https://user-images.githubusercontent.com/4691314/28748441-5ab33358-74b8-11e7-93a8-6d9eeeeb1ec5.png)",0,message stretch does not get restored when loading rpg rt savegame name of the game gromada attach files as a zip archive or link them a savegame next to the problem i guess it is not tested describe the issue in detail and how to reproduce it the game uses tiled system graphics easyrpg doesn t detect that when loading a save file created by rpg rt this doesn t happen with save files generated by easyrpg issue reported by mail ,0 1245,9763496927.0,IssuesEvent,2019-06-05 13:56:46,spacemeshos/go-spacemesh,https://api.github.com/repos/spacemeshos/go-spacemesh,closed,Notify user when Docker image was not found in Docker registry,automation,"# Overview / Motivation Tests pull Docker Images from DockerHub. This means that the required image which appear in config.yaml of the test must already be in DockerHub. Currently, If image does not exists, the deployment fails on timeout # The Task Notify the user that image does not exist in DockerHub ",1.0,"Notify user when Docker image was not found in Docker registry - # Overview / Motivation Tests pull Docker Images from DockerHub. This means that the required image which appear in config.yaml of the test must already be in DockerHub. 
Currently, If image does not exists, the deployment fails on timeout # The Task Notify the user that image does not exist in DockerHub ",1,notify user when docker image was not found in docker registry overview motivation tests pull docker images from dockerhub this means that the required image which appear in config yaml of the test must already be in dockerhub currently if image does not exists the deployment fails on timeout the task notify the user that image does not exist in dockerhub ,1 622676,19653756785.0,IssuesEvent,2022-01-10 10:17:41,EscolaLMS/Admin,https://api.github.com/repos/EscolaLMS/Admin,closed,[LMS Admin] #39 Nie można wyczyścić wartości dla...,high priority priority high,"Nie można wyczyścić wartości dla pola additional_fields_required *Source url:* https://admin-stage.escolalms.com/#/settings/escola_auth *Reported by:* *Reported at:* 30 Dec at 14:22 UTC *Console:* [1× Error](https://ybug.io/dashboard/reports/detail/qvd6naj8y7a4y2p76qn7#console) *Location:* PL, Lesser Poland, Stary Sacz *Browser:* Chrome 96.0.4664.110 *OS:* macOS 10.15.7 *Screen:* 1920x1080 *Viewport:* 1920x1001 *Screenshot:* ![Screenshot](https://ybug.io/data/reports/b307ykjnw50dx3pg1r2y/qvd6naj8y7a4y2p76qn7/screenshot.jpg?_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2NDA4NzQxNTYsImRhdGEiOnsicmVwb3J0IjoicXZkNm5hajh5N2E0eTJwNzZxbjciLCJ1c2VyIjoiR2l0SHViIn19.FCcK5YRaUTO6mR3NMf6eqEQtaAVw2o9Y9jqZ4ReQWqs) For more details please visit report page on Ybug: [https://ybug.io/dashboard/reports/detail/qvd6naj8y7a4y2p76qn7](https://ybug.io/dashboard/reports/detail/qvd6naj8y7a4y2p76qn7) ",2.0,"[LMS Admin] #39 Nie można wyczyścić wartości dla... - Nie można wyczyścić wartości dla pola additional_fields_required *Source url:* https://admin-stage.escolalms.com/#/settings/escola_auth *Reported by:* *Reported at:* 30 Dec at 14:22 UTC *Console:* [1× Error](https://ybug.io/dashboard/reports/detail/qvd6naj8y7a4y2p76qn7#console) *Location:* PL, Lesser Poland, Stary Sacz *Browser:* Chrome 96.0.4664.110 *OS:* macOS 10.15.7 *Screen:* 1920x1080 *Viewport:* 1920x1001 *Screenshot:* ![Screenshot](https://ybug.io/data/reports/b307ykjnw50dx3pg1r2y/qvd6naj8y7a4y2p76qn7/screenshot.jpg?_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2NDA4NzQxNTYsImRhdGEiOnsicmVwb3J0IjoicXZkNm5hajh5N2E0eTJwNzZxbjciLCJ1c2VyIjoiR2l0SHViIn19.FCcK5YRaUTO6mR3NMf6eqEQtaAVw2o9Y9jqZ4ReQWqs) For more details please visit report page on Ybug: [https://ybug.io/dashboard/reports/detail/qvd6naj8y7a4y2p76qn7](https://ybug.io/dashboard/reports/detail/qvd6naj8y7a4y2p76qn7) ",0, nie można wyczyścić wartości dla nie można wyczyścić wartości dla pola additional fields required source url reported by reported at dec at utc console location pl lesser poland stary sacz browser chrome os macos screen viewport screenshot for more details please visit report page on ybug ,0 4870,17871310175.0,IssuesEvent,2021-09-06 15:56:30,betagouv/preuve-covoiturage,https://api.github.com/repos/betagouv/preuve-covoiturage,reopened,Paramétrage S3 pour la durée de vie des exports open data et CNAME,INFRA Open Data Automation,"- [ ] Utiliser le s3 public pour les export opendata - [ ] Enregistrer un CNAME pour le S3 public & export",1.0,"Paramétrage S3 pour la durée de vie des exports open data et CNAME - - [ ] Utiliser le s3 public pour les export opendata - [ ] Enregistrer un CNAME pour le S3 public & export",1,paramétrage pour la durée de vie des exports open data et cname utiliser le public pour les export opendata enregistrer un cname pour le public export,1 
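A hedged sketch of the pre-flight check the spacemeshos/go-spacemesh record above asks for — failing fast with an explicit message instead of a deployment timeout. The Docker Hub v2 tags endpoint used here is public; the function name and error wording are assumptions:

```ts
// Verify that an image tag exists on Docker Hub before deploying, so the
// user is told immediately when the image named in config.yaml is missing.
async function assertImageExists(namespace: string, repo: string, tag: string): Promise<void> {
  const url = `https://hub.docker.com/v2/repositories/${namespace}/${repo}/tags/${tag}`
  const res = await fetch(url)

  if (res.status === 404) {
    throw new Error(`Image ${namespace}/${repo}:${tag} does not exist on Docker Hub`)
  }
  if (!res.ok) {
    throw new Error(`Docker Hub lookup for ${namespace}/${repo}:${tag} failed: HTTP ${res.status}`)
  }
}

// Usage sketch: await assertImageExists('library', 'alpine', '3.18')
```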
139940,20984011223.0,IssuesEvent,2022-03-28 23:41:36,dotnet/upgrade-assistant,https://api.github.com/repos/dotnet/upgrade-assistant,opened,upgrade-assistant upgrade . This tool is not supported on non-Windows platforms due to dependencies on Visual Studio.,design-proposal," ## Summary How about putting that higher in the README.MD, rather than having us assume it will work on Linux/Mac ootb? ## Motivation and goals 10-15 minutes time wasted to discover the tool isn't as XPLAT as it might be? ## In scope A list of major scenarios, perhaps in priority order. ## Out of scope Scenarios you explicitly want to exclude. ## Risks / unknowns How might developers misinterpret/misuse this? How might implementing it restrict us from other enhancements in the future? Also list any perf/security/correctness concerns. ## Examples Give brief examples of possible developer experiences (e.g., code they would write). Don't be deeply concerned with how it would be implemented yet. Your examples could even be from other technology stacks. ",1.0,"upgrade-assistant upgrade . This tool is not supported on non-Windows platforms due to dependencies on Visual Studio. - ## Summary How about putting that higher in the README.MD, rather than having us assume it will work on Linux/Mac ootb? ## Motivation and goals 10-15 minutes time wasted to discover the tool isn't as XPLAT as it might be? ## In scope A list of major scenarios, perhaps in priority order. ## Out of scope Scenarios you explicitly want to exclude. ## Risks / unknowns How might developers misinterpret/misuse this? How might implementing it restrict us from other enhancements in the future? Also list any perf/security/correctness concerns. ## Examples Give brief examples of possible developer experiences (e.g., code they would write). Don't be deeply concerned with how it would be implemented yet. Your examples could even be from other technology stacks. 
",0,upgrade assistant upgrade this tool is not supported on non windows platforms due to dependencies on visual studio this template is useful to build consensus about whether work should be done and if so the high level shape of how it should be approached use this before fixating on a particular implementation summary how about putting that higher in the readme md rather than having us assume it will work on linux mac ootb motivation and goals minutes time wasted to discover the tool isn t as xplat as it might be in scope a list of major scenarios perhaps in priority order out of scope scenarios you explicitly want to exclude risks unknowns how might developers misinterpret misuse this how might implementing it restrict us from other enhancements in the future also list any perf security correctness concerns examples give brief examples of possible developer experiences e g code they would write don t be deeply concerned with how it would be implemented yet your examples could even be from other technology stacks detailed design it s often best not to fill this out until you get basic consensus about the above when you do consider adding an implementation proposal with the following headings detailed design drawbacks considered alternatives open questions references if there s one clear design you have consensus on you could do that directly in a pr ,0 7002,24110414351.0,IssuesEvent,2022-09-20 10:52:03,mlcommons/ck,https://api.github.com/repos/mlcommons/ck,closed,[CK2/CM] Can we have post_preprocess_deps and pre_postprocess_deps?,enhancement cm-script-automation,"Currently we have ""deps"" which are executed before a script invocation and ""post_deps"" which are executed after a script invocation. But we do not have an option to run a cm script immediately after the preprocess function or immediately before the postprocess function. I think it'll be a good idea to have `post_preprocess_deps` and `pre_postprocess_deps` to add this functionality. This functionality can be useful in the following scenario We have an application code in Python and C (both as independent CM scripts) and a CM wrapper script for both. We can now prepare the language independent inputs for the application in the preprocess function and depending on the language chosen (handled as variations) we can call the respective scripts as `post_preprocess_deps` which will do the actual application run and finally in the postprocess function we can process the produced outputs. Similarly, `pre_postprocess_deps` can also be useful as in the case of `python-venv`.",1.0,"[CK2/CM] Can we have post_preprocess_deps and pre_postprocess_deps? - Currently we have ""deps"" which are executed before a script invocation and ""post_deps"" which are executed after a script invocation. But we do not have an option to run a cm script immediately after the preprocess function or immediately before the postprocess function. I think it'll be a good idea to have `post_preprocess_deps` and `pre_postprocess_deps` to add this functionality. This functionality can be useful in the following scenario We have an application code in Python and C (both as independent CM scripts) and a CM wrapper script for both. We can now prepare the language independent inputs for the application in the preprocess function and depending on the language chosen (handled as variations) we can call the respective scripts as `post_preprocess_deps` which will do the actual application run and finally in the postprocess function we can process the produced outputs. 
Similarly, `pre_postprocess_deps` can also be useful as in the case of `python-venv`.",1, can we have post preprocess deps and pre postprocess deps currently we have deps which are executed before a script invocation and post deps which are executed after a script invocation but we do not have an option to run a cm script immediately after the preprocess function or immediately before the postprocess function i think it ll be a good idea to have post preprocess deps and pre postprocess deps to add this functionality this functionality can be useful in the following scenario we have an application code in python and c both as independent cm scripts and a cm wrapper script for both we can now prepare the language independent inputs for the application in the preprocess function and depending on the language chosen handled as variations we can call the respective scripts as post preprocess deps which will do the actual application run and finally in the postprocess function we can process the produced outputs similarly pre postprocess deps can also be useful as in the case of python venv ,1 5435,19593871874.0,IssuesEvent,2022-01-05 15:43:55,nautobot/nautobot,https://api.github.com/repos/nautobot/nautobot,opened,Job aborts when run with value None for an optional ObjectVar,type: bug group: automation,"### Environment * Python version: 3.6 * Nautobot version: 1.2.2 ### Steps to Reproduce 1. Define the following Job: ```python from nautobot.extras.jobs import Job, ObjectVar from nautobot.dcim.models import Region class OptionalObjectVar(Job): region = ObjectVar( description=""Region (optional)"", model=Region, required=False, ) def run(self, data, commit): self.log_info(obj=data[""region""], message=""The Region if any that the user provided."") ``` 2. Run this job after selecting a specific Region from the presented dropdown and verify that the job executes successfully. 3. Run this job again without selecting any Region from the dropdown. ### Expected Behavior Job to run successfully and log the expected info message. ### Observed Behavior Job errors out before running, with the following traceback: ![image](https://user-images.githubusercontent.com/5603551/148245805-c78f0809-4ec9-4549-966c-390064b58865.png) ``` Traceback (most recent call last): File ""/source/nautobot/extras/jobs.py"", line 988, in run_job data = job_class.deserialize_data(data) File ""/source/nautobot/extras/jobs.py"", line 340, in deserialize_data return_data[field_name] = var.field_attrs[""queryset""].get(pk=value) File ""/usr/local/lib/python3.9/site-packages/cacheops/query.py"", line 353, in get return qs._no_monkey.get(qs, *args, **kwargs) File ""/usr/local/lib/python3.9/site-packages/django/db/models/query.py"", line 429, in get raise self.model.DoesNotExist( nautobot.dcim.models.sites.Region.DoesNotExist: Region matching query does not exist. ``` This appears to be a bug or oversight in the implementation of `Job.deserialize_data()`.",1.0,"Job aborts when run with value None for an optional ObjectVar - ### Environment * Python version: 3.6 * Nautobot version: 1.2.2 ### Steps to Reproduce 1. Define the following Job: ```python from nautobot.extras.jobs import Job, ObjectVar from nautobot.dcim.models import Region class OptionalObjectVar(Job): region = ObjectVar( description=""Region (optional)"", model=Region, required=False, ) def run(self, data, commit): self.log_info(obj=data[""region""], message=""The Region if any that the user provided."") ``` 2. 
Run this job after selecting a specific Region from the presented dropdown and verify that the job executes successfully. 3. Run this job again without selecting any Region from the dropdown. ### Expected Behavior Job to run successfully and log the expected info message. ### Observed Behavior Job errors out before running, with the following traceback: ![image](https://user-images.githubusercontent.com/5603551/148245805-c78f0809-4ec9-4549-966c-390064b58865.png) ``` Traceback (most recent call last): File ""/source/nautobot/extras/jobs.py"", line 988, in run_job data = job_class.deserialize_data(data) File ""/source/nautobot/extras/jobs.py"", line 340, in deserialize_data return_data[field_name] = var.field_attrs[""queryset""].get(pk=value) File ""/usr/local/lib/python3.9/site-packages/cacheops/query.py"", line 353, in get return qs._no_monkey.get(qs, *args, **kwargs) File ""/usr/local/lib/python3.9/site-packages/django/db/models/query.py"", line 429, in get raise self.model.DoesNotExist( nautobot.dcim.models.sites.Region.DoesNotExist: Region matching query does not exist. ``` This appears to be a bug or oversight in the implementation of `Job.deserialize_data()`.",1,job aborts when run with value none for an optional objectvar environment python version nautobot version steps to reproduce define the following job python from nautobot extras jobs import job objectvar from nautobot dcim models import region class optionalobjectvar job region objectvar description region optional model region required false def run self data commit self log info obj data message the region if any that the user provided run this job after selecting a specific region from the presented dropdown and verify that the job executes successfully run this job again without selecting any region from the dropdown expected behavior job to run successfully and log the expected info message observed behavior job errors out before running with the following traceback traceback most recent call last file source nautobot extras jobs py line in run job data job class deserialize data data file source nautobot extras jobs py line in deserialize data return data var field attrs get pk value file usr local lib site packages cacheops query py line in get return qs no monkey get qs args kwargs file usr local lib site packages django db models query py line in get raise self model doesnotexist nautobot dcim models sites region doesnotexist region matching query does not exist this appears to be a bug or oversight in the implementation of job deserialize data ,1 43918,17769687470.0,IssuesEvent,2021-08-30 12:13:29,hashicorp/terraform-provider-aws,https://api.github.com/repos/hashicorp/terraform-provider-aws,closed,aws_pinpoint_email_channel configuration set should use name instead of ARN,bug service/pinpoint," ### Community Note * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave ""+1"" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment ### Terraform CLI and Terraform AWS Provider Version ``` Terraform v0.14.8 + provider registry.terraform.io/hashicorp/aws v3.37.0 ``` ### Affected Resource(s) * aws_pinpoint_email_channel ### 
Terraform Configuration Files Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation. This example is based off of the example configuration from the terraform docs: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/pinpoint_email_channel ```hcl resource ""aws_pinpoint_email_channel"" ""email"" { application_id = aws_pinpoint_app.app.application_id configuration_set = aws_ses_configuration_set.test.arn from_address = ""user@example.com"" role_arn = aws_iam_role.role.arn } resource ""aws_ses_configuration_set"" ""test"" { name = ""some-configuration-set-test"" } resource ""aws_pinpoint_app"" ""app"" {} resource ""aws_ses_domain_identity"" ""identity"" { domain = ""example.com"" } resource ""aws_iam_role"" ""role"" { assume_role_policy = < The expected behaviors is to be able to seamlessly configure a Pinpoint email channel using an SES configuration set passed in via the configuration_set property of the aws_pinpoint_email_channel resource. After discussing with AWS support, we've come to the conclusion that [the documentation (and implementation) for specifying a configuration set to Pinpoint ](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/pinpoint_email_channel)is incorrect. Instead of supplying the ARN of the configuration set, the name of the configuration set should be given instead. The [documentation for the AWS Pinpoint Email Channel API](https://docs.aws.amazon.com/pinpoint/latest/apireference/apps-application-id-channels-email.html) is a little vague in that it doesn't directly specify whether to use the name or ARN of the configuration set, but the [documentation for the ConfigurationSet property](https://docs.aws.amazon.com/ses/latest/APIReference/API_ConfigurationSet.html) clearly states that it is the name of the configuration set and not the ARN. ### Actual Behavior When I supplied the ARN of the config set (as terraform currently expects), the pinpoint project completely broke, meaning no emails were being sent. Furthermore, trying to edit the ""Open and click tracking settings"" from the Pinpoint email channel dashboard resulted in a series of `bad request` errors. We were able to resolve this by using the [`update-email-channel` AWS CLI command](https://docs.aws.amazon.com/cli/latest/reference/pinpoint/update-email-channel.html) to manually specify the configuration set name instead of the ARN to the email channel. After doing so, everything worked as expected. ### Steps to Reproduce 1. Create a [pinpoint app,](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/pinpoint_app) a [pinpoint email channel](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/pinpoint_email_channel), and an [SES configuration set](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ses_configuration_set) 2. In the aws_pinpoint_email_channel resource, when adding a configuration set, specify the ARN of the configuration set you created. 3. 
Apply changes",1.0,"aws_pinpoint_email_channel configuration set should use name instead of ARN - ### Community Note * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave ""+1"" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment ### Terraform CLI and Terraform AWS Provider Version ``` Terraform v0.14.8 + provider registry.terraform.io/hashicorp/aws v3.37.0 ``` ### Affected Resource(s) * aws_pinpoint_email_channel ### Terraform Configuration Files Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation. This example is based off of the example configuration from the terraform docs: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/pinpoint_email_channel ```hcl resource ""aws_pinpoint_email_channel"" ""email"" { application_id = aws_pinpoint_app.app.application_id configuration_set = aws_ses_configuration_set.test.arn from_address = ""user@example.com"" role_arn = aws_iam_role.role.arn } resource ""aws_ses_configuration_set"" ""test"" { name = ""some-configuration-set-test"" } resource ""aws_pinpoint_app"" ""app"" {} resource ""aws_ses_domain_identity"" ""identity"" { domain = ""example.com"" } resource ""aws_iam_role"" ""role"" { assume_role_policy = < The expected behaviors is to be able to seamlessly configure a Pinpoint email channel using an SES configuration set passed in via the configuration_set property of the aws_pinpoint_email_channel resource. After discussing with AWS support, we've come to the conclusion that [the documentation (and implementation) for specifying a configuration set to Pinpoint ](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/pinpoint_email_channel)is incorrect. Instead of supplying the ARN of the configuration set, the name of the configuration set should be given instead. The [documentation for the AWS Pinpoint Email Channel API](https://docs.aws.amazon.com/pinpoint/latest/apireference/apps-application-id-channels-email.html) is a little vague in that it doesn't directly specify whether to use the name or ARN of the configuration set, but the [documentation for the ConfigurationSet property](https://docs.aws.amazon.com/ses/latest/APIReference/API_ConfigurationSet.html) clearly states that it is the name of the configuration set and not the ARN. ### Actual Behavior When I supplied the ARN of the config set (as terraform currently expects), the pinpoint project completely broke, meaning no emails were being sent. Furthermore, trying to edit the ""Open and click tracking settings"" from the Pinpoint email channel dashboard resulted in a series of `bad request` errors. We were able to resolve this by using the [`update-email-channel` AWS CLI command](https://docs.aws.amazon.com/cli/latest/reference/pinpoint/update-email-channel.html) to manually specify the configuration set name instead of the ARN to the email channel. After doing so, everything worked as expected. ### Steps to Reproduce 1. 
Create a [pinpoint app,](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/pinpoint_app) a [pinpoint email channel](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/pinpoint_email_channel), and an [SES configuration set](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ses_configuration_set) 2. In the aws_pinpoint_email_channel resource, when adding a configuration set, specify the ARN of the configuration set you created. 3. Apply changes",0,aws pinpoint email channel configuration set should use name instead of arn please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform cli and terraform aws provider version terraform provider registry terraform io hashicorp aws affected resource s aws pinpoint email channel terraform configuration files please include all terraform configurations required to reproduce the bug bug reports without a functional reproduction may be closed without investigation this example is based off of the example configuration from the terraform docs hcl resource aws pinpoint email channel email application id aws pinpoint app app application id configuration set aws ses configuration set test arn from address user example com role arn aws iam role role arn resource aws ses configuration set test name some configuration set test resource aws pinpoint app app resource aws ses domain identity identity domain example com resource aws iam role role assume role policy eof version statement action sts assumerole principal service pinpoint amazonaws com effect allow sid eof resource aws iam role policy role policy name role policy role aws iam role role id policy eof version statement action mobileanalytics putevents mobileanalytics putitems effect allow resource eof expected behavior the expected behaviors is to be able to seamlessly configure a pinpoint email channel using an ses configuration set passed in via the configuration set property of the aws pinpoint email channel resource after discussing with aws support we ve come to the conclusion that incorrect instead of supplying the arn of the configuration set the name of the configuration set should be given instead the is a little vague in that it doesn t directly specify whether to use the name or arn of the configuration set but the clearly states that it is the name of the configuration set and not the arn actual behavior when i supplied the arn of the config set as terraform currently expects the pinpoint project completely broke meaning no emails were being sent furthermore trying to edit the open and click tracking settings from the pinpoint email channel dashboard resulted in a series of bad request errors we were able to resolve this by using the to manually specify the configuration set name instead of the arn to the email channel after doing so everything worked as expected steps to reproduce create a 
a and an in the aws pinpoint email channel resource when adding a configuration set specify the arn of the configuration set you created apply changes,0 7378,24743767268.0,IssuesEvent,2022-10-21 08:01:20,Azure/azure-sdk-tools,https://api.github.com/repos/Azure/azure-sdk-tools,closed,"azure-sdk-for-net PR check in azure-rest-api-specs fails with ""permission denied""",SDK Automation,"The azure-sdk-for net PR check is failing in several PRs in the azure-rest-api-specs repo, e.g. - [# 18174](https://github.com/Azure/azure-rest-api-specs/pull/18174/checks?check_run_id=6031561755) - [# 18603](https://github.com/Azure/azure-rest-api-specs/pull/18603/checks?check_run_id=6031991775) ",1.0,"azure-sdk-for-net PR check in azure-rest-api-specs fails with ""permission denied"" - The azure-sdk-for net PR check is failing in several PRs in the azure-rest-api-specs repo, e.g. - [# 18174](https://github.com/Azure/azure-rest-api-specs/pull/18174/checks?check_run_id=6031561755) - [# 18603](https://github.com/Azure/azure-rest-api-specs/pull/18603/checks?check_run_id=6031991775) ",1,azure sdk for net pr check in azure rest api specs fails with permission denied the azure sdk for net pr check is failing in several prs in the azure rest api specs repo e g img width alt image src ,1 3217,13206167516.0,IssuesEvent,2020-08-14 19:34:34,coq-community/manifesto,https://api.github.com/repos/coq-community/manifesto,closed,Add Coq to Travis CI,automation,"## Meta-issue ## Our current template builds everything in Docker, which is less flexible than what Travis can do with other languages. It'll be nice if we could have `language: coq` and run scripts conveniently. Travis allows [community-supported languages](//docs.travis-ci.com/user/languages/community-supported-languages), and our community seems the right people to do that. Similar issue in OCaml world: ocaml/ocaml-ci-scripts#53",1.0,"Add Coq to Travis CI - ## Meta-issue ## Our current template builds everything in Docker, which is less flexible than what Travis can do with other languages. It'll be nice if we could have `language: coq` and run scripts conveniently. Travis allows [community-supported languages](//docs.travis-ci.com/user/languages/community-supported-languages), and our community seems the right people to do that. Similar issue in OCaml world: ocaml/ocaml-ci-scripts#53",1,add coq to travis ci meta issue our current template builds everything in docker which is less flexible than what travis can do with other languages it ll be nice if we could have language coq and run scripts conveniently travis allows docs travis ci com user languages community supported languages and our community seems the right people to do that similar issue in ocaml world ocaml ocaml ci scripts ,1 666,7745646186.0,IssuesEvent,2018-05-29 19:00:38,pypa/pip,https://api.github.com/repos/pypa/pip,closed,Investigate why the pypy3 CI job is timing out,C: automation P: pypy needs triage,"The CI job for pypy3 is timing out very often; I've restarted at least 10 CI builds as a result of this issue. I'm opening this issue to bring it to the notice of others since I'm unable to figure out what exactly the problem is here. If nothing else, this would make it more encouraging for myself to look into this when I have the time because closing issues has something satisfying to it. :P ",1.0,"Investigate why the pypy3 CI job is timing out - The CI job for pypy3 is timing out very often; I've restarted at least 10 CI builds as a result of this issue. 
I'm opening this issue to bring it to the notice of others since I'm unable to figure out what exactly the problem is here. If nothing else, this would make it more encouraging for myself to look into this when I have the time because closing issues has something satisfying to it. :P ",1,investigate why the ci job is timing out the ci job for is timing out very often i ve restarted at least ci builds as a result of this issue i m opening this issue to bring it to the notice of others since i m unable to figure out what exactly the problem is here if nothing else this would make it more encouraging for myself to look into this when i have the time because closing issues has something satisfying to it p ,1 148271,11845418859.0,IssuesEvent,2020-03-24 08:19:38,microsoft/AzureStorageExplorer,https://api.github.com/repos/microsoft/AzureStorageExplorer,opened,An error arises when creating shared access signature for one regular blob container,:beetle: regression :gear: blobs 🧪 testing,"**Storage Explorer Version:** 1.12.0 **Build**: [20200324.2](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3577314) **Branch**: master **Platform/OS:** Windows 10/ Linux Ubuntu 18.04/ macOS High Sierra **Architecture**: ia32/x64 **Regression From:** Previous release(1.12.0) **Steps to reproduce:** 1. Expand one non-ADLS Gen2 storage account -> Blob Containers. 2. Create a new blob container -> Right click it. 3. Click 'Get Shared Access Signature...' -> Click 'Next' on the popped dialog. 4. Check the result. **Expect Experience:** No error arises. **Actual Experience:** The below error arises. ![image](https://user-images.githubusercontent.com/41351993/77402416-de7ec680-6d6b-11ea-9212-b1a37bfba65d.png) **More Info:** This issue also reproduces for blobs.",1.0,"An error arises when creating shared access signature for one regular blob container - **Storage Explorer Version:** 1.12.0 **Build**: [20200324.2](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3577314) **Branch**: master **Platform/OS:** Windows 10/ Linux Ubuntu 18.04/ macOS High Sierra **Architecture**: ia32/x64 **Regression From:** Previous release(1.12.0) **Steps to reproduce:** 1. Expand one non-ADLS Gen2 storage account -> Blob Containers. 2. Create a new blob container -> Right click it. 3. Click 'Get Shared Access Signature...' -> Click 'Next' on the popped dialog. 4. Check the result. **Expect Experience:** No error arises. **Actual Experience:** The below error arises. 
![image](https://user-images.githubusercontent.com/41351993/77402416-de7ec680-6d6b-11ea-9212-b1a37bfba65d.png) **More Info:** This issue also reproduces for blobs.",0,an error arises when creating shared access signature for one regular blob container storage explorer version build branch master platform os windows linux ubuntu macos high sierra architecture regression from previous release steps to reproduce expand one non adls storage account blob containers create a new blob container right click it click get shared access signature click next on the popped dialog check the result expect experience no error arises actual experience the below error arises more info this issue also reproduces for blobs ,0 135019,10959850927.0,IssuesEvent,2019-11-27 12:20:24,raiden-network/raiden,https://api.github.com/repos/raiden-network/raiden,closed,Add scenario for stressing a hub node,Component / Scenario Player Flag / Testing,"### Introduction As part of https://github.com/raiden-network/team/issues/664 it was discovered that we need to have the https://github.com/raiden-network/raiden/blob/develop/raiden/tests/scenarios/ci/Scenario-Stress-Hub.yaml scenario updated to be run as part of the nigthly scenarios. This is done to make sure that nodes are able to handle high loads. ### Description It should suffice to adjust the already existing scenario to comply with the current standards we use for the other scenarios. Some asserts should however be added in the end in order to verify that everything worked as expected. ",1.0,"Add scenario for stressing a hub node - ### Introduction As part of https://github.com/raiden-network/team/issues/664 it was discovered that we need to have the https://github.com/raiden-network/raiden/blob/develop/raiden/tests/scenarios/ci/Scenario-Stress-Hub.yaml scenario updated to be run as part of the nigthly scenarios. This is done to make sure that nodes are able to handle high loads. ### Description It should suffice to adjust the already existing scenario to comply with the current standards we use for the other scenarios. Some asserts should however be added in the end in order to verify that everything worked as expected. ",0,add scenario for stressing a hub node introduction as part of it was discovered that we need to have the scenario updated to be run as part of the nigthly scenarios this is done to make sure that nodes are able to handle high loads description it should suffice to adjust the already existing scenario to comply with the current standards we use for the other scenarios some asserts should however be added in the end in order to verify that everything worked as expected ,0 433466,12505816358.0,IssuesEvent,2020-06-02 11:26:40,OpenNebula/one,https://api.github.com/repos/OpenNebula/one,closed,Implement a find by criteria for OpenNebula resources,Category: Core & System Priority: Low Status: Accepted Type: Backlog,"--- Author Name: **OpenNebula Systems Support Team** (OpenNebula Systems Support Team) Original Redmine Issue: 5462, https://dev.opennebula.org/issues/5462 Original Date: 2017-10-17 --- For instance, find a VM with a particular tag, or a particular NIC. ",1.0,"Implement a find by criteria for OpenNebula resources - --- Author Name: **OpenNebula Systems Support Team** (OpenNebula Systems Support Team) Original Redmine Issue: 5462, https://dev.opennebula.org/issues/5462 Original Date: 2017-10-17 --- For instance, find a VM with a particular tag, or a particular NIC. 
",0,implement a find by criteria for opennebula resources author name opennebula systems support team opennebula systems support team original redmine issue original date for instance find a vm with a particular tag or a particular nic ,0 114193,4621303284.0,IssuesEvent,2016-09-27 00:30:29,4-20ma/i2c_adc_ads7828,https://api.github.com/repos/4-20ma/i2c_adc_ads7828,opened,Update README,Priority: Low Type: Maintenance," Match style/content of `ModbusMaster` - [ ] use `.md` extension - [ ] add standard title - [ ] add badges - [ ] 2 spaces before ## - [ ] update sections: - Overview - Features (add device address: 0x20) - Installation (update) - Schematic (combine with Hardware) - Example - Caveats (add) - Support (update, remove Questions/Feedback) - [ ] convert backtick block language to cpp - [ ] superscript i2c - [ ] backtick `i2c_adc_ads7828` - [ ] remove deprecated INSTALL - The README provide 3 installation methods, including links to arduino.cc. ",1.0,"Update README - Match style/content of `ModbusMaster` - [ ] use `.md` extension - [ ] add standard title - [ ] add badges - [ ] 2 spaces before ## - [ ] update sections: - Overview - Features (add device address: 0x20) - Installation (update) - Schematic (combine with Hardware) - Example - Caveats (add) - Support (update, remove Questions/Feedback) - [ ] convert backtick block language to cpp - [ ] superscript i2c - [ ] backtick `i2c_adc_ads7828` - [ ] remove deprecated INSTALL - The README provide 3 installation methods, including links to arduino.cc. ",0,update readme match style content of modbusmaster use md extension add standard title add badges spaces before update sections overview features add device address installation update schematic combine with hardware example caveats add support update remove questions feedback convert backtick block language to cpp superscript i c backtick adc remove deprecated install the readme provide installation methods including links to arduino cc ,0 23974,12167293930.0,IssuesEvent,2020-04-27 10:39:39,microsoft/react-native-windows,https://api.github.com/repos/microsoft/react-native-windows,closed,Move AsyncStorageManagerWin32 to run on a background thread,Area: Performance Platform: Desktop,"ASMW32 currently blocks on SQLite APIs completing, which may take a relatively long time (however long the disk IO takes). The implementation of ASM on Android & iOS do their disk IO asynchronously. We should make ASMW32 asynchronous for parity with those platforms.",True,"Move AsyncStorageManagerWin32 to run on a background thread - ASMW32 currently blocks on SQLite APIs completing, which may take a relatively long time (however long the disk IO takes). The implementation of ASM on Android & iOS do their disk IO asynchronously. We should make ASMW32 asynchronous for parity with those platforms.",0,move to run on a background thread currently blocks on sqlite apis completing which may take a relatively long time however long the disk io takes the implementation of asm on android ios do their disk io asynchronously we should make asynchronous for parity with those platforms ,0 83586,3637693128.0,IssuesEvent,2016-02-12 12:14:54,x-team/unleash,https://api.github.com/repos/x-team/unleash,closed,Smaller cards?,enhancement low priority,"I'd like to experiment with smaller cards; right now they seem a little bit clumsy. ![screenshot 2015-05-15 22 29 24](https://cloud.githubusercontent.com/assets/7212882/7664882/8c19d6c6-fb53-11e4-9f8f-662f65613e99.png) ",1.0,"Smaller cards? 
- I'd like to experiment with smaller cards; right now they seem a little bit clumsy. ![screenshot 2015-05-15 22 29 24](https://cloud.githubusercontent.com/assets/7212882/7664882/8c19d6c6-fb53-11e4-9f8f-662f65613e99.png) ",0,smaller cards i d like to experiment with smaller cards right now they seem a little bit clumsy ,0 130922,12465916336.0,IssuesEvent,2020-05-28 14:43:46,twoodby/github_base,https://api.github.com/repos/twoodby/github_base,opened,Updated readme,documentation,Update the readme to explain the new branch tasks and how to setup to use tasks,1.0,Updated readme - Update the readme to explain the new branch tasks and how to setup to use tasks,0,updated readme update the readme to explain the new branch tasks and how to setup to use tasks,0 271355,29477928539.0,IssuesEvent,2023-06-02 01:05:06,samq-ghdemo/SEARCH-NCJIS-nibrs,https://api.github.com/repos/samq-ghdemo/SEARCH-NCJIS-nibrs,opened,CVE-2021-23445 (Medium) detected in datatables-1.10.15.jar,Mend: dependency security vulnerability,"## CVE-2021-23445 - Medium Severity Vulnerability
Vulnerable Library - datatables-1.10.15.jar

WebJar for DataTables

Library home page: http://webjars.org

Path to dependency file: /web/nibrs-web/pom.xml

Path to vulnerable library: /canner/.m2/repository/org/webjars/datatables/1.10.15/datatables-1.10.15.jar,/web/nibrs-web/target/nibrs-web/WEB-INF/lib/datatables-1.10.15.jar

Dependency Hierarchy: - :x: **datatables-1.10.15.jar** (Vulnerable Library)

Found in HEAD commit: 2643373aa9a184ff4ea81e98caf4009bf2ee8e91

Found in base branch: master

Vulnerability Details

This affects the package datatables.net before 1.11.3. If an array is passed to the HTML escape entities function, its contents are not escaped.
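
To make the mishandling concrete, here is a minimal sketch of the bug class in plain JavaScript. The names are illustrative assumptions, not DataTables' actual internals: an escape helper that handles strings but lets arrays fall through unescaped.

```js
// Minimal sketch of the bug class, not DataTables' real source: an escape
// helper that only handles strings silently passes arrays through, so their
// elements can reach the DOM unescaped.
function escapeHtml(input) {
  if (typeof input === 'string') {
    return input
      .replace(/&/g, '&amp;')
      .replace(/</g, '&lt;')
      .replace(/>/g, '&gt;')
      .replace(/"/g, '&quot;');
  }
  return input; // BUG: ['<img src=x onerror=alert(1)>'] comes back as-is
}

// The patched behaviour escapes each array element instead of skipping it.
function escapeHtmlFixed(input) {
  if (Array.isArray(input)) {
    return input.map(escapeHtmlFixed);
  }
  return escapeHtml(input);
}

console.log(escapeHtml(['<b>x</b>']));      // [ '<b>x</b>' ] -- unescaped
console.log(escapeHtmlFixed(['<b>x</b>'])); // [ '&lt;b&gt;x&lt;/b&gt;' ]
```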

Publish Date: 2021-09-27

URL: CVE-2021-23445

CVSS 3 Score Details (6.1)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Changed
- Impact Metrics:
  - Confidentiality Impact: Low
  - Integrity Impact: Low
  - Availability Impact: None

For more information on CVSS3 Scores, click here.
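
As a cross-check on the 6.1 figure, the sketch below applies the published CVSS v3.1 base-score formulas to the metrics listed above. The weights come from the spec's lookup tables and the one-decimal Roundup is simplified; this is a worked example, not tooling from this report.

```js
// CVSS v3.1 base score for AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N.
const AV = 0.85, AC = 0.77, PR = 0.85, UI = 0.62; // exploitability weights
const C = 0.22, I = 0.22, A = 0.0;                // impact weights (Low/Low/None)

const iss = 1 - (1 - C) * (1 - I) * (1 - A);      // 0.3916
// Scope is Changed, so the spec's changed-scope impact formula applies.
const impact = 7.52 * (iss - 0.029) - 3.25 * Math.pow(iss - 0.02, 15);
const exploitability = 8.22 * AV * AC * PR * UI;  // ~2.835

const roundup = (x) => Math.ceil(x * 10) / 10;    // simplified spec Roundup
const base = impact <= 0 ? 0 : roundup(Math.min(1.08 * (impact + exploitability), 10));

console.log(base); // 6.1
```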

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23445

Release Date: 2021-09-27

Fix Resolution: 1.10.20
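
Until the upgrade lands, one commonly suggested consumer-side mitigation is to route cell values through DataTables' built-in `text()` renderer, which HTML-escapes them on display. The snippet below is a sketch that assumes jQuery and DataTables are already loaded; the selector, column titles, and sample row are hypothetical.

```js
// Force every cell through the built-in escaping renderer. Sketch only; the
// '#results' selector and the sample data are assumptions for illustration.
$('#results').DataTable({
  data: [['<script>alert(1)</script>', 'ok']],
  columns: [{ title: 'Value' }, { title: 'Status' }],
  columnDefs: [
    {
      targets: '_all',
      render: $.fn.dataTable.render.text(), // escapes each cell before display
    },
  ],
});
```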

*** - [ ] Check this box to open an automated fix PR ",True,"CVE-2021-23445 (Medium) detected in datatables-1.10.15.jar - ## CVE-2021-23445 - Medium Severity Vulnerability
Vulnerable Library - datatables-1.10.15.jar

WebJar for DataTables

Library home page: http://webjars.org

Path to dependency file: /web/nibrs-web/pom.xml

Path to vulnerable library: /canner/.m2/repository/org/webjars/datatables/1.10.15/datatables-1.10.15.jar,/web/nibrs-web/target/nibrs-web/WEB-INF/lib/datatables-1.10.15.jar

Dependency Hierarchy: - :x: **datatables-1.10.15.jar** (Vulnerable Library)

Found in HEAD commit: 2643373aa9a184ff4ea81e98caf4009bf2ee8e91

Found in base branch: master

Vulnerability Details

This affects the package datatables.net before 1.11.3. If an array is passed to the HTML escape entities function, its contents are not escaped.

Publish Date: 2021-09-27

URL: CVE-2021-23445

CVSS 3 Score Details (6.1)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Changed
- Impact Metrics:
  - Confidentiality Impact: Low
  - Integrity Impact: Low
  - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23445

Release Date: 2021-09-27

Fix Resolution: 1.10.20

*** - [ ] Check this box to open an automated fix PR ",0,cve medium detected in datatables jar cve medium severity vulnerability vulnerable library datatables jar webjar for datatables library home page a href path to dependency file web nibrs web pom xml path to vulnerable library canner repository org webjars datatables datatables jar web nibrs web target nibrs web web inf lib datatables jar dependency hierarchy x datatables jar vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package datatables net before if an array is passed to the html escape entities function it would not have its contents escaped publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution check this box to open an automated fix pr ,0 127449,27046850520.0,IssuesEvent,2023-02-13 10:22:31,Regalis11/Barotrauma,https://api.github.com/repos/Regalis11/Barotrauma,closed,Favorite servers list problem,Bug Need more info Code Networking Unstable,"If add a server to Favorite and restart Barotrauma server is listed as offline. The server's endpoint in Data/favoriteservers.xml is changed from `111.222.333.444:12345` to `[::ffff:111.222.333.444]:12345` which makes not possible to connect to that server again. It looks like all representation of IP addresses are like that (for example, command ""clientlist"" gives the same format of addresses and all log files too)",1.0,"Favorite servers list problem - If add a server to Favorite and restart Barotrauma server is listed as offline. The server's endpoint in Data/favoriteservers.xml is changed from `111.222.333.444:12345` to `[::ffff:111.222.333.444]:12345` which makes not possible to connect to that server again. It looks like all representation of IP addresses are like that (for example, command ""clientlist"" gives the same format of addresses and all log files too)",0,favorite servers list problem if add a server to favorite and restart barotrauma server is listed as offline the server s endpoint in data favoriteservers xml is changed from to which makes not possible to connect to that server again it looks like all representation of ip addresses are like that for example command clientlist gives the same format of addresses and all log files too ,0 272225,20738572946.0,IssuesEvent,2022-03-14 15:40:39,expertsleepersltd/issues,https://api.github.com/repos/expertsleepersltd/issues,closed,CV/MIDI docs don't mention scaling relationship between voltage and MIDI CC,documentation disting mk4,"Parameters 2 & 3, when non-zero, allow you to generate CC messages from the X & Y inputs (using the parameter value as the CC number). If Y is to be converted to a CC, then notes are no longer generated.",1.0,"CV/MIDI docs don't mention scaling relationship between voltage and MIDI CC - Parameters 2 & 3, when non-zero, allow you to generate CC messages from the X & Y inputs (using the parameter value as the CC number). 
If Y is to be converted to a CC, then notes are no longer generated.",0,cv midi docs don t mention scaling relationship between voltage and midi cc parameters when non zero allow you to generate cc messages from the x y inputs using the parameter value as the cc number if y is to be converted to a cc then notes are no longer generated ,0 9253,27798441167.0,IssuesEvent,2023-03-17 14:13:07,aws-samples/eks-workshop-v2,https://api.github.com/repos/aws-samples/eks-workshop-v2,opened,Upgrade ArgoCD version for ArgoCD lab,enhancement content/automation,"### What would you like to be added? Upgrade the version of ArgoCD helm chart used in terraform eks addon for the ArgoCD lab ### Why is this needed? Keep up to date with bug and security fixes",1.0,"Upgrade ArgoCD version for ArgoCD lab - ### What would you like to be added? Upgrade the version of ArgoCD helm chart used in terraform eks addon for the ArgoCD lab ### Why is this needed? Keep up to date with bug and security fixes",1,upgrade argocd version for argocd lab what would you like to be added upgrade the version of argocd helm chart used in terraform eks addon for the argocd lab why is this needed keep up to date with bug and security fixes,1 7427,24847100160.0,IssuesEvent,2022-10-26 16:43:59,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,FAILED: Automated Tests(232),automation,"Stats: { ""suites"": 51, ""tests"": 379, ""passes"": 90, ""pending"": 0, ""failures"": 232, ""start"": ""2022-10-19T17:56:13.491Z"", ""end"": ""2022-10-19T18:34:31.911Z"", ""duration"": 690871, ""testsRegistered"": 379, ""passPercent"": 23.7467018469657, ""pendingPercent"": 0, ""other"": 0, ""hasOther"": false, ""skipped"": 57, ""hasSkipped"": true } Failed Tests: ""update the Dataset in BC Data Catelogue to appear the API in the Directory"" ""publish product to directory"" ""Create a Test environment"" ""applies authorization plugin to service published to Kong Gateway"" ""activate the service for Test environment"" ""activate the service for Dev environment"" ""Grant namespace access to Mark (access manager)"" ""Grant CredentialIssuer.Admin permission to Janis (API Owner)"" ""Collect the credentials"" ""Close the popup without collecting credentials"" ""authenticates Mark (Access-Manager)"" ""Verify that the request status is Pending Approval"" ""Collect the credentials"" ""Verify that API is not accessible with the generated API Key when the request is not approved"" ""authenticates Mark (Access-Manager)"" ""verify the request details"" ""Add group labels in request details window"" ""approves an access request"" ""authenticates Mark (Access-Manager)"" ""Navigate to Consumer page and filter the product"" ""Click on the first consumer"" ""Click on Grant Access button"" ""Grant Access to Test environment"" ""Verify the service is accessible with API key for elevated access"" ""Verify the service is accessibale with API key for free access"" ""Verify the service is accessible with API key for elevated access"" ""authenticates Mark (Access Manager)"" ""Navigate to Consumer page and filter the product"" ""Select the consumer from the list"" ""set IP address that is not accessible in the network as allowed IP and set Route as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""set IP address that is not accessible in the network as allowed IP and set service as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""set IP address that is accessible in the network 
as allowed IP and set route as scope"" ""verify the success stats when the API calls within the allowed IP range"" ""set IP address that is accessible in the network as allowed IP and set service as scope"" ""verify the success stats when the API calls within the allowed IP range"" ""Navigate to Consumer page and filter the product"" ""set api ip-restriction to global service level"" ""Verify that IP Restriction is set at global service level"" ""set IP address that is not accessible in the network as allowed IP and set service as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""Navigate to Consumer page and filter the product"" ""set api ip-restriction to global service level"" ""Verify that IP Restriction is set at global service level"" ""set IP address that is not accessible in the network as allowed IP and set service as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""authenticates Mark (Access Manager)"" ""Navigate to Consumer page and filter the product"" ""Select the consumer from the list "" ""set api rate limit as per the test config, Local Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit as per the test config, Local Policy and Scope as Route"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit as per the test config, Redis Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit as per the test config, Redis Policy and Scope as Route"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit to global service level"" ""Verify that Rate limiting is set at global service level"" ""set api rate limit as per the test config, Redis Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit to global service level"" ""Verify that Rate limiting is set at global service level"" ""set api rate limit as per the test config, Redis Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""creates an access request"" ""authenticates Mark (Access-Manager)"" ""verify the request details"" ""Add group labels in request details window"" ""approves an access request"" ""Verify that API is accessible with the generated API Key"" ""authenticates Mark (Access-Manager)"" ""verify that consumers are filters as per given parameter"" ""authenticates Mark (Access-Manager)"" ""Navigate to Consumer page and filter the product"" ""Click on the first consumer"" ""Verify that labels can be deleted"" ""Verify that labels can be updated"" ""Verify that labels can be added"" ""Grant namespace access to access manager(Mark)"" ""Grant CredentialIssuer.Admin permission to credential issuer(Wendy)"" ""Select the namespace created for client credential "" ""Creates authorization profile for Client ID/Secret"" ""Creates authorization profile for JWT - Generated Key Pair"" ""Creates authorization profile for JWKS URL"" ""Creates invalid authorization profile"" ""Update the Dataset in BC Data Catalogue to appear the API in the Directory"" ""Adds environment with Client ID/Secret authenticator to product"" ""Adds environment with JWT - Generated Key Pair authenticator to product"" ""Adds environment with JWT - JWKS URL authenticator to product"" ""Applies authorization plugin to service published to Kong Gateway"" ""activate the service for Test environment"" ""Adds 
environment for invalid authorization profile to other"" ""Creates an access request"" ""Access Manager logs in"" ""Select scopes in Authorization Tab"" ""approves an access request"" ""Get access token using client ID and secret; make API request"" ""Creates an access request"" ""Access Manager logs in"" ""approves an access request"" ""Get access token using JWT key pair; make API request"" ""Creates an access request"" ""Access Manager logs in"" ""approves an access request"" ""Get access token using JWT key pair; make API request"" ""Get current API Key"" ""Regenrate credential"" ""Verify that new API key is set to the consumer"" ""Verify that only one API key(new key) is set to the consumer in Kong gateway"" ""Regenrate credential client ID and Secret"" ""Make sure that the old client ID and Secret is disabled"" ""update the Dataset in BC Data Catelogue to appear the API in the Directory"" ""publish product to directory"" ""applies authorization plugin to service published to Kong Gateway"" ""Delete Product Environment"" ""Delete the Product"" ""authenticates Janis (api owner) to get the user session token"" ""Get the resource and verify the success code in the response"" ""Get the resource and verify the success code in the response"" ""Navigate to activity page"" ""Developer logs in"" ""Authenticates api owner"" ""authenticates Harley (developer)"" ""authenticates Harley (developer)"" ""authenticates Harley (developer)"" ""authenticates Mark (Access-Manager)"" ""authenticates Harley (developer)"" ""Delete application"" ""Verify that application is deleted"" ""Verify that API is not accessible with the generated API Key when the application is deleted"" ""authenticates Janis (api owner)"" ""authenticates Janis (api owner)"" ""authenticates Janis (api owner)"" ""authenticates Janis (api owner)"" ""authenticates Harley (developer)"" ""authenticates Janis (api owner)"" ""Authenticates Mark (Access-Manager)"" ""Navigate to Consumer Page to see the Approve Request option"" ""Verify that the option to approve request is displayed"" ""Authenticates Janis (api owner)"" ""Authenticates Wendy (Credential-Issuer)"" ""Verify that all the namespace options and activities are displayed"" ""Authenticates Janis (api owner)"" ""Authenticates Wendy (Credential-Issuer)"" ""Verify that only Authorization Profile option is displayed in Namespace page"" ""Verify that authorization profile for Client ID/Secret is generated"" ""Authenticates Janis (api owner)"" ""authenticates Mark"" ""Navigate to Consumer Page to see the Approve Request option"" ""Navigate to Consumer Page to see the Approve Request option"" ""Verify that service accounts are not created"" ""authenticates Janis (api owner)"" ""Authenticates Wendy (Credential-Issuer)"" ""Verify that GWA API allows user to publish the API to Kong gateway"" ""authenticates Janis (api owner)"" ""authenticates Janis (api owner)"" ""Prepare the Request Specification for the API"" ""Prepare the Request Specification for the API"" ""authenticates Janis (api owner) to get the user session token"" ""Get the resource and verify the success code in the response"" ""Compare the scope values in response against the expected values"" ""Get the resource and verify the success code in the response"" ""Compare the Namespace values in response against the expected values"" ""Delete the namespace associated with the organization, organization unit and verify the success code in the response"" ""Verify that the deleted Namespace is not displayed in Get Call"" ""Add the access 
of the organization to the specific user and verify the success code in the response"" ""Get the resource and verify the success code in the response"" ""Compare the Namespace values in response against the expected values"" ""authenticates Janis (api owner) to get the user session token"" ""Put the resource and verify the success code in the response"" ""Get the resource and verify the success code in the response"" ""Compare the values in response against the values passed in the request"" ""Verify the status code and response message for invalid slugvalue"" ""Delete the documentation"" ""Delete the documentation"" ""Put the resource and verify the success code in the response"" ""Verify that document contant is displayed for GET /documentation"" ""Verify the status code and response message for invalid slug id"" ""Verify that document contant is fetch by slug ID"" ""authenticates Janis (api owner) to get the user session token"" ""Put the resource and verify the success code in the response"" ""Get the resource and verify the success code in the response"" ""Compare the values in response against the values passed in the request"" ""Delete the authorization profile"" ""Verify that the authorization profile is deleted"" ""Put the resource and verify the success code in the response"" ""Get the resource and verify the success code in the response"" ""Compare the values in response against the values passed in the request"" ""Delete the authorization profile"" ""Verify that the authorization profile is deleted"" ""Put the resource and verify the success code in the response"" ""Get the resource and verify the success code in the response"" ""Compare the values in response against the values passed in the request"" ""Delete the authorization profile"" ""Verify that the authorization profile is deleted"" ""authenticates Janis (api owner) to get the user session token"" ""Put the resource and verify the success code in the response"" ""Get the resource and verify the success code and product name in the response"" ""Compare the values in response against the values passed in the request"" ""authenticates Janis (api owner) to get the user session token"" ""Delete the product environment and verify the success code in the response"" ""Get the resource and verify that product environment is deleted"" ""Delete the product and verify the success code in the response"" ""Get the resource and verify that product is deleted"" ""authenticates Janis (api owner) to get the user session token"" ""Put the resource (/organization/{org}/datasets) and verify the success code in the response"" ""Get the resource (/organization/{org}/datasets/{name}) and verify the success code in the response"" ""Compare the values in response against the values passed in the request"" ""Put the resource (/namespaces/{ns}/datasets/{name}) and verify the success code in the response"" ""Get the resource (/namespaces/{ns}/datasets/{name}) and verify the success code in the response"" ""Compare the values in response against the values passed in the request"" ""Get the resource (/organizations/{org}/datasets/{name}) and verify the success code in the response"" ""Compare the values in response against the values passed in the request"" ""Get the resource (/organizations/{org}/datasets) and verify the success code in the response"" ""Compare the values in response against the values passed in the request"" ""Get the directory details (/directory) and verify the success code in the response"" ""Get the directory details by its ID 
(/directory/{id}) and verify the success code in the response"" ""Get the namespace directory details (/namespaces/{ns}/directory) and verify the success code and empty response for the namespace with no directory"" ""Get the namespace directory details (/namespaces/{ns}/directory) and verify the success code in the response"" ""Get the namespace directory details by its ID (/namespaces/{ns}/directory/{id}) and verify the success code in the response"" ""Get the namespace directory details (/namespaces/{ns}/directory/{id}) for non exist directory ID and verify the response code"" ""Delete the dataset (/organizations/{org}/datasets/{name}) and verify the success code in the response"" ""Verify that deleted dataset does not display in Get dataset list"" ""authenticates Janis (api owner) to get the user session token"" ""Get the resource and verify the success code in the response"" ""Verify that the selected Namespace is displayed in the Response list in the response"" ""Get the resource and verify the success code in the response"" ""Get the resource for namespace summary and verify the success code in the response"" ""Delete the namespace and verify the Validation to prevent deleting the namespace"" ""Force delete the namespace and verify the success code in the response"" Run Link: https://github.com/bcgov/api-services-portal/actions/runs/3283710064",1.0,"FAILED: Automated Tests(232) - Stats: { ""suites"": 51, ""tests"": 379, ""passes"": 90, ""pending"": 0, ""failures"": 232, ""start"": ""2022-10-19T17:56:13.491Z"", ""end"": ""2022-10-19T18:34:31.911Z"", ""duration"": 690871, ""testsRegistered"": 379, ""passPercent"": 23.7467018469657, ""pendingPercent"": 0, ""other"": 0, ""hasOther"": false, ""skipped"": 57, ""hasSkipped"": true } Failed Tests: ""update the Dataset in BC Data Catelogue to appear the API in the Directory"" ""publish product to directory"" ""Create a Test environment"" ""applies authorization plugin to service published to Kong Gateway"" ""activate the service for Test environment"" ""activate the service for Dev environment"" ""Grant namespace access to Mark (access manager)"" ""Grant CredentialIssuer.Admin permission to Janis (API Owner)"" ""Collect the credentials"" ""Close the popup without collecting credentials"" ""authenticates Mark (Access-Manager)"" ""Verify that the request status is Pending Approval"" ""Collect the credentials"" ""Verify that API is not accessible with the generated API Key when the request is not approved"" ""authenticates Mark (Access-Manager)"" ""verify the request details"" ""Add group labels in request details window"" ""approves an access request"" ""authenticates Mark (Access-Manager)"" ""Navigate to Consumer page and filter the product"" ""Click on the first consumer"" ""Click on Grant Access button"" ""Grant Access to Test environment"" ""Verify the service is accessible with API key for elevated access"" ""Verify the service is accessibale with API key for free access"" ""Verify the service is accessible with API key for elevated access"" ""authenticates Mark (Access Manager)"" ""Navigate to Consumer page and filter the product"" ""Select the consumer from the list"" ""set IP address that is not accessible in the network as allowed IP and set Route as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""set IP address that is not accessible in the network as allowed IP and set service as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""set IP address that is 
accessible in the network as allowed IP and set route as scope"" ""verify the success stats when the API calls within the allowed IP range"" ""set IP address that is accessible in the network as allowed IP and set service as scope"" ""verify the success stats when the API calls within the allowed IP range"" ""Navigate to Consumer page and filter the product"" ""set api ip-restriction to global service level"" ""Verify that IP Restriction is set at global service level"" ""set IP address that is not accessible in the network as allowed IP and set service as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""Navigate to Consumer page and filter the product"" ""set api ip-restriction to global service level"" ""Verify that IP Restriction is set at global service level"" ""set IP address that is not accessible in the network as allowed IP and set service as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""authenticates Mark (Access Manager)"" ""Navigate to Consumer page and filter the product"" ""Select the consumer from the list "" ""set api rate limit as per the test config, Local Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit as per the test config, Local Policy and Scope as Route"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit as per the test config, Redis Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit as per the test config, Redis Policy and Scope as Route"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit to global service level"" ""Verify that Rate limiting is set at global service level"" ""set api rate limit as per the test config, Redis Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit to global service level"" ""Verify that Rate limiting is set at global service level"" ""set api rate limit as per the test config, Redis Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""creates an access request"" ""authenticates Mark (Access-Manager)"" ""verify the request details"" ""Add group labels in request details window"" ""approves an access request"" ""Verify that API is accessible with the generated API Key"" ""authenticates Mark (Access-Manager)"" ""verify that consumers are filters as per given parameter"" ""authenticates Mark (Access-Manager)"" ""Navigate to Consumer page and filter the product"" ""Click on the first consumer"" ""Verify that labels can be deleted"" ""Verify that labels can be updated"" ""Verify that labels can be added"" ""Grant namespace access to access manager(Mark)"" ""Grant CredentialIssuer.Admin permission to credential issuer(Wendy)"" ""Select the namespace created for client credential "" ""Creates authorization profile for Client ID/Secret"" ""Creates authorization profile for JWT - Generated Key Pair"" ""Creates authorization profile for JWKS URL"" ""Creates invalid authorization profile"" ""Update the Dataset in BC Data Catalogue to appear the API in the Directory"" ""Adds environment with Client ID/Secret authenticator to product"" ""Adds environment with JWT - Generated Key Pair authenticator to product"" ""Adds environment with JWT - JWKS URL authenticator to product"" ""Applies authorization plugin to service published to Kong Gateway"" ""activate the service for 
Test environment"" ""Adds environment for invalid authorization profile to other"" ""Creates an access request"" ""Access Manager logs in"" ""Select scopes in Authorization Tab"" ""approves an access request"" ""Get access token using client ID and secret; make API request"" ""Creates an access request"" ""Access Manager logs in"" ""approves an access request"" ""Get access token using JWT key pair; make API request"" ""Creates an access request"" ""Access Manager logs in"" ""approves an access request"" ""Get access token using JWT key pair; make API request"" ""Get current API Key"" ""Regenrate credential"" ""Verify that new API key is set to the consumer"" ""Verify that only one API key(new key) is set to the consumer in Kong gateway"" ""Regenrate credential client ID and Secret"" ""Make sure that the old client ID and Secret is disabled"" ""update the Dataset in BC Data Catelogue to appear the API in the Directory"" ""publish product to directory"" ""applies authorization plugin to service published to Kong Gateway"" ""Delete Product Environment"" ""Delete the Product"" ""authenticates Janis (api owner) to get the user session token"" ""Get the resource and verify the success code in the response"" ""Get the resource and verify the success code in the response"" ""Navigate to activity page"" ""Developer logs in"" ""Authenticates api owner"" ""authenticates Harley (developer)"" ""authenticates Harley (developer)"" ""authenticates Harley (developer)"" ""authenticates Mark (Access-Manager)"" ""authenticates Harley (developer)"" ""Delete application"" ""Verify that application is deleted"" ""Verify that API is not accessible with the generated API Key when the application is deleted"" ""authenticates Janis (api owner)"" ""authenticates Janis (api owner)"" ""authenticates Janis (api owner)"" ""authenticates Janis (api owner)"" ""authenticates Harley (developer)"" ""authenticates Janis (api owner)"" ""Authenticates Mark (Access-Manager)"" ""Navigate to Consumer Page to see the Approve Request option"" ""Verify that the option to approve request is displayed"" ""Authenticates Janis (api owner)"" ""Authenticates Wendy (Credential-Issuer)"" ""Verify that all the namespace options and activities are displayed"" ""Authenticates Janis (api owner)"" ""Authenticates Wendy (Credential-Issuer)"" ""Verify that only Authorization Profile option is displayed in Namespace page"" ""Verify that authorization profile for Client ID/Secret is generated"" ""Authenticates Janis (api owner)"" ""authenticates Mark"" ""Navigate to Consumer Page to see the Approve Request option"" ""Navigate to Consumer Page to see the Approve Request option"" ""Verify that service accounts are not created"" ""authenticates Janis (api owner)"" ""Authenticates Wendy (Credential-Issuer)"" ""Verify that GWA API allows user to publish the API to Kong gateway"" ""authenticates Janis (api owner)"" ""authenticates Janis (api owner)"" ""Prepare the Request Specification for the API"" ""Prepare the Request Specification for the API"" ""authenticates Janis (api owner) to get the user session token"" ""Get the resource and verify the success code in the response"" ""Compare the scope values in response against the expected values"" ""Get the resource and verify the success code in the response"" ""Compare the Namespace values in response against the expected values"" ""Delete the namespace associated with the organization, organization unit and verify the success code in the response"" ""Verify that the deleted Namespace is not displayed in 
Get Call"" ""Add the access of the organization to the specific user and verify the success code in the response"" ""Get the resource and verify the success code in the response"" ""Compare the Namespace values in response against the expected values"" ""authenticates Janis (api owner) to get the user session token"" ""Put the resource and verify the success code in the response"" ""Get the resource and verify the success code in the response"" ""Compare the values in response against the values passed in the request"" ""Verify the status code and response message for invalid slugvalue"" ""Delete the documentation"" ""Delete the documentation"" ""Put the resource and verify the success code in the response"" ""Verify that document contant is displayed for GET /documentation"" ""Verify the status code and response message for invalid slug id"" ""Verify that document contant is fetch by slug ID"" ""authenticates Janis (api owner) to get the user session token"" ""Put the resource and verify the success code in the response"" ""Get the resource and verify the success code in the response"" ""Compare the values in response against the values passed in the request"" ""Delete the authorization profile"" ""Verify that the authorization profile is deleted"" ""Put the resource and verify the success code in the response"" ""Get the resource and verify the success code in the response"" ""Compare the values in response against the values passed in the request"" ""Delete the authorization profile"" ""Verify that the authorization profile is deleted"" ""Put the resource and verify the success code in the response"" ""Get the resource and verify the success code in the response"" ""Compare the values in response against the values passed in the request"" ""Delete the authorization profile"" ""Verify that the authorization profile is deleted"" ""authenticates Janis (api owner) to get the user session token"" ""Put the resource and verify the success code in the response"" ""Get the resource and verify the success code and product name in the response"" ""Compare the values in response against the values passed in the request"" ""authenticates Janis (api owner) to get the user session token"" ""Delete the product environment and verify the success code in the response"" ""Get the resource and verify that product environment is deleted"" ""Delete the product and verify the success code in the response"" ""Get the resource and verify that product is deleted"" ""authenticates Janis (api owner) to get the user session token"" ""Put the resource (/organization/{org}/datasets) and verify the success code in the response"" ""Get the resource (/organization/{org}/datasets/{name}) and verify the success code in the response"" ""Compare the values in response against the values passed in the request"" ""Put the resource (/namespaces/{ns}/datasets/{name}) and verify the success code in the response"" ""Get the resource (/namespaces/{ns}/datasets/{name}) and verify the success code in the response"" ""Compare the values in response against the values passed in the request"" ""Get the resource (/organizations/{org}/datasets/{name}) and verify the success code in the response"" ""Compare the values in response against the values passed in the request"" ""Get the resource (/organizations/{org}/datasets) and verify the success code in the response"" ""Compare the values in response against the values passed in the request"" ""Get the directory details (/directory) and verify the success code in the response"" ""Get the 
directory details by its ID (/directory/{id}) and verify the success code in the response"" ""Get the namespace directory details (/namespaces/{ns}/directory) and verify the success code and empty response for the namespace with no directory"" ""Get the namespace directory details (/namespaces/{ns}/directory) and verify the success code in the response"" ""Get the namespace directory details by its ID (/namespaces/{ns}/directory/{id}) and verify the success code in the response"" ""Get the namespace directory details (/namespaces/{ns}/directory/{id}) for non exist directory ID and verify the response code"" ""Delete the dataset (/organizations/{org}/datasets/{name}) and verify the success code in the response"" ""Verify that deleted dataset does not display in Get dataset list"" ""authenticates Janis (api owner) to get the user session token"" ""Get the resource and verify the success code in the response"" ""Verify that the selected Namespace is displayed in the Response list in the response"" ""Get the resource and verify the success code in the response"" ""Get the resource for namespace summary and verify the success code in the response"" ""Delete the namespace and verify the Validation to prevent deleting the namespace"" ""Force delete the namespace and verify the success code in the response"" Run Link: https://github.com/bcgov/api-services-portal/actions/runs/3283710064",1,failed automated tests stats suites tests passes pending failures start end duration testsregistered passpercent pendingpercent other hasother false skipped hasskipped true failed tests update the dataset in bc data catelogue to appear the api in the directory publish product to directory create a test environment applies authorization plugin to service published to kong gateway activate the service for test environment activate the service for dev environment grant namespace access to mark access manager grant credentialissuer admin permission to janis api owner collect the credentials close the popup without collecting credentials authenticates mark access manager verify that the request status is pending approval collect the credentials verify that api is not accessible with the generated api key when the request is not approved authenticates mark access manager verify the request details add group labels in request details window approves an access request authenticates mark access manager navigate to consumer page and filter the product click on the first consumer click on grant access button grant access to test environment verify the service is accessible with api key for elevated access verify the service is accessibale with api key for free access verify the service is accessible with api key for elevated access authenticates mark access manager navigate to consumer page and filter the product select the consumer from the list set ip address that is not accessible in the network as allowed ip and set route as scope verify ip restriction error when the api calls other than the allowed ip set ip address that is not accessible in the network as allowed ip and set service as scope verify ip restriction error when the api calls other than the allowed ip set ip address that is accessible in the network as allowed ip and set route as scope verify the success stats when the api calls within the allowed ip range set ip address that is accessible in the network as allowed ip and set service as scope verify the success stats when the api calls within the allowed ip range navigate to consumer page and filter the 
product set api ip restriction to global service level verify that ip restriction is set at global service level set ip address that is not accessible in the network as allowed ip and set service as scope verify ip restriction error when the api calls other than the allowed ip navigate to consumer page and filter the product set api ip restriction to global service level verify that ip restriction is set at global service level set ip address that is not accessible in the network as allowed ip and set service as scope verify ip restriction error when the api calls other than the allowed ip authenticates mark access manager navigate to consumer page and filter the product select the consumer from the list set api rate limit as per the test config local policy and scope as service verify rate limit error when the api calls beyond the limit set api rate limit as per the test config local policy and scope as route verify rate limit error when the api calls beyond the limit set api rate limit as per the test config redis policy and scope as service verify rate limit error when the api calls beyond the limit set api rate limit as per the test config redis policy and scope as route verify rate limit error when the api calls beyond the limit set api rate limit to global service level verify that rate limiting is set at global service level set api rate limit as per the test config redis policy and scope as service verify rate limit error when the api calls beyond the limit set api rate limit to global service level verify that rate limiting is set at global service level set api rate limit as per the test config redis policy and scope as service verify rate limit error when the api calls beyond the limit creates an access request authenticates mark access manager verify the request details add group labels in request details window approves an access request verify that api is accessible with the generated api key authenticates mark access manager verify that consumers are filters as per given parameter authenticates mark access manager navigate to consumer page and filter the product click on the first consumer verify that labels can be deleted verify that labels can be updated verify that labels can be added grant namespace access to access manager mark grant credentialissuer admin permission to credential issuer wendy select the namespace created for client credential creates authorization profile for client id secret creates authorization profile for jwt generated key pair creates authorization profile for jwks url creates invalid authorization profile update the dataset in bc data catalogue to appear the api in the directory adds environment with client id secret authenticator to product adds environment with jwt generated key pair authenticator to product adds environment with jwt jwks url authenticator to product applies authorization plugin to service published to kong gateway activate the service for test environment adds environment for invalid authorization profile to other creates an access request access manager logs in select scopes in authorization tab approves an access request get access token using client id and secret make api request creates an access request access manager logs in approves an access request get access token using jwt key pair make api request creates an access request access manager logs in approves an access request get access token using jwt key pair make api request get current api key regenrate credential verify that new api key is set to the consumer 
verify that only one api key new key is set to the consumer in kong gateway regenrate credential client id and secret make sure that the old client id and secret is disabled update the dataset in bc data catelogue to appear the api in the directory publish product to directory applies authorization plugin to service published to kong gateway delete product environment delete the product authenticates janis api owner to get the user session token get the resource and verify the success code in the response get the resource and verify the success code in the response navigate to activity page developer logs in authenticates api owner authenticates harley developer authenticates harley developer authenticates harley developer authenticates mark access manager authenticates harley developer delete application verify that application is deleted verify that api is not accessible with the generated api key when the application is deleted authenticates janis api owner authenticates janis api owner authenticates janis api owner authenticates janis api owner authenticates harley developer authenticates janis api owner authenticates mark access manager navigate to consumer page to see the approve request option verify that the option to approve request is displayed authenticates janis api owner authenticates wendy credential issuer verify that all the namespace options and activities are displayed authenticates janis api owner authenticates wendy credential issuer verify that only authorization profile option is displayed in namespace page verify that authorization profile for client id secret is generated authenticates janis api owner authenticates mark navigate to consumer page to see the approve request option navigate to consumer page to see the approve request option verify that service accounts are not created authenticates janis api owner authenticates wendy credential issuer verify that gwa api allows user to publish the api to kong gateway authenticates janis api owner authenticates janis api owner prepare the request specification for the api prepare the request specification for the api authenticates janis api owner to get the user session token get the resource and verify the success code in the response compare the scope values in response against the expected values get the resource and verify the success code in the response compare the namespace values in response against the expected values delete the namespace associated with the organization organization unit and verify the success code in the response verify that the deleted namespace is not displayed in get call add the access of the organization to the specific user and verify the success code in the response get the resource and verify the success code in the response compare the namespace values in response against the expected values authenticates janis api owner to get the user session token put the resource and verify the success code in the response get the resource and verify the success code in the response compare the values in response against the values passed in the request verify the status code and response message for invalid slugvalue delete the documentation delete the documentation put the resource and verify the success code in the response verify that document contant is displayed for get documentation verify the status code and response message for invalid slug id verify that document contant is fetch by slug id authenticates janis api owner to get the user session token put the resource and verify the 
success code in the response get the resource and verify the success code in the response compare the values in response against the values passed in the request delete the authorization profile verify that the authorization profile is deleted put the resource and verify the success code in the response get the resource and verify the success code in the response compare the values in response against the values passed in the request delete the authorization profile verify that the authorization profile is deleted put the resource and verify the success code in the response get the resource and verify the success code in the response compare the values in response against the values passed in the request delete the authorization profile verify that the authorization profile is deleted authenticates janis api owner to get the user session token put the resource and verify the success code in the response get the resource and verify the success code and product name in the response compare the values in response against the values passed in the request authenticates janis api owner to get the user session token delete the product environment and verify the success code in the response get the resource and verify that product environment is deleted delete the product and verify the success code in the response get the resource and verify that product is deleted authenticates janis api owner to get the user session token put the resource organization org datasets and verify the success code in the response get the resource organization org datasets name and verify the success code in the response compare the values in response against the values passed in the request put the resource namespaces ns datasets name and verify the success code in the response get the resource namespaces ns datasets name and verify the success code in the response compare the values in response against the values passed in the request get the resource organizations org datasets name and verify the success code in the response compare the values in response against the values passed in the request get the resource organizations org datasets and verify the success code in the response compare the values in response against the values passed in the request get the directory details directory and verify the success code in the response get the directory details by its id directory id and verify the success code in the response get the namespace directory details namespaces ns directory and verify the success code and empty response for the namespace with no directory get the namespace directory details namespaces ns directory and verify the success code in the response get the namespace directory details by its id namespaces ns directory id and verify the success code in the response get the namespace directory details namespaces ns directory id for non existent directory id and verify the response code delete the dataset organizations org datasets name and verify the success code in the response verify that deleted dataset does not display in get dataset list authenticates janis api owner to get the user session token get the resource and verify the success code in the response verify that the selected namespace is displayed in the response list in the response get the resource and verify the success code in the response get the resource for namespace summary and verify the success code in the response delete the namespace and verify the validation to prevent deleting the namespace force delete the namespace and verify 
the success code in the response run link ,1 6274,22659497446.0,IssuesEvent,2022-07-02 00:41:49,wilkins88/ApexLibs,https://api.github.com/repos/wilkins88/ApexLibs,opened,Allow ordering on SObject Setting in triggers,enhancement Automation,"AC: - Similar to how ordering can be applied to handlers, allow ordering on the sobject setting level to control ordering across packages",1.0,"Allow ordering on SObject Setting in triggers - AC: - Similar to how ordering can be applied to handlers, allow ordering on the sobject setting level to control ordering across packages",1,allow ordering on sobject setting in triggers ac similar to how ordering can be applied to handlers allow ordering on the sobject setting level to control ordering across packages,1 203,4675950260.0,IssuesEvent,2016-10-07 09:51:43,cf-tm-bot/openstack_cpi,https://api.github.com/repos/cf-tm-bot/openstack_cpi,closed, extract lifecycle terraform from current test-terraform - Story Id: 127445791,chore ci env-creation-automation started,"currently, we have just one terraform script for bats and lifecycles. extract the lifecycle part, so we can run it separately. --- Mirrors: [story 127445791](https://www.pivotaltracker.com/story/show/127445791) submitted on Aug 1, 2016 UTC - **Requester**: Marco Voelz - **Owners**: Felix Riegger, Mauro Morales - **Estimate**: 0.0",1.0," extract lifecycle terraform from current test-terraform - Story Id: 127445791 - currently, we have just one terraform script for bats and lifecycles. extract the lifecycle part, so we can run it separately. --- Mirrors: [story 127445791](https://www.pivotaltracker.com/story/show/127445791) submitted on Aug 1, 2016 UTC - **Requester**: Marco Voelz - **Owners**: Felix Riegger, Mauro Morales - **Estimate**: 0.0",1, extract lifecycle terraform from current test terraform story id currently we have just one terraform script for bats and lifecycles extract the lifecycle part so we can run it separately mirrors submitted on aug utc requester marco voelz owners felix riegger mauro morales estimate ,1 3440,13766274373.0,IssuesEvent,2020-10-07 14:23:02,elastic/beats,https://api.github.com/repos/elastic/beats,closed,[CI] Run Journalbeat compatibility tests,Journalbeat Team:Automation [zube]: Inbox automation ci enhancement,"We need to test Journalbeat with different Linux distributions and different systems versions. * CentOS/RHEL 6.5+/7.x (64 bits) * CentOS/RHEL 8 (64 bits) * Ubuntu 14.04 (32 bits) * Ubuntu 14.04 (64 bits) * Ubuntu 16.04 (64 bits) * Ubuntu 18.04 (64 bits) * Ubuntu 20.04 (64 bits) * Debian 8 (64 bits) * Debian 9 (64 bits) * Debian 10 (64 bits) We need a make/mage target to run the proper test because there are some differences between systems versions, those test would be different thus we will have to select the kind of test by passing parameters/env vars/whatever",2.0,"[CI] Run Journalbeat compatibility tests - We need to test Journalbeat with different Linux distributions and different systems versions. 
* CentOS/RHEL 6.5+/7.x (64 bits) * CentOS/RHEL 8 (64 bits) * Ubuntu 14.04 (32 bits) * Ubuntu 14.04 (64 bits) * Ubuntu 16.04 (64 bits) * Ubuntu 18.04 (64 bits) * Ubuntu 20.04 (64 bits) * Debian 8 (64 bits) * Debian 9 (64 bits) * Debian 10 (64 bits) We need a make/mage target to run the proper test because there are some differences between systems versions, those test would be different thus we will have to select the kind of test by passing parameters/env vars/whatever",1, run journalbeat compatibility tests we need to test journalbeat with different linux distributions and different systems versions centos rhel x bits centos rhel bits ubuntu bits ubuntu bits ubuntu bits ubuntu bits ubuntu bits debian bits debian bits debian bits we need a make mage target to run the proper test because there are some differences between systems versions those test would be different thus we will have to select the kind of test by passing parameters env vars whatever,1 167392,13024032732.0,IssuesEvent,2020-07-27 11:05:20,hibernate/hibernate-reactive,https://api.github.com/repos/hibernate/hibernate-reactive,opened,Update tests after upgrade to vert.x sql client 3.9.2,testing,Now that the [client is updated](https://github.com/hibernate/hibernate-reactive/pull/293) some of the types that weren't working for DB2 will work.,1.0,Update tests after upgrade to vert.x sql client 3.9.2 - Now that the [client is updated](https://github.com/hibernate/hibernate-reactive/pull/293) some of the types that weren't working for DB2 will work.,0,update tests after upgrade to vert x sql client now that the some of the types that weren t working for will work ,0 17901,12685561805.0,IssuesEvent,2020-06-20 05:18:56,microsoft/TypeScript,https://api.github.com/repos/microsoft/TypeScript,closed,Unable to publish due to baseline difference in `.d.ts` emit,High Priority Infrastructure,"We seem to be hitting some sort of issue with parenthesization on `.d.ts` files. This has been blocking nightly publishes, and will block any sort of beta publish next week. https://typescript.visualstudio.com/TypeScript/_build/results?buildId=76918&view=logs&j=fd490c07-0b22-5182-fac9-6d67fe1e939b&t=00933dce-c782-5c03-4a85-76379ccfa50a&l=139 ![image](https://user-images.githubusercontent.com/972891/84962490-d9f05280-b0bb-11ea-8130-c8109e79923e.png) ",1.0,"Unable to publish due to baseline difference in `.d.ts` emit - We seem to be hitting some sort of issue with parenthesization on `.d.ts` files. This has been blocking nightly publishes, and will block any sort of beta publish next week. 
https://typescript.visualstudio.com/TypeScript/_build/results?buildId=76918&view=logs&j=fd490c07-0b22-5182-fac9-6d67fe1e939b&t=00933dce-c782-5c03-4a85-76379ccfa50a&l=139 ![image](https://user-images.githubusercontent.com/972891/84962490-d9f05280-b0bb-11ea-8130-c8109e79923e.png) ",0,unable to publish due to baseline difference in d ts emit we seem to be hitting some sort of issue with parenthesization on d ts files this has been blocking nightly publishes and will block any sort of beta publish next week ,0 539238,15785747731.0,IssuesEvent,2021-04-01 16:45:35,wso2/product-apim,https://api.github.com/repos/wso2/product-apim,closed,Remove unused analytics configuration UI from devportal and publisher,API-M 4.0.0 Priority/Normal REST APIs React-UI Type/Bug,"### Description: ### Steps to reproduce: ![image](https://user-images.githubusercontent.com/3313885/109153005-487cc880-7792-11eb-9c9e-ce0aad2077b5.png) and ![image](https://user-images.githubusercontent.com/3313885/109153024-4fa3d680-7792-11eb-9e85-7c0d8f5e3567.png) should be removed along with their respective components ### Affected Product Version: ### Environment details (with versions): - OS: - Client: - Env (Docker/K8s): --- ### Optional Fields #### Related Issues: #### Suggested Labels: #### Suggested Assignees: ",1.0,"Remove unused analytics configuration UI from devportal and publisher - ### Description: ### Steps to reproduce: ![image](https://user-images.githubusercontent.com/3313885/109153005-487cc880-7792-11eb-9c9e-ce0aad2077b5.png) and ![image](https://user-images.githubusercontent.com/3313885/109153024-4fa3d680-7792-11eb-9e85-7c0d8f5e3567.png) should be removed along with their respective components ### Affected Product Version: ### Environment details (with versions): - OS: - Client: - Env (Docker/K8s): --- ### Optional Fields #### Related Issues: #### Suggested Labels: #### Suggested Assignees: ",0,remove unused analytics configuration ui from devportal and publisher description steps to reproduce and should be removed along with their respective components affected product version environment details with versions os client env docker optional fields related issues suggested labels suggested assignees ,0 361,5718327206.0,IssuesEvent,2017-04-19 19:16:36,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,"A call of pressKey(""enter"") doesn't raise the ""click"" event on a button element",AREA: client SYSTEM: automations TYPE: bug,"### Are you requesting a feature or reporting a bug? bug ### What is the current behavior? A call of pressKey(""enter"") doesn't raise the ""click"" event on a button element ### What is the expected behavior? Enter key press should raise the ""click"" event on a button element. #### Provide the test code and the tested page URL (if applicable) Test code ```js test(""tester"", async t => { await ClientFunction(() => { el().focus(); }, { dependencies: { el: Selector(""#test"") } })(); await t .wait(2000) .pressKey(""enter"") .wait(2000); }); ``` ```html
``` ### Specify your * operating system: * testcafe version: * node.js version:",1.0,"A call of pressKey(""enter"") doesn't raise the ""click"" event on a button element - ### Are you requesting a feature or reporting a bug? bug ### What is the current behavior? A call of pressKey(""enter"") doesn't raise the ""click"" event on a button element ### What is the expected behavior? Enter key press should raise the ""click"" event on a button element. #### Provide the test code and the tested page URL (if applicable) Test code ```js test(""tester"", async t => { await ClientFunction(() => { el().focus(); }, { dependencies: { el: Selector(""#test"") } })(); await t .wait(2000) .pressKey(""enter"") .wait(2000); }); ``` ```html
``` ### Specify your * operating system: * testcafe version: * node.js version:",1,a call of presskey enter doesn t raise the click event on a button element are you requesting a feature or reporting a bug bug what is the current behavior a call of presskey enter doesn t raise the click event on a button element what is the expected behavior enter key press should raise the click event on a button element provide the test code and the tested page url if applicable test code js test tester async t await clientfunction el focus dependencies el selector test await t wait presskey enter wait html click me document getelementbyid test addeventlistener click function document getelementbyid res innerhtml clicked document getelementbyid test addeventlistener keypress function document getelementbyid res innerhtml keypress specify your operating system testcafe version node js version ,1 4095,15397879968.0,IssuesEvent,2021-03-03 22:55:10,MinaProtocol/mina,https://api.github.com/repos/MinaProtocol/mina,closed,Integration Test Core: Improved GraphQL Port Management,acceptance-automation,"In the current implementation of the integration test framework, whenever we need to query the GraphQL port of a daemon we have deployed, we begin a port-forwarding that pod's GraphQL port to a local port on the host machine running the integration test executive. This is done in an on-demand fashion, where we begin port forwarding as soon as we need to send the first GraphQL request to each node. https://github.com/MinaProtocol/mina/blob/develop/src/lib/integration_test_cloud_engine/kubernetes_network.ml#L78 As we begin to scale the integration test framework to launch larger networks, and begin to utilize the GraphQL queries more and more, we need to ensure that this system for managing GraphQL ports is responsive enough to not cause large delays in the testing when we need to broadcast a GraphQL query out to a somewhat large set of nodes running on the network. In my personal testing (and this should probably be confirmed by someone else as well), running `kubectl port-forward ...` pauses and takes a little bit of time (seconds) to startup. Doing this on-demand for N nodes all at once would cause a somewhat lengthy delay in the integration test framework. First, we should measure the performance of this system (eg, run a network with 50 nodes and see how long it takes to setup each of those port forwarding commands). From that, we should have a meeting to discuss how to approach this problem. There are likely a few ways to alleviate this, including setting up port-forwarding for all nodes at the start of the test rather than on-demand, or using kubectl to tunnel into pods when sending queries rather than exposing ports out of those pods to our local machine. ## Post Meeting @0x0I, Helena, and myself met to discuss how we want to manage this moving forward. We landed on the following solution: Setup a single, static ingress for all the integration tests. Deploy without setting explicit GraphQL ports, allowing kubernetes to dynamically assign random, free ports to each service. Once the deploy is done, before we start the test from the test_executive, query all of the ports that were assigned. Setup path-based routing in the ingress to each of these services and their respective GraphQL ports. 
Use this ingress as the single entrypoint to talk to all the GraphQL instances running on our nodes.",1.0,"Integration Test Core: Improved GraphQL Port Management - In the current implementation of the integration test framework, whenever we need to query the GraphQL port of a daemon we have deployed, we begin a port-forwarding that pod's GraphQL port to a local port on the host machine running the integration test executive. This is done in an on-demand fashion, where we begin port forwarding as soon as we need to send the first GraphQL request to each node. https://github.com/MinaProtocol/mina/blob/develop/src/lib/integration_test_cloud_engine/kubernetes_network.ml#L78 As we begin to scale the integration test framework to launch larger networks, and begin to utilize the GraphQL queries more and more, we need to ensure that this system for managing GraphQL ports is responsive enough to not cause large delays in the testing when we need to broadcast a GraphQL query out to a somewhat large set of nodes running on the network. In my personal testing (and this should probably be confirmed by someone else as well), running `kubectl port-forward ...` pauses and takes a little bit of time (seconds) to startup. Doing this on-demand for N nodes all at once would cause a somewhat lengthy delay in the integration test framework. First, we should measure the performance of this system (eg, run a network with 50 nodes and see how long it takes to setup each of those port forwarding commands). From that, we should have a meeting to discuss how to approach this problem. There are likely a few ways to alleviate this, including setting up port-forwarding for all nodes at the start of the test rather than on-demand, or using kubectl to tunnel into pods when sending queries rather than exposing ports out of those pods to our local machine. ## Post Meeting @0x0I, Helena, and myself met to discuss how we want to manage this moving forward. We landed on the following solution: Setup a single, static ingress for all the integration tests. Deploy without setting explicit GraphQL ports, allowing kubernetes to dynamically assign random, free ports to each service. Once the deploy is done, before we start the test from the test_executive, query all of the ports that were assigned. Setup path-based routing in the ingress to each of these services and their respective GraphQL ports. 
Use this ingress as the single entrypoint to talk to all the GraphQL instances running on our nodes.",1,integration test core improved graphql port management in the current implementation of the integration test framework whenever we need to query the graphql port of a daemon we have deployed we begin a port forwarding that pod s graphql port to a local port on the host machine running the integration test executive this is done in an on demand fashion where we begin port forwarding as soon as we need to send the first graphql request to each node as we begin to scale the integration test framework to launch larger networks and begin to utilize the graphql queries more and more we need to ensure that this system for managing graphql ports is responsive enough to not cause large delays in the testing when we need to broadcast a graphql query out to a somewhat large set of nodes running on the network in my personal testing and this should probably be confirmed by someone else as well running kubectl port forward pauses and takes a little bit of time seconds to startup doing this on demand for n nodes all at once would cause a somewhat lengthy delay in the integration test framework first we should measure the performance of this system eg run a network with nodes and see how long it takes to setup each of those port forwarding commands from that we should have a meeting to discuss how to approach this problem there are likely a few ways to alleviate this including setting up port forwarding for all nodes at the start of the test rather than on demand or using kubectl to tunnel into pods when sending queries rather than exposing ports out of those pods to our local machine post meeting helena and myself met to discuss how we want to manage this moving forward we landed on the following solution setup a single static ingress for all the integration tests deploy without setting explicit graphql ports allowing kubernetes to dynamically assign random free ports to each service once the deploy is done before we start the test from the test executive query all of the ports that were assigned setup path based routing in the ingress to each of these services and their respective graphql ports use this ingress as the single entrypoint to talk to all the graphql instances running on our nodes ,1 107565,4310914557.0,IssuesEvent,2016-07-21 20:49:28,Toolwatchapp/tw-mobile,https://api.github.com/repos/Toolwatchapp/tw-mobile,closed,Wording typo fix ,effort: 1 (easy) priority: 3 (nice to have) type:enhancement,"Doesn't seem I could find that one in the i18n file :( Please change deleting a watch confirm box ""Watch suppression"" to ""Delete watch"". Thanks",1.0,"Wording typo fix - Doesn't seem I could find that one in the i18n file :( Please change deleting a watch confirm box ""Watch suppression"" to ""Delete watch"". Thanks",0,wording typo fix doesn t seem i could find that one in the file please change deleting a watch confirm box watch suppression to delete watch thanks,0 67777,9099937488.0,IssuesEvent,2019-02-20 06:50:47,poliastro/poliastro,https://api.github.com/repos/poliastro/poliastro,closed,Documentation CSS messed up by Jupyter stuff,bug documentation upstream,"Compare: https://docs.poliastro.space/en/latest/ with: https://docs.poliastro.space/en/stable/ And the culprit seems to be some inline CSS. 
It can be confirmed by adding this line to `/etc/hosts/`: ``` 127.0.0.1 unpkg.com ```",1.0,"Documentation CSS messed up by Jupyter stuff - Compare: https://docs.poliastro.space/en/latest/ with: https://docs.poliastro.space/en/stable/ And the culprit seems to be some inline CSS. It can be confirmed by adding this line to `/etc/hosts/`: ``` 127.0.0.1 unpkg.com ```",0,documentation css messed up by jupyter stuff compare with and the culprit seems to be some inline css it can be confirmed by adding this line to etc hosts unpkg com ,0 270680,20605338247.0,IssuesEvent,2022-03-06 22:05:44,bounswe/bounswe2022group5,https://api.github.com/repos/bounswe/bounswe2022group5,closed,Creating wiki page for the research about favourite repos,Type: Documentation Type: Research,"#### Description: A wiki page for displaying the collective result of our research about favourite github repositories mentioned in Assignment1 needs to be created. The research results should include name of the repo, link to the repo and a description. #### To Do: * Create a wiki page for research results * Add a research result template as example",1.0,"Creating wiki page for the research about favourite repos - #### Description: A wiki page for displaying the collective result of our research about favourite github repositories mentioned in Assignment1 needs to be created. The research results should include name of the repo, link to the repo and a description. #### To Do: * Create a wiki page for research results * Add a research result template as example",0,creating wiki page for the research about favourite repos description a wiki page for displaying the collective result of our research about favourite github repositories mentioned in needs to be created the research results should include name of the repo link to the repo and a description to do create a wiki page for research results add a research result template as example,0 69930,15043648745.0,IssuesEvent,2021-02-03 01:10:51,yaeljacobs67/proxysql,https://api.github.com/repos/yaeljacobs67/proxysql,opened,CVE-2019-19645 (Medium) detected in wazuhv3.3.1,security vulnerability,"## CVE-2019-19645 - Medium Severity Vulnerability
Vulnerable Library - wazuhv3.3.1

Wazuh - The Open Source Security Platform

Library home page: https://github.com/wazuh/wazuh.git

Vulnerable Source Files (2)

proxysql/deps/sqlite3/sqlite-amalgamation-3190200.tar/sqlite-amalgamation-3190200/sqlite3.c proxysql/deps/sqlite3/sqlite-amalgamation-3190200.tar/sqlite-amalgamation-3190200/sqlite3.c

Vulnerability Details

alter.c in SQLite through 3.30.1 allows attackers to trigger infinite recursion via certain types of self-referential views in conjunction with ALTER TABLE statements.

Publish Date: 2019-12-09

URL: CVE-2019-19645

CVSS 3 Score Details (5.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-19645

Release Date: 2019-12-09

Fix Resolution: 3.31.0

",True,"CVE-2019-19645 (Medium) detected in wazuhv3.3.1 - ## CVE-2019-19645 - Medium Severity Vulnerability
Vulnerable Library - wazuhv3.3.1

Wazuh - The Open Source Security Platform

Library home page: https://github.com/wazuh/wazuh.git

Vulnerable Source Files (2)

proxysql/deps/sqlite3/sqlite-amalgamation-3190200.tar/sqlite-amalgamation-3190200/sqlite3.c proxysql/deps/sqlite3/sqlite-amalgamation-3190200.tar/sqlite-amalgamation-3190200/sqlite3.c

Vulnerability Details

alter.c in SQLite through 3.30.1 allows attackers to trigger infinite recursion via certain types of self-referential views in conjunction with ALTER TABLE statements.

Publish Date: 2019-12-09

URL: CVE-2019-19645

CVSS 3 Score Details (5.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-19645

Release Date: 2019-12-09

Fix Resolution: 3.31.0

",0,cve medium detected in cve medium severity vulnerability vulnerable library wazuh the open source security platform library home page a href vulnerable source files proxysql deps sqlite amalgamation tar sqlite amalgamation c proxysql deps sqlite amalgamation tar sqlite amalgamation c vulnerability details alter c in sqlite through allows attackers to trigger infinite recursion via certain types of self referential views in conjunction with alter table statements publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ,0 156991,13669084718.0,IssuesEvent,2020-09-29 00:54:02,UnBArqDsw/2020.1_G7_TCM,https://api.github.com/repos/UnBArqDsw/2020.1_G7_TCM,closed,Diagrama de colaboração/comunicação,documentation,Elaborar o diagrama de colaboração/comunicação para a aplicação.,1.0,Diagrama de colaboração/comunicação - Elaborar o diagrama de colaboração/comunicação para a aplicação.,0,diagrama de colaboração comunicação elaborar o diagrama de colaboração comunicação para a aplicação ,0 4236,15855685721.0,IssuesEvent,2021-04-08 00:28:33,aws/aws-cli,https://api.github.com/repos/aws/aws-cli,closed,completer doesn't work with file redirection in bash,autocomplete automation-exempt bug,"`aws ec2 describe-instances > ` doesn't let me choose a file. ",1.0,"completer doesn't work with file redirection in bash - `aws ec2 describe-instances > ` doesn't let me choose a file. ",1,completer doesn t work with file redirection in bash aws describe instances doesn t let me choose a file ,1 40688,12799618608.0,IssuesEvent,2020-07-02 15:40:34,TreyM-WSS/concord,https://api.github.com/repos/TreyM-WSS/concord,opened,CVE-2020-7608 (Medium) detected in yargs-parser-11.1.1.tgz,security vulnerability,"## CVE-2020-7608 - Medium Severity Vulnerability
Vulnerable Library - yargs-parser-11.1.1.tgz

the mighty option parser used by yargs

Library home page: https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz

Path to dependency file: /tmp/ws-scm/concord/console2/package.json

Path to vulnerable library: /tmp/ws-scm/concord/console2/node_modules/webpack-dev-server/node_modules/yargs-parser/package.json

Dependency Hierarchy: - react-scripts-3.4.1.tgz (Root Library) - webpack-dev-server-3.10.3.tgz - yargs-12.0.5.tgz - :x: **yargs-parser-11.1.1.tgz** (Vulnerable Library)

Found in HEAD commit: cfb756aae811651de93ac8a69c7191e48bb4960f

Vulnerability Details

yargs-parser could be tricked into adding or modifying properties of Object.prototype using a ""__proto__"" payload.

Publish Date: 2020-03-16

URL: CVE-2020-7608

CVSS 3 Score Details (5.3)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608

Release Date: 2020-03-16

Fix Resolution: v18.1.1;13.1.2;15.0.1

",True,"CVE-2020-7608 (Medium) detected in yargs-parser-11.1.1.tgz - ## CVE-2020-7608 - Medium Severity Vulnerability
Vulnerable Library - yargs-parser-11.1.1.tgz

the mighty option parser used by yargs

Library home page: https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz

Path to dependency file: /tmp/ws-scm/concord/console2/package.json

Path to vulnerable library: /tmp/ws-scm/concord/console2/node_modules/webpack-dev-server/node_modules/yargs-parser/package.json

Dependency Hierarchy: - react-scripts-3.4.1.tgz (Root Library) - webpack-dev-server-3.10.3.tgz - yargs-12.0.5.tgz - :x: **yargs-parser-11.1.1.tgz** (Vulnerable Library)

Found in HEAD commit: cfb756aae811651de93ac8a69c7191e48bb4960f

Vulnerability Details

yargs-parser could be tricked into adding or modifying properties of Object.prototype using a ""__proto__"" payload.

Publish Date: 2020-03-16

URL: CVE-2020-7608

CVSS 3 Score Details (5.3)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608

Release Date: 2020-03-16

Fix Resolution: v18.1.1;13.1.2;15.0.1

",0,cve medium detected in yargs parser tgz cve medium severity vulnerability vulnerable library yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file tmp ws scm concord package json path to vulnerable library tmp ws scm concord node modules webpack dev server node modules yargs parser package json dependency hierarchy react scripts tgz root library webpack dev server tgz yargs tgz x yargs parser tgz vulnerable library found in head commit a href vulnerability details yargs parser could be tricked into adding or modifying properties of object prototype using a proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails yargs parser could be tricked into adding or modifying properties of object prototype using a proto payload vulnerabilityurl ,0 6354,22843857719.0,IssuesEvent,2022-07-13 02:30:32,keycloak/keycloak-benchmark,https://api.github.com/repos/keycloak/keycloak-benchmark,closed,add a keycloak benchmark ci runner shell script to run simulations on a remote ci agent,automation,"### Description add a keycloak benchmark ci runner shell script to run simulations on a remote ci agent ### Discussion _No response_ ### Motivation To be able to scale the number of runs that can be executed using the existing framework ### Details This particular script would be designed with a specific Jenkins CI system in mind, however, similar approach can be adopted to any other CI systems around.",1.0,"add a keycloak benchmark ci runner shell script to run simulations on a remote ci agent - ### Description add a keycloak benchmark ci runner shell script to run simulations on a remote ci agent ### Discussion _No response_ ### Motivation To be able to scale the number of runs that can be executed using the existing framework ### Details This particular script would be designed with a specific Jenkins CI system in mind, however, similar approach can be adopted to any other CI systems around.",1,add a keycloak benchmark ci runner shell script to run simulations on a remote ci agent description add a keycloak benchmark ci runner shell script to run simulations on a remote ci agent discussion no response motivation to be able to scale the number of runs that can be executed using the existing framework details this particular script would be designed with a specific jenkins ci system in mind however similar approach can be adopted to any other ci systems around ,1 131641,5163549418.0,IssuesEvent,2017-01-17 07:17:32,VirtoCommerce/vc-platform,https://api.github.com/repos/VirtoCommerce/vc-platform,closed,Storefront fix bug when Contact.Id and User.Id can be different,bug high priority,"Need use User.MemberId to link security account with CRM contact ",1.0,"Storefront fix bug when Contact.Id and User.Id can be different - Need use User.MemberId to link security account with CRM contact ",0,storefront fix bug when contact id and user id can be different need use user memberid to link security account with crm contact ,0 3766,14531472321.0,IssuesEvent,2020-12-14 
20:50:41,BCDevOps/OpenShift4-RollOut,https://api.github.com/repos/BCDevOps/OpenShift4-RollOut,opened,OCP GOLD - Configure the new GOLD cluster,team/DXC tech/automation,"**Describe the issue** After bootstrapping all nodes is complete, final configuration will need to be applied before the cluster is in a working state. **Which Sprint Goal is this issue related to?** **Additional context** Involved playbook is found here: **Definition of done Checklist (where applicable)** - [ ] Run playbooks/config-api.yaml - confirm web console and oc still work. - [ ] Run playbooks/config-everything.yaml. Confirm overall cluster functionality.",1.0,"OCP GOLD - Configure the new GOLD cluster - **Describe the issue** After bootstrapping all nodes is complete, final configuration will need to be applied before the cluster is in a working state. **Which Sprint Goal is this issue related to?** **Additional context** Involved playbook is found here: **Definition of done Checklist (where applicable)** - [ ] Run playbooks/config-api.yaml - confirm web console and oc still work. - [ ] Run playbooks/config-everything.yaml. Confirm overall cluster functionality.",1,ocp gold configure the new gold cluster describe the issue after bootstrapping all nodes is complete final configuration will need to be applied before the cluster is in a working state which sprint goal is this issue related to additional context involved playbook is found here definition of done checklist where applicable run playbooks config api yaml confirm web console and oc still work run playbooks config everything yaml confirm overall cluster functionality ,1 4689,17243962693.0,IssuesEvent,2021-07-21 05:28:52,pc2ccs/pc2v9,https://api.github.com/repos/pc2ccs/pc2v9,opened,Update team numbers in pc2 from a CLICS event feed,automation enhancement,"**Is your feature request related to a problem?** More of an inefficiency **Feature Description**: Update team numbers based on team numbers in a CLICS event feed. The CMS Team Id (aka external team id) will be used to match the team account, then the team number can be updated/assignged. **Have you considered other ways to accomplish the same thing?** Yes, there is no workaround. **Do you have any specific suggestions for how your feature would be ***implemented*** in PC^2?** If so, Create a command line tool that will update an existing teams.tsv and update the team numbers. Another option is to add a UI feature that will read the event feed and update the team numbers. **Additional context**: Note that connection information can be read from CDP/config files or from the ContestInformation class.",1.0,"Update team numbers in pc2 from a CLICS event feed - **Is your feature request related to a problem?** More of an inefficiency **Feature Description**: Update team numbers based on team numbers in a CLICS event feed. The CMS Team Id (aka external team id) will be used to match the team account, then the team number can be updated/assignged. **Have you considered other ways to accomplish the same thing?** Yes, there is no workaround. **Do you have any specific suggestions for how your feature would be ***implemented*** in PC^2?** If so, Create a command line tool that will update an existing teams.tsv and update the team numbers. Another option is to add a UI feature that will read the event feed and update the team numbers. 
**Additional context**: Note that connection information can be read from CDP/config files or from the ContestInformation class.",1,update team numbers in from a clics event feed is your feature request related to a problem more of an inefficiency feature description update team numbers based on team numbers in a clics event feed the cms team id aka external team id will be used to match the team account then the team number can be updated assignged have you considered other ways to accomplish the same thing yes there is no workaround do you have any specific suggestions for how your feature would be implemented in pc if so create a command line tool that will update an existing teams tsv and update the team numbers another option is to add a ui feature that will read the event feed and update the team numbers additional context note that connection information can be read from cdp config files or from the contestinformation class ,1 9261,27818868531.0,IssuesEvent,2023-03-19 01:13:59,uceumice/remix-kawaii,https://api.github.com/repos/uceumice/remix-kawaii,opened,"Issue Title [actions] Automate publishing to **npm** registry...",actions automation registry," Right now every push to the registry requires manual hastle from my side of things. It would be really nice if the process of publishing and versioning would be release based. For example, on new release tag of form: `v*` a `publish.yaml` workflow would build all packages and publish them with a version of `v[tag]` to the npm registry. Dunno how big it is of a deal to implement, gonna look up on `remix-utils` + a monorepo organized approach for inspirations.",1.0,"Issue Title [actions] Automate publishing to **npm** registry... - Right now every push to the registry requires manual hastle from my side of things. It would be really nice if the process of publishing and versioning would be release based. For example, on new release tag of form: `v*` a `publish.yaml` workflow would build all packages and publish them with a version of `v[tag]` to the npm registry. Dunno how big it is of a deal to implement, gonna look up on `remix-utils` + a monorepo organized approach for inspirations.",1,issue title automate publishing to npm registry right now every push to the registry requires manual hastle from my side of things it would be really nice if the process of publishing and versioning would be release based for example on new release tag of form v a publish yaml workflow would build all packages and publish them with a version of v to the npm registry dunno how big it is of a deal to implement gonna look up on remix utils a monorepo organized approach for inspirations ,1 5432,19590473978.0,IssuesEvent,2022-01-05 12:23:03,Shopify/toxiproxy,https://api.github.com/repos/Shopify/toxiproxy,closed,Suggestion: please consider signing release tags,ideas automation,"Thank you for your work on Toxiproxy! In addition to signing commits, it would be very helpful if you would consider [signing release tags](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-tags), to facilitate easy commandline verification of releases with `git tag -v`. Currently: ``` $ git verify-commit c6c22ff9f2d40dd1c9db2ca7c4e7ba5162a42743 gpg: Signature made Sun Oct 17 12:22:39 2021 EDT gpg: using DSA key 93189009CE638E5BBFAF0DC0ACD0D4390D132705 gpg: Good signature from ""Michael Nikitochkin (miry) "" [unknown] gpg: WARNING: This key is not certified with a trusted signature! 
gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: 9318 9009 CE63 8E5B BFAF 0DC0 ACD0 D439 0D13 2705 $ git tag -v v2.2.0 error: v2.2.0: cannot verify a non-tag object of type commit. ``` This change would be nice for automated workflows, so that it's easy to grab the latest release tag from the Github API and verify the signature on the tag object before installing.",1.0,"Suggestion: please consider signing release tags - Thank you for your work on Toxiproxy! In addition to signing commits, it would be very helpful if you would consider [signing release tags](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-tags), to facilitate easy commandline verification of releases with `git tag -v`. Currently: ``` $ git verify-commit c6c22ff9f2d40dd1c9db2ca7c4e7ba5162a42743 gpg: Signature made Sun Oct 17 12:22:39 2021 EDT gpg: using DSA key 93189009CE638E5BBFAF0DC0ACD0D4390D132705 gpg: Good signature from ""Michael Nikitochkin (miry) "" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: 9318 9009 CE63 8E5B BFAF 0DC0 ACD0 D439 0D13 2705 $ git tag -v v2.2.0 error: v2.2.0: cannot verify a non-tag object of type commit. ``` This change would be nice for automated workflows, so that it's easy to grab the latest release tag from the Github API and verify the signature on the tag object before installing.",1,suggestion please consider signing release tags thank you for your work on toxiproxy in addition to signing commits it would be very helpful if you would consider to facilitate easy commandline verification of releases with git tag v currently git verify commit gpg signature made sun oct edt gpg using dsa key gpg good signature from michael nikitochkin miry gpg warning this key is not certified with a trusted signature gpg there is no indication that the signature belongs to the owner primary key fingerprint bfaf git tag v error cannot verify a non tag object of type commit this change would be nice for automated workflows so that it s easy to grab the latest release tag from the github api and verify the signature on the tag object before installing ,1 3255,13249939537.0,IssuesEvent,2020-08-19 21:47:17,ThinkingEngine-net/PickleTestSuite,https://api.github.com/repos/ThinkingEngine-net/PickleTestSuite,closed,Behat states a feature file may include tags at scenario level,Browser Automation bug," Behat states a feature file may include tags at scenario level and at feature level. When you run execute it only picks up tags at scenario level. In the following example, it picks up the @PL-234 tag, but not the @smoke, @setup, or @javascript tag set at the feature level: **@smoke @setup @javascript Feature: 00 Smoke test site setup Background: Given ""admin"" login with profile settings @PL-234 Scenario: PL-234 Check you are on the correct version of Totara When I go direct to ""/admin/index.php"" Then I am on the expected version number** ",1.0,"Behat states a feature file may include tags at scenario level - Behat states a feature file may include tags at scenario level and at feature level. When you run execute it only picks up tags at scenario level. 
In the following example, it picks up the @PL-234 tag, but not the @smoke, @setup, or @javascript tag set at the feature level: **@smoke @setup @javascript Feature: 00 Smoke test site setup Background: Given ""admin"" login with profile settings @PL-234 Scenario: PL-234 Check you are on the correct version of Totara When I go direct to ""/admin/index.php"" Then I am on the expected version number** ",1,behat states a feature file may include tags at scenario level behat states a feature file may include tags at scenario level and at feature level when you run execute it only picks up tags at scenario level in the following example it picks up the pl tag but not the smoke setup or javascript tag set at the feature level smoke setup javascript feature smoke test site setup background given admin login with profile settings pl scenario pl check you are on the correct version of totara when i go direct to admin index php then i am on the expected version number ,1 4393,16455857638.0,IssuesEvent,2021-05-21 12:28:51,mozilla-mobile/focus-ios,https://api.github.com/repos/mozilla-mobile/focus-ios,closed,Add a CODEOWNERS to require reviews of to bitrise.yml changes,eng:automation,"Let's create a `.github/CODEOWNERS` file to require that any change to `bitrise.yml` must be approved by a specific group of people. I would like to suggest that that group for now is just @isabelrios and @st3fan This goes together with the _Require review from Code Owners_ option that can be found under the _Branch Protection_ settings. (Currently enabled) This adds a good safeguard and makes sure no accidental changes to our CI configuration happen. https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/about-code-owners ",1.0,"Add a CODEOWNERS to require reviews of to bitrise.yml changes - Let's create a `.github/CODEOWNERS` file to require that any change to `bitrise.yml` must be approved by a specific group of people. I would like to suggest that that group for now is just @isabelrios and @st3fan This goes together with the _Require review from Code Owners_ option that can be found under the _Branch Protection_ settings. (Currently enabled) This adds a good safeguard and makes sure no accidental changes to our CI configuration happen. https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/about-code-owners ",1,add a codeowners to require reviews of to bitrise yml changes let s create a github codeowners file to require that any change to bitrise yml must be approved by a specific group of people i would like to suggest that that group for now is just isabelrios and this goes together with the require review from code owners option that can be found under the branch protection settings currently enabled this adds a good safeguard and makes sure no accidental changes to our ci configuration happen ,1 2010,11259394928.0,IssuesEvent,2020-01-13 08:13:06,wazuh/wazuh-qa,https://api.github.com/repos/wazuh/wazuh-qa,closed,FIM v2.0: Improve parameter generator function to be applied to every test,automation component/fim,"## Description For most of our test, we generate their configuration parameters through a function called `generate_params` (fim.py). We need to improve its functionality to be able to use it for every test and cover every possible case. At least, we should cover these: - [x] To be able to assign a different value to a same attribute and wildcard in a list for different modes. - [x] To be able to append new parameters to every existing configuration. 
- [x] To be able to generate as many configurations as wanted.",1.0,"FIM v2.0: Improve parameter generator function to be applied to every test - ## Description For most of our test, we generate their configuration parameters through a function called `generate_params` (fim.py). We need to improve its functionality to be able to use it for every test and cover every possible case. At least, we should cover these: - [x] To be able to assign a different value to a same attribute and wildcard in a list for different modes. - [x] To be able to append new parameters to every existing configuration. - [x] To be able to generate as many configurations as wanted.",1,fim improve parameter generator function to be applied to every test description for most of our test we generate their configuration parameters through a function called generate params fim py we need to improve its functionality to be able to use it for every test and cover every possible case at least we should cover these to be able to assign a different value to a same attribute and wildcard in a list for different modes to be able to append new parameters to every existing configuration to be able to generate as many configurations as wanted ,1 1836,10916537395.0,IssuesEvent,2019-11-21 13:33:47,zalando-incubator/kopf,https://api.github.com/repos/zalando-incubator/kopf,opened,Revise the e2e testing for pristine-clean environments,automation,"0.23 release got few unfortunate bugs (#251 #249) sneaked into master and the releases: for cluster-scoped resources in a namespaces operators, and for peering auto-detection. Despite 0.23 contained a lot of massive refactorings, and the failures were expected to happen to some extent, they were not noticed neither by the unit-tests, nor by the e2e test, nor by our internal usage of the operators with 0.23rcX versions. This, in turn, was caused by the tests setup: either the cases were absent (like the cluster-scoped resources in namespaced operators), or the environment was half-configured enough to cause false-positives (`zalando.org/v1` group existed due to `KopfExample` in the tests, thus passing the e2e tests for `KopfPeering` resources, but not in the clusters of the 1st-time users, where it crashed with 404). This highlights that the tests have become insufficient and the testing approach has to be revised to ensure better stability. A new setup should cover all the cases, and especially the quick-start scenario in a pristine clean environment without any assumptions. Related: #13 ",1.0,"Revise the e2e testing for pristine-clean environments - 0.23 release got few unfortunate bugs (#251 #249) sneaked into master and the releases: for cluster-scoped resources in a namespaces operators, and for peering auto-detection. Despite 0.23 contained a lot of massive refactorings, and the failures were expected to happen to some extent, they were not noticed neither by the unit-tests, nor by the e2e test, nor by our internal usage of the operators with 0.23rcX versions. This, in turn, was caused by the tests setup: either the cases were absent (like the cluster-scoped resources in namespaced operators), or the environment was half-configured enough to cause false-positives (`zalando.org/v1` group existed due to `KopfExample` in the tests, thus passing the e2e tests for `KopfPeering` resources, but not in the clusters of the 1st-time users, where it crashed with 404). This highlights that the tests have become insufficient and the testing approach has to be revised to ensure better stability. 
A new setup should cover all the cases, and especially the quick-start scenario in a pristine clean environment without any assumptions. Related: #13 ",1,revise the testing for pristine clean environments release got few unfortunate bugs sneaked into master and the releases for cluster scoped resources in a namespaces operators and for peering auto detection despite contained a lot of massive refactorings and the failures were expected to happen to some extent they were not noticed neither by the unit tests nor by the test nor by our internal usage of the operators with versions this in turn was caused by the tests setup either the cases were absent like the cluster scoped resources in namespaced operators or the environment was half configured enough to cause false positives zalando org group existed due to kopfexample in the tests thus passing the tests for kopfpeering resources but not in the clusters of the time users where it crashed with this highlights that the tests have become insufficient and the testing approach has to be revised to ensure better stability a new setup should cover all the cases and especially the quick start scenario in a pristine clean environment without any assumptions related ,1 13080,3105749951.0,IssuesEvent,2015-08-31 22:39:36,rackerlabs/encore-ui,https://api.github.com/repos/rackerlabs/encore-ui,closed,Add feedback to Collapsible Table Filter pattern in styleguide,design documentation effort:medium priority:soon,"In relation to #915, the Collapsible Table Filter pattern needs some sort of feedback or notice that a data set is filtered when collapsed. *Table is filtered by ORD, but there's no indication of applied filters when collapsed.* ![screen shot 2015-04-30 at 11 06 13 am](https://cloud.githubusercontent.com/assets/545605/7421377/8c347a50-ef49-11e4-8a2d-a3d0d5dad233.png) ",1.0,"Add feedback to Collapsible Table Filter pattern in styleguide - In relation to #915, the Collapsible Table Filter pattern needs some sort of feedback or notice that a data set is filtered when collapsed. *Table is filtered by ORD, but there's no indication of applied filters when collapsed.* ![screen shot 2015-04-30 at 11 06 13 am](https://cloud.githubusercontent.com/assets/545605/7421377/8c347a50-ef49-11e4-8a2d-a3d0d5dad233.png) ",0,add feedback to collapsible table filter pattern in styleguide in relation to the collapsible table filter pattern needs some sort of feedback or notice that a data set is filtered when collapsed table is filtered by ord but there s no indication of applied filters when collapsed ,0 275463,8576026245.0,IssuesEvent,2018-11-12 19:04:31,mozilla/addons-server,https://api.github.com/repos/mozilla/addons-server,closed,Addons detail contains empty author list,component: api priority: p3 triaged type: papercut,"See https://addons.mozilla.org/api/v3/addons/addon/eyes-in-the-clouds/ Would be good to understand is this is an expected state or if there's an underlying issue to allow the author list to end up empty. See also https://github.com/mozilla/addons-frontend/issues/2073",1.0,"Addons detail contains empty author list - See https://addons.mozilla.org/api/v3/addons/addon/eyes-in-the-clouds/ Would be good to understand is this is an expected state or if there's an underlying issue to allow the author list to end up empty. 
See also https://github.com/mozilla/addons-frontend/issues/2073",0,addons detail contains empty author list see would be good to understand is this is an expected state or if there s an underlying issue to allow the author list to end up empty see also ,0 77928,15569904819.0,IssuesEvent,2021-03-17 01:15:55,Killy85/MachineLearningExercises,https://api.github.com/repos/Killy85/MachineLearningExercises,opened,"CVE-2021-27923 (High) detected in Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl, Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl",security vulnerability,"## CVE-2021-27923 - High Severity Vulnerability
Vulnerable Libraries - Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl, Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl

Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl

Python Imaging Library (Fork)

Library home page: https://files.pythonhosted.org/packages/0d/f3/421598450cb9503f4565d936860763b5af413a61009d87a5ab1e34139672/Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl

Dependency Hierarchy: - :x: **Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)

Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl

Python Imaging Library (Fork)

Library home page: https://files.pythonhosted.org/packages/b6/4b/5adc1109908266554fb978154c797c7d71aba43dd15508d8c1565648f6bc/Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl

Dependency Hierarchy: - :x: **Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)

Vulnerability Details

Pillow before 8.1.1 allows attackers to cause a denial of service (memory consumption) because the reported size of a contained image is not properly checked for an ICO container, and thus an attempted memory allocation can be very large.

Publish Date: 2021-03-03

URL: CVE-2021-27923

CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High
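The 7.5 above follows from the standard CVSS v3.1 base-score formula applied to this vector (AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H). A worked sketch using the metric weights from the FIRST specification, scope-unchanged case only; the spec's exact Roundup floating-point handling is approximated with math.ceil:

```python
import math

# CVSS v3.1 metric weights (scope unchanged) from the FIRST specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
PR = {"N": 0.85, "L": 0.62, "H": 0.27}  # scope-unchanged values
UI = {"N": 0.85, "R": 0.62}
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}

def roundup(x):
    # CVSS "round up to one decimal place" helper (simplified).
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H -> 7.5, matching the report above.
print(base_score("N", "L", "N", "N", "N", "N", "H"))
```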

For more information on CVSS3 Scores, click here.

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2021-27923 (High) detected in Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl, Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2021-27923 - High Severity Vulnerability
Vulnerable Libraries - Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl, Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl

Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl

Python Imaging Library (Fork)

Library home page: https://files.pythonhosted.org/packages/0d/f3/421598450cb9503f4565d936860763b5af413a61009d87a5ab1e34139672/Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl

Dependency Hierarchy: - :x: **Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)

Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl

Python Imaging Library (Fork)

Library home page: https://files.pythonhosted.org/packages/b6/4b/5adc1109908266554fb978154c797c7d71aba43dd15508d8c1565648f6bc/Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl

Dependency Hierarchy: - :x: **Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)

Vulnerability Details

Pillow before 8.1.1 allows attackers to cause a denial of service (memory consumption) because the reported size of a contained image is not properly checked for an ICO container, and thus an attempted memory allocation can be very large.

Publish Date: 2021-03-03

URL: CVE-2021-27923

CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in pillow whl pillow whl cve high severity vulnerability vulnerable libraries pillow whl pillow whl pillow whl python imaging library fork library home page a href dependency hierarchy x pillow whl vulnerable library pillow whl python imaging library fork library home page a href dependency hierarchy x pillow whl vulnerable library vulnerability details pillow before allows attackers to cause a denial of service memory consumption because the reported size of a contained image is not properly checked for an ico container and thus an attempted memory allocation can be very large publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with whitesource ,0 3615,14146824825.0,IssuesEvent,2020-11-10 19:49:30,domoticafacilconjota/capitulos,https://api.github.com/repos/domoticafacilconjota/capitulos,opened,[AtoNodeRED] Mensaje a la APP o a Telegram de que Home Assistant ha iniciado,Automation a Node RED,"**Código de la automatización** ``` - id: '1604933637567' alias: HA iniciado description: '' trigger: - platform: homeassistant event: start condition: [] action: - service: notify.mobile_app_redmi_note_7 data: title: HA init message: Home Assistant ha comenzado una nueva sesión a las {{ as_timestamp(now())| timestamp_local}}. mode: single ``` **Explicación de lo que hace actualmente la automatización** Esta automatización envía una notificación a la APP de HA con el fecha y la hora de cuando a iniciado HA y funciona perfectamente. **Notas del autor** En node-red hHe creado un flow con el nodo ""event:all"" y en el ""event type"" he colocado ""homeassistant_start"" pero no me da salida al iniciar HA.",1.0,"[AtoNodeRED] Mensaje a la APP o a Telegram de que Home Assistant ha iniciado - **Código de la automatización** ``` - id: '1604933637567' alias: HA iniciado description: '' trigger: - platform: homeassistant event: start condition: [] action: - service: notify.mobile_app_redmi_note_7 data: title: HA init message: Home Assistant ha comenzado una nueva sesión a las {{ as_timestamp(now())| timestamp_local}}. mode: single ``` **Explicación de lo que hace actualmente la automatización** Esta automatización envía una notificación a la APP de HA con el fecha y la hora de cuando a iniciado HA y funciona perfectamente. 
**Notas del autor** En node-red he creado un flow con el nodo ""event:all"" y en el ""event type"" he colocado ""homeassistant_start"" pero no me da salida al iniciar HA.",1, mensaje a la app o a telegram de que home assistant ha iniciado código de la automatización id alias ha iniciado description trigger platform homeassistant event start condition action service notify mobile app redmi note data title ha init message home assistant ha comenzado una nueva sesión a las as timestamp now timestamp local mode single explicación de lo que hace actualmente la automatización esta automatización envía una notificación a la app de ha con el fecha y la hora de cuando a iniciado ha y funciona perfectamente notas del autor en node red he creado un flow con el nodo event all y en el event type he colocado homeassistant start pero no me da salida al iniciar ha ,1 485254,13963262854.0,IssuesEvent,2020-10-25 13:25:50,MSFS-Mega-Pack/MSFS2020-livery-manager,https://api.github.com/repos/MSFS-Mega-Pack/MSFS2020-livery-manager,closed,"[BUG] No scrollbar in ""Available Liveries""",area: livery installation bug priority: MEDIUM type: ui type: ux,"## Description The tab Available Liveries does not show a scrollbar to scroll through the menu without using the scrollwheel on a mouse. ## To reproduce 1. Click on Available Liveries ## Environment **Manager version:** 0.0.2 ## Screenshots or videos
Click to expand ![image](https://user-images.githubusercontent.com/16933892/97084388-e874ea80-1616-11eb-879a-3e3c3c57caed.png)
",1.0,"[BUG] No scrollbar in ""Available Liveries"" - ## Description The tab Available Liveries does not show a scrollbar to scroll through the menu without using the scrollwheel on a mouse. ## To reproduce 1. Click on Available Liveries ## Environment **Manager version:** 0.0.2 ## Screenshots or videos
Click to expand ![image](https://user-images.githubusercontent.com/16933892/97084388-e874ea80-1616-11eb-879a-3e3c3c57caed.png)
",0, no scrollbar in available liveries description the tab available liveries does not show a scrollbar to scroll through the menu without using the scrollwheel on a mouse to reproduce click on available liveries environment manager version screenshots or videos click to expand ,0 74734,25285500539.0,IssuesEvent,2022-11-16 18:56:38,primefaces/primefaces,https://api.github.com/repos/primefaces/primefaces,opened,TriStateCheckbox: Item label not aligned with checkbox,:lady_beetle: defect :bangbang: needs-triage,"### Describe the bug It was correct on Primefaces 8 (tested on older version of primefaces-test), but it isn't from Primefaces 10 to 12 (latest primefaces-test). It's just 1 line to reproduce, so I didn't upload the reproducer, but I can if needed. Code: `` How it's displayed: ![image](https://user-images.githubusercontent.com/832674/202267618-d40c1f27-8313-4a49-af2b-ebf06588a4ed.png) What`s expected: ![image](https://user-images.githubusercontent.com/832674/202267946-5231510d-2326-4ff2-ba93-15724e54cc2c.png) ### Reproducer Code: `` ### Expected behavior ![image](https://user-images.githubusercontent.com/832674/202267946-5231510d-2326-4ff2-ba93-15724e54cc2c.png) ### PrimeFaces edition Community ### PrimeFaces version 10-12 ### Theme Any ### JSF implementation Mojarra ### JSF version 2.2.x ### Java version 11 ### Browser(s) Any",1.0,"TriStateCheckbox: Item label not aligned with checkbox - ### Describe the bug It was correct on Primefaces 8 (tested on older version of primefaces-test), but it isn't from Primefaces 10 to 12 (latest primefaces-test). It's just 1 line to reproduce, so I didn't upload the reproducer, but I can if needed. Code: `` How it's displayed: ![image](https://user-images.githubusercontent.com/832674/202267618-d40c1f27-8313-4a49-af2b-ebf06588a4ed.png) What`s expected: ![image](https://user-images.githubusercontent.com/832674/202267946-5231510d-2326-4ff2-ba93-15724e54cc2c.png) ### Reproducer Code: `` ### Expected behavior ![image](https://user-images.githubusercontent.com/832674/202267946-5231510d-2326-4ff2-ba93-15724e54cc2c.png) ### PrimeFaces edition Community ### PrimeFaces version 10-12 ### Theme Any ### JSF implementation Mojarra ### JSF version 2.2.x ### Java version 11 ### Browser(s) Any",0,tristatecheckbox item label not aligned with checkbox describe the bug it was correct on primefaces tested on older version of primefaces test but it isn t from primefaces to latest primefaces test it s just line to reproduce so i didn t upload the reproducer but i can if needed code how it s displayed what s expected reproducer code expected behavior primefaces edition community primefaces version theme any jsf implementation mojarra jsf version x java version browser s any,0 26239,26579960216.0,IssuesEvent,2023-01-22 10:20:02,godotengine/godot,https://api.github.com/repos/godotengine/godot,closed,Can't edit TileMap or TileSet after opening its shader,bug topic:editor confirmed usability topic:2d,"**Godot version:** 3.1 **OS/device including version:** Windows 10 Pro 64-bit **Issue description:** Once you open a Shader on a TileMap, you can no longer modify the tiles. This happens because the Shader panel is shown instead of the TileMap panel. This persists even if you close and reopen Godot. It took me a while to figure out how to resume editing the TileMap, either by: A. Remove the Shader from the TileMap; or B. Collapse the Shader in the Inspector the navigate away and then back to editing the TileMap. **Steps to reproduce:** 1. Create a TileMap 2. 
Add a ShaderMaterial to the TileMap 3. Add a Shader to the ShaderMaterial 4. Open the Shader **Minimal reproduction project:** [TileMapShaderBug.zip](https://github.com/godotengine/godot/files/3582276/TileMapShaderBug.zip) ",True,"Can't edit TileMap or TileSet after opening its shader - **Godot version:** 3.1 **OS/device including version:** Windows 10 Pro 64-bit **Issue description:** Once you open a Shader on a TileMap, you can no longer modify the tiles. This happens because the Shader panel is shown instead of the TileMap panel. This persists even if you close and reopen Godot. It took me a while to figure out how to resume editing the TileMap, either by: A. Remove the Shader from the TileMap; or B. Collapse the Shader in the Inspector the navigate away and then back to editing the TileMap. **Steps to reproduce:** 1. Create a TileMap 2. Add a ShaderMaterial to the TileMap 3. Add a Shader to the ShaderMaterial 4. Open the Shader **Minimal reproduction project:** [TileMapShaderBug.zip](https://github.com/godotengine/godot/files/3582276/TileMapShaderBug.zip) ",0,can t edit tilemap or tileset after opening its shader godot version os device including version windows pro bit issue description once you open a shader on a tilemap you can no longer modify the tiles this happens because the shader panel is shown instead of the tilemap panel this persists even if you close and reopen godot it took me a while to figure out how to resume editing the tilemap either by a remove the shader from the tilemap or b collapse the shader in the inspector the navigate away and then back to editing the tilemap steps to reproduce create a tilemap add a shadermaterial to the tilemap add a shader to the shadermaterial open the shader minimal reproduction project ,0 21171,3466938659.0,IssuesEvent,2015-12-22 08:26:42,Ryzhehvost/keyla,https://api.github.com/repos/Ryzhehvost/keyla,closed,Tray icon not correct,auto-migrated Can't reproduce duplicate Priority-Medium Type-Defect,"``` What steps will reproduce the problem? 1. Open Windows Explorer or rename file on the desktop or use Search option in the Start menu 2. 3. What is the expected output? What do you see instead? The layout is changed when pressing (Ctrl+Shift on my PC) but the flag is not correct in the keyla tray icon. It does not catch the change. Even pressing shortcut defined in keyla does not change the icon (but the key layout is changed) What version of the product are you using? On what operating system? 0.1.9.0. On Win7 64 Please provide any additional information below. Would be nice to be able to switch to the next keyboard by clicking once on the icon in the tray ``` Original issue reported on code.google.com by `okt...@gmail.com` on 21 Apr 2013 at 4:49",1.0,"Tray icon not correct - ``` What steps will reproduce the problem? 1. Open Windows Explorer or rename file on the desktop or use Search option in the Start menu 2. 3. What is the expected output? What do you see instead? The layout is changed when pressing (Ctrl+Shift on my PC) but the flag is not correct in the keyla tray icon. It does not catch the change. Even pressing shortcut defined in keyla does not change the icon (but the key layout is changed) What version of the product are you using? On what operating system? 0.1.9.0. On Win7 64 Please provide any additional information below. 
Would be nice to be able to switch to the next keyboard by clicking once on the icon in the tray ``` Original issue reported on code.google.com by `okt...@gmail.com` on 21 Apr 2013 at 4:49",0,tray icon not correct what steps will reproduce the problem open windows explorer or rename file on the desktop or use search option in the start menu what is the expected output what do you see instead the layout is changed when pressing ctrl shift on my pc but the flag is not correct in the keyla tray icon it does not catch the change even pressing shortcut defined in keyla does not change the icon but the key layout is changed what version of the product are you using on what operating system on please provide any additional information below would be nice to be able to switch to the next keyboard by clicking once on the icon in the tray original issue reported on code google com by okt gmail com on apr at ,0 35144,6417293855.0,IssuesEvent,2017-08-08 16:26:02,F5Networks/f5-openstack-lbaasv1,https://api.github.com/repos/F5Networks/f5-openstack-lbaasv1,closed,Version numbers don't match in installation instructions (index.rst),bug documentation P4 S5,"#### OpenStack Release Liberty #### Description On http://f5-openstack-lbaasv1.readthedocs.io/en/liberty/: the version numbers shown in the example under quick start don't correspond to the release that provides support for liberty. ",1.0,"Version numbers don't match in installation instructions (index.rst) - #### OpenStack Release Liberty #### Description On http://f5-openstack-lbaasv1.readthedocs.io/en/liberty/: the version numbers shown in the example under quick start don't correspond to the release that provides support for liberty. ",0,version numbers don t match in installation instructions index rst openstack release liberty description on the version numbers shown in the example under quick start don t correspond to the release that provides support for liberty ,0 6750,6584198368.0,IssuesEvent,2017-09-13 09:18:43,dweitz43/nhl,https://api.github.com/repos/dweitz43/nhl,opened,Consume MySportsFeeds Data into Server-side Database,backend db infrastructure js needs research,the data from the third-party datasource must be consumed by the server-side database #2. This requires more research to figure out the most elegant/efficient solution...,1.0,Consume MySportsFeeds Data into Server-side Database - the data from the third-party datasource must be consumed by the server-side database #2. This requires more research to figure out the most elegant/efficient solution...,0,consume mysportsfeeds data into server side database the data from the third party datasource must be consumed by the server side database this requires more research to figure out the most elegant efficient solution ,0 200909,22916021712.0,IssuesEvent,2022-07-17 01:10:47,ShaikUsaf/linux-4.19.72_CVE-2020-14356,https://api.github.com/repos/ShaikUsaf/linux-4.19.72_CVE-2020-14356,opened,CVE-2022-2380 (Medium) detected in linuxlinux-4.19.238,security vulnerability,"## CVE-2022-2380 - Medium Severity Vulnerability
Vulnerable Library - linuxlinux-4.19.238

The Linux Kernel

Library home page: https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux

Found in HEAD commit: 05c0743befb9102e2ecb9fbd0fa8eb43eb8bb5ec

Found in base branch: master

Vulnerable Source Files (2)

/drivers/video/fbdev/sm712fb.c /drivers/video/fbdev/sm712fb.c

Vulnerability Details

The Linux kernel was found vulnerable to an out-of-bounds memory access in the drivers/video/fbdev/sm712fb.c:smtcfb_read() function. The vulnerability could allow local attackers to crash the kernel.

Publish Date: 2022-07-13

URL: CVE-2022-2380

CVSS 3 Score Details (6.1)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://www.linuxkernelcves.com/cves/CVE-2022-2380

Release Date: 2022-07-13

Fix Resolution: v4.9.311,v4.14.276,v4.19.238,v5.4.189,v5.10.110,v5.15.33,v5.16.19,v5.17.2,v5.18
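Given the per-branch fix list above, a rough triage of a running machine is possible by matching its kernel's major.minor branch against the corresponding fixed patch level. A sketch under obvious caveats: distro kernels backport fixes, so a pure version-number check can report false positives.

```python
import platform
import re

# Fixed patch level per stable branch, from the advisory's Fix Resolution.
FIXED = {(4, 9): 311, (4, 14): 276, (4, 19): 238,
         (5, 4): 189, (5, 10): 110, (5, 15): 33, (5, 16): 19, (5, 17): 2}

def kernel_may_be_affected(release=None):
    release = release or platform.release()      # e.g. "4.19.72-generic"
    m = re.match(r"(\d+)\.(\d+)\.(\d+)", release)
    if not m:
        return None  # unrecognized version string
    major, minor, patch = map(int, m.groups())
    branch = (major, minor)
    if branch not in FIXED:
        # Branch not listed: v5.18+ ships the fix; treat older,
        # unlisted (end-of-life) branches as conservatively affected.
        return (major, minor) < (5, 18)
    return patch < FIXED[branch]

print(kernel_may_be_affected("4.19.72"))   # True: below 4.19.238
print(kernel_may_be_affected("4.19.238"))  # False: at the fixed level
```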

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-2380 (Medium) detected in linuxlinux-4.19.238 - ## CVE-2022-2380 - Medium Severity Vulnerability
Vulnerable Library - linuxlinux-4.19.238

The Linux Kernel

Library home page: https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux

Found in HEAD commit: 05c0743befb9102e2ecb9fbd0fa8eb43eb8bb5ec

Found in base branch: master

Vulnerable Source Files (2)

/drivers/video/fbdev/sm712fb.c /drivers/video/fbdev/sm712fb.c

Vulnerability Details

The Linux kernel was found vulnerable to an out-of-bounds memory access in the drivers/video/fbdev/sm712fb.c:smtcfb_read() function. The vulnerability could allow local attackers to crash the kernel.

Publish Date: 2022-07-13

URL: CVE-2022-2380

CVSS 3 Score Details (6.1)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://www.linuxkernelcves.com/cves/CVE-2022-2380

Release Date: 2022-07-13

Fix Resolution: v4.9.311,v4.14.276,v4.19.238,v5.4.189,v5.10.110,v5.15.33,v5.16.19,v5.17.2,v5.18

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files drivers video fbdev c drivers video fbdev c vulnerability details the linux kernel was found vulnerable out of bounds memory access in the drivers video fbdev c smtcfb read function the vulnerability could result in local attackers being able to crash the kernel publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0 9385,28139010587.0,IssuesEvent,2023-04-01 18:23:01,awslabs/aws-lambda-powertools-typescript,https://api.github.com/repos/awslabs/aws-lambda-powertools-typescript,closed,Maintenance: update Lerna to latest,area/automation type/internal status/confirmed,"### Summary The project currently uses Lerna version `4.0.0` which fairly old and can be updated to the latest version. ### Why is this needed? This version is over a year old and is behind two major versions and as such is not getting updates of any kind anymore. ### Which area does this relate to? Automation ### Solution Update `lerna` to its latest version and update the `make-release` workflow accordingly. ### Acknowledgment - [X] This request meets [Lambda Powertools Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets) - [ ] Should this be considered in other Lambda Powertools languages? i.e. [Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/), and [.NET](https://github.com/awslabs/aws-lambda-powertools-dotnet/) ### Future readers Please react with 👍 and your use case to help us understand customer demand.",1.0,"Maintenance: update Lerna to latest - ### Summary The project currently uses Lerna version `4.0.0` which fairly old and can be updated to the latest version. ### Why is this needed? This version is over a year old and is behind two major versions and as such is not getting updates of any kind anymore. ### Which area does this relate to? Automation ### Solution Update `lerna` to its latest version and update the `make-release` workflow accordingly. ### Acknowledgment - [X] This request meets [Lambda Powertools Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets) - [ ] Should this be considered in other Lambda Powertools languages? i.e. 
[Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/), and [.NET](https://github.com/awslabs/aws-lambda-powertools-dotnet/) ### Future readers Please react with 👍 and your use case to help us understand customer demand.",1,maintenance update lerna to latest summary the project currently uses lerna version which fairly old and can be updated to the latest version why is this needed this version is over a year old and is behind two major versions and as such is not getting updates of any kind anymore which area does this relate to automation solution update lerna to its latest version and update the make release workflow accordingly acknowledgment this request meets should this be considered in other lambda powertools languages i e and future readers please react with 👍 and your use case to help us understand customer demand ,1 11007,4128041026.0,IssuesEvent,2016-06-10 02:54:30,TEAMMATES/teammates,https://api.github.com/repos/TEAMMATES/teammates,closed,Re-organize FileHelper classes,a-CodeQuality m.Aspect,"There are two `FileHelper`s, one for production (reading input stream etc.) and one for non-production (reading files etc.), but they're not very well-organized right now. Also, there are some self-defined functions that can actually fit in either one of these classes.",1.0,"Re-organize FileHelper classes - There are two `FileHelper`s, one for production (reading input stream etc.) and one for non-production (reading files etc.), but they're not very well-organized right now. Also, there are some self-defined functions that can actually fit in either one of these classes.",0,re organize filehelper classes there are two filehelper s one for production reading input stream etc and one for non production reading files etc but they re not very well organized right now also there are some self defined functions that can actually fit in either one of these classes ,0 8846,27172324094.0,IssuesEvent,2023-02-17 20:40:33,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Onedrive Api search request always returns an empty collection,type:bug area:Search status:backlogged automation:Closed,"> Thank you for reporting an issue or suggesting an enhancement. We appreciate your feedback - to help the team to understand your needs, please complete the below template to ensure we have the necessary details to assist you. > > _Submission Guidelines:_ > - Questions and bugs are welcome, please let us know what's on your mind. > - If you are reporting an issue around any of the documents or articles, please provide clear reference(s) to the specific file(s) or URL('s). > - Remember to include sufficient details and context. > - If you have multiple issues, please submit them as separate issues so we can track resolution. > > _(DELETE THIS PARAGRAPH AFTER READING)_ > #### Category - [x] Question - [ ] Documentation issue - [ ] Bug > For the above list, an empty checkbox is [ ]. A checked checkbox is [x] with no space between the brackets. Use the `PREVIEW` tab at the top right to preview the rendering before submitting your issue. > > If you are planning to share a new feature request (enhancement / suggestion), please use the OneDrive Developer Platform UserVoice at http://aka.ms/od-dev-uservoice, or the SharePoint Developer Platform UserVoice at http://aka.ms/sp-dev-uservoice. 
> If you have a question about Azure Active Directory, outside of issues with the documentation provided in the OneDrive Developer Center, please ask it here: https://stackoverflow.com/questions/tagged/azure-active-directory > > _(DELETE THIS PARAGRAPH AFTER READING)_ > #### Expected or Desired Behavior > If you are reporting a bug, please describe the expected behavior. > > _(DELETE THIS PARAGRAPH AFTER READING)_ > #### Observed Behavior > If you are reporting a bug, please describe the observed behavior. > > Please also provide the following response headers corresponding to your request(s): > - Date (in UTC, please) > - request-id > - SPRequestGuid (for requests made to OneDrive for Business) > > _(DELETE THIS PARAGRAPH AFTER READING)_ > #### Steps to Reproduce > If you are reporting a bug, please describe the steps to reproduce the bug in sufficient detail for another person to be able to reproduce it. > > _(DELETE THIS PARAGRAPH AFTER READING)_ > Thank you. [ ]: http://aka.ms/onedrive-api-issues [x]: http://aka.ms/onedrive-api-issues",1.0,"Onedrive Api search request always returns an empty collection - > Thank you for reporting an issue or suggesting an enhancement. We appreciate your feedback - to help the team to understand your needs, please complete the below template to ensure we have the necessary details to assist you. > > _Submission Guidelines:_ > - Questions and bugs are welcome, please let us know what's on your mind. > - If you are reporting an issue around any of the documents or articles, please provide clear reference(s) to the specific file(s) or URL('s). > - Remember to include sufficient details and context. > - If you have multiple issues, please submit them as separate issues so we can track resolution. > > _(DELETE THIS PARAGRAPH AFTER READING)_ > #### Category - [x] Question - [ ] Documentation issue - [ ] Bug > For the above list, an empty checkbox is [ ]. A checked checkbox is [x] with no space between the brackets. Use the `PREVIEW` tab at the top right to preview the rendering before submitting your issue. > > If you are planning to share a new feature request (enhancement / suggestion), please use the OneDrive Developer Platform UserVoice at http://aka.ms/od-dev-uservoice, or the SharePoint Developer Platform UserVoice at http://aka.ms/sp-dev-uservoice. > If you have a question about Azure Active Directory, outside of issues with the documentation provided in the OneDrive Developer Center, please ask it here: https://stackoverflow.com/questions/tagged/azure-active-directory > > _(DELETE THIS PARAGRAPH AFTER READING)_ > #### Expected or Desired Behavior > If you are reporting a bug, please describe the expected behavior. > > _(DELETE THIS PARAGRAPH AFTER READING)_ > #### Observed Behavior > If you are reporting a bug, please describe the observed behavior. > > Please also provide the following response headers corresponding to your request(s): > - Date (in UTC, please) > - request-id > - SPRequestGuid (for requests made to OneDrive for Business) > > _(DELETE THIS PARAGRAPH AFTER READING)_ > #### Steps to Reproduce > If you are reporting a bug, please describe the steps to reproduce the bug in sufficient detail for another person to be able to reproduce it. > > _(DELETE THIS PARAGRAPH AFTER READING)_ > Thank you. 
[ ]: http://aka.ms/onedrive-api-issues [x]: http://aka.ms/onedrive-api-issues",1,onedrive api search request always returns an empty collection thank you for reporting an issue or suggesting an enhancement we appreciate your feedback to help the team to understand your needs please complete the below template to ensure we have the necessary details to assist you submission guidelines questions and bugs are welcome please let us know what s on your mind if you are reporting an issue around any of the documents or articles please provide clear reference s to the specific file s or url s remember to include sufficient details and context if you have multiple issues please submit them as separate issues so we can track resolution delete this paragraph after reading category question documentation issue bug for the above list an empty checkbox is a checked checkbox is with no space between the brackets use the preview tab at the top right to preview the rendering before submitting your issue if you are planning to share a new feature request enhancement suggestion please use the onedrive developer platform uservoice at or the sharepoint developer platform uservoice at if you have a question about azure active directory outside of issues with the documentation provided in the onedrive developer center please ask it here delete this paragraph after reading expected or desired behavior if you are reporting a bug please describe the expected behavior delete this paragraph after reading observed behavior if you are reporting a bug please describe the observed behavior please also provide the following response headers corresponding to your request s date in utc please request id sprequestguid for requests made to onedrive for business delete this paragraph after reading steps to reproduce if you are reporting a bug please describe the steps to reproduce the bug in sufficient detail for another person to be able to reproduce it delete this paragraph after reading thank you ,1 7330,24648554123.0,IssuesEvent,2022-10-17 16:38:08,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,Cypress Test - Consumer detail - edit labels,automation,"1. Manage/Edit labels spec 1.1 authenticates Mark (Access-Manager) 1.2 Navigate to Consumer page and filter the product 1.3 Click on the first consumer 1.4 Verify that labels can be deleted 1.5 Verify that labels can be updated 1.6 Verify that labels can be added ",1.0,"Cypress Test - Consumer detail - edit labels - 1. Manage/Edit labels spec 1.1 authenticates Mark (Access-Manager) 1.2 Navigate to Consumer page and filter the product 1.3 Click on the first consumer 1.4 Verify that labels can be deleted 1.5 Verify that labels can be updated 1.6 Verify that labels can be added ",1,cypress test consumer detail edit labels manage edit labels spec authenticates mark access manager navigate to consumer page and filter the product click on the first consumer verify that labels can be deleted verify that labels can be updated verify that labels can be added ,1 1628,10471758732.0,IssuesEvent,2019-09-23 08:39:30,big-neon/bn-web,https://api.github.com/repos/big-neon/bn-web,opened,Automation: Big Neon : Test 27: Order Management: Order Page navigation,Automation,"**Pre-conditions:** - User should have admin access - User should be logged into Big Neon - User should have created an event - User should have purchased tickets as a consumer for the event **Steps:** 1. 
Within the ""Events"" dashboard on Box Office, select the 3 dots at the top right corner of the event for which tickets have been purchased. 2. From the drop down list that appears, select the option; ""Dashboard"" 3. User should be redirected to the dashboard view. 4. Within the dashboard, select the option ""Orders"" 5. Drop down appears 6. Select ""Manage Orders"" 7. User should be redirected to the Orders page 8. user should see a list of all orders/purchases displayed for the selected event 9. ",1.0,"Automation: Big Neon : Test 27: Order Management: Order Page navigation - **Pre-conditions:** - User should have admin access - User should be logged into Big Neon - User should have created an event - User should have purchased tickets as a consumer for the event **Steps:** 1. Within the ""Events"" dashboard on Box Office, select the 3 dots at the top right corner of the event for which tickets have been purchased. 2. From the drop down list that appears, select the option; ""Dashboard"" 3. User should be redirected to the dashboard view. 4. Within the dashboard, select the option ""Orders"" 5. Drop down appears 6. Select ""Manage Orders"" 7. User should be redirected to the Orders page 8. user should see a list of all orders/purchases displayed for the selected event 9. ",1,automation big neon test order management order page navigation pre conditions user should have admin access user should be logged into big neon user should have created an event user should have purchased tickets as a consumer for the event steps within the events dashboard on box office select the dots at the top right corner of the event for which tickets have been purchased from the drop down list that appears select the option dashboard user should be redirected to the dashboard view within the dashboard select the option orders drop down appears select manage orders user should be redirected to the orders page user should see a list of all orders purchases displayed for the selected event ,1 7449,24900295725.0,IssuesEvent,2022-10-28 20:05:11,red-hat-storage/ocs-ci,https://api.github.com/repos/red-hat-storage/ocs-ci,closed,Change the namespace selection to `openshift-storage` before searching/selecting OCS operator,bug Medium Priority ui_automation lifecycle/stale,"Failure is seen on Run ID 1643910120 Issue is in function after `logger.info(""Search OCS operator installed"")` line- ``` def verify_ocs_operator_tabs(self): """""" Verify OCS Operator Tabs """""" self.navigate_installed_operators_page() logger.info(""Search OCS operator installed"") self.do_send_keys( locator=self.validation_loc[""search_ocs_installed""], text=""OpenShift Container Storage"", ) ```",1.0,"Change the namespace selection to `openshift-storage` before searching/selecting OCS operator - Failure is seen on Run ID 1643910120 Issue is in function after `logger.info(""Search OCS operator installed"")` line- ``` def verify_ocs_operator_tabs(self): """""" Verify OCS Operator Tabs """""" self.navigate_installed_operators_page() logger.info(""Search OCS operator installed"") self.do_send_keys( locator=self.validation_loc[""search_ocs_installed""], text=""OpenShift Container Storage"", ) ```",1,change the namespace selection to openshift storage before searching selecting ocs operator failure is seen on run id issue is in function after logger info search ocs operator installed line def verify ocs operator tabs self verify ocs operator tabs self navigate installed operators page logger info search ocs operator installed self do send keys locator 
self validation loc text openshift container storage ,1 4510,16745951549.0,IssuesEvent,2021-06-11 15:31:35,mozilla-mobile/focus-ios,https://api.github.com/repos/mozilla-mobile/focus-ios,opened,"Refactor waitforExistence, waitforHittable, waitForEnable methods",eng:automation,"Let's refactor these methods to remove the 30 secs expectation and add a waitFor general method that can be used by all of them. Hopefully this may fix issue #1928 ",1.0,"Refactor waitforExistence, waitforHittable, waitForEnable methods - Let's refactor these methods to remove the 30 secs expectation and add a waitFor general method that can be used by all of them. Hopefully this may fix issue #1928 ",1,refactor waitforexistence waitforhittable waitforenable methods let s refactor these methods to remove the secs expectation and add a waitfor general method that can be used by all of them hopefully this may fix issue ,1 190782,15256111179.0,IssuesEvent,2021-02-20 18:48:17,AbelardoCuesta/git_flow_practice,https://api.github.com/repos/AbelardoCuesta/git_flow_practice,closed,Un commit que no sigue la convención de código o arreglo a realizar,documentation,"L El último commit tiene el siguiente mensaje: `Se añade codigo base` Este issue es solo un recordatorio de la convención de comentarios en los commits y puede ser cerrado.",1.0,"Un commit que no sigue la convención de código o arreglo a realizar - L El último commit tiene el siguiente mensaje: `Se añade codigo base` Este issue es solo un recordatorio de la convención de comentarios en los commits y puede ser cerrado.",0,un commit que no sigue la convención de código o arreglo a realizar l el último commit tiene el siguiente mensaje se añade codigo base este issue es solo un recordatorio de la convención de comentarios en los commits y puede ser cerrado ,0 7602,25246437968.0,IssuesEvent,2022-11-15 11:26:29,ita-social-projects/TeachUA,https://api.github.com/repos/ita-social-projects/TeachUA,closed,[Гуртки tab] Page content disappears after going to 54+ page,bug Frontend UI Priority: High Automation,"**Environment:** Windows 11, Google Chrome 106.0.5249.91 (Official Build) (64-bit) **Reproducible:** always. **Build found:** commit [0008581](https://github.com/ita-social-projects/TeachUA/commit/0008581a5a00a1ddc3514ab25f9f5745e166e26d) **Preconditions** 1. Go to the webpage: https://speak-ukrainian.org.ua/dev/ **Steps to reproduce** 1. Go to 'Гуртки' tab. 2. Scroll down to pagination. 3. Click on page number 58. **Actual result** All page content disappears. https://user-images.githubusercontent.com/82941067/193587440-c7493425-dfc0-49a9-9294-7e931a4dff03.mp4 **Expected result** 1. Club components that are present on page number 58 should appear. 2. '>' button should be disabled. ",1.0,"[Гуртки tab] Page content disappears after going to 54+ page - **Environment:** Windows 11, Google Chrome 106.0.5249.91 (Official Build) (64-bit) **Reproducible:** always. **Build found:** commit [0008581](https://github.com/ita-social-projects/TeachUA/commit/0008581a5a00a1ddc3514ab25f9f5745e166e26d) **Preconditions** 1. Go to the webpage: https://speak-ukrainian.org.ua/dev/ **Steps to reproduce** 1. Go to 'Гуртки' tab. 2. Scroll down to pagination. 3. Click on page number 58. **Actual result** All page content disappears. https://user-images.githubusercontent.com/82941067/193587440-c7493425-dfc0-49a9-9294-7e931a4dff03.mp4 **Expected result** 1. Club components that are present on page number 58 should appear. 2. '>' button should be disabled. 
",1, page content disappears after going to page environment windows google chrome official build bit reproducible always build found commit preconditions go to the webpage steps to reproduce go to гуртки tab scroll down to pagination click on page number actual result all page content disappears expected result club components that are present on page number should appear button should be disabled ,1 41673,10563560586.0,IssuesEvent,2019-10-04 21:20:07,department-of-veterans-affairs/va.gov-team,https://api.github.com/repos/department-of-veterans-affairs/va.gov-team,closed,[KEYBOARD]: Map - Focus is moved to the search results when users press arrow keys or +/-,508-defect-2 508/Accessibility facility locator frontend vsa vsa-global-ux,"## Issue Focus is being moved from the map to the search results when keyboard users press an arrow key to shift the map in a direction, or when users press plus or minus to zoom the map. This was noted as an SC 3.3.2 violation. Animated GIF attached below. ## Related Issues * https://app.zenhub.com/workspaces/vsp-5cedc9cce6e3335dc5a49fc4/issues/department-of-veterans-affairs/va.gov-team/491 * https://app.zenhub.com/workspaces/vsp-5cedc9cce6e3335dc5a49fc4/issues/department-of-veterans-affairs/va.gov-team/515 ## Audit Finding * Note 5, Defect 1 of 2 ## Acceptance Criteria * As a keyboard user, I would like the map to retain focus when I press an arrow key, or any other keyboard shortcut. The number of results should update as before, but the yellow focus halo should stay on the map. ## Environment * MacOS Mojave * Chrome latest * https://staging.va.gov/find-locations/ ## WCAG or Vendor Guidance (optional) * [On Input: Understanding SC 3.2.2](https://www.w3.org/TR/UNDERSTANDING-WCAG20/consistent-behavior-unpredictable-change.html) ## Screenshots or Trace Logs ![map-focus-change-smaller.gif](https://images.zenhubusercontent.com/5ac217b74b5806bc2bcd3fc8/5c21b5e3-c1dd-4f04-8b41-f4563c5ece76)",1.0,"[KEYBOARD]: Map - Focus is moved to the search results when users press arrow keys or +/- - ## Issue Focus is being moved from the map to the search results when keyboard users press an arrow key to shift the map in a direction, or when users press plus or minus to zoom the map. This was noted as an SC 3.3.2 violation. Animated GIF attached below. ## Related Issues * https://app.zenhub.com/workspaces/vsp-5cedc9cce6e3335dc5a49fc4/issues/department-of-veterans-affairs/va.gov-team/491 * https://app.zenhub.com/workspaces/vsp-5cedc9cce6e3335dc5a49fc4/issues/department-of-veterans-affairs/va.gov-team/515 ## Audit Finding * Note 5, Defect 1 of 2 ## Acceptance Criteria * As a keyboard user, I would like the map to retain focus when I press an arrow key, or any other keyboard shortcut. The number of results should update as before, but the yellow focus halo should stay on the map. 
## Environment * MacOS Mojave * Chrome latest * https://staging.va.gov/find-locations/ ## WCAG or Vendor Guidance (optional) * [On Input: Understanding SC 3.2.2](https://www.w3.org/TR/UNDERSTANDING-WCAG20/consistent-behavior-unpredictable-change.html) ## Screenshots or Trace Logs ![map-focus-change-smaller.gif](https://images.zenhubusercontent.com/5ac217b74b5806bc2bcd3fc8/5c21b5e3-c1dd-4f04-8b41-f4563c5ece76)",0, map focus is moved to the search results when users press arrow keys or issue focus is being moved from the map to the search results when keyboard users press an arrow key to shift the map in a direction or when users press plus or minus to zoom the map this was noted as an sc violation animated gif attached below related issues audit finding note defect of acceptance criteria as a keyboard user i would like the map to retain focus when i press an arrow key or any other keyboard shortcut the number of results should update as before but the yellow focus halo should stay on the map environment macos mojave chrome latest wcag or vendor guidance optional screenshots or trace logs ,0 2892,12746209508.0,IssuesEvent,2020-06-26 15:32:01,chavarera/python-mini-projects,https://api.github.com/repos/chavarera/python-mini-projects,opened,Write a program to download multiple images,Automation beginner,"**Problem Statement** write a program that accept category from user and download n no of images to local system",1.0,"Write a program to download multiple images - **Problem Statement** write a program that accept category from user and download n no of images to local system",1,write a program to download multiple images problem statement write a program that accept category from user and download n no of images to local system,1 5986,21787901305.0,IssuesEvent,2022-05-14 12:41:10,ThinkingEngine-net/PickleTestSuite,https://api.github.com/repos/ThinkingEngine-net/PickleTestSuite,closed,Edge Chromium - Performance Logging functions not working,bug Browser Automation,"Current disable, so page status not available. Needs to be fixed.",1.0,"Edge Chromium - Performance Logging functions not working - Current disable, so page status not available. Needs to be fixed.",1,edge chromium performance logging functions not working current disable so page status not available needs to be fixed ,1 6813,23939521762.0,IssuesEvent,2022-09-11 18:03:47,smcnab1/op-question-mark,https://api.github.com/repos/smcnab1/op-question-mark,closed,[FR] Implementation of Apple Shortcuts,Status: Review Needed Type: Enhancement Priority: Low For: Automations,"**Describe the solution you'd like** Implementation of Apple Shortcuts to trigger actions in home assistant **Describe alternatives you've considered** Home Assistant Widget for iOS, also yet to fully investigate **Additional context** Used to trigger alarm/scenes/scripts from phone Home Screen. Make life easier for wife to access",1.0,"[FR] Implementation of Apple Shortcuts - **Describe the solution you'd like** Implementation of Apple Shortcuts to trigger actions in home assistant **Describe alternatives you've considered** Home Assistant Widget for iOS, also yet to fully investigate **Additional context** Used to trigger alarm/scenes/scripts from phone Home Screen. 
Make life easier for wife to access",1, implementation of apple shortcuts describe the solution you d like implementation of apple shortcuts to trigger actions in home assistant describe alternatives you ve considered home assistant widget for ios also yet to fully investigate additional context used to trigger alarm scenes scripts from phone home screen make life easier for wife to access,1 306522,26476101271.0,IssuesEvent,2023-01-17 11:14:31,woocommerce/woocommerce,https://api.github.com/repos/woocommerce/woocommerce,opened,[HPOS]: Rename `cot` to `hpos` in our workflows ,type: task focus: custom order tables focus: smoke tests,"### Describe the solution you'd like Our workflows are referencing `cot` in many places when they should be referencing the new name `hpos`. This task is to update the names and references so they use the more appropriate `hpos` reference, being mindful of the impacts 😊 ### Describe alternatives you've considered n/a ### Additional context _No response_",1.0,"[HPOS]: Rename `cot` to `hpos` in our workflows - ### Describe the solution you'd like Our workflows are referencing `cot` in many places when they should be referencing the new name `hpos`. This task is to update the names and references so they use the more appropriate `hpos` reference, being mindful of the impacts 😊 ### Describe alternatives you've considered n/a ### Additional context _No response_",0, rename cot to hpos in our workflows describe the solution you d like our workflows are referencing cot in many places when they should be referencing the new name hpos this task is to update the names and references so they use the more appropriate hpos reference being mindful of the impacts 😊 describe alternatives you ve considered n a additional context no response ,0 4181,15736267478.0,IssuesEvent,2021-03-30 00:15:22,aws/aws-cli,https://api.github.com/repos/aws/aws-cli,closed,Sync missing files,automation-exempt needs-reproduction s3sync,"Server: `EC2 linux` Version: `aws-cli/1.16.108 Python/2.7.15 Linux/4.9.62-21.56.amzn1.x86_64 botocore/1.12.98` After `aws s3 sync` running over 270T of data I lost few GB of files. Sync didn't copy files with special characters at all. Example of file `/data/company/storage/projects/1013815/3.Company Estimates B. Estimates` Had to use `cp -R -n`",1.0,"Sync missing files - Server: `EC2 linux` Version: `aws-cli/1.16.108 Python/2.7.15 Linux/4.9.62-21.56.amzn1.x86_64 botocore/1.12.98` After `aws s3 sync` running over 270T of data I lost few GB of files. Sync didn't copy files with special characters at all. Example of file `/data/company/storage/projects/1013815/3.Company Estimates B. Estimates` Had to use `cp -R -n`",1,sync missing files server linux version aws cli python linux botocore after aws sync running over of data i lost few gb of files sync didn t copy files with special characters at all example of file data company storage projects company estimates b estimates had to use cp r n ,1 5508,19829704262.0,IssuesEvent,2022-01-20 10:39:43,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,closed,[FEATURE] Mutating/Validating admission webhook,kind/enhancement priority/1 require/automation-e2e area/api,"**Is your feature request related to a problem? Please describe.** This is part of #791, besides having schema validation, we need to have validation hooks to integrate into Kubernetes to validate or mutate Longhorn CRs. **Describe the solution you'd like** Have our resources validation to hook into Kubernetes CR CRUD flow. 
**Describe alternatives you've considered** N/A **Additional context** - https://github.com/longhorn/longhorn/issues/2570#issuecomment-965945136 - https://github.com/rancher/webhook",1.0,"[FEATURE] Mutating/Validating admission webhook - **Is your feature request related to a problem? Please describe.** This is part of #791, besides having schema validation, we need to have validation hooks to integrate into Kubernetes to validate or mutate Longhorn CRs. **Describe the solution you'd like** Have our resources validation to hook into Kubernetes CR CRUD flow. **Describe alternatives you've considered** N/A **Additional context** - https://github.com/longhorn/longhorn/issues/2570#issuecomment-965945136 - https://github.com/rancher/webhook",1, mutating validating admission webhook is your feature request related to a problem please describe this is part of besides having schema validation we need to have validation hooks to integrate into kubernetes to validate or mutate longhorn crs describe the solution you d like have our resources validation to hook into kubernetes cr crud flow describe alternatives you ve considered n a additional context ,1 145826,13162169241.0,IssuesEvent,2020-08-10 21:01:32,pivotal/cloud-service-broker,https://api.github.com/repos/pivotal/cloud-service-broker,closed,"[DOCS] How to add ""read_write_endpoint_failover_policy"" to defaults in example-configs",documentation enhancement,"## Documentation Requested Would it be possible to add in the ""example-configs"" document an example of setting the ""read_write_endpoint_failover_policy"" as a default for the ""csb-azure-mssql-db-failover-group""? File: https://github.com/pivotal/cloud-service-broker/blob/master/docs/example-configs.md Section: Azure csb-azure-mssql-db-failover-group Feature: ""read_write_endpoint_failover_policy"":""Manual/Automatic"" https://github.com/pivotal/cloud-service-broker/commit/0ebfac6428c268e22d394f0f7fdaa68759be87f5 ",1.0,"[DOCS] How to add ""read_write_endpoint_failover_policy"" to defaults in example-configs - ## Documentation Requested Would it be possible to add in the ""example-configs"" document an example of setting the ""read_write_endpoint_failover_policy"" as a default for the ""csb-azure-mssql-db-failover-group""? File: https://github.com/pivotal/cloud-service-broker/blob/master/docs/example-configs.md Section: Azure csb-azure-mssql-db-failover-group Feature: ""read_write_endpoint_failover_policy"":""Manual/Automatic"" https://github.com/pivotal/cloud-service-broker/commit/0ebfac6428c268e22d394f0f7fdaa68759be87f5 ",0, how to add read write endpoint failover policy to defaults in example configs documentation requested would it be possible to add in the example configs document an example of setting the read write endpoint failover policy as a default for the csb azure mssql db failover group file section azure csb azure mssql db failover group feature read write endpoint failover policy manual automatic ,0 5444,19619734604.0,IssuesEvent,2022-01-07 03:48:38,pingcap/tiflow,https://api.github.com/repos/pingcap/tiflow,closed,"v5.2.3 upgrade to v5.4.0-nightly-20211221 fail for ""server_config is only supported with TiCDC version v4.0.13 or later""",type/bug severity/minor found/automation area/ticdc,"### What did you do? 1. install v5.2.3 tidb cluster, with config: cdc: sorter.max-memory-consumption: 1073741824 2. upgrade to v5.4.0-nightly-20211221 3. upgrade fail ### What did you expect to see? upgrade success ### What did you see instead? 
2021-12-30T09:17:51.941+0800 INFO Execute command finished {""code"": 1, ""error"": ""init config failed: cdc-peer:8300: server_config is only supported with TiCDC version v4.0.13 or later"", ""errorVerbose"": ""server_config is only supported with TiCDC version v4.0.13 or later\ngithub.com/pingcap/tiup/pkg/cluster/spec.(*CDCInstance).InitConfig\n\tgithub.com/pingcap/tiup/pkg/cluster/spec/cdc.go:168\ngithub.com/pingcap/tiup/pkg/cluster/task.(*InitConfig).Execute\n\tgithub.com/pingcap/tiup/pkg/cluster/task/init_config.go:51\ngithub.com/pingcap/tiup/pkg/cluster/task.(*Serial).Execute\n\tgithub.com/pingcap/tiup/pkg/cluster/task/task.go:85\ngithub.com/pingcap/tiup/pkg/cluster/task.(*Parallel).Execute.func1\n\tgithub.com/pingcap/tiup/pkg/cluster/task/task.go:142\nruntime.goexit\n\truntime/asm_amd64.s:1581\ninit config failed: cdc-peer:8300""} ### Versions of the cluster Upstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client): ```console (paste TiDB cluster version here) ``` TiCDC version (execute `cdc version`): [release-version=v5.2.3] [git-hash=a04ddac9fe83c8bdff267e4c44150060ea05f5ec] [git-branch=heads/refs/tags/v5.2.3] [utc-build-time=""2021-11-26 07:39:58""] [go-version=""go version go1.16.4 linux/amd64""] ```console (paste TiCDC version here) ```",1.0,"v5.2.3 upgrade to v5.4.0-nightly-20211221 fail for ""server_config is only supported with TiCDC version v4.0.13 or later"" - ### What did you do? 1. install v5.2.3 tidb cluster, with config: cdc: sorter.max-memory-consumption: 1073741824 2. upgrade to v5.4.0-nightly-20211221 3. upgrade fail ### What did you expect to see? upgrade success ### What did you see instead? 2021-12-30T09:17:51.941+0800 INFO Execute command finished {""code"": 1, ""error"": ""init config failed: cdc-peer:8300: server_config is only supported with TiCDC version v4.0.13 or later"", ""errorVerbose"": ""server_config is only supported with TiCDC version v4.0.13 or later\ngithub.com/pingcap/tiup/pkg/cluster/spec.(*CDCInstance).InitConfig\n\tgithub.com/pingcap/tiup/pkg/cluster/spec/cdc.go:168\ngithub.com/pingcap/tiup/pkg/cluster/task.(*InitConfig).Execute\n\tgithub.com/pingcap/tiup/pkg/cluster/task/init_config.go:51\ngithub.com/pingcap/tiup/pkg/cluster/task.(*Serial).Execute\n\tgithub.com/pingcap/tiup/pkg/cluster/task/task.go:85\ngithub.com/pingcap/tiup/pkg/cluster/task.(*Parallel).Execute.func1\n\tgithub.com/pingcap/tiup/pkg/cluster/task/task.go:142\nruntime.goexit\n\truntime/asm_amd64.s:1581\ninit config failed: cdc-peer:8300""} ### Versions of the cluster Upstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client): ```console (paste TiDB cluster version here) ``` TiCDC version (execute `cdc version`): [release-version=v5.2.3] [git-hash=a04ddac9fe83c8bdff267e4c44150060ea05f5ec] [git-branch=heads/refs/tags/v5.2.3] [utc-build-time=""2021-11-26 07:39:58""] [go-version=""go version go1.16.4 linux/amd64""] ```console (paste TiCDC version here) ```",1, upgrade to nightly fail for server config is only supported with ticdc version or later what did you do install tidb cluster with config cdc sorter max memory consumption upgrade to nightly upgrade fail what did you expect to see upgrade success what did you see instead info execute command finished code error init config failed cdc peer server config is only supported with ticdc version or later errorverbose server config is only supported with ticdc version or later ngithub com pingcap tiup pkg cluster spec cdcinstance initconfig n tgithub com pingcap tiup pkg cluster spec 
cdc go ngithub com pingcap tiup pkg cluster task initconfig execute n tgithub com pingcap tiup pkg cluster task init config go ngithub com pingcap tiup pkg cluster task serial execute n tgithub com pingcap tiup pkg cluster task task go ngithub com pingcap tiup pkg cluster task parallel execute n tgithub com pingcap tiup pkg cluster task task go nruntime goexit n truntime asm s ninit config failed cdc peer versions of the cluster upstream tidb cluster version execute select tidb version in a mysql client console paste tidb cluster version here ticdc version execute cdc version console paste ticdc version here ,1 61827,14641784015.0,IssuesEvent,2020-12-25 08:21:41,hackmdio/codimd,https://api.github.com/repos/hackmdio/codimd,closed,Stored XSS in mermaid,security,"Hi, This weekend I played hxpctf, during competition there was a challenge called hackme. It was a Docker with codimd. My solution was unintended: I use google analytics to exploit a stored xss bug in mermaid. Here is my [PoC](https://github.com/Alemmi/ctf-writeups/blob/main/hxpctf-2020/hackme/solution.md) The bug seems to be known by the mermaid developers ([issue](https://github.com/mermaid-js/mermaid/issues/869)). I tryed it on [hackmd.io](https://hackmd.io/) and it works, too. Hope you can fix soon! P.S. Now I'm going to reopen the issue in mermaid repository. This is also a duplicate, but the other issues are marked as ""solved"". Thanks Alessandro Mizzaro",True,"Stored XSS in mermaid - Hi, This weekend I played hxpctf, during competition there was a challenge called hackme. It was a Docker with codimd. My solution was unintended: I use google analytics to exploit a stored xss bug in mermaid. Here is my [PoC](https://github.com/Alemmi/ctf-writeups/blob/main/hxpctf-2020/hackme/solution.md) The bug seems to be known by the mermaid developers ([issue](https://github.com/mermaid-js/mermaid/issues/869)). I tryed it on [hackmd.io](https://hackmd.io/) and it works, too. Hope you can fix soon! P.S. Now I'm going to reopen the issue in mermaid repository. This is also a duplicate, but the other issues are marked as ""solved"". Thanks Alessandro Mizzaro",0,stored xss in mermaid hi this weekend i played hxpctf during competition there was a challenge called hackme it was a docker with codimd my solution was unintended i use google analytics to exploit a stored xss bug in mermaid here is my the bug seems to be known by the mermaid developers i tryed it on and it works too hope you can fix soon p s now i m going to reopen the issue in mermaid repository this is also a duplicate but the other issues are marked as solved thanks alessandro mizzaro,0 6447,23177209371.0,IssuesEvent,2022-07-31 15:46:31,keptn/keptn,https://api.github.com/repos/keptn/keptn,closed,[doc] Automation for creating documentation for a new release and versioning of release docu,doc stale research needs discussion release-automation area:devops,"## User story When releasing a new version of Keptn, the documentation is also released based on the same release tag. As a user, I can switch between the release versions, while the latest stable version is shown by default. *Future situation:* As we are going to change the versioning of releases by incrementing the Minor version (Major.Minor.Patch), the current documentation approach does not scale. We would have duplicate the docu for each new release. ### Details * Using tagging / branching to create release documentation in https://github.com/keptn/keptn.github.io * On the page, I can switch between the release docu. 
For example, see istio.io: ![image](https://user-images.githubusercontent.com/729071/130186885-b185163d-fb99-4ee1-b707-df6fd455b12e.png) * When switching to an older release, the release version is reflected in the URL: https://istio.io/v1.9/ * Consequently, we should not show the documentation for previous releases, but rather the release docu the user has selected: ![image](https://user-images.githubusercontent.com/729071/130187166-76a3fc25-d684-487d-b0dc-4637b456feb7.png) ### Advantage * By applying this approach, it becomes obsolete to duplicate the docu for each release in: https://github.com/keptn/keptn.github.io/tree/master/content/docs",1.0,"[doc] Automation for creating documentation for a new release and versioning of release docu - ## User story When releasing a new version of Keptn, the documentation is also released based on the same release tag. As a user, I can switch between the release versions, while the latest stable version is shown by default. *Future situation:* As we are going to change the versioning of releases by incrementing the Minor version (Major.Minor.Patch), the current documentation approach does not scale. We would have duplicate the docu for each new release. ### Details * Using tagging / branching to create release documentation in https://github.com/keptn/keptn.github.io * On the page, I can switch between the release docu. For example, see istio.io: ![image](https://user-images.githubusercontent.com/729071/130186885-b185163d-fb99-4ee1-b707-df6fd455b12e.png) * When switching to an older release, the release version is reflected in the URL: https://istio.io/v1.9/ * Consequently, we should not show the documentation for previous releases, but rather the release docu the user has selected: ![image](https://user-images.githubusercontent.com/729071/130187166-76a3fc25-d684-487d-b0dc-4637b456feb7.png) ### Advantage * By applying this approach, it becomes obsolete to duplicate the docu for each release in: https://github.com/keptn/keptn.github.io/tree/master/content/docs",1, automation for creating documentation for a new release and versioning of release docu user story when releasing a new version of keptn the documentation is also released based on the same release tag as a user i can switch between the release versions while the latest stable version is shown by default future situation as we are going to change the versioning of releases by incrementing the minor version major minor patch the current documentation approach does not scale we would have duplicate the docu for each new release details using tagging branching to create release documentation in on the page i can switch between the release docu for example see istio io when switching to an older release the release version is reflected in the url consequently we should not show the documentation for previous releases but rather the release docu the user has selected advantage by applying this approach it becomes obsolete to duplicate the docu for each release in ,1 2106,11394590259.0,IssuesEvent,2020-01-30 09:40:06,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,opened,"Kibana OSS container does not start correctly - Error: Unknown configuration key(s): ""xpack.apm.serviceMapEnabled"".",automation bug,"We detected on ITs that Python is not running the test on the test that starts Kibana, the following error is shown in the Kibana logs ``` 
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: apm_oss""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: dashboard_embeddable_container""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: dev_tools""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: eui_utils""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: kibana_legacy""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: status_page""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:12+00:00"",""tags"":[""fatal"",""root""],""pid"":6,""message"":""{ Error: Unknown configuration key(s): \""xpack.apm.serviceMapEnabled\"". Check for spelling errors and ensure that expected plugins are installed.\n at ensureValidConfiguration (/usr/share/kibana/src/core/server/legacy/config/ensure_valid_configuration.js:46:11) code: 'InvalidConfig', processExitCode: 64, cause: undefined }""} FATAL Error: Unknown configuration key(s): ""xpack.apm.serviceMapEnabled"". Check for spelling errors and ensure that expected plugins are installed. {""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: apm_oss""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: dashboard_embeddable_container""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: dev_tools""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: eui_utils""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: kibana_legacy""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: status_page""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:17+00:00"",""tags"":[""fatal"",""root""],""pid"":6,""message"":""{ Error: Unknown configuration key(s): \""xpack.apm.serviceMapEnabled\"". Check for spelling errors and ensure that expected plugins are installed.\n at ensureValidConfiguration (/usr/share/kibana/src/core/server/legacy/config/ensure_valid_configuration.js:46:11) code: 'InvalidConfig', processExitCode: 64, cause: undefined }""} FATAL Error: Unknown configuration key(s): ""xpack.apm.serviceMapEnabled"". 
Check for spelling errors and ensure that expected plugins are installed. ``` ",1.0,"Kibana OSS container does not start correctly - Error: Unknown configuration key(s): ""xpack.apm.serviceMapEnabled"". - We detected on ITs that Python is not running the test on the test that starts Kibana, the following error is shown in the Kibana logs ``` {""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: apm_oss""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: dashboard_embeddable_container""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: dev_tools""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: eui_utils""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: kibana_legacy""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: status_page""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:12+00:00"",""tags"":[""fatal"",""root""],""pid"":6,""message"":""{ Error: Unknown configuration key(s): \""xpack.apm.serviceMapEnabled\"". Check for spelling errors and ensure that expected plugins are installed.\n at ensureValidConfiguration (/usr/share/kibana/src/core/server/legacy/config/ensure_valid_configuration.js:46:11) code: 'InvalidConfig', processExitCode: 64, cause: undefined }""} FATAL Error: Unknown configuration key(s): ""xpack.apm.serviceMapEnabled"". Check for spelling errors and ensure that expected plugins are installed. {""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: apm_oss""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: dashboard_embeddable_container""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: dev_tools""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: eui_utils""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: kibana_legacy""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: status_page""} {""type"":""log"",""@timestamp"":""2020-01-30T03:18:17+00:00"",""tags"":[""fatal"",""root""],""pid"":6,""message"":""{ Error: Unknown configuration key(s): \""xpack.apm.serviceMapEnabled\"". 
Check for spelling errors and ensure that expected plugins are installed.\n at ensureValidConfiguration (/usr/share/kibana/src/core/server/legacy/config/ensure_valid_configuration.js:46:11) code: 'InvalidConfig', processExitCode: 64, cause: undefined }""} FATAL Error: Unknown configuration key(s): ""xpack.apm.serviceMapEnabled"". Check for spelling errors and ensure that expected plugins are installed. ``` ",1,kibana oss container does not start correctly error unknown configuration key s xpack apm servicemapenabled we detected on its that python is not running the test on the test that starts kibana the following error is shown in the kibana logs type log timestamp tags pid message expect plugin id in camelcase but found apm oss type log timestamp tags pid message expect plugin id in camelcase but found dashboard embeddable container type log timestamp tags pid message expect plugin id in camelcase but found dev tools type log timestamp tags pid message expect plugin id in camelcase but found eui utils type log timestamp tags pid message expect plugin id in camelcase but found kibana legacy type log timestamp tags pid message expect plugin id in camelcase but found status page type log timestamp tags pid message error unknown configuration key s xpack apm servicemapenabled check for spelling errors and ensure that expected plugins are installed n at ensurevalidconfiguration usr share kibana src core server legacy config ensure valid configuration js code invalidconfig processexitcode cause undefined fatal error unknown configuration key s xpack apm servicemapenabled check for spelling errors and ensure that expected plugins are installed type log timestamp tags pid message expect plugin id in camelcase but found apm oss type log timestamp tags pid message expect plugin id in camelcase but found dashboard embeddable container type log timestamp tags pid message expect plugin id in camelcase but found dev tools type log timestamp tags pid message expect plugin id in camelcase but found eui utils type log timestamp tags pid message expect plugin id in camelcase but found kibana legacy type log timestamp tags pid message expect plugin id in camelcase but found status page type log timestamp tags pid message error unknown configuration key s xpack apm servicemapenabled check for spelling errors and ensure that expected plugins are installed n at ensurevalidconfiguration usr share kibana src core server legacy config ensure valid configuration js code invalidconfig processexitcode cause undefined fatal error unknown configuration key s xpack apm servicemapenabled check for spelling errors and ensure that expected plugins are installed ,1 10114,4007417667.0,IssuesEvent,2016-05-12 18:04:27,Shopify/javascript,https://api.github.com/repos/Shopify/javascript,closed,Remove returns from anonymous `addEventListener` handlers,new-codemod,"Vanilla and jQuery handlers treat return values very differently. Removing (completely meaningless) explicit return values from `addEventListener` handlers should discourage conflating of techniques. Example: ``` document.addEventListener('input', event => { if (event.target.type === 'hidden') { event.preventDefault(); return event.stopPropagation(); } }, true); ```",1.0,"Remove returns from anonymous `addEventListener` handlers - Vanilla and jQuery handlers treat return values very differently. Removing (completely meaningless) explicit return values from `addEventListener` handlers should discourage conflating of techniques. 
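Returning to the Kibana OSS failure above: `xpack.apm.serviceMapEnabled` is an X-Pack setting, and the OSS image ships without the X-Pack plugins that register `xpack.*` keys, so startup validation rejects the key outright. The snippet below only illustrates that validation idea with an invented key set; it is not Kibana's actual implementation.

```python
# Illustration only, not Kibana's implementation: settings are validated
# against the keys registered by installed plugins, and the OSS build ships
# without the X-Pack plugins that would register `xpack.*` keys.
REGISTERED_KEYS = {"server.port", "server.host", "elasticsearch.hosts"}  # invented subset

def unknown_keys(settings: dict) -> list[str]:
    return [key for key in settings if key not in REGISTERED_KEYS]

print(unknown_keys({"xpack.apm.serviceMapEnabled": True}))
# ['xpack.apm.serviceMapEnabled']  -> startup aborts with InvalidConfig
```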
Example: ``` document.addEventListener('input', event => { if (event.target.type === 'hidden') { event.preventDefault(); return event.stopPropagation(); } }, true); ```",0,remove returns from anonymous addeventlistener handlers vanilla and jquery handlers treat return values very differently removing completely meaningless explicit return values from addeventlistener handlers should discourage conflating of techniques example document addeventlistener input event if event target type hidden event preventdefault return event stoppropagation true ,0 4130,15589316578.0,IssuesEvent,2021-03-18 07:52:19,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,change create to created,Pri2 automation/svc cxp doc-enhancement dsc/subsvc triaged,"Once you have create a composite resource module change to Once you have **created** a composite resource module --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 5d1c3be2-bb62-2e96-01eb-ee2d16c4a3c4 * Version Independent ID: 544726a4-a816-2eb3-f236-f083b618074d * Content: [Convert configurations to composite resources for Azure Automation State Configuration](https://docs.microsoft.com/en-us/azure/automation/automation-dsc-create-composite) * Content Source: [articles/automation/automation-dsc-create-composite.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-dsc-create-composite.md) * Service: **automation** * Sub-service: **dsc** * GitHub Login: @mgreenegit * Microsoft Alias: **migreene**",1.0,"change create to created - Once you have create a composite resource module change to Once you have **created** a composite resource module --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 5d1c3be2-bb62-2e96-01eb-ee2d16c4a3c4 * Version Independent ID: 544726a4-a816-2eb3-f236-f083b618074d * Content: [Convert configurations to composite resources for Azure Automation State Configuration](https://docs.microsoft.com/en-us/azure/automation/automation-dsc-create-composite) * Content Source: [articles/automation/automation-dsc-create-composite.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-dsc-create-composite.md) * Service: **automation** * Sub-service: **dsc** * GitHub Login: @mgreenegit * Microsoft Alias: **migreene**",1,change create to created once you have create a composite resource module change to once you have created a composite resource module document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service dsc github login mgreenegit microsoft alias migreene ,1 338237,30287838880.0,IssuesEvent,2023-07-08 22:50:15,MrMelbert/MapleStationCode,https://api.github.com/repos/MrMelbert/MapleStationCode,opened,Flaky test strange_reagent: list index out of bounds,🤖 Flaky Test Report," Flaky tests were detected in [this test run](https://github.com/MrMelbert/MapleStationCode/actions/runs/5496696407/attempts/1). This means that there was a failure that was cleared when the tests were simply restarted. 
Failures: ``` strange_reagent: [22:35:09] Runtime in log_holder.dm,253: list index out of bounds proc name: human readable timestamp (/datum/log_holder/proc/human_readable_timestamp) usr: *no key*/(magic polar bear) usr.loc: (Test Room (126,126,13)) src: /datum/log_holder (/datum/log_holder) call stack: /datum/log_holder (/datum/log_holder): human readable timestamp(3) /datum/log_category/debug_mobt... (/datum/log_category/debug_mobtag): create entry(""TAG: mob_4273 CREATED: *no key..."", null, null) /datum/log_holder (/datum/log_holder): Log(""debug-mobtag"", ""TAG: mob_4273 CREATED: *no key..."", null) the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): log mob tag(""TAG: mob_4273 CREATED: *no key..."", null) the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0) the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0) the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0) the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0) the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0) Atoms (/datum/controller/subsystem/atoms): InitAtom(the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser), 0, /list (/list)) the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): New(0) the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): New(the floor (126,126,13) (/turf/open/floor/iron)) /datum/unit_test/strange_reage... (/datum/unit_test/strange_reagent): allocate(/mob/living/simple_animal/host... (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser)) /datum/unit_test/strange_reage... (/datum/unit_test/strange_reagent): allocate new target(/mob/living/simple_animal/host... (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser)) /datum/unit_test/strange_reage... (/datum/unit_test/strange_reagent): Run() RunUnitTest(/datum/unit_test/strange_reage... (/datum/unit_test/strange_reagent), /list (/list)) RunUnitTests() /datum/callback (/datum/callback): InvokeAsync() at log_holder.dm:253 ``` ",1.0,"Flaky test strange_reagent: list index out of bounds - Flaky tests were detected in [this test run](https://github.com/MrMelbert/MapleStationCode/actions/runs/5496696407/attempts/1). This means that there was a failure that was cleared when the tests were simply restarted. Failures: ``` strange_reagent: [22:35:09] Runtime in log_holder.dm,253: list index out of bounds proc name: human readable timestamp (/datum/log_holder/proc/human_readable_timestamp) usr: *no key*/(magic polar bear) usr.loc: (Test Room (126,126,13)) src: /datum/log_holder (/datum/log_holder) call stack: /datum/log_holder (/datum/log_holder): human readable timestamp(3) /datum/log_category/debug_mobt... 
(/datum/log_category/debug_mobtag): create entry(""TAG: mob_4273 CREATED: *no key..."", null, null) /datum/log_holder (/datum/log_holder): Log(""debug-mobtag"", ""TAG: mob_4273 CREATED: *no key..."", null) the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): log mob tag(""TAG: mob_4273 CREATED: *no key..."", null) the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0) the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0) the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0) the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0) the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0) Atoms (/datum/controller/subsystem/atoms): InitAtom(the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser), 0, /list (/list)) the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): New(0) the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): New(the floor (126,126,13) (/turf/open/floor/iron)) /datum/unit_test/strange_reage... (/datum/unit_test/strange_reagent): allocate(/mob/living/simple_animal/host... (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser)) /datum/unit_test/strange_reage... (/datum/unit_test/strange_reagent): allocate new target(/mob/living/simple_animal/host... (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser)) /datum/unit_test/strange_reage... (/datum/unit_test/strange_reagent): Run() RunUnitTest(/datum/unit_test/strange_reage... (/datum/unit_test/strange_reagent), /list (/list)) RunUnitTests() /datum/callback (/datum/callback): InvokeAsync() at log_holder.dm:253 ``` ",0,flaky test strange reagent list index out of bounds flaky tests were detected in this means that there was a failure that was cleared when the tests were simply restarted failures strange reagent runtime in log holder dm list index out of bounds proc name human readable timestamp datum log holder proc human readable timestamp usr no key magic polar bear usr loc test room src datum log holder datum log holder call stack datum log holder datum log holder human readable timestamp datum log category debug mobt datum log category debug mobtag create entry tag mob created no key null null datum log holder datum log holder log debug mobtag tag mob created no key null the magic polar bear mob living simple animal hostile asteroid polarbear lesser log mob tag tag mob created no key null the magic polar bear mob living simple animal hostile asteroid polarbear lesser initialize the magic polar bear mob living simple animal hostile asteroid polarbear lesser initialize the magic polar bear mob living simple animal hostile asteroid polarbear lesser initialize the magic polar bear mob living simple animal hostile asteroid polarbear lesser initialize the magic polar bear mob living simple animal hostile asteroid polarbear lesser initialize atoms datum controller subsystem atoms initatom the magic polar bear mob living simple animal hostile asteroid polarbear lesser list list the magic polar bear mob living simple animal hostile asteroid polarbear lesser new the magic polar bear mob living simple animal hostile asteroid polarbear lesser new the floor turf open floor iron datum unit test strange reage datum unit test strange reagent allocate mob living simple animal host mob living simple animal 
hostile asteroid polarbear lesser datum unit test strange reage datum unit test strange reagent allocate new target mob living simple animal host mob living simple animal hostile asteroid polarbear lesser datum unit test strange reage datum unit test strange reagent run rununittest datum unit test strange reage datum unit test strange reagent list list rununittests datum callback datum callback invokeasync at log holder dm ,0 260583,19676666561.0,IssuesEvent,2022-01-11 13:04:49,systemd/systemd,https://api.github.com/repos/systemd/systemd,closed,[networkd] description of UseDNS= / UseDomains=,network documentation needs-reporter-feedback ❓,"Systemd version: 238.133 Distro: Archlinux From manual page: `When true (the default), the DNS servers received from the DHCP server will be used and take precedence over any statically configured ones.` What I observed is that both sections simply get merged, without any particular preferences (formally DNS= listed DNSes are before dynamically obtained ones). Same applies to Domains= and UseDomains= - static and dynamic are simply merged. _Probably_ the same applies to NTP= / UseNTP= pair - but I haven't checked this yet. The behavior is as usual - the relevant DNSes are tried in turn (in case some of servers fail, tested with simple iptables rule)",1.0,"[networkd] description of UseDNS= / UseDomains= - Systemd version: 238.133 Distro: Archlinux From manual page: `When true (the default), the DNS servers received from the DHCP server will be used and take precedence over any statically configured ones.` What I observed is that both sections simply get merged, without any particular preferences (formally DNS= listed DNSes are before dynamically obtained ones). Same applies to Domains= and UseDomains= - static and dynamic are simply merged. _Probably_ the same applies to NTP= / UseNTP= pair - but I haven't checked this yet. 
The behavior is as usual - the relevant DNSes are tried in turn (in case some of servers fail, tested with simple iptables rule)",0, description of usedns usedomains systemd version distro archlinux from manual page when true the default the dns servers received from the dhcp server will be used and take precedence over any statically configured ones what i observed is that both sections simply get merged without any particular preferences formally dns listed dnses are before dynamically obtained ones same applies to domains and usedomains static and dynamic are simply merged probably the same applies to ntp usentp pair but i haven t checked this yet the behavior is as usual the relevant dnses are tried in turn in case some of servers fail tested with simple iptables rule ,0 2755,12541172020.0,IssuesEvent,2020-06-05 11:49:36,input-output-hk/cardano-node,https://api.github.com/repos/input-output-hk/cardano-node,opened,[QA] - Min tx fees,e2e automation,"- check the min tx fees for different transaction types: 1-to-1, 1-to-10, 10-to-1, 10-to-10, 100-to-100 + different types of certificates - the scope of this test is to check that the tx fee remains constant between builds",1.0,"[QA] - Min tx fees - - check the min tx fees for different transaction types: 1-to-1, 1-to-10, 10-to-1, 10-to-10, 100-to-100 + different types of certificates - the scope of this test is to check that the tx fee remains constant between builds",1, min tx fees check the min tx fees for different transaction types to to to to to different types of certificates the scope of this test is to check that the tx fee remains constant between builds,1 221748,7395831014.0,IssuesEvent,2018-03-18 03:33:42,langbakk/HSS,https://api.github.com/repos/langbakk/HSS,closed,"Bug: leaving the bathroom, leaving a pair of panties, they might not be there on return",bug priority 2,"For some reason, it seems like panties aren't stored when leaving and reentering the bathroom",1.0,"Bug: leaving the bathroom, leaving a pair of panties, they might not be there on return - For some reason, it seems like panties aren't stored when leaving and reentering the bathroom",0,bug leaving the bathroom leaving a pair of panties they might not be there on return for some reason it seems like panties aren t stored when leaving and reentering the bathroom,0 766148,26873336335.0,IssuesEvent,2023-02-04 18:45:00,belav/csharpier,https://api.github.com/repos/belav/csharpier,closed,csharpier-repos has files that encoding detection fails on,type:bug priority:low,"The code below returns null for encoding on a few files. It does this even after they are written out with UTF8 ```c# var detectionResult = CharsetDetector.DetectFromFile(file); var encoding = detectionResult?.Detected?.Encoding; ``` The files are from the aspnetcore repository - /aspnetcore/src/Shared/test/Shared.Tests/UrlDecoderTests.cs /aspnetcore/src/Razor/Microsoft.AspNetCore.Razor.Language/src/BoundAttributeDescriptorComparer.cs /aspnetcore/src/Razor/Microsoft.AspNetCore.Razor.Language/src/TagHelperDescriptorComparer.cs /aspnetcore/src/Servers/Kestrel/shared/KnownHeaders.cs",1.0,"csharpier-repos has files that encoding detection fails on - The code below returns null for encoding on a few files. 
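The MapleStation report above is a classic flaky test: the failure clears on a plain rerun. A generic retry wrapper like the following sketch (my own, not MapleStation's CI logic) is the usual blunt instrument while the underlying race is hunted down.

```python
import functools
import time

# Generic retry wrapper, a sketch of "cleared when restarted" behaviour,
# not the project's actual CI logic.
def retry(times: int = 2, delay: float = 1.0):
    def wrap(fn):
        @functools.wraps(fn)
        def run(*args, **kwargs):
            for attempt in range(times + 1):
                try:
                    return fn(*args, **kwargs)
                except AssertionError:
                    if attempt == times:
                        raise          # still failing after all reruns: a real failure
                    time.sleep(delay)  # give the racing subsystem time to settle
        return run
    return wrap

@retry(times=2)
def test_strange_reagent():
    assert True  # stand-in for the flaky unit test
```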
It does this even after they are written out with UTF8 ```c# var detectionResult = CharsetDetector.DetectFromFile(file); var encoding = detectionResult?.Detected?.Encoding; ``` The files are from the aspnetcore repository - /aspnetcore/src/Shared/test/Shared.Tests/UrlDecoderTests.cs /aspnetcore/src/Razor/Microsoft.AspNetCore.Razor.Language/src/BoundAttributeDescriptorComparer.cs /aspnetcore/src/Razor/Microsoft.AspNetCore.Razor.Language/src/TagHelperDescriptorComparer.cs /aspnetcore/src/Servers/Kestrel/shared/KnownHeaders.cs",0,csharpier repos has files that encoding detection fails on the code below returns null for encoding on a few files it does this even after they are written out with c var detectionresult charsetdetector detectfromfile file var encoding detectionresult detected encoding the files are from the aspnetcore repository aspnetcore src shared test shared tests urldecodertests cs aspnetcore src razor microsoft aspnetcore razor language src boundattributedescriptorcomparer cs aspnetcore src razor microsoft aspnetcore razor language src taghelperdescriptorcomparer cs aspnetcore src servers kestrel shared knownheaders cs,0 11092,13112317490.0,IssuesEvent,2020-08-05 01:49:46,eirannejad/pyRevit,https://api.github.com/repos/eirannejad/pyRevit,closed,pyrevit installation error,Installer Misc Compatibility,"Hi, I downloaded the 'pyRevit_4.7.4_signed' from GitHub and failed to intall it. Please see attached pics and log. Please let me know why is that. ![WeChat Image_20200530201332](https://user-images.githubusercontent.com/66166438/83327915-24975f80-a2b2-11ea-91fb-a56ede885cd5.png) ![WeChat Image_202005302013321](https://user-images.githubusercontent.com/66166438/83327917-28c37d00-a2b2-11ea-9d2d-6c0a2074a553.png) ![WeChat Image_202005302013322](https://user-images.githubusercontent.com/66166438/83327925-2cef9a80-a2b2-11ea-9faa-a222f3294347.png) ![WeChat Image_202005302013323](https://user-images.githubusercontent.com/66166438/83327930-30832180-a2b2-11ea-9002-31e26d0b5dd5.png) ![WeChat Image_202005302013324](https://user-images.githubusercontent.com/66166438/83327931-337e1200-a2b2-11ea-97e3-ba48b39ec338.png) [Microsoft_.NET_Core_Runtime_-_2.0.7_(x64)_20200530195844.log](https://github.com/eirannejad/pyRevit/files/4705595/Microsoft_.NET_Core_Runtime_-_2.0.7_.x64._20200530195844.log) ",True,"pyrevit installation error - Hi, I downloaded the 'pyRevit_4.7.4_signed' from GitHub and failed to intall it. Please see attached pics and log. Please let me know why is that. 
![WeChat Image_20200530201332](https://user-images.githubusercontent.com/66166438/83327915-24975f80-a2b2-11ea-91fb-a56ede885cd5.png) ![WeChat Image_202005302013321](https://user-images.githubusercontent.com/66166438/83327917-28c37d00-a2b2-11ea-9d2d-6c0a2074a553.png) ![WeChat Image_202005302013322](https://user-images.githubusercontent.com/66166438/83327925-2cef9a80-a2b2-11ea-9faa-a222f3294347.png) ![WeChat Image_202005302013323](https://user-images.githubusercontent.com/66166438/83327930-30832180-a2b2-11ea-9002-31e26d0b5dd5.png) ![WeChat Image_202005302013324](https://user-images.githubusercontent.com/66166438/83327931-337e1200-a2b2-11ea-97e3-ba48b39ec338.png) [Microsoft_.NET_Core_Runtime_-_2.0.7_(x64)_20200530195844.log](https://github.com/eirannejad/pyRevit/files/4705595/Microsoft_.NET_Core_Runtime_-_2.0.7_.x64._20200530195844.log) ",0,pyrevit installation error hi i downloaded the pyrevit signed from github and failed to intall it please see attached pics and log please let me know why is that ,0 645418,21004358219.0,IssuesEvent,2022-03-29 20:48:25,status-im/status-desktop,https://api.github.com/repos/status-im/status-desktop,closed,Public chat button should lose its highlight state when its context menu was closed,bug ui Chat priority 4: minor,"This is what the button looks like after I've closed the menu (without doing anything): ![Screenshot from 2022-03-24 09-59-29](https://user-images.githubusercontent.com/445106/159880055-e16cffd3-9fd8-4710-9a43-1018d75f4581.png) It stays active, even though the menu is closed. It should become inactive as well. ",1.0,"Public chat button should lose its highlight state when its context menu was closed - This is what the button looks like after I've closed the menu (without doing anything): ![Screenshot from 2022-03-24 09-59-29](https://user-images.githubusercontent.com/445106/159880055-e16cffd3-9fd8-4710-9a43-1018d75f4581.png) It stays active, even though the menu is closed. It should become inactive as well. ",0,public chat button should lose its highlight state when its context menu was closed this is what the button looks like after i ve closed the menu without doing anything it stays active even though the menu is closed it should become inactive as well ,0 3690,14353545766.0,IssuesEvent,2020-11-30 07:06:20,dfernandezm/moneycol,https://api.github.com/repos/dfernandezm/moneycol,opened,CloudRun with VPC GKE connector,automation backend myiac,"- Leave in GKE only ElasticSearch and stateful data (batch, etc.) - Use CloudRun for compute (server, FE, collections",1.0,"CloudRun with VPC GKE connector - - Leave in GKE only ElasticSearch and stateful data (batch, etc.) - Use CloudRun for compute (server, FE, collections",1,cloudrun with vpc gke connector leave in gke only elasticsearch and stateful data batch etc use cloudrun for compute server fe collections,1 7433,24871455688.0,IssuesEvent,2022-10-27 15:30:28,Azure/azure-cli,https://api.github.com/repos/Azure/azure-cli,closed,Set runbook to draft state,question Automation customer-reported needs-author-feedback no-recent-activity CXP Attention Auto-Assign,"Is there no way to set an existing runbook to draft state? I cant work how to replace content ""az automation runbook replace-content"" without manually setting the runbook to draft in the Azure Portal --- #### Document Details ⚠ *Do not edit this section. 
It is required for learn.microsoft.com ➟ GitHub issue linking.* * ID: cbb98a7a-2739-73ec-f20b-03367f919215 * Version Independent ID: 3125cea7-248b-9ba0-f304-248b2c766edc * Content: [az automation runbook](https://learn.microsoft.com/en-us/cli/azure/automation/runbook?view=azure-cli-latest) * Content Source: [latest/docs-ref-autogen/automation/runbook.yml](https://github.com/MicrosoftDocs/azure-docs-cli/blob/main/latest/docs-ref-autogen/automation/runbook.yml) * GitHub Login: @rloutlaw * Microsoft Alias: **routlaw**",1.0,"Set runbook to draft state - Is there no way to set an existing runbook to draft state? I cant work how to replace content ""az automation runbook replace-content"" without manually setting the runbook to draft in the Azure Portal --- #### Document Details ⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.* * ID: cbb98a7a-2739-73ec-f20b-03367f919215 * Version Independent ID: 3125cea7-248b-9ba0-f304-248b2c766edc * Content: [az automation runbook](https://learn.microsoft.com/en-us/cli/azure/automation/runbook?view=azure-cli-latest) * Content Source: [latest/docs-ref-autogen/automation/runbook.yml](https://github.com/MicrosoftDocs/azure-docs-cli/blob/main/latest/docs-ref-autogen/automation/runbook.yml) * GitHub Login: @rloutlaw * Microsoft Alias: **routlaw**",1,set runbook to draft state is there no way to set an existing runbook to draft state i cant work how to replace content az automation runbook replace content without manually setting the runbook to draft in the azure portal document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source github login rloutlaw microsoft alias routlaw ,1 275130,23893552269.0,IssuesEvent,2022-09-08 13:18:13,ARUP-CAS/aiscr-webamcr,https://api.github.com/repos/ARUP-CAS/aiscr-webamcr,closed,Vícestupňové hesláře - revize,bug / maintanance TESTED,"Není implementováno správné chování vícestupňových heslářů (oddělovače v seznamech - funguje dobře např. pro typ akce, ale na mnoha jiných místech nikoli)",1.0,"Vícestupňové hesláře - revize - Není implementováno správné chování vícestupňových heslářů (oddělovače v seznamech - funguje dobře např. pro typ akce, ale na mnoha jiných místech nikoli)",0,vícestupňové hesláře revize není implementováno správné chování vícestupňových heslářů oddělovače v seznamech funguje dobře např pro typ akce ale na mnoha jiných místech nikoli ,0 4275,15930745478.0,IssuesEvent,2021-04-14 01:39:13,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,opened,[e2e] add integration test for Volumes used by MinIO workloads cannot be attached with state Degraded,require/automation-e2e,Ref: https://github.com/longhorn/longhorn/issues/2073,1.0,[e2e] add integration test for Volumes used by MinIO workloads cannot be attached with state Degraded - Ref: https://github.com/longhorn/longhorn/issues/2073,1, add integration test for volumes used by minio workloads cannot be attached with state degraded ref ,1 366230,25573759280.0,IssuesEvent,2022-11-30 20:03:30,lieion/OpenSource_Final_Project,https://api.github.com/repos/lieion/OpenSource_Final_Project,closed,현재 진행 상황 ,documentation,"

Template name

김건우 - Progress update

v 0.1.0

Suggestions for new features, pages, and files to add

Template Content

About

**현재 상황 안내** 1. 회원가입 시스템 구현 (express.js) (완) 2. 로그인 시스템 구현 (완) 3. 주문 내역 구현 중 (11/26 예정) 4. myPage 추가 (11/26 예정) 5. navbar issue 관련 수정 #6 (진행 중) 아마도 시간 상 로컬 server.js의 array 형태로 데이터베이스를 대체 (13강 chat 저장 방식 처럼) 회원 가입 요청에는 form 형태를 사용해서 사용자가 기입한 내용을 server로 전달 후 array에 저장. **Additional Info** `npm install body-parser` `npm install express -save` 필요 로그인 후 받는 토큰으로 인해서 충돌이 일어날 수 있음에 유의 ",1.0,"현재 진행 상황 -
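The progress notes above describe an Express.js signup handler that stores form posts in a server-side array in lieu of a database. Since this rewrite uses Python for its examples, here is the same flow sketched with Flask rather than Express; the route and field names are hypothetical stand-ins.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
users = []  # stands in for a database, as the notes describe (server-side array)

@app.post("/signup")  # hypothetical route and field names
def signup():
    # Mirror the described flow: take the submitted form data and append it
    # to the in-memory list, exactly as the Express version pushes to an array.
    users.append({"id": request.form["id"], "pw": request.form["pw"]})
    return jsonify(ok=True, count=len(users))

if __name__ == "__main__":
    app.run()
```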

Template name

김건우 - Progress update

v 0.1.0

Suggestions for new features, pages, and files to add

Template Content

About

**현재 상황 안내** 1. 회원가입 시스템 구현 (express.js) (완) 2. 로그인 시스템 구현 (완) 3. 주문 내역 구현 중 (11/26 예정) 4. myPage 추가 (11/26 예정) 5. navbar issue 관련 수정 #6 (진행 중) 아마도 시간 상 로컬 server.js의 array 형태로 데이터베이스를 대체 (13강 chat 저장 방식 처럼) 회원 가입 요청에는 form 형태를 사용해서 사용자가 기입한 내용을 server로 전달 후 array에 저장. **Additional Info** `npm install body-parser` `npm install express -save` 필요 로그인 후 받는 토큰으로 인해서 충돌이 일어날 수 있음에 유의 ",0,현재 진행 상황 template name 김건우 진행 상황 v 새로 추가할 기능 페이지 파일에 대한 suggest template content about 현재 상황 안내 회원가입 시스템 구현 express js 완 로그인 시스템 구현 완 주문 내역 구현 중 예정 mypage 추가 예정 navbar issue 관련 수정 진행 중 아마도 시간 상 로컬 server js의 array 형태로 데이터베이스를 대체 chat 저장 방식 처럼 회원 가입 요청에는 form 형태를 사용해서 사용자가 기입한 내용을 server로 전달 후 array에 저장 additional info npm install body parser npm install express save 필요 로그인 후 받는 토큰으로 인해서 충돌이 일어날 수 있음에 유의 ,0 761329,26676623435.0,IssuesEvent,2023-01-26 14:43:39,eclipse-sirius/sirius-components,https://api.github.com/repos/eclipse-sirius/sirius-components,opened,Upload document does not work properly if a special resource factory is needed,type: bug difficulty: starter priority: low package: core,"* [X] **I have checked that this bug has not yet been reported by someone else** * [X] **I have checked that this bug appears on Chrome** * [X] **I have specified the version** : latest * [X] **I have specified my environment** : All ### Actual behavior For example UML resource need special resource factory to instanciate and resolve the proxies. If the factory needed to resolved the pathmap protocol is not present, then it fails. ### Steps to reproduce no reproducible scenario in Sirius-Web yet. ### Expected behavior The registered special resrouce factory should be present on the resourceSet used to instanciate the uploaded resourceSet ",1.0,"Upload document does not work properly if a special resource factory is needed - * [X] **I have checked that this bug has not yet been reported by someone else** * [X] **I have checked that this bug appears on Chrome** * [X] **I have specified the version** : latest * [X] **I have specified my environment** : All ### Actual behavior For example UML resource need special resource factory to instanciate and resolve the proxies. If the factory needed to resolved the pathmap protocol is not present, then it fails. ### Steps to reproduce no reproducible scenario in Sirius-Web yet. ### Expected behavior The registered special resrouce factory should be present on the resourceSet used to instanciate the uploaded resourceSet ",0,upload document does not work properly if a special resource factory is needed i have checked that this bug has not yet been reported by someone else i have checked that this bug appears on chrome i have specified the version latest i have specified my environment all actual behavior for example uml resource need special resource factory to instanciate and resolve the proxies if the factory needed to resolved the pathmap protocol is not present then it fails steps to reproduce no reproducible scenario in sirius web yet expected behavior the registered special resrouce factory should be present on the resourceset used to instanciate the uploaded resourceset ,0 238892,7784187614.0,IssuesEvent,2018-06-06 12:32:35,umple/umple,https://api.github.com/repos/umple/umple,closed,Document unspecified in the user manual,Component-UserDocs Diffic-Easy Priority-Medium,"In the state machine section of the user manual, describe the use of the 'unspecified' event to handle situations where a message is received that is not understood. 
Note that this can be used in regular and queued state machines. Come up with a realistic example",1.0,"Document unspecified in the user manual - In the state machine section of the user manual, describe the use of the 'unspecified' event to handle situations where a message is received that is not understood. Note that this can be used in regular and queued state machines. Come up with a realistic example",0,document unspecified in the user manual in the state machine section of the user manual describe the use of the unspecified event to handle situations where a message is received that is not understood note that this can be used in regular and queued state machines come up with a realistic example,0 7233,24490276542.0,IssuesEvent,2022-10-10 00:12:18,astropy/astropy,https://api.github.com/repos/astropy/astropy,closed,Have good commit messages with commitizen,Feature Request needs-discussion dev-automation,"### Description https://commitizen-tools.github.io/commitizen/ + pre-commit to enforce good commit messages. ### Additional context ""Commitizen is a tool designed for teams. Its main purpose is to define a standard way of committing rules and communicating it (using the cli provided by commitizen). The reasoning behind it is that it is easier to read, and enforces writing descriptive commits. Besides that, having a convention on your commits makes it possible to parse them and use them for something else, like generating automatically the version or a changelog."" (https://commitizen-tools.github.io/commitizen/)",1.0,"Have good commit messages with commitizen - ### Description https://commitizen-tools.github.io/commitizen/ + pre-commit to enforce good commit messages. ### Additional context ""Commitizen is a tool designed for teams. Its main purpose is to define a standard way of committing rules and communicating it (using the cli provided by commitizen). The reasoning behind it is that it is easier to read, and enforces writing descriptive commits. 
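For the Umple documentation request above, the behaviour to document boils down to a catch-all transition. The following is a language-neutral sketch in Python, not Umple syntax: any event with no transition defined in the current state is routed to an `unspecified` handler rather than silently dropped.

```python
class StateMachine:
    """Any event without a transition in the current state goes to unspecified()."""

    def __init__(self):
        self.state = "idle"
        self.transitions = {("idle", "start"): "running", ("running", "stop"): "idle"}

    def handle(self, event: str) -> None:
        key = (self.state, event)
        if key in self.transitions:
            self.state = self.transitions[key]
        else:
            self.unspecified(event)  # the catch-all the manual section should document

    def unspecified(self, event: str) -> None:
        print(f"unhandled event {event!r} in state {self.state!r}")

sm = StateMachine()
sm.handle("stop")   # prints: unhandled event 'stop' in state 'idle'
sm.handle("start")  # normal transition to 'running'
```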
Besides that, having a convention on your commits makes it possible to parse them and use them for something else, like generating automatically the version or a changelog."" (https://commitizen-tools.github.io/commitizen/)",1,have good commit messages with commitizen description pre commit to enforce good commit messages additional context commitizen is a tool designed for teams its main purpose is to define a standard way of committing rules and communicating it using the cli provided by commitizen the reasoning behind it is that it is easier to read and enforces writing descriptive commits besides that having a convention on your commits makes it possible to parse them and use them for something else like generating automatically the version or a changelog ,1 6725,7745110548.0,IssuesEvent,2018-05-29 17:16:09,aws/aws-sdk-ruby,https://api.github.com/repos/aws/aws-sdk-ruby,closed,run_instances with tag_specifications field with resource_type: spot-instances-request not supported!,closing-soon-if-no-response service api," Hi, Calling Aws::EC2::Client run_instances method with tag_specification resource type ""spot-instances-request"" stack the following: ``` [type:error] [error_type:Aws::EC2::Errors::InvalidParameterValue][message:""'spot-instances-request' is not a valid taggable resource type for this operation.""] /app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/seahorse/client/plugins/raise_response_errors.rb:15:in `call' /app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/aws-sdk-core/plugins/jsonvalue_converter.rb:20:in `call' /app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/aws-sdk-core/plugins/idempotency_token.rb:18:in `call' /app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/aws-sdk-core/plugins/param_converter.rb:20:in `call' /app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/seahorse/client/plugins/response_target.rb:21:in `call' /app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/seahorse/client/request.rb:70:in `send_request' /app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/seahorse/client/base.rb:207:in `block (2 levels) in define_operation_methods' ``` but in the [documentation](https://docs.aws.amazon.com/sdkforruby/api/Aws/EC2/Client.html) the support type are: ""accepts customer-gateway, dhcp-options, image, instance, internet-gateway, network-acl, network-interface, reserved-instances, route-table, snapshot, **spot-instances-request**, subnet, security-group, volume, vpc, vpn-connection, vpn-gateway"". aws-sdk-core (= 2.11.4) ruby:2.3 ubuntu 16.04 example call: ``` client.run_instances( ... tag_specifications: [{ resource_type: 'spot-instances-request', tags: { name: name } }], instance_market_options: { market_type: ""spot"", spot_options: { spot_instance_type: ""one-time"", instance_interruption_behavior: ""terminate"", }, } ) ```",1.0,"run_instances with tag_specifications field with resource_type: spot-instances-request not supported! 
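To make the commitizen proposal above concrete: the convention it enforces is the Conventional Commits header format, which a pre-commit hook can check with a single regular expression. The pattern below is a simplified illustration, not commitizen's exact rule set.

```python
import re

# Conventional Commits header, roughly what a `cz check` hook enforces.
# Simplified for illustration; commitizen's real pattern is configurable.
HEADER = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)"
    r"(\([\w\-\.]+\))?(!)?: .+"
)

def is_conventional(msg: str) -> bool:
    return bool(HEADER.match(msg.splitlines()[0]))

assert is_conventional("feat(wcs): add TAB parser")
assert not is_conventional("fixed stuff")
```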
- Hi, Calling Aws::EC2::Client run_instances method with tag_specification resource type ""spot-instances-request"" stack the following: ``` [type:error] [error_type:Aws::EC2::Errors::InvalidParameterValue][message:""'spot-instances-request' is not a valid taggable resource type for this operation.""] /app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/seahorse/client/plugins/raise_response_errors.rb:15:in `call' /app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/aws-sdk-core/plugins/jsonvalue_converter.rb:20:in `call' /app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/aws-sdk-core/plugins/idempotency_token.rb:18:in `call' /app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/aws-sdk-core/plugins/param_converter.rb:20:in `call' /app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/seahorse/client/plugins/response_target.rb:21:in `call' /app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/seahorse/client/request.rb:70:in `send_request' /app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/seahorse/client/base.rb:207:in `block (2 levels) in define_operation_methods' ``` but in the [documentation](https://docs.aws.amazon.com/sdkforruby/api/Aws/EC2/Client.html) the support type are: ""accepts customer-gateway, dhcp-options, image, instance, internet-gateway, network-acl, network-interface, reserved-instances, route-table, snapshot, **spot-instances-request**, subnet, security-group, volume, vpc, vpn-connection, vpn-gateway"". aws-sdk-core (= 2.11.4) ruby:2.3 ubuntu 16.04 example call: ``` client.run_instances( ... tag_specifications: [{ resource_type: 'spot-instances-request', tags: { name: name } }], instance_market_options: { market_type: ""spot"", spot_options: { spot_instance_type: ""one-time"", instance_interruption_behavior: ""terminate"", }, } ) ```",0,run instances with tag specifications field with resource type spot instances request not supported hi calling aws client run instances method with tag specification resource type spot instances request stack the following app vendor bundle ruby gems aws sdk core lib seahorse client plugins raise response errors rb in call app vendor bundle ruby gems aws sdk core lib aws sdk core plugins jsonvalue converter rb in call app vendor bundle ruby gems aws sdk core lib aws sdk core plugins idempotency token rb in call app vendor bundle ruby gems aws sdk core lib aws sdk core plugins param converter rb in call app vendor bundle ruby gems aws sdk core lib seahorse client plugins response target rb in call app vendor bundle ruby gems aws sdk core lib seahorse client request rb in send request app vendor bundle ruby gems aws sdk core lib seahorse client base rb in block levels in define operation methods but in the the support type are accepts customer gateway dhcp options image instance internet gateway network acl network interface reserved instances route table snapshot spot instances request subnet security group volume vpc vpn connection vpn gateway aws sdk core ruby ubuntu example call client run instances tag specifications resource type spot instances request tags name name instance market options market type spot spot options spot instance type one time instance interruption behavior terminate ,0 6610,23515552187.0,IssuesEvent,2022-08-18 20:56:28,o3de/o3de,https://api.github.com/repos/o3de/o3de,opened,PhysX Fixed Joint Component returns a memory access violation when getting its Component Property Tree with types,kind/bug needs-triage kind/automation sig/simulation,"**Describe the 
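For comparison with the aws-sdk-ruby call above, the equivalent request in Python's boto3 looks as follows. The AMI ID is a placeholder, and whether `spot-instances-request` is accepted as a taggable resource type depends on the deployed EC2 API version, which is exactly what the issue disputes.

```python
import boto3  # AWS SDK for Python; mirrors the Ruby run_instances call above

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-12345678",  # placeholder
    MinCount=1,
    MaxCount=1,
    # Tag the spot *request* itself, as in the Ruby snippet.
    TagSpecifications=[{
        "ResourceType": "spot-instances-request",
        "Tags": [{"Key": "Name", "Value": "example"}],
    }],
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
```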
bug** When attempting to get the **Component Property Tree** from a **PhysX Fixed Joint Component** a memory access violation is returned **Steps to reproduce** Steps to reproduce the behavior: 1. Create a Python Editor Test that makes a call to get the **Component Property Tree** from the **PhysX Fixed Joint Component**. ``` test_entity = EditorEntity.create_editor_entity(""Test"") test_component = test_entity.add_component(""PhysX Fixed Joint"") print(test_component.get_property_type_visibility()) ``` or ``` test_entity = hydra.Entity(""test"") entity.create_entity(position, [""PhysX Fixed Joint""]) component = test_entity.components[0] print(hydra.get_property_tree(component) ``` 2. Run automation **Expected behavior** A property tree with paths is returned and printed to the stream **Actual behavior** A Read Access Memory exception is returned **Callstack** ``` ``` ",1.0,"PhysX Fixed Joint Component returns a memory access violation when getting its Component Property Tree with types - **Describe the bug** When attempting to get the **Component Property Tree** from a **PhysX Fixed Joint Component** a memory access violation is returned **Steps to reproduce** Steps to reproduce the behavior: 1. Create a Python Editor Test that makes a call to get the **Component Property Tree** from the **PhysX Fixed Joint Component**. ``` test_entity = EditorEntity.create_editor_entity(""Test"") test_component = test_entity.add_component(""PhysX Fixed Joint"") print(test_component.get_property_type_visibility()) ``` or ``` test_entity = hydra.Entity(""test"") entity.create_entity(position, [""PhysX Fixed Joint""]) component = test_entity.components[0] print(hydra.get_property_tree(component) ``` 2. Run automation **Expected behavior** A property tree with paths is returned and printed to the stream **Actual behavior** A Read Access Memory exception is returned **Callstack** ``` ``` ",1,physx fixed joint component returns a memory access violation when getting its component property tree with types describe the bug when attempting to get the component property tree from a physx fixed joint component a memory access violation is returned steps to reproduce steps to reproduce the behavior create a python editor test that makes a call to get the component property tree from the physx fixed joint component test entity editorentity create editor entity test test component test entity add component physx fixed joint print test component get property type visibility or test entity hydra entity test entity create entity position component test entity components print hydra get property tree component run automation expected behavior a property tree with paths is returned and printed to the stream actual behavior a read access memory exception is returned callstack ,1 29267,11738897192.0,IssuesEvent,2020-03-11 16:48:00,QubesOS/qubes-issues,https://api.github.com/repos/QubesOS/qubes-issues,reopened,Mount /rw and /home with nosuid + nodev,C: templates P: default T: enhancement security,"**The problem you're addressing (if any)** When a template has been configured to enforce internal user permissions, malware that gains a temporarily useful privilege escalation may continue as root user indefinitely in AppVMs by setting up executables in /home that have +s SUID bit set. The effect is that an OS patch for the initial vulnerability will not de-privilege malware that exploited it. 
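One incidental defect in the PhysX repro above: the second hydra snippet's last statement is missing its closing parenthesis and would not parse. It presumably should read:

```python
print(hydra.get_property_tree(component))  # closing parenthesis restored
```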
Similarly, the ability to create device node files in /home can permit privilege escalation, and such nodes normally don't belong in /home. **Describe the solution you'd like** Change the /rw and /home entries in /etc/fstab to use the `nosuid` and `nodev` options. This works even with bind mounts. **Where is the value to a user, and who might that user be?** Users who do not want malware to persist indefinitely or easily gain root privileges may remove the 'qubes-core-agent-passwordless-root' package, or reconfigure templates according to the 'vm-sudo' doc or Qubes-VM-hardening. Mounting /rw and /home with `nosuid` + `nodev` bolsters security in such template configurations by giving OS security patches a chance to de-privilege malware. **Relevant [documentation](https://www.qubes-os.org/doc/) you've consulted** https://www.qubes-os.org/doc/vm-sudo/ https://github.com/tasket/Qubes-VM-hardening ",True,"Mount /rw and /home with nosuid + nodev - **The problem you're addressing (if any)** When a template has been configured to enforce internal user permissions, malware that gains a temporarily useful privilege escalation may continue as root user indefinitely in AppVMs by setting up executables in /home that have +s SUID bit set. The effect is that an OS patch for the initial vulnerability will not de-privilege malware that exploited it. Similarly, the ability to create device node files in /home can permit privilege escalation, and such nodes normally don't belong in /home. **Describe the solution you'd like** Change the /rw and /home entries in /etc/fstab to use the `nosuid` and `nodev` options. This works even with bind mounts. **Where is the value to a user, and who might that user be?** Users who do not want malware to persist indefinitely or easily gain root privileges may remove the 'qubes-core-agent-passwordless-root' package, or reconfigure templates according to the 'vm-sudo' doc or Qubes-VM-hardening. Mounting /rw and /home with `nosuid` + `nodev` bolsters security in such template configurations by giving OS security patches a chance to de-privilege malware. 
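If the fstab change proposed above is applied, it can be verified from inside the VM by reading `/proc/mounts`. The check below is my own sketch, not part of the issue.

```python
# Verify that /rw and /home actually carry the proposed options once
# /etc/fstab mounts them with nosuid,nodev. Sketch, Linux-only.
def mount_options(target: str) -> set[str]:
    with open("/proc/mounts") as f:
        for line in f:
            dev, mountpoint, fstype, opts, *_ = line.split()
            if mountpoint == target:
                return set(opts.split(","))
    return set()

for mp in ("/rw", "/home"):
    missing = {"nosuid", "nodev"} - mount_options(mp)
    print(mp, "missing:", missing or "none")
```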
**Relevant [documentation](https://www.qubes-os.org/doc/) you've consulted** https://www.qubes-os.org/doc/vm-sudo/ https://github.com/tasket/Qubes-VM-hardening ",0,mount rw and home with nosuid nodev the problem you re addressing if any when a template has been configured to enforce internal user permissions malware that gains a temporarily useful privilege escalation may continue as root user indefinitely in appvms by setting up executables in home that have s suid bit set the effect is that an os patch for the initial vulnerability will not de privilege malware that exploited it similarly the ability to create device node files in home can permit privilege escalation and such nodes normally don t belong in home describe the solution you d like change the rw and home entries in etc fstab to use the nosuid and nodev options this works even with bind mounts where is the value to a user and who might that user be users who do not want malware to persist indefinitely or easily gain root privileges may remove the qubes core agent passwordless root package or reconfigure templates according to the vm sudo doc or qubes vm hardening mounting rw and home with nosuid nodev bolsters security in such template configurations by giving os security patches a chance to de privilege malware relevant you ve consulted ,0 10065,7078137500.0,IssuesEvent,2018-01-10 01:48:41,tensorflow/tensorflow,https://api.github.com/repos/tensorflow/tensorflow,closed,Cannot show stderr when using Jupyter,type:bug/performance,"Hello, Could you please have a look about this. I am using TF and Jupyter. But what makes me confuse is that the log text cannot be shown in Jupyter output cell (but it output correctly in ipython). I think it is because of the stderr. This issue have been discussed before in #3047. You add several lines to determine whether or not current context is in an interactive environment. However, even if I use Jupyter, the return value of ""sys.flags.interactive"" is still zero. and the logger lever can never be setted to ""info"" and use ""stdout"" instead of ""stderr"". Thanks a lot!",True,"Cannot show stderr when using Jupyter - Hello, Could you please have a look about this. I am using TF and Jupyter. But what makes me confuse is that the log text cannot be shown in Jupyter output cell (but it output correctly in ipython). I think it is because of the stderr. This issue have been discussed before in #3047. You add several lines to determine whether or not current context is in an interactive environment. However, even if I use Jupyter, the return value of ""sys.flags.interactive"" is still zero. and the logger lever can never be setted to ""info"" and use ""stdout"" instead of ""stderr"". 
Thanks a lot!",0,cannot show stderr when using jupyter hello could you please have a look about this i am using tf and jupyter but what makes me confuse is that the log text cannot be shown in jupyter output cell but it output correctly in ipython i think it is because of the stderr this issue have been discussed before in you add several lines to determine whether or not current context is in an interactive environment however even if i use jupyter the return value of sys flags interactive is still zero and the logger lever can never be setted to info and use stdout instead of stderr thanks a lot ,0 45744,2939041074.0,IssuesEvent,2015-07-01 14:24:36,HPI-SWA-Teaching/SWT15-Project-13,https://api.github.com/repos/HPI-SWA-Teaching/SWT15-Project-13,opened,Ausgabe von Rückgabewerten,priority: normal type: bug,"Die Ausgabe von Rückgabewerten sollte per ```printOn:``` passieren, nicht per ```asString```.",1.0,"Ausgabe von Rückgabewerten - Die Ausgabe von Rückgabewerten sollte per ```printOn:``` passieren, nicht per ```asString```.",0,ausgabe von rückgabewerten die ausgabe von rückgabewerten sollte per printon passieren nicht per asstring ,0 4073,15356139065.0,IssuesEvent,2021-03-01 12:01:47,mozilla-mobile/firefox-ios,https://api.github.com/repos/mozilla-mobile/firefox-ios,opened,[XCUITest] Select latest stack Xcode 12.4 and iOS version 14.4 to run the tests in all schemes,eng:automation,"For RunAllXCUITests we are still using iOS 14.0 and due to recent changes with WKWebView app is crashing and around ~100 tests failing. ",1.0,"[XCUITest] Select latest stack Xcode 12.4 and iOS version 14.4 to run the tests in all schemes - For RunAllXCUITests we are still using iOS 14.0 and due to recent changes with WKWebView app is crashing and around ~100 tests failing. ",1, select latest stack xcode and ios version to run the tests in all schemes for runallxcuitests we are still using ios and due to recent changes with wkwebview app is crashing and around tests failing ,1 5838,21391225891.0,IssuesEvent,2022-04-21 07:19:59,mozilla-mobile/firefox-ios,https://api.github.com/repos/mozilla-mobile/firefox-ios,closed,Improve automatic string import to not take changes in unrelated files,eng:automation,"In this PR the automation is changing the package.resolved file by removing one line: https://github.com/mozilla-mobile/firefox-ios/pull/10505/files#diff-6edf4db475d69aa9d1d8c8cc7cba4419a30e16fddfb130b90bf06e2a5b809cb4L142 In this case that's not critical but it could be in case there is a package change. We need to be sure that only `locale.lproj` files are changed ┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FXIOS-4101) ",1.0,"Improve automatic string import to not take changes in unrelated files - In this PR the automation is changing the package.resolved file by removing one line: https://github.com/mozilla-mobile/firefox-ios/pull/10505/files#diff-6edf4db475d69aa9d1d8c8cc7cba4419a30e16fddfb130b90bf06e2a5b809cb4L142 In this case that's not critical but it could be in case there is a package change. 
We need to be sure that only `locale.lproj` files are changed ┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FXIOS-4101) ",1,improve automatic string import to not take changes in unrelated files in this pr the automation is changing the package resolved file by removing one line in this case that s not critical but it could be in case there is a package change we need to be sure that only locale lproj files are changed ┆issue is synchronized with this ,1 5866,21508957367.0,IssuesEvent,2022-04-28 00:47:54,rancher-sandbox/rancher-desktop,https://api.github.com/repos/rancher-sandbox/rancher-desktop,opened,"Change ""restart"" to ""VM restart""",kind/enhancement area/automation,"```console PS C:\Users\Jan\Downloads> rdctl start --container-engine containerd Status: triggering a restart to apply changes. ``` ""Restart"" is alarming because it could mean a reboot of the host, but it really is just a reboot of the VM. The message should make that clear.",1.0,"Change ""restart"" to ""VM restart"" - ```console PS C:\Users\Jan\Downloads> rdctl start --container-engine containerd Status: triggering a restart to apply changes. ``` ""Restart"" is alarming because it could mean a reboot of the host, but it really is just a reboot of the VM. The message should make that clear.",1,change restart to vm restart console ps c users jan downloads rdctl start container engine containerd status triggering a restart to apply changes restart is alarming because it could mean a reboot of the host but it really is just a reboot of the vm the message should make that clear ,1 8684,27172086752.0,IssuesEvent,2023-02-17 20:26:37,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,HTTP Error 423 (Locked) not documented,area:Docs automation:Closed,"Hello, while using the REST API to delete a file I got a response with status 423 (Locked) and body: ``` { ""error"": { ""code"": ""accessDenied"", ""innerError"": { ""date"": ""2018-09-12T06:12:46"", ""request-id"": ""74d9b899-d03e-44c1-8e36-f3c80dc00718"" }, ""message"": ""Lock token does not match existing lock"" } } ``` The error is self-explanatory and the `accessDenied` code is documented; however the [error section](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/rest-api/concepts/errors.md) does not say that the 423 status may be returned by the API.",1.0,"HTTP Error 423 (Locked) not documented - Hello, while using the REST API to delete a file I got a response with status 423 (Locked) and body: ``` { ""error"": { ""code"": ""accessDenied"", ""innerError"": { ""date"": ""2018-09-12T06:12:46"", ""request-id"": ""74d9b899-d03e-44c1-8e36-f3c80dc00718"" }, ""message"": ""Lock token does not match existing lock"" } } ``` The error is self-explanatory and the `accessDenied` code is documented; however the [error section](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/rest-api/concepts/errors.md) does not say that the 423 status may be returned by the API.",1,http error locked not documented hello while using the rest api to delete a file i got a response with status locked and body error code accessdenied innererror date request id message lock token does not match existing lock the error is self explanatory and the accessdenied code is documented however the does not say that the status may be returned by the api ,1 1519,10272574502.0,IssuesEvent,2019-08-23 16:49:07,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,opened,Temporarily disable 
screenshots UI tests til androidx upgrade,eng:automation,"Android Gradle Plugin 3.5.0 requires AndroidX dependencies for testing, but screengrab doesn't work yet with AndroidX. @colintheshots has contributed back a patch (see: https://github.com/fastlane/fastlane/pull/15217). Until Google maintainers pick it up and create a build, we'll need to temporarily disable the screenshots tests. cc: @isabelrios @npark-mozilla see also: https://github.com/mozilla-mobile/fenix/pull/4903 ",1.0,"Temporarily disable screenshots UI tests til androidx upgrade - Android Gradle Plugin 3.5.0 requires AndroidX dependencies for testing, but screengrab doesn't work yet with AndroidX. @colintheshots has contributed back a patch (see: https://github.com/fastlane/fastlane/pull/15217). Until Google maintainers pick it up and create a build, we'll need to temporarily disable the screenshots tests. cc: @isabelrios @npark-mozilla see also: https://github.com/mozilla-mobile/fenix/pull/4903 ",1,temporarily disable screenshots ui tests til androidx upgrade android gradle plugin requires androidx dependencies for testing but screengrab doesn t work yet with androidx colintheshots has contributed back a patch see until google maintainers pick it up and create a build we ll need to temporarily disable the screenshots tests cc isabelrios npark mozilla see also ,1 106526,16682352536.0,IssuesEvent,2021-06-08 02:32:00,vipinsun/TrustID,https://api.github.com/repos/vipinsun/TrustID,opened,WS-2020-0132 (Medium) detected in jsrsasign-7.2.2.tgz,security vulnerability,"## WS-2020-0132 - Medium Severity Vulnerability
Vulnerable Library - jsrsasign-7.2.2.tgz

opensource free pure JavaScript cryptographic library supports RSA/RSAPSS/ECDSA/DSA signing/validation, ASN.1, PKCS#1/5/8 private/public key, X.509 certificate, CRL, OCSP, CMS SignedData, TimeStamp and CAdES and JSON Web Signature(JWS)/Token(JWT)/Key(JWK)

Library home page: https://registry.npmjs.org/jsrsasign/-/jsrsasign-7.2.2.tgz

Path to dependency file: TrustID/trustid-sdk/package.json

Path to vulnerable library: TrustID/trustid-sdk/node_modules/jsrsasign/package.json

Dependency Hierarchy: - fabric-ca-client-2.1.0.tgz (Root Library) - :x: **jsrsasign-7.2.2.tgz** (Vulnerable Library)

Found in HEAD commit: 1c9178c5a1b42520307da1fa7f9b1899276178ed

Found in base branch: master

Vulnerability Details

jsrsasign 4.0.0 through 8.0.12 is vulnerable to side-channel attack.

Publish Date: 2020-06-30

URL: WS-2020-0132

CVSS 3 Score Details (4.6)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: N/A - Attack Complexity: N/A - Privileges Required: N/A - User Interaction: N/A - Scope: N/A - Impact Metrics: - Confidentiality Impact: N/A - Integrity Impact: N/A - Availability Impact: N/A

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/advisories/GHSA-g753-jx37-7xwh

Release Date: 2020-07-14

Fix Resolution: jsrsasign - 8.0.13
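Since jsrsasign arrives here transitively via fabric-ca-client, bumping a direct dependency may not be enough. One hedged option for an npm-based consumer is to force the resolution; the snippet below assumes npm 8.3 or newer (`overrides`), while Yarn projects would use a `resolutions` entry instead:

```
{
  "overrides": {
    "jsrsasign": "8.0.13"
  }
}
```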

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"WS-2020-0132 (Medium) detected in jsrsasign-7.2.2.tgz - ## WS-2020-0132 - Medium Severity Vulnerability
Vulnerable Library - jsrsasign-7.2.2.tgz

opensource free pure JavaScript cryptographic library supports RSA/RSAPSS/ECDSA/DSA signing/validation, ASN.1, PKCS#1/5/8 private/public key, X.509 certificate, CRL, OCSP, CMS SignedData, TimeStamp and CAdES and JSON Web Signature(JWS)/Token(JWT)/Key(JWK)

Library home page: https://registry.npmjs.org/jsrsasign/-/jsrsasign-7.2.2.tgz

Path to dependency file: TrustID/trustid-sdk/package.json

Path to vulnerable library: TrustID/trustid-sdk/node_modules/jsrsasign/package.json

Dependency Hierarchy: - fabric-ca-client-2.1.0.tgz (Root Library) - :x: **jsrsasign-7.2.2.tgz** (Vulnerable Library)

Found in HEAD commit: 1c9178c5a1b42520307da1fa7f9b1899276178ed

Found in base branch: master

Vulnerability Details

jsrsasign 4.0.0 through 8.0.12 is vulnerable to side-channel attack.

Publish Date: 2020-06-30

URL: WS-2020-0132

CVSS 3 Score Details (4.6)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: N/A - Attack Complexity: N/A - Privileges Required: N/A - User Interaction: N/A - Scope: N/A - Impact Metrics: - Confidentiality Impact: N/A - Integrity Impact: N/A - Availability Impact: N/A

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/advisories/GHSA-g753-jx37-7xwh

Release Date: 2020-07-14

Fix Resolution: jsrsasign - 8.0.13

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,ws medium detected in jsrsasign tgz ws medium severity vulnerability vulnerable library jsrsasign tgz opensource free pure javascript cryptographic library supports rsa rsapss ecdsa dsa signing validation asn pkcs private public key x certificate crl ocsp cms signeddata timestamp and cades and json web signature jws token jwt key jwk library home page a href path to dependency file trustid trustid sdk package json path to vulnerable library trustid trustid sdk node modules jsrsasign package json dependency hierarchy fabric ca client tgz root library x jsrsasign tgz vulnerable library found in head commit a href found in base branch master vulnerability details jsrsasign through is vulnerable to side channel attack publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jsrsasign step up your open source security game with whitesource ,0 314323,9595462500.0,IssuesEvent,2019-05-09 16:06:28,carbon-design-system/carbon-components-react,https://api.github.com/repos/carbon-design-system/carbon-components-react,closed,React Tooltip - Accessibility markup cleanup,Severity 3 priority: high status: waiting for author's response type: a11y ♿ type: bug 🐛,"There are some unnecessary attributes in the accessibility-related markup for the React Tooltip. For an overview, please see the screen capture of the markup, below. Please note: - if the user provides visible text content for the bx--tooltip__label, then the button should use `aria-labelledby` to point to its id, and if the user doesn't have visible text, then they need to provide an aria-label for the button - the component should have a sensible default aria-label, like ""Info"" or ""Help"" (not ""tooltip""), and this default should be published in the doc for the aria-label prop (although users should be encouraged to provide their own) Please delete: - title - it is completely unnecessary - aria-owns - this attribute is not used in the tooltip pattern - role=""img"" and aria-label=""tooltip"" on the svg - these are not necessary because the button label overrides the image label (or keep role=""img"" and aria-label[ledby] on the svg and remove aria-label[ledby] from the button) - alt=""tooltip"" on the svg - just delete this - alt is not a valid attribute on svg elements - aria-labelledby on the tooltip div - this is not used in the tooltip pattern Please consider: - try to use a button element instead of `div role=""button"" tabindex=""0""` ... and use onclick instead of handling space and enter keys - feel free to test with [this little test case](https://carmacleod.github.io/playground/tooltip-test.html) before switching to button. It uses a real button with (mostly) bx styles, and it looks ok. ![image](https://user-images.githubusercontent.com/3331913/55920628-8fc23600-5bc7-11e9-8de1-91b239d50323.png) ",1.0,"React Tooltip - Accessibility markup cleanup - There are some unnecessary attributes in the accessibility-related markup for the React Tooltip. For an overview, please see the screen capture of the markup, below. 
Please note: - if the user provides visible text content for the bx--tooltip__label, then the button should use `aria-labelledby` to point to its id, and if the user doesn't have visible text, then they need to provide an aria-label for the button - the component should have a sensible default aria-label, like ""Info"" or ""Help"" (not ""tooltip""), and this default should be published in the doc for the aria-label prop (although users should be encouraged to provide their own) Please delete: - title - it is completely unnecessary - aria-owns - this attribute is not used in the tooltip pattern - role=""img"" and aria-label=""tooltip"" on the svg - these are not necessary because the button label overrides the image label (or keep role=""img"" and aria-label[ledby] on the svg and remove aria-label[ledby] from the button) - alt=""tooltip"" on the svg - just delete this - alt is not a valid attribute on svg elements - aria-labelledby on the tooltip div - this is not used in the tooltip pattern Please consider: - try to use a button element instead of `div role=""button"" tabindex=""0""` ... and use onclick instead of handling space and enter keys - feel free to test with [this little test case](https://carmacleod.github.io/playground/tooltip-test.html) before switching to button. It uses a real button with (mostly) bx styles, and it looks ok. ![image](https://user-images.githubusercontent.com/3331913/55920628-8fc23600-5bc7-11e9-8de1-91b239d50323.png) ",0,react tooltip accessibility markup cleanup there are some unnecessary attributes in the accessibility related markup for the react tooltip for an overview please see the screen capture of the markup below please note if the user provides visible text content for the bx tooltip label then the button should use aria labelledby to point to its id and if the user doesn t have visible text then they need to provide an aria label for the button the component should have a sensible default aria label like info or help not tooltip and this default should be published in the doc for the aria label prop although users should be encouraged to provide their own please delete title it is completely unnecessary aria owns this attribute is not used in the tooltip pattern role img and aria label tooltip on the svg these are not necessary because the button label overrides the image label or keep role img and aria label on the svg and remove aria label from the button alt tooltip on the svg just delete this alt is not a valid attribute on svg elements aria labelledby on the tooltip div this is not used in the tooltip pattern please consider try to use a button element instead of div role button tabindex and use onclick instead of handling space and enter keys feel free to test with before switching to button it uses a real button with mostly bx styles and it looks ok ,0 1120,9534991845.0,IssuesEvent,2019-04-30 04:41:23,askmench/mench-web-app,https://api.github.com/repos/askmench/mench-web-app,opened,Action Plan webview option to change OR answer,Bot/Chat-Automation Inputs/Forms,In the MVP version there is no function to change OR answers once selected by students. We can later build this functionality. ,1.0,Action Plan webview option to change OR answer - In the MVP version there is no function to change OR answers once selected by students. We can later build this functionality. 
,1,action plan webview option to change or answer in the mvp version there is no function to change or answers once selected by students we can later build this functionality ,1 20960,27817510295.0,IssuesEvent,2023-03-18 21:19:10,cse442-at-ub/project_s23-iweatherify,https://api.github.com/repos/cse442-at-ub/project_s23-iweatherify,closed,Save the units and temperature settings to the database,Processing Task Sprint 2,"**Task Tests** *Test 1* 1. Go to the following URL: https://github.com/cse442-at-ub/project_s23-iweatherify/tree/dev 2. Click on the green `<> Code` button and download the ZIP file. ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/75c741f7-5ca3-4f64-879f-df960ad51a8b) 3. Unzip the downloaded file to a folder on your computer. 4. Open a terminal and navigate to the git repository folder using the `cd` command. 5. Run the `npm install` command in the terminal to install the necessary dependencies. 6. Run the `npm start` command in the terminal to start the application. 7. Check the output from the npm start command for the URL to access the application. The URL should be a localhost address (e.g., http://localhost:8080). 8. Navigate to http://localhost:8080/#/login 9. Ensure you have logged in to our app to see the page use UserID: `UB442` and Password:`Myub442@!` to login 10. Go to URL: http://localhost:8080/#/unitsSettings 11. Verify that the units page is displayed ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/3f9cbb63-e582-43f0-91bf-16970daf57eb) 12. Change the temperature unit to Celsius (°C) 13. Change the wind unit to km/h 14. Change the pressure unit to mm 15. Change the distance unit to km 16. Open the browser inspector tool and select console 17. Click the save button 18. You should see the message: `Units saved successfully.` on the page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/ee5e91dc-9854-4474-b931-329a77f89996) 19. You should see the message: `{message: 'User settings saved successfully.'}` in the console ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/f6c50295-ae68-4e31-a536-93bc29bea727) 18. Open a different tab and go to: https://www-student.cse.buffalo.edu/tools/db/phpmyadmin/index.php 19. Input username: `jpan26` and password: `50314999` 20. Make sure the server choice is `oceanus.cse.buffalo.edu:3306` 21. Click go and you should see this page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/443bf59a-f5fd-4dbe-88a8-645493eaa713) 22. Click `cse442_2023_spring_team_a_db` first and then `saved_units` on the left side of the page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/9cb87c0a-e93f-4327-9640-8f6c0478c3a2) 23. Verify you see a row with the exact same information as shown by the picture ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/fefc62d6-7c3f-4da5-948a-49b0de95a56d) *Test 2* 1. Repeat steps 1 to 9 from `Test 1` 2. Go to URL: http://localhost:8080/#/tempSettings 3. Verify that the temperature setting page is displayed ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/1feae072-4846-46ae-8639-9958248e9158) 4. Open the browser inspector tool and select console 5. Change the hot temperature to 80, you can either use the slider or input box and click save 6. You should see the message: `{result: 'success'}` in the console ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/689287f3-8376-4ed4-abd6-ff73816eb604) 7. 
You should see the message: `Temperatures Saved Successfully` on the page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/30bea4ec-67c2-4410-855a-6ee558b521f7) 8. Change the warm temperature to 65, you can either use the slider or input box and click save 9. You should see the message: `{result: 'success'}` in the console ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/689287f3-8376-4ed4-abd6-ff73816eb604) 10. You should see the message: `Temperatures Saved Successfully` on the page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/30bea4ec-67c2-4410-855a-6ee558b521f7) 11. Change the ideal temperature to 50, you can either use the slider or input box and click save 12. You should see the message: `{result: 'success'}` in the console ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/689287f3-8376-4ed4-abd6-ff73816eb604) 13. You should see the message: `Temperatures Saved Successfully` on the page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/30bea4ec-67c2-4410-855a-6ee558b521f7) 14. Change the chilly temperature to 0, you can either use the slider or input box and click save 15. You should see the message: `{result: 'success'}` in the console ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/689287f3-8376-4ed4-abd6-ff73816eb604) 16. You should see the message: `Temperatures Saved Successfully` on the page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/30bea4ec-67c2-4410-855a-6ee558b521f7) 17. Change the cold temperature to -65, you can either use the slider or input box and click save 18. You should see the message: `{result: 'success'}` in the console ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/689287f3-8376-4ed4-abd6-ff73816eb604) 19. You should see the message: `Temperatures Saved Successfully` on the page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/30bea4ec-67c2-4410-855a-6ee558b521f7) 20. Change the freezing temperature to -80, you can either use the slider or input box and click save 21. You should see the message: `{result: 'success'}` in the console ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/689287f3-8376-4ed4-abd6-ff73816eb604) 22. You should see the message: `Temperatures Saved Successfully` on the page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/30bea4ec-67c2-4410-855a-6ee558b521f7) 23. Repeat steps 18 to 21 from `Test 1` 24. Click `cse442_2023_spring_team_a_db` first and then `saved_temperatures` on the left side of the page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/ae9e6ed8-1291-4690-89ab-ce8cb4223843) 25. Verify you see a row with the exact same information as shown by the picture ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/9331cadd-35f2-4ac4-b366-44722b56430e)",1.0,"Save the units and temperature settings to the database - **Task Tests** *Test 1* 1. Go to the following URL: https://github.com/cse442-at-ub/project_s23-iweatherify/tree/dev 2. Click on the green `<> Code` button and download the ZIP file. ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/75c741f7-5ca3-4f64-879f-df960ad51a8b) 3. Unzip the downloaded file to a folder on your computer. 4. Open a terminal and navigate to the git repository folder using the `cd` command. 5. 
Run the `npm install` command in the terminal to install the necessary dependencies. 6. Run the `npm start` command in the terminal to start the application. 7. Check the output from the npm start command for the URL to access the application. The URL should be a localhost address (e.g., http://localhost:8080). 8. Navigate to http://localhost:8080/#/login 9. Ensure you have logged in to our app to see the page use UserID: `UB442` and Password:`Myub442@!` to login 10. Go to URL: http://localhost:8080/#/unitsSettings 11. Verify that the units page is displayed ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/3f9cbb63-e582-43f0-91bf-16970daf57eb) 12. Change the temperature unit to Celsius (°C) 13. Change the wind unit to km/h 14. Change the pressure unit to mm 15. Change the distance unit to km 16. Open the browser inspector tool and select console 17. Click the save button 18. You should see the message: `Units saved successfully.` on the page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/ee5e91dc-9854-4474-b931-329a77f89996) 19. You should see the message: `{message: 'User settings saved successfully.'}` in the console ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/f6c50295-ae68-4e31-a536-93bc29bea727) 18. Open a different tab and go to: https://www-student.cse.buffalo.edu/tools/db/phpmyadmin/index.php 19. Input username: `jpan26` and password: `50314999` 20. Make sure the server choice is `oceanus.cse.buffalo.edu:3306` 21. Click go and you should see this page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/443bf59a-f5fd-4dbe-88a8-645493eaa713) 22. Click `cse442_2023_spring_team_a_db` first and then `saved_units` on the left side of the page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/9cb87c0a-e93f-4327-9640-8f6c0478c3a2) 23. Verify you see a row with the exact same information as shown by the picture ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/fefc62d6-7c3f-4da5-948a-49b0de95a56d) *Test 2* 1. Repeat steps 1 to 9 from `Test 1` 2. Go to URL: http://localhost:8080/#/tempSettings 3. Verify that the temperature setting page is displayed ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/1feae072-4846-46ae-8639-9958248e9158) 4. Open the browser inspector tool and select console 5. Change the hot temperature to 80, you can either use the slider or input box and click save 6. You should see the message: `{result: 'success'}` in the console ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/689287f3-8376-4ed4-abd6-ff73816eb604) 7. You should see the message: `Temperatures Saved Successfully` on the page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/30bea4ec-67c2-4410-855a-6ee558b521f7) 8. Change the warm temperature to 65, you can either use the slider or input box and click save 9. You should see the message: `{result: 'success'}` in the console ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/689287f3-8376-4ed4-abd6-ff73816eb604) 10. You should see the message: `Temperatures Saved Successfully` on the page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/30bea4ec-67c2-4410-855a-6ee558b521f7) 11. Change the ideal temperature to 50, you can either use the slider or input box and click save 12. 
You should see the message: `{result: 'success'}` in the console ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/689287f3-8376-4ed4-abd6-ff73816eb604) 13. You should see the message: `Temperatures Saved Successfully` on the page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/30bea4ec-67c2-4410-855a-6ee558b521f7) 14. Change the chilly temperature to 0, you can either use the slider or input box and click save 15. You should see the message: `{result: 'success'}` in the console ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/689287f3-8376-4ed4-abd6-ff73816eb604) 16. You should see the message: `Temperatures Saved Successfully` on the page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/30bea4ec-67c2-4410-855a-6ee558b521f7) 17. Change the cold temperature to -65, you can either use the slider or input box and click save 18. You should see the message: `{result: 'success'}` in the console ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/689287f3-8376-4ed4-abd6-ff73816eb604) 19. You should see the message: `Temperatures Saved Successfully` on the page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/30bea4ec-67c2-4410-855a-6ee558b521f7) 20. Change the freezing temperature to -80, you can either use the slider or input box and click save 21. You should see the message: `{result: 'success'}` in the console ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/689287f3-8376-4ed4-abd6-ff73816eb604) 22. You should see the message: `Temperatures Saved Successfully` on the page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/30bea4ec-67c2-4410-855a-6ee558b521f7) 23. Repeat steps 18 to 21 from `Test 1` 24. Click `cse442_2023_spring_team_a_db` first and then `saved_temperatures` on the left side of the page ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/ae9e6ed8-1291-4690-89ab-ce8cb4223843) 25. 
Verify you see a row with the exact same information as shown by the picture ![image.png](https://images.zenhubusercontent.com/63e1796387907702186b8c6a/9331cadd-35f2-4ac4-b366-44722b56430e)",0,save the units and temperature settings to the database task tests test go to the following url click on the green code button and download the zip file unzip the downloaded file to a folder on your computer open a terminal and navigate to the git repository folder using the cd command run the npm install command in the terminal to install the necessary dependencies run the npm start command in the terminal to start the application check the output from the npm start command for the url to access the application the url should be a localhost address e g navigate to ensure you have logged in to our app to see the page use userid and password to login go to url verify that the units page is displayed change the temperature unit to celsius °c change the wind unit to km h change the pressure unit to mm change the distance unit to km open the browser inspector tool and select console click the save button you should see the message units saved successfully on the page you should see the message message user settings saved successfully in the console open a different tab and go to input username and password make sure the server choice is oceanus cse buffalo edu click go and you should see this page click spring team a db first and then saved units on the left side of the page verify you see a row with the exact same information as shown by the picture test repeat steps to from test go to url verify that the temperature setting page is displayed open the browser inspector tool and select console change the hot temperature to you can either use the slider or input box and click save you should see the message result success in the console you should see the message temperatures saved successfully on the page change the warm temperature to you can either use the slider or input box and click save you should see the message result success in the console you should see the message temperatures saved successfully on the page change the ideal temperature to you can either use the slider or input box and click save you should see the message result success in the console you should see the message temperatures saved successfully on the page change the chilly temperature to you can either use the slider or input box and click save you should see the message result success in the console you should see the message temperatures saved successfully on the page change the cold temperature to you can either use the slider or input box and click save you should see the message result success in the console you should see the message temperatures saved successfully on the page change the freezing temperature to you can either use the slider or input box and click save you should see the message result success in the console you should see the message temperatures saved successfully on the page repeat steps to from test click spring team a db first and then saved temperatures on the left side of the page verify you see a row with the exact same information as shown by the picture ,0 2081,11360349944.0,IssuesEvent,2020-01-26 05:56:51,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,closed,`scopeQuery` does not filter down repository results,automation bug customer,"Reported by https://app.hubspot.com/contacts/2762526/contact/17877751 that the `scopeQuery` for a8n campaigns was not matching the 
number of search results when using `repohasfile` filter. [Slack thread](https://sourcegraph.slack.com/archives/CMMTWQQ49/p1579711776061700) notes: > Ohh, boy. I think I've got it: I need to call `.Results()` again on the `results` here: https://github.com/sourcegraph/sourcegraph/blob/11b5ebbe3458c01d9d35a766fbfa4b07b3472be6/cmd/frontend/graphqlbackend/search.go#L756-L760 ",1.0,"`scopeQuery` does not filter down repository results - Reported by https://app.hubspot.com/contacts/2762526/contact/17877751 that the `scopeQuery` for a8n campaigns was not matching the number of search results when using `repohasfile` filter. [Slack thread](https://sourcegraph.slack.com/archives/CMMTWQQ49/p1579711776061700) notes: > Ohh, boy. I think I've got it: I need to call `.Results()` again on the `results` here: https://github.com/sourcegraph/sourcegraph/blob/11b5ebbe3458c01d9d35a766fbfa4b07b3472be6/cmd/frontend/graphqlbackend/search.go#L756-L760 ",1, scopequery does not filter down repository results reported by that the scopequery for campaigns was not matching the number of search results when using repohasfile filter notes ohh boy i think i ve got it i need to call results again on the results here ,1 4255,15887660844.0,IssuesEvent,2021-04-10 03:30:11,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,closed,"[Automation API] Unintuitive behavior with relative paths (FileAsset, closure serialization) and inline programs",area/automation-api kind/enhancement language/go language/javascript resolution/fixed,"Automation API inline programs create a temporary working directory (unless directly specified) to store pulumi.yaml and to invoke the CLI. This breaks any relative path references the user may define in their inline program: ```ts const pulumiProgram = () async => { // at runtime this actually resolves to $AUTOMATION_API_TEMP/build const relative = ""./build"" } ``` We have a few options: 1. Change Automation API to use `.` as the current working dir. This preserves relative paths, but has the downside of leaking files like `pulumi.yaml` and `pulumi.stack.yaml` which the user might not care about or might find confusing to see generated out of nowhere. 2. Document this behavior and leave it as is. encourage users to only use absolute paths with inline programs. 3. Do some sort of deeper fix where we specify the temp directory to be used only for settings (yaml files), and use `.` as the CWD for program execution. ",1.0,"[Automation API] Unintuitive behavior with relative paths (FileAsset, closure serialization) and inline programs - Automation API inline programs create a temporary working directory (unless directly specified) to store pulumi.yaml and to invoke the CLI. This breaks any relative path references the user may define in their inline program: ```ts const pulumiProgram = () async => { // at runtime this actually resolves to $AUTOMATION_API_TEMP/build const relative = ""./build"" } ``` We have a few options: 1. Change Automation API to use `.` as the current working dir. This preserves relative paths, but has the downside of leaking files like `pulumi.yaml` and `pulumi.stack.yaml` which the user might not care about or might find confusing to see generated out of nowhere. 2. Document this behavior and leave it as is. encourage users to only use absolute paths with inline programs. 3. Do some sort of deeper fix where we specify the temp directory to be used only for settings (yaml files), and use `.` as the CWD for program execution. 
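For the Automation API issue above, a common user-side workaround (independent of which option the maintainers pick) is to anchor paths to the program's source file rather than to the process working directory. The sketch below shows the pattern in Python; the issue's own snippet is TypeScript, where the equivalent idea is `path.join(__dirname, "build")`. The `build` directory name is reused from the example above.

```python
import os

# Resolve assets relative to this source file, not os.getcwd(), so the
# program still finds ./build when the engine runs it from a temp dir.
HERE = os.path.dirname(os.path.abspath(__file__))

def asset_path(relative: str) -> str:
    return os.path.join(HERE, relative)

print(asset_path("build"))  # stable no matter where the CLI was invoked
```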
",1, unintuitive behavior with relative paths fileasset closure serialization and inline programs automation api inline programs create a temporary working directory unless directly specified to store pulumi yaml and to invoke the cli this breaks any relative path references the user may define in their inline program ts const pulumiprogram async at runtime this actually resolves to automation api temp build const relative build we have a few options change automation api to use as the current working dir this preserves relative paths but has the downside of leaking files like pulumi yaml and pulumi stack yaml which the user might not care about or might find confusing to see generated out of nowhere document this behavior and leave it as is encourage users to only use absolute paths with inline programs do some sort of deeper fix where we specify the temp directory to be used only for settings yaml files and use as the cwd for program execution ,1 288516,31861429545.0,IssuesEvent,2023-09-15 11:12:55,nidhi7598/linux-v4.19.72_CVE-2022-3564,https://api.github.com/repos/nidhi7598/linux-v4.19.72_CVE-2022-3564,opened,"CVE-2022-3565 (High) detected in linuxlinux-4.19.294, linuxlinux-4.19.294",Mend: dependency security vulnerability,"## CVE-2022-3565 - High Severity Vulnerability
Vulnerable Libraries - linuxlinux-4.19.294, linuxlinux-4.19.294

Vulnerability Details

A vulnerability, which was classified as critical, has been found in Linux Kernel. Affected by this issue is the function del_timer of the file drivers/isdn/mISDN/l1oip_core.c of the component Bluetooth. The manipulation leads to use after free. It is recommended to apply a patch to fix this issue. The identifier of this vulnerability is VDB-211088.

Publish Date: 2022-10-17

URL: CVE-2022-3565

CVSS 3 Score Details (7.8)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://www.linuxkernelcves.com/cves/CVE-2022-3565

Release Date: 2022-10-17

Fix Resolution: v4.9.331,v4.14.296,v4.19.262,v5.4.220,v5.10.150,v5.15.75,v6.0.3
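A rough triage helper for mapping a running kernel onto the fixed releases listed above. Treat it as a hedged sketch: distribution kernels backport fixes without bumping the upstream version number, so a pure version comparison can report false negatives.

```python
import platform
import re
from typing import Optional

# Fixed versions per stable series, copied from the advisory above.
FIXED = {
    (4, 9): (4, 9, 331), (4, 14): (4, 14, 296), (4, 19): (4, 19, 262),
    (5, 4): (5, 4, 220), (5, 10): (5, 10, 150), (5, 15): (5, 15, 75),
    (6, 0): (6, 0, 3),
}

def kernel_at_fixed_level(release: str) -> Optional[bool]:
    m = re.match(r"(\d+)\.(\d+)\.(\d+)", release)
    if not m:
        return None  # unparseable release string
    ver = tuple(int(g) for g in m.groups())
    fixed = FIXED.get(ver[:2])
    return None if fixed is None else ver >= fixed

print(platform.release(), kernel_at_fixed_level(platform.release()))
```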

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-3565 (High) detected in linuxlinux-4.19.294, linuxlinux-4.19.294 - ## CVE-2022-3565 - High Severity Vulnerability
Vulnerable Libraries - linuxlinux-4.19.294, linuxlinux-4.19.294

Vulnerability Details

A vulnerability, which was classified as critical, has been found in Linux Kernel. Affected by this issue is the function del_timer of the file drivers/isdn/mISDN/l1oip_core.c of the component Bluetooth. The manipulation leads to use after free. It is recommended to apply a patch to fix this issue. The identifier of this vulnerability is VDB-211088.

Publish Date: 2022-10-17

URL: CVE-2022-3565

CVSS 3 Score Details (7.8)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://www.linuxkernelcves.com/cves/CVE-2022-3565

Release Date: 2022-10-17

Fix Resolution: v4.9.331,v4.14.296,v4.19.262,v5.4.220,v5.10.150,v5.15.75,v6.0.3

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in linuxlinux linuxlinux cve high severity vulnerability vulnerable libraries linuxlinux linuxlinux vulnerability details a vulnerability which was classified as critical has been found in linux kernel affected by this issue is the function del timer of the file drivers isdn misdn core c of the component bluetooth the manipulation leads to use after free it is recommended to apply a patch to fix this issue the identifier of this vulnerability is vdb publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0 5357,19295477955.0,IssuesEvent,2021-12-12 14:15:31,Azure/PSRule.Rules.Azure,https://api.github.com/repos/Azure/PSRule.Rules.Azure,closed,Automation accounts should enable diagnostic logs,rule: automation-account,"# Rule request Automation accounts should enable the following diagnostic logs: - JobLogs - JobStreams - DSCNodeStatus - Metrics ## Applies to the following The rule applies to the following: - Resource type: **Microsoft.Automation/automationAccounts** ## Additional context [Template reference](https://docs.microsoft.com/en-us/azure/templates/microsoft.automation/automationaccounts?tabs=bicep) ",1.0,"Automation accounts should enable diagnostic logs - # Rule request Automation accounts should enable the following diagnostic logs: - JobLogs - JobStreams - DSCNodeStatus - Metrics ## Applies to the following The rule applies to the following: - Resource type: **Microsoft.Automation/automationAccounts** ## Additional context [Template reference](https://docs.microsoft.com/en-us/azure/templates/microsoft.automation/automationaccounts?tabs=bicep) ",1,automation accounts should enable diagnostic logs rule request automation accounts should enable the following diagnostic logs joblogs jobstreams dscnodestatus metrics applies to the following the rule applies to the following resource type microsoft automation automationaccounts additional context ,1 1047,9257177564.0,IssuesEvent,2019-03-17 03:00:32,askmench/mench-web-app,https://api.github.com/repos/askmench/mench-web-app,opened,Ping for Status Level-Up,Bot/Chat-Automation Communication Tool Team Communication,"Since a single person would not be able to publish content on Mench as everyone would require at-least 1 other person to review/iterate their work, we can add a feature to ""Ping"" another miner to a particular intent/entity to have it published. It's a communication tool that would help miners use each other's help to publish content to Mench. Workflow: 1. Miner mines intents/messages they want to mine 2. Once they feel ready for review, they would ping either a specific miner or ""any"" miner and Mench personal assistant would send a message to miners to notify them about the intent/message that needs attention 3. The second miner loads up the intent, does the review, iterates the content if needed, and then changes the status by 1 level. 
Note that there is a special permission called ""[Double Status Level-Up](https://mench.com/entities/6084)"" which if set to a miner, would allow them to change the status of New to Published if they deem the intent is ready to go live. They can also choose to level-up by one (new to drafting) and then have some other miner (could be the original author again) do another review/iteration and then level-up from drafting to published. The idea is to create a step-by-step inter-dependancy workflow designed around the principles of collaboration. @grumo Thoughts?",1.0,"Ping for Status Level-Up - Since a single person would not be able to publish content on Mench as everyone would require at-least 1 other person to review/iterate their work, we can add a feature to ""Ping"" another miner to a particular intent/entity to have it published. It's a communication tool that would help miners use each other's help to publish content to Mench. Workflow: 1. Miner mines intents/messages they want to mine 2. Once they feel ready for review, they would ping either a specific miner or ""any"" miner and Mench personal assistant would send a message to miners to notify them about the intent/message that needs attention 3. The second miner loads up the intent, does the review, iterates the content if needed, and then changes the status by 1 level. Note that there is a special permission called ""[Double Status Level-Up](https://mench.com/entities/6084)"" which if set to a miner, would allow them to change the status of New to Published if they deem the intent is ready to go live. They can also choose to level-up by one (new to drafting) and then have some other miner (could be the original author again) do another review/iteration and then level-up from drafting to published. The idea is to create a step-by-step inter-dependancy workflow designed around the principles of collaboration. @grumo Thoughts?",1,ping for status level up since a single person would not be able to publish content on mench as everyone would require at least other person to review iterate their work we can add a feature to ping another miner to a particular intent entity to have it published it s a communication tool that would help miners use each other s help to publish content to mench workflow miner mines intents messages they want to mine once they feel ready for review they would ping either a specific miner or any miner and mench personal assistant would send a message to miners to notify them about the intent message that needs attention the second miner loads up the intent does the review iterates the content if needed and then changes the status by level note that there is a special permission called which if set to a miner would allow them to change the status of new to published if they deem the intent is ready to go live they can also choose to level up by one new to drafting and then have some other miner could be the original author again do another review iteration and then level up from drafting to published the idea is to create a step by step inter dependancy workflow designed around the principles of collaboration grumo thoughts ,1 739268,25588465743.0,IssuesEvent,2022-12-01 11:07:16,markmcsherry/testproj,https://api.github.com/repos/markmcsherry/testproj,opened,US - Uber Feature,type:user-story :moneybag: priority:2,"**As a _persona_ I want to _do something_ so that I can _achieve some benefit_** ### Description Who what where & why... And a few more details... 
--- ### Design What's it going to look like ### Acceptance Criteria ... ### Notes --- ### Tasks - [ ] ",1.0,"US - Uber Feature - **As a _persona_ I want to _do something_ so that I can _achieve some benefit_** ### Description Who what where & why... And a few more details... --- ### Design What's it going to look like ### Acceptance Criteria ... ### Notes --- ### Tasks - [ ] ",0,us uber feature as a persona i want to do something so that i can achieve some benefit description who what where why and a few more details design what s it going to look like acceptance criteria notes tasks ,0 62086,6775884543.0,IssuesEvent,2017-10-27 15:44:19,apache/incubator-openwhisk-wskdeploy,https://api.github.com/repos/apache/incubator-openwhisk-wskdeploy,closed,WIP: Enable Action Limits unit test,priority: high tests: unit,"At some point, the unit test for testing Action Limits within **parsers/manifest_parser_test.go** called ""_TestComposeActionsForLimits_"" was commented out with a **TODO**: _""uncomment this test case after issue # 312 is fixed""_ Issue 312 has been closed and merged via PR 556, yet this test remains commented out. - https://github.com/apache/incubator-openwhisk-wskdeploy/issues/312 - https://github.com/apache/incubator-openwhisk-wskdeploy/pull/556 which confuses me further, as PR 556 added the testcase that was commented out. We need a working unit test AND figure out when/why it was commented out ",1.0,"WIP: Enable Action Limits unit test - At some point, the unit test for testing Action Limits within **parsers/manifest_parser_test.go** called ""_TestComposeActionsForLimits_"" was commented out with a **TODO**: _""uncomment this test case after issue # 312 is fixed""_ Issue 312 has been closed and merged via PR 556, yet this test remains commented out. - https://github.com/apache/incubator-openwhisk-wskdeploy/issues/312 - https://github.com/apache/incubator-openwhisk-wskdeploy/pull/556 which confuses me further, as PR 556 added the testcase that was commented out. 
We need a working unit test AND figure out when/why it was commented out ",0,wip enable action limits unit test at some point the unit test for testing action limits within parsers manifest parser test go called testcomposeactionsforlimits was commented out with a todo uncomment this test case after issue is fixed issue has been closed and merged via pr yet this test remains commented out which confuses me further as pr added the testcase that was commented out we need a working unit test and figure out when why it was commented out ,0 9888,30707273794.0,IssuesEvent,2023-07-27 07:17:22,red-hat-storage/ocs-ci,https://api.github.com/repos/red-hat-storage/ocs-ci,opened,rename System Capacity to System capacity on UI,ui_automation,"new Header fails the test test_dashboard_validation_ui on 4.14 and 4.13 Pay attention on other headers, the second word of the header starts from lower case letter Failure: https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/557/13044/596923/597228/597230/log http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j-291ai3c333-t1/j-291ai3c333-t1_20230725T134831/logs/ui_logs_dir_1690296996/screenshots_ui/test_dashboard_validation_ui/2023-07-26T06-02-46.842175.png",1.0,"rename System Capacity to System capacity on UI - new Header fails the test test_dashboard_validation_ui on 4.14 and 4.13 Pay attention on other headers, the second word of the header starts from lower case letter Failure: https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/557/13044/596923/597228/597230/log http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j-291ai3c333-t1/j-291ai3c333-t1_20230725T134831/logs/ui_logs_dir_1690296996/screenshots_ui/test_dashboard_validation_ui/2023-07-26T06-02-46.842175.png",1,rename system capacity to system capacity on ui new header fails the test test dashboard validation ui on and pay attention on other headers the second word of the header starts from lower case letter failure ,1 277055,30602418261.0,IssuesEvent,2023-07-22 15:04:04,TolMen/Project5_OC_Blog,https://api.github.com/repos/TolMen/Project5_OC_Blog,opened,[#8] Checking for security vulnerabilities,bug security,"Perform security testing to ensure there are no security vulnerabilities (XSS, CSRF, SQL Injection, etc.) in the blog",True,"[#8] Checking for security vulnerabilities - Perform security testing to ensure there are no security vulnerabilities (XSS, CSRF, SQL Injection, etc.) 
in the blog",0, checking for security vulnerabilities perform security testing to ensure there are no security vulnerabilities xss csrf sql injection etc in the blog,0 32833,15684464570.0,IssuesEvent,2021-03-25 10:01:59,PrehistoricKingdom/feedback,https://api.github.com/repos/PrehistoricKingdom/feedback,closed,PK start loading bug,duplicate performance saving-loading,when the loading bar is loading it goes back and then the game will keep loading then stop and crash or when i get in to the game and its fine but then will crash for no reason,True,PK start loading bug - when the loading bar is loading it goes back and then the game will keep loading then stop and crash or when i get in to the game and its fine but then will crash for no reason,0,pk start loading bug when the loading bar is loading it goes back and then the game will keep loading then stop and crash or when i get in to the game and its fine but then will crash for no reason,0 8096,26170420078.0,IssuesEvent,2023-01-01 21:08:47,tm24fan8/Home-Assistant-Configs,https://api.github.com/repos/tm24fan8/Home-Assistant-Configs,opened,Continue making Holiday Mode more versatile,enhancement lighting presence detection automation,"Need to support more holidays, right now it's mainly set up for Christmas.",1.0,"Continue making Holiday Mode more versatile - Need to support more holidays, right now it's mainly set up for Christmas.",1,continue making holiday mode more versatile need to support more holidays right now it s mainly set up for christmas ,1 637146,20622013073.0,IssuesEvent,2022-03-07 18:21:15,grpc/grpc,https://api.github.com/repos/grpc/grpc,opened,ruby macos build flake - /bin/sh: /bin/sh: cannot execute binary file,kind/bug priority/P2,"``` [C] Compiling src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.c mkdir -p `dirname /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.o` clang -fdeclspec -Ithird_party/boringssl-with-bazel/src/include -Ithird_party/address_sorting/include -Ithird_party/cares/cares/include -Ithird_party/cares -Ithird_party/cares/cares -DGPR_BACKWARDS_COMPATIBILITY_MODE -DGRPC_XDS_USER_AGENT_NAME_SUFFIX=""\""RUBY\"""" -DGRPC_XDS_USER_AGENT_VERSION_SUFFIX=""\""1.45.0.dev\"""" -g -Wall -Wextra -DOSATOMIC_USE_INLINED=1 -Ithird_party/abseil-cpp -Ithird_party/re2 -Ithird_party/upb -Isrc/core/ext/upb-generated -Isrc/core/ext/upbdefs-generated -Ithird_party/xxhash -O2 -Wframe-larger-than=16384 -fPIC -I. 
-Iinclude -I/Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/gens -I/usr/local/include -DNDEBUG -DINSTALL_PREFIX=\""/usr/local\"" -arch i386 -arch x86_64 -Ithird_party/zlib -std=c99 -Wextra-semi -g -MMD -MF /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.dep -c -o /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.o src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.c [C] Compiling src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.c /bin/sh: /bin/sh: cannot execute binary file mkdir -p `dirname /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.o` make: *** [/Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.o] Error 126 make: *** Waiting for unfinished jobs.... clang -fdeclspec -Ithird_party/boringssl-with-bazel/src/include -Ithird_party/address_sorting/include -Ithird_party/cares/cares/include -Ithird_party/cares -Ithird_party/cares/cares -DGPR_BACKWARDS_COMPATIBILITY_MODE -DGRPC_XDS_USER_AGENT_NAME_SUFFIX=""\""RUBY\"""" -DGRPC_XDS_USER_AGENT_VERSION_SUFFIX=""\""1.45.0.dev\"""" -g -Wall -Wextra -DOSATOMIC_USE_INLINED=1 -Ithird_party/abseil-cpp -Ithird_party/re2 -Ithird_party/upb -Isrc/core/ext/upb-generated -Isrc/core/ext/upbdefs-generated -Ithird_party/xxhash -O2 -Wframe-larger-than=16384 -fPIC -I. -Iinclude -I/Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/gens -I/usr/local/include -DNDEBUG -DINSTALL_PREFIX=\""/usr/local\"" -arch i386 -arch x86_64 -Ithird_party/zlib -std=c99 -Wextra-semi -g -MMD -MF /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.dep -c -o /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.o src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.c *** ../../../../src/ruby/ext/grpc/extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=../../../../src/ruby/ext/grpc --curdir --ruby=/Users/kbuilder/.rake-compiler/ruby/x86_64-darwin11/ruby-3.0.0/bin/$(RUBY_BASE_NAME) rake aborted! Command failed with status (1): [/Users/kbuilder/.rvm/rubies/ruby-2.5.0/bin...] 
/Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/bundle_local_gems/ruby/2.5.0/gems/rake-compiler-1.1.1/lib/rake/extensiontask.rb:206:in `block (2 levels) in define_compile_tasks' /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/bundle_local_gems/ruby/2.5.0/gems/rake-compiler-1.1.1/lib/rake/extensiontask.rb:203:in `block in define_compile_tasks' /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/bundle_local_gems/ruby/2.5.0/gems/rake-13.0.6/exe/rake:27:in `' /Users/kbuilder/.rvm/gems/ruby-2.5.0/bin/bundle:25:in `load' /Users/kbuilder/.rvm/gems/ruby-2.5.0/bin/bundle:25:in `
' Tasks: TOP => native => native:universal-darwin => native:grpc:universal-darwin => tmp/universal-darwin/stage/src/ruby/lib/grpc/3.0/grpc_c.bundle => copy:grpc_c:universal-darwin:3.0.0 => tmp/universal-darwin/grpc_c/3.0.0/grpc_c.bundle => tmp/universal-darwin/grpc_c/3.0.0/Makefile (See full trace by running task with --trace) + '[' Darwin == Darwin ']' ++ ls 'pkg/*.gem' ++ grep -v darwin ls: pkg/*.gem: No such file or directory + rm usage: rm [-f | -i] [-dPRrvW] file ... unlink file ``` https://source.cloud.google.com/results/invocations/a5266702-9573-493b-a946-2f0a779cbd1e/targets/grpc%2Fcore%2Fpull_request%2Fmacos%2Fgrpc_build_artifacts/log",1.0,"ruby macos build flake - /bin/sh: /bin/sh: cannot execute binary file - ``` [C] Compiling src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.c mkdir -p `dirname /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.o` clang -fdeclspec -Ithird_party/boringssl-with-bazel/src/include -Ithird_party/address_sorting/include -Ithird_party/cares/cares/include -Ithird_party/cares -Ithird_party/cares/cares -DGPR_BACKWARDS_COMPATIBILITY_MODE -DGRPC_XDS_USER_AGENT_NAME_SUFFIX=""\""RUBY\"""" -DGRPC_XDS_USER_AGENT_VERSION_SUFFIX=""\""1.45.0.dev\"""" -g -Wall -Wextra -DOSATOMIC_USE_INLINED=1 -Ithird_party/abseil-cpp -Ithird_party/re2 -Ithird_party/upb -Isrc/core/ext/upb-generated -Isrc/core/ext/upbdefs-generated -Ithird_party/xxhash -O2 -Wframe-larger-than=16384 -fPIC -I. -Iinclude -I/Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/gens -I/usr/local/include -DNDEBUG -DINSTALL_PREFIX=\""/usr/local\"" -arch i386 -arch x86_64 -Ithird_party/zlib -std=c99 -Wextra-semi -g -MMD -MF /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.dep -c -o /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.o src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.c [C] Compiling src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.c /bin/sh: /bin/sh: cannot execute binary file mkdir -p `dirname /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.o` make: *** [/Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.o] Error 126 make: *** Waiting for unfinished jobs.... clang -fdeclspec -Ithird_party/boringssl-with-bazel/src/include -Ithird_party/address_sorting/include -Ithird_party/cares/cares/include -Ithird_party/cares -Ithird_party/cares/cares -DGPR_BACKWARDS_COMPATIBILITY_MODE -DGRPC_XDS_USER_AGENT_NAME_SUFFIX=""\""RUBY\"""" -DGRPC_XDS_USER_AGENT_VERSION_SUFFIX=""\""1.45.0.dev\"""" -g -Wall -Wextra -DOSATOMIC_USE_INLINED=1 -Ithird_party/abseil-cpp -Ithird_party/re2 -Ithird_party/upb -Isrc/core/ext/upb-generated -Isrc/core/ext/upbdefs-generated -Ithird_party/xxhash -O2 -Wframe-larger-than=16384 -fPIC -I. 
-Iinclude -I/Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/gens -I/usr/local/include -DNDEBUG -DINSTALL_PREFIX=\""/usr/local\"" -arch i386 -arch x86_64 -Ithird_party/zlib -std=c99 -Wextra-semi -g -MMD -MF /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.dep -c -o /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.o src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.c *** ../../../../src/ruby/ext/grpc/extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=../../../../src/ruby/ext/grpc --curdir --ruby=/Users/kbuilder/.rake-compiler/ruby/x86_64-darwin11/ruby-3.0.0/bin/$(RUBY_BASE_NAME) rake aborted! Command failed with status (1): [/Users/kbuilder/.rvm/rubies/ruby-2.5.0/bin...] /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/bundle_local_gems/ruby/2.5.0/gems/rake-compiler-1.1.1/lib/rake/extensiontask.rb:206:in `block (2 levels) in define_compile_tasks' /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/bundle_local_gems/ruby/2.5.0/gems/rake-compiler-1.1.1/lib/rake/extensiontask.rb:203:in `block in define_compile_tasks' /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/bundle_local_gems/ruby/2.5.0/gems/rake-13.0.6/exe/rake:27:in `' /Users/kbuilder/.rvm/gems/ruby-2.5.0/bin/bundle:25:in `load' /Users/kbuilder/.rvm/gems/ruby-2.5.0/bin/bundle:25:in `
' Tasks: TOP => native => native:universal-darwin => native:grpc:universal-darwin => tmp/universal-darwin/stage/src/ruby/lib/grpc/3.0/grpc_c.bundle => copy:grpc_c:universal-darwin:3.0.0 => tmp/universal-darwin/grpc_c/3.0.0/grpc_c.bundle => tmp/universal-darwin/grpc_c/3.0.0/Makefile (See full trace by running task with --trace) + '[' Darwin == Darwin ']' ++ ls 'pkg/*.gem' ++ grep -v darwin ls: pkg/*.gem: No such file or directory + rm usage: rm [-f | -i] [-dPRrvW] file ... unlink file ``` https://source.cloud.google.com/results/invocations/a5266702-9573-493b-a946-2f0a779cbd1e/targets/grpc%2Fcore%2Fpull_request%2Fmacos%2Fgrpc_build_artifacts/log",0,ruby macos build flake bin sh bin sh cannot execute binary file compiling src core ext upbdefs generated xds annotations sensitive upbdefs c mkdir p dirname volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin tmp universal darwin grpc c objs opt src core ext upbdefs generated xds annotations sensitive upbdefs o clang fdeclspec ithird party boringssl with bazel src include ithird party address sorting include ithird party cares cares include ithird party cares ithird party cares cares dgpr backwards compatibility mode dgrpc xds user agent name suffix ruby dgrpc xds user agent version suffix dev g wall wextra dosatomic use inlined ithird party abseil cpp ithird party ithird party upb isrc core ext upb generated isrc core ext upbdefs generated ithird party xxhash wframe larger than fpic i iinclude i volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin tmp universal darwin grpc c gens i usr local include dndebug dinstall prefix usr local arch arch ithird party zlib std wextra semi g mmd mf volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin tmp universal darwin grpc c objs opt src core ext upbdefs generated xds annotations sensitive upbdefs dep c o volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin tmp universal darwin grpc c objs opt src core ext upbdefs generated xds annotations sensitive upbdefs o src core ext upbdefs generated xds annotations sensitive upbdefs c compiling src core ext upbdefs generated xds annotations status upbdefs c bin sh bin sh cannot execute binary file mkdir p dirname volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin tmp universal darwin grpc c objs opt src core ext upbdefs generated xds annotations status upbdefs o make error make waiting for unfinished jobs clang fdeclspec ithird party boringssl with bazel src include ithird party address sorting include ithird party cares cares include ithird party cares ithird party cares cares dgpr backwards compatibility mode dgrpc xds user agent name suffix ruby dgrpc xds user agent version suffix dev g wall wextra dosatomic use inlined ithird party abseil cpp ithird party ithird party upb isrc core ext upb generated isrc core ext upbdefs generated ithird party xxhash wframe larger than fpic i iinclude i volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin tmp universal darwin grpc c gens i usr local include dndebug dinstall prefix usr local arch arch ithird party zlib std wextra semi g mmd mf volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin tmp universal darwin grpc c objs opt src core ext upbdefs generated xds annotations status upbdefs dep c o volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin tmp universal darwin grpc c objs opt src core 
ext upbdefs generated xds annotations status upbdefs o src core ext upbdefs generated xds annotations status upbdefs c src ruby ext grpc extconf rb failed could not create makefile due to some reason probably lack of necessary libraries and or headers check the mkmf log file for more details you may need configuration options provided configuration options with opt dir without opt dir with opt include without opt include opt dir include with opt lib without opt lib opt dir lib with make prog without make prog srcdir src ruby ext grpc curdir ruby users kbuilder rake compiler ruby ruby bin ruby base name rake aborted command failed with status volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin bundle local gems ruby gems rake compiler lib rake extensiontask rb in block levels in define compile tasks volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin bundle local gems ruby gems rake compiler lib rake extensiontask rb in block in define compile tasks volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin bundle local gems ruby gems rake exe rake in users kbuilder rvm gems ruby bin bundle in load users kbuilder rvm gems ruby bin bundle in tasks top native native universal darwin native grpc universal darwin tmp universal darwin stage src ruby lib grpc grpc c bundle copy grpc c universal darwin tmp universal darwin grpc c grpc c bundle tmp universal darwin grpc c makefile see full trace by running task with trace ls pkg gem grep v darwin ls pkg gem no such file or directory rm usage rm file unlink file ,0 338,5557520309.0,IssuesEvent,2017-03-24 12:19:50,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,NavigateTo command should allow to navigate to about:blank page,AREA: client SYSTEM: automations TYPE: enhancement,"### Are you requesting a feature or reporting a bug? bug ### What is the current behavior? error raise during protocol checking ### What is the expected behavior? should navigate to `about:blank` page",1.0,"NavigateTo command should allow to navigate to about:blank page - ### Are you requesting a feature or reporting a bug? bug ### What is the current behavior? error raise during protocol checking ### What is the expected behavior? should navigate to `about:blank` page",1,navigateto command should allow to navigate to about blank page are you requesting a feature or reporting a bug bug what is the current behavior error raise during protocol checking what is the expected behavior should navigate to about blank page,1 9175,27712374403.0,IssuesEvent,2023-03-14 14:58:41,camunda/camunda-bpm-platform,https://api.github.com/repos/camunda/camunda-bpm-platform,closed,Change a job's due date when I set the retries to > 0 again,version:7.19.0 type:feature component:c7-automation-platform,"This issue was imported from JIRA: | Field | Value | | ---------------------------------- | ------------------------------------------------------ | | JIRA Link | [CAM-14601](https://jira.camunda.com/browse/CAM-14601) | | Reporter | @toco-cam | | Has restricted visibility comments | true| ___ **Problem** As Operations Engineer I want to ""Increment Number of Retries"" for a process, for which the retries are expired. The first of these retries should be executed in a timely manner. Currently, a retry starts with the last timer element defined in the ""retry time cycle"" (technical implementation). If that element is e.g. 1 day (see screenshot), it takes one day until the first retry is executed. 
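For context, the instant trigger mentioned in the next sentence works by moving the job's due date to "now" over REST. A rough Python sketch of that call — the `PUT /job/{id}/duedate` endpoint is Camunda 7's documented one, but the base URL and timestamp format here are assumptions to verify against your installation:

```python
import datetime
import requests

# Assumed local engine endpoint; adjust host, port, and auth for your setup.
CAMUNDA_REST = "http://localhost:8080/engine-rest"

def trigger_retry_now(job_id: str) -> None:
    """Move a job's due date to 'now' so the next retry fires immediately."""
    due = datetime.datetime.now().astimezone().isoformat(timespec="milliseconds")
    resp = requests.put(
        f"{CAMUNDA_REST}/job/{job_id}/duedate",
        # Camunda expects an ISO-like timestamp; verify the exact pattern it accepts.
        json={"duedate": due},
        timeout=30,
    )
    resp.raise_for_status()
```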
With a REST call ""Set due date"" the retry can be triggered instantly. One solution could be to allow ""set due date"" in the ""Increment Number of Retries"" job. **User Story (Required on creation):** - (A1) As an operator, I want to be able to choose when the retry starts first for a retry batch job. I want to choose between: Now (Default), Absolute and Legacy. I can look up the details for legacy in the docs. - (A2) As an operator, I want to be able to choose when the retry starts first for a single job. I want to choose between: Now (Default), Absolute and Legacy. I can look up the details for legacy in the docs. - (A3) As a developer, I want to be able to define a due date that overrides the current due date of the job. ![](image-2022-05-12-13-47-33-581.png) **Functional Requirements (Required before implementation):** * Allow ""set due date"" for ""Increment Number of Retries"" batch operations * If the user does not choose to set a due date, the UI informs them what the default behavior is * Decide: Should setting a due date also be possible for the non-batch operations for incrementing job retries? (see ) **Breakdown** Backend - [x] Add support for a due date parameter when incrementing retries for a job or multiple jobs (batch) in the Java API. Consider introducing a fluent builder to avoid method duplication. #3060 - [x] #3176 - [x] Add support for a due date parameter when incrementing retries for a job or multiple jobs (batch) in the REST API #3070 - [x] #3221 Frontend - [x] #3067 - On the batch operation ""Set retries of Jobs belonging to the process instances"", display a new section: - Modify the due date - Display the due date selection form - In the set retries for single job dialogs, display the due date selection form Due date selection form: - ""Absolute"" (radio button): A date picker is shown when this option is selected. The chosen date and time should be used as a parameter for the REST call. A question mark icon will display an explanation of the option when hovered over. - ""Due Date"" (date picker): Only visible when ""Absolute"" is checked. Used to select the date and time which should be used as a parameter for the REST call. The current date (now) is preselected. - ""No change"" (radio button): Selected by default. Do not use the due date parameter for the REST call. A question mark icon will display an explanation of the option when hovered over. Docs - [x] REST/Open API docs - [x] #3167: Add information about this feature to https://docs.camunda.org/manual/latest/user-guide/process-engine/the-job-executor/#retry-time-cycle-configuration - [x] #3168 - [x] #3194 **Limitations of Scope (Optional):** **Hints (Optional):** * See ""set removal time"" batch operation options for setting ""due date"" **Links:** * is related to https://jira.camunda.com/browse/SUPPORT-13262 ",1.0,"Change a job's due date when I set the retries to > 0 again - This issue was imported from JIRA: | Field | Value | | ---------------------------------- | ------------------------------------------------------ | | JIRA Link | [CAM-14601](https://jira.camunda.com/browse/CAM-14601) | | Reporter | @toco-cam | | Has restricted visibility comments | true| ___ **Problem** As Operations Engineer I want to ""Increment Number of Retries"" for a process, for which the retries are expired. The first of these retries should be executed in a timely manner. Currently, a retry starts with the last timer element defined in the ""retry time cycle"" (technical implementation). If that element is e.g. 1 day (see screenshot), it takes one day until the first retry is executed. With a REST call ""Set due date"" the retry can be triggered instantly. One solution could be to allow ""set due date"" in the ""Increment Number of Retries"" job. **User Story (Required on creation):** - (A1) As an operator, I want to be able to choose when the retry starts first for a retry batch job. I want to choose between: Now (Default), Absolute and Legacy. I can look up the details for legacy in the docs. - (A2) As an operator, I want to be able to choose when the retry starts first for a single job. I want to choose between: Now (Default), Absolute and Legacy. I can look up the details for legacy in the docs. - (A3) As a developer, I want to be able to define a due date that overrides the current due date of the job. ![](image-2022-05-12-13-47-33-581.png) **Functional Requirements (Required before implementation):** * Allow ""set due date"" for ""Increment Number of Retries"" batch operations * If the user does not choose to set a due date, the UI informs them what the default behavior is * Decide: Should setting a due date also be possible for the non-batch operations for incrementing job retries? (see ) **Breakdown** Backend - [x] Add support for a due date parameter when incrementing retries for a job or multiple jobs (batch) in the Java API. Consider introducing a fluent builder to avoid method duplication. #3060 - [x] #3176 - [x] Add support for a due date parameter when incrementing retries for a job or multiple jobs (batch) in the REST API #3070 - [x] #3221 Frontend - [x] #3067 - On the batch operation ""Set retries of Jobs belonging to the process instances"", display a new section: - Modify the due date - Display the due date selection form - In the set retries for single job dialogs, display the due date selection form Due date selection form: - ""Absolute"" (radio button): A date picker is shown when this option is selected. The chosen date and time should be used as a parameter for the REST call. A question mark icon will display an explanation of the option when hovered over. - ""Due Date"" (date picker): Only visible when ""Absolute"" is checked. Used to select the date and time which should be used as a parameter for the REST call. The current date (now) is preselected. - ""No change"" (radio button): Selected by default. Do not use the due date parameter for the REST call. A question mark icon will display an explanation of the option when hovered over. 
Docs - [x] REST/Open API docs - [x] #3167: Add information about this feature to https://docs.camunda.org/manual/latest/user-guide/process-engine/the-job-executor/#retry-time-cycle-configuration - [x] #3168 - [x] #3194 **Limitations of Scope (Optional):**   **Hints (Optional):** * See ""set removal time"" batch operation options for setting ""due date"" **Links:** * is related to https://jira.camunda.com/browse/SUPPORT-13262 ",1,change a job s due date when i set the retries to again this issue was imported from jira field value jira link reporter toco cam has restricted visibility comments true problem as operations engineer i want to increment number of retries for a process for which the retries are expired the first of these retries should be executed in a timely manner currently a retry starts with the last timer element defined in the retry time cycle technical implementation if that element is e g day see screenshot it takes one day until the first retry is executed with a rest call set due date the retry can be triggered instantly one solution can be to allow set due date in the increment number of retries job user story required on creation as an operator i want to be able to choose when the retry starts first for a retry batch job i want to choose between now default absolut and legacy i can look up the details for legacy in the docs as an operator i want to be able to choose when the retry starts first for a single job i want to choose between now default absolut and legacy i can look up the details for legacy in the docs as a developer i want to be able to define a duedate that overwrites the current due date of the job image png functional requirements required before implementation allow to set due date for increment number of retries batch operations if the user does not choose to set a due date the ui informs them what the default behavior is decide should setting a due date also be possible for the non batch operations for incrementing job retries see breakdown backend add support for due date parameter when incrementing retries for a job or multiple jobs batch in java api consider introducing a fluent builder to avoid method duplication add support for due date parameter when incrementing retries for a job or multiple jobs batch in rest api frontend on the batch operation set retries of jobs belonging to the process instances display a new section modify the due date display the due date selection form in the set retries for single job dialogs display the due date selection form due date selection form absolute radio button a date picker is shown when this option is selected the chosen date and time should be used as parameter for the rest call a question mark icon will display an explanation of the option when hovered over due date date picker only visible when absolute is checked used to select the date and time which should be used as parameter for the rest call the current date now is preselected no change radio button selected by default do not use the due date parameter for the rest call a question mark icon will display an explanation of the option when hovered over docs rest open api docs add information about this feature to limitations of scope optional   hints optional see set removal time batch operation options for setting due date links is related to ,1 20341,10720239589.0,IssuesEvent,2019-10-26 16:16:06,becurrie/titandash,https://api.github.com/repos/becurrie/titandash,closed,Background Click/Function Implementation,enhancement help wanted major 
performance,"This is potentially a major feature that can be added to the bot. Ideally... Based on the window selected, we need a way to send out clicks to the window in the background... This would allow for the following major features: - Run the bot while doing other things. - Fully support multiple sessions. We need to investigate how difficult this would be to implement.. We have the HWND of the window, that should be all we need, and some research into how the Win32API Can be used to accomplish this. Some of the issue we may run into here would come up with either the: - Mouse drags (unsure how supported this is). - Emulator restart is going to cause the `hwnd`to be modified. We'll need a way to get the window again on a restart.",True,"Background Click/Function Implementation - This is potentially a major feature that can be added to the bot. Ideally... Based on the window selected, we need a way to send out clicks to the window in the background... This would allow for the following major features: - Run the bot while doing other things. - Fully support multiple sessions. We need to investigate how difficult this would be to implement.. We have the HWND of the window, that should be all we need, and some research into how the Win32API Can be used to accomplish this. Some of the issue we may run into here would come up with either the: - Mouse drags (unsure how supported this is). - Emulator restart is going to cause the `hwnd`to be modified. We'll need a way to get the window again on a restart.",0,background click function implementation this is potentially a major feature that can be added to the bot ideally based on the window selected we need a way to send out clicks to the window in the background this would allow for the following major features run the bot while doing other things fully support multiple sessions we need to investigate how difficult this would be to implement we have the hwnd of the window that should be all we need and some research into how the can be used to accomplish this some of the issue we may run into here would come up with either the mouse drags unsure how supported this is emulator restart is going to cause the hwnd to be modified we ll need a way to get the window again on a restart ,0 6539,23379572012.0,IssuesEvent,2022-08-11 08:11:38,home-assistant/core,https://api.github.com/repos/home-assistant/core,closed,Device Automation trigger validation fails if trigger is missing `domain` property,stale integration: device_automation,"### The problem When creating a trigger, the documentation for device triggers is a bit sparse, it's kind of left up to each integration to provide additional detail. Unfortunately, the device trigger validation (https://github.com/home-assistant/core/blob/dev/homeassistant/components/device_automation/trigger.py#L65) uses the `domain` property to determine which platform should be used to validate the trigger. If the `domain` property is missing, it will fail with an unhelpful error: ``` homeassistant | 2022-07-03T07:03:00.970072960Z File ""/usr/src/homeassistant/homeassistant/components/device_automation/trigger.py"", line 69, in async_validate_trigger_config homeassistant | 2022-07-03T07:03:00.970105147Z hass, config[CONF_DOMAIN], DeviceAutomationType.TRIGGER homeassistant | 2022-07-03T07:03:00.970134730Z KeyError: 'domain' ``` which can be hard to troubleshoot if you don't know that `domain` is required. 
Device Automation does define a schema (https://github.com/home-assistant/core/blob/dev/homeassistant/components/device_automation/__init__.py#L49) that defines domain as required, and seems to have an example `TRIGGER_SCHEMA` that extends it, but this doesn't seem to be used anywhere. ### What version of Home Assistant Core has the issue? 2022.6.5 ### What was the last working version of Home Assistant Core? _No response_ ### What type of installation are you running? Home Assistant Container ### Integration causing the issue Device Automation ### Link to integration documentation on our website https://www.home-assistant.io/docs/automation/trigger/#device-triggers ### Diagnostics information _No response_ ### Example YAML snippet _No response_ ### Anything in the logs that might be useful for us? _No response_ ### Additional information I'm happy to provide a fix for this, just interested in verifying it's expected behavior for some reason first!",1.0,"Device Automation trigger validation fails if trigger is missing `domain` property - ### The problem When creating a trigger, the documentation for device triggers is a bit sparse; it's kind of left up to each integration to provide additional detail. Unfortunately, the device trigger validation (https://github.com/home-assistant/core/blob/dev/homeassistant/components/device_automation/trigger.py#L65) uses the `domain` property to determine which platform should be used to validate the trigger. If the `domain` property is missing, it will fail with an unhelpful error: ``` homeassistant | 2022-07-03T07:03:00.970072960Z File ""/usr/src/homeassistant/homeassistant/components/device_automation/trigger.py"", line 69, in async_validate_trigger_config homeassistant | 2022-07-03T07:03:00.970105147Z hass, config[CONF_DOMAIN], DeviceAutomationType.TRIGGER homeassistant | 2022-07-03T07:03:00.970134730Z KeyError: 'domain' ``` which can be hard to troubleshoot if you don't know that `domain` is required. Device Automation does define a schema (https://github.com/home-assistant/core/blob/dev/homeassistant/components/device_automation/__init__.py#L49) that defines domain as required, and seems to have an example `TRIGGER_SCHEMA` that extends it, but this doesn't seem to be used anywhere. ### What version of Home Assistant Core has the issue? 2022.6.5 ### What was the last working version of Home Assistant Core? _No response_ ### What type of installation are you running? Home Assistant Container ### Integration causing the issue Device Automation ### Link to integration documentation on our website https://www.home-assistant.io/docs/automation/trigger/#device-triggers ### Diagnostics information _No response_ ### Example YAML snippet _No response_ ### Anything in the logs that might be useful for us? _No response_ ### Additional information I'm happy to provide a fix for this, just interested in verifying it's expected behavior for some reason first!",1,device automation trigger validation fails if trigger is missing domain property the problem when creating a trigger the documentation for device triggers is a bit sparse it s kind of left up to each integration to provide additional detail unfortunately the device trigger validation uses the domain property to determine which platform should be used to validate the trigger if the domain property is missing it will fail with an unhelpful error homeassistant file usr src homeassistant homeassistant components device automation trigger py line in async validate trigger config homeassistant hass config deviceautomationtype trigger homeassistant keyerror domain which can be hard to troubleshoot if you don t know that domain is required device automation does define a schema that defines domain as required and seems to have an example trigger schema that extends it but this doesn t seem to be used anywhere what version of home assistant core has the issue what was the last working version of home assistant core no response what type of installation are you running home assistant container integration causing the issue device automation link to integration documentation on our website diagnostics information no response example yaml snippet no response anything in the logs that might be useful for us no response additional information i m happy to provide a fix for this just interested in verifying it s expected behavior for some reason first ,1 5895,21578734022.0,IssuesEvent,2022-05-02 16:18:52,rancher-sandbox/rancher-desktop,https://api.github.com/repos/rancher-sandbox/rancher-desktop,opened,Incorrect error message when `rdctl start` is run while a session is already running,kind/bug area/automation,"If you run `rdctl start` (maybe by mistake) when you already have a session running, the command prints the message below, which is not correct. ``` rdctl start Error: set command: no settings to change were given Usage: rdctl start [flags] Flags: --container-engine string Set engine to containerd or moby (aka docker). --flannel-enabled Control whether flannel is enabled. Use to disable flannel so you can install your own CNI. (default true) -h, --help help for start --kubernetes-enabled Control whether kubernetes runs in the backend. 
--kubernetes-version string Choose which version of kubernetes to run. -p, --path string Path to main executable. Global Flags: --config-path string config file (default C:\Users\GunasekharMatamalam\AppData\Roaming\rancher-desktop\rd-engine.json) --host string default is localhost; most useful for WSL --password string overrides the password setting in the config file --port string overrides the port setting in the config file --user string overrides the user setting in the config file ```",1,incorrect error message when rdctl start is run while a session is already running if you run rdctl start may be by mistake when you have a session running already the command prints below message which is not correct rdctl start error set command no settings to change were given usage rdctl start flags container engine string set engine to containerd or moby aka docker flannel enabled control whether flannel is enabled use to disable flannel so you can install your own cni default true h help help for start kubernetes enabled control whether kubernetes runs in the backend kubernetes version string choose which version of kubernetes to run p path string path to main executable global flags config path string config file default c users gunasekharmatamalam appdata roaming rancher desktop rd engine json host string default is localhost most useful for wsl password string overrides the password setting in the config file port string overrides the port setting in the config file user string overrides the user setting in the config file ,1 16027,11802080215.0,IssuesEvent,2020-03-18 20:47:55,spring-projects/spring-batch,https://api.github.com/repos/spring-projects/spring-batch,closed,AbstractCursorItemReader doClose() method is not reentrant [BATCH-2737],has: backports in: infrastructure type: bug,"**[Tommy](https://jira.spring.io/secure/ViewProfile.jspa?name=tommy)** opened **[BATCH-2737](https://jira.spring.io/browse/BATCH-2737?redirect=false)** and commented The following warning coming up from the `DisposableBeanAdapter`, when it tries to destroy any reader extended from the `AbstractCursorItemReader` by the auto-discovered `close()` method. `DisposableBeanAdapter : Invocation of destroy method 'close' failed on bean with name 'reader': org.springframework.batch.item.ItemStreamException: Error while closing item reader` Since the invocation of the `close()` method is already part of the Spring-Batch life-cycle, the `doClose()` method of this class should be reentrant. The problem lies in the incomplete check around resetting the `autoCommit` state of the underlying connection, which does not respect the already closed connection. The check should look like something similar ```java if(this.con != null && !this.conn.isClosed()) { this.con.setAutoCommit(this.initialConnectionAutoCommit); } ``` --- **Affects:** 4.0.0 1 votes, 2 watchers ",1.0,"AbstractCursorItemReader doClose() method is not reentrant [BATCH-2737] - **[Tommy](https://jira.spring.io/secure/ViewProfile.jspa?name=tommy)** opened **[BATCH-2737](https://jira.spring.io/browse/BATCH-2737?redirect=false)** and commented The following warning coming up from the `DisposableBeanAdapter`, when it tries to destroy any reader extended from the `AbstractCursorItemReader` by the auto-discovered `close()` method. 
`DisposableBeanAdapter : Invocation of destroy method 'close' failed on bean with name 'reader': org.springframework.batch.item.ItemStreamException: Error while closing item reader` Since the invocation of the `close()` method is already part of the Spring-Batch life-cycle, the `doClose()` method of this class should be reentrant. The problem lies in the incomplete check around resetting the `autoCommit` state of the underlying connection, which does not respect the already closed connection. The check should look something like this: ```java if(this.con != null && !this.con.isClosed()) { this.con.setAutoCommit(this.initialConnectionAutoCommit); } ``` --- **Affects:** 4.0.0 1 votes, 2 watchers ",1.0,"AbstractCursorItemReader doClose() method is not reentrant [BATCH-2737] - **[Tommy](https://jira.spring.io/secure/ViewProfile.jspa?name=tommy)** opened **[BATCH-2737](https://jira.spring.io/browse/BATCH-2737?redirect=false)** and commented The following warning coming up from the `DisposableBeanAdapter`, when it tries to destroy any reader extended from the `AbstractCursorItemReader` by the auto-discovered `close()` method. `DisposableBeanAdapter : Invocation of destroy method 'close' failed on bean with name 'reader': org.springframework.batch.item.ItemStreamException: Error while closing item reader` Since the invocation of the `close()` method is already part of the Spring-Batch life-cycle, the `doClose()` method of this class should be reentrant. The problem lies in the incomplete check around resetting the `autoCommit` state of the underlying connection, which does not respect the already closed connection. The check should look something like this: ```java if(this.con != null && !this.con.isClosed()) { this.con.setAutoCommit(this.initialConnectionAutoCommit); } ``` --- **Affects:** 4.0.0 1 votes, 2 watchers ",0,abstractcursoritemreader doclose method is not reentrant opened and commented the following warning coming up from the disposablebeanadapter when it tries to destroy any reader extended from the abstractcursoritemreader by the auto discovered close method disposablebeanadapter invocation of destroy method close failed on bean with name reader org springframework batch item itemstreamexception error while closing item reader since the invocation of the close method is already part of the spring batch life cycle the doclose method of this class should be reentrant the problem lies in the incomplete check around resetting the autocommit state of the underlying connection which does not respect the already closed connection the check should look something like this java if this con null this con isclosed this con setautocommit this initialconnectionautocommit affects votes watchers ,0 340764,30541302326.0,IssuesEvent,2023-07-19 21:41:07,gotsiridzes/mit-08-final,https://api.github.com/repos/gotsiridzes/mit-08-final,opened,bf4d790 failed unit and formatting tests.,ci-black ci-pytest,"CI failed on commit: bf4d790f8ecd9cbd2d6e0637a3eec3ad0142279a **Author:** tian.zhang@triflesoft.org **Pytest Report:** https://gotsiridzes.github.io/mit-08-final-report/bf4d790f8ecd9cbd2d6e0637a3eec3ad0142279a-1689802563/pytest_report.html First commit that introduced pytest's failure: a3c625c52821a22b3ca0179c19b90abdfddbd5f1 **Black Report:** https://gotsiridzes.github.io/mit-08-final-report/bf4d790f8ecd9cbd2d6e0637a3eec3ad0142279a-1689802563/black_report.html First commit that introduced black's failure: a3c625c52821a22b3ca0179c19b90abdfddbd5f1 ",1.0,"bf4d790 failed unit and formatting tests. - CI failed on commit: bf4d790f8ecd9cbd2d6e0637a3eec3ad0142279a **Author:** tian.zhang@triflesoft.org **Pytest Report:** https://gotsiridzes.github.io/mit-08-final-report/bf4d790f8ecd9cbd2d6e0637a3eec3ad0142279a-1689802563/pytest_report.html First commit that introduced pytest's failure: a3c625c52821a22b3ca0179c19b90abdfddbd5f1 **Black Report:** https://gotsiridzes.github.io/mit-08-final-report/bf4d790f8ecd9cbd2d6e0637a3eec3ad0142279a-1689802563/black_report.html First commit that introduced black's failure: a3c625c52821a22b3ca0179c19b90abdfddbd5f1 ",0, failed unit and formatting tests ci failed on commit author tian zhang triflesoft org pytest report first commit that introduced pytest s failure black report first commit that introduced black s failure ,0 300632,25982669104.0,IssuesEvent,2022-12-19 20:22:16,department-of-veterans-affairs/va.gov-cms,https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms,opened,Scrape module release page for info and post in a comment on Dependabot PRs.,Automated testing ⭐️ Sitewide CMS Quality Assurance,"## Description I've done this once and I'll do it again. 
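One possible shape for this automation, sketched in Python: fetch the module's release page and post it as a comment on the PR. The drupal.org URL pattern, helper names, and environment variable are illustrative assumptions; only the `POST /repos/{repo}/issues/{number}/comments` endpoint is GitHub's standard one (PR comments go through the issues API).

```python
import os
import requests

GITHUB_API = "https://api.github.com"
HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def comment_release_notes(repo: str, pr_number: int, module: str, version: str) -> None:
    """Look up the module's release page and post a summary comment on the PR."""
    # Hypothetical release-page pattern for a Drupal contrib module.
    url = f"https://www.drupal.org/project/{module}/releases/{version}"
    page = requests.get(url, timeout=30)
    page.raise_for_status()
    resp = requests.post(
        f"{GITHUB_API}/repos/{repo}/issues/{pr_number}/comments",
        headers=HEADERS,
        json={"body": f"Release notes for `{module} {version}`: {url}"},
        timeout=30,
    )
    resp.raise_for_status()
```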
## Acceptance Criteria - [ ] Testable_Outcome_X - [ ] Testable_Outcome_Y - [ ] Testable_Outcome_Z - [ ] Requires design review ",1.0,"Scrape module release page for info and post in a comment on Dependabot PRs. - ## Description I've done this once and I'll do it again. ## Acceptance Criteria - [ ] Testable_Outcome_X - [ ] Testable_Outcome_Y - [ ] Testable_Outcome_Z - [ ] Requires design review ",0,scrape module release page for info and post in a comment on dependabot prs description i ve done this once and i ll do it again acceptance criteria testable outcome x testable outcome y testable outcome z requires design review ,0 1584,10361005168.0,IssuesEvent,2019-09-06 09:01:19,elastic/metricbeat-tests-poc,https://api.github.com/repos/elastic/metricbeat-tests-poc,opened,Represent the state of the running services in a standard file,automation,"We will help teams to understand what is running and where. Document how to read services in each language",1.0,"Represent the state of the running services in a standard file - We will help teams to understand what is running and where. Document how to read services in each language",1,represent the state of the running services in a standard file we will help teams to understand what is running and where document how to read services in each language,1 1370,9991308380.0,IssuesEvent,2019-07-11 10:47:08,mozilla-mobile/android-components,https://api.github.com/repos/mozilla-mobile/android-components,closed,Unexpected failure during lint analysis of module-info.class,🤖 automation,"Lint for `support-test` is failing in the 0.38.0 release task: https://tools.taskcluster.net/groups/UJ-v477WQZq5QY2gtNQ7Xw/tasks/U5fFFP4nTAiHckdK_fq8-Q/runs/3/logs/public%2Flogs%2Flive.log We re-ran the task multiple times, but it always fails with that error. The same commit has passed on master without any issues.",1.0,"Unexpected failure during lint analysis of module-info.class - Lint for `support-test` is failing in the 0.38.0 release task: https://tools.taskcluster.net/groups/UJ-v477WQZq5QY2gtNQ7Xw/tasks/U5fFFP4nTAiHckdK_fq8-Q/runs/3/logs/public%2Flogs%2Flive.log We re-ran the task multiple times, but it always fails with that error. The same commit has passed on master without any issues.",1,unexpected failure during lint analysis of module info class lint for support test is failing in the release task we re ran the task multiple times but it always fails with that error the same commit has passed on master without any issues,1 139501,5377000094.0,IssuesEvent,2017-02-23 10:45:51,datproject/dat-desktop,https://api.github.com/repos/datproject/dat-desktop,closed,generate brew install script,Priority: Low Status: Proposal Type: Enhancement,"stumbled on [pup](https://github.com/ericchiang/pup) which has a clever trick to do homebrew installs: ```sh brew install https://raw.githubusercontent.com/EricChiang/pup/master/pup.rb ``` resolves to: ```rb # This file was generated by release.sh require 'formula' class Pup < Formula homepage 'https://github.com/ericchiang/pup' version '0.4.0' if Hardware.is_64_bit? url 'https://github.com/ericchiang/pup/releases/download/v0.4.0/pup_v0.4.0_darwin_amd64.zip' sha256 'c539a697efee2f8e56614a54cb3b215338e00de1f6a7c2fa93144ab6e1db8ebe' else url 'https://github.com/ericchiang/pup/releases/download/v0.4.0/pup_v0.4.0_darwin_386.zip' sha256 '75c27caa0008a9cc639beb7506077ad9f32facbffcc4e815e999eaf9588a527e' end def install bin.install 'pup' end end ``` Which in turn we can use for `dat-desktop`. 
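The same trick could be scripted for dat-desktop: hash the prebuilt zips and render the per-arch formula. A rough sketch, assuming hypothetical release URLs and local zip paths:

```python
import hashlib
import pathlib
import textwrap

def sha256_of(path: str) -> str:
    """Checksum of a local release artifact, as Homebrew expects."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def render_formula(version: str, url64: str, zip64: str, url32: str, zip32: str) -> str:
    return textwrap.dedent(f"""\
        require 'formula'
        class DatDesktop < Formula
          homepage 'https://github.com/datproject/dat-desktop'
          version '{version}'
          if Hardware.is_64_bit?
            url '{url64}'
            sha256 '{sha256_of(zip64)}'
          else
            url '{url32}'
            sha256 '{sha256_of(zip32)}'
          end
          def install
            bin.install 'dat-desktop'
          end
        end
    """)
```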
Think this is pretty cool; perhaps we could even leverage this to bundle the `dat` command, for all those folks that for whatever reason can't pull Node onto their system first",1.0,"generate brew install script - stumbled on [pup](https://github.com/ericchiang/pup) which has a clever trick to do homebrew installs: ```sh brew install https://raw.githubusercontent.com/EricChiang/pup/master/pup.rb ``` resolves to: ```rb # This file was generated by release.sh require 'formula' class Pup < Formula homepage 'https://github.com/ericchiang/pup' version '0.4.0' if Hardware.is_64_bit? url 'https://github.com/ericchiang/pup/releases/download/v0.4.0/pup_v0.4.0_darwin_amd64.zip' sha256 'c539a697efee2f8e56614a54cb3b215338e00de1f6a7c2fa93144ab6e1db8ebe' else url 'https://github.com/ericchiang/pup/releases/download/v0.4.0/pup_v0.4.0_darwin_386.zip' sha256 '75c27caa0008a9cc639beb7506077ad9f32facbffcc4e815e999eaf9588a527e' end def install bin.install 'pup' end end ``` Which in turn we can use for `dat-desktop`. Think this is pretty cool; perhaps we could even leverage this to bundle the `dat` command, for all those folks that for whatever reason can't pull Node onto their system first",0,generate brew install script stumbled on which has a clever trick to do homebrew installs sh brew install resolves to rb this file was generated by release sh require formula class pup formula homepage version if hardware is bit url else url end def install bin install pup end end which in turn we can use for dat desktop think this is pretty cool perhaps we could even leverage this to bundle the dat command for all those folks that for whatever reason can t pull node onto their system first,0 2594,12323523035.0,IssuesEvent,2020-05-13 12:19:35,coolOrangeLabs/powerGateTemplate,https://api.github.com/repos/coolOrangeLabs/powerGateTemplate,closed,Merge powerGate Template with other customizations,Automation,"## Questions: Existing other customizations + What other customizations are present on the environment of the customer? + Reseller customizations? What kind of? + Other 3rd party tools? ## coolOrange Tasks If there are Data Standard customizations, then we need to accomplish the tasks below. ### Inventor + [ ] Get the following customized files from the customer: + [ ] `Inventor.xaml` + [ ] Merge the customized `Inventor.xaml` with the Inventor.xaml from the powerGate Template ### Vault + [ ] Get the following customized files from the customer: + [ ] `Default.ps1` where the powershell function `OnTabContextChanged` is overridden + [ ] Merge the customized `OnTabContextChanged` with the code from the powerGate Template",1.0,"Merge powerGate Template with other customizations - ## Questions: Existing other customizations + What other customizations are present on the environment of the customer? + Reseller customizations? What kind of? + Other 3rd party tools? ## coolOrange Tasks If there are Data Standard customizations, then we need to accomplish the tasks below. ### Inventor + [ ] Get the following customized files from the customer: + [ ] `Inventor.xaml` + [ ] Merge the customized `Inventor.xaml` with the Inventor.xaml from the powerGate Template ### Vault + [ ] Get the following customized files from the customer: + [ ] `Default.ps1` where the powershell function `OnTabContextChanged` is overridden + [ ] Merge the customized `OnTabContextChanged` with the code from the powerGate Template",1,merge powergate template with other customizations questions existing other customizations what other customizations are present on the environment of the customer reseller customizations what kind of other party tools coolorange tasks if there are data standard customizations then we need to accomplish the tasks below inventor get the following customized files from the customer inventor xaml merge the customized inventor xaml with the inventor xaml from the powergate template vault get the following customized files from the customer default where the powershell function ontabcontextchanged is overridden merge the customized ontabcontextchanged with the code from the powergate template,1 342711,24754456202.0,IssuesEvent,2022-10-21 16:19:59,department-of-veterans-affairs/va.gov-team,https://api.github.com/repos/department-of-veterans-affairs/va.gov-team,closed,[Application Hosting and Deployment] Operational documentation,Epic operations documentation infrastructure eks,"## Product Outline [Application Hosting and Deployment using Container Orchestration (EKS)](https://vfs.atlassian.net/wiki/spaces/OT/pages/1474593866/Application+Hosting+and+Deployment+using+Container+Orchestration+EKS) ## High-Level User Story/ies As an operator of the VA.Gov Platform, I need to understand the tools and processes involved in supporting the platform's application hosting and deployment system. ## Hypothesis or Bet If we provide accurate documentation, operators will know how to manage the application hosting and deployment system. ## Definition of done ### What must be true in order for you to consider this epic complete? There are diagrams that depict the following... 
- Cluster topology - worker nodes + auto-scaling group - subnets / CNI - AWS resources that together make up the cluster - Service topology (1 for deployment cluster, 1 for tooling/utility cluster) - Traefik / Ingress - Datadog agents - Cert manager - External DNS - External Secrets - Metrics server - Cluster auto-scaler - Automation flow throughout the platform There is operational documentation that explains how... - to troubleshoot application deployments - to do necessary maintenance on the ArgoCD, EKS, etc. - build - deploy - upgrade - teardown",1.0,"[Application Hosting and Deployment] Operational documentation - ## Product Outline [Application Hosting and Deployment using Container Orchestration (EKS)](https://vfs.atlassian.net/wiki/spaces/OT/pages/1474593866/Application+Hosting+and+Deployment+using+Container+Orchestration+EKS) ## High-Level User Story/ies As an operator of the VA.Gov Platform, I need to understand the tools and processes involved in supporting the platform's application hosting and deployment system. ## Hypothesis or Bet If we provide accurate documentation, operators will know how to manage the application hosting and deployment system. ## Definition of done ### What must be true in order for you to consider this epic complete? There are diagrams that depict the following... - Cluster topology - worker nodes + auto-scaling group - subnets / CNI - AWS resources that together make up the cluster - Service topology (1 for deployment cluster, 1 for tooling/utility cluster) - Traefik / Ingress - Datadog agents - Cert manager - External DNS - External Secrets - Metrics server - Cluster auto-scaler - Automation flow throughout the platform There is operational documentation that explains how... - to troubleshoot application deployments - to do necessary maintenance on the ArgoCD, EKS, etc. 
- build - deploy - upgrade - teardown",0, operational documentation product outline high level user story ies as an operator of the va gov platform i need to understand the tools and processes involved in supporting the platform s application hosting and deployment system hypothesis or bet if we provide accurate documentation operators will know how to manage the application hosting and deployment system definition of done what must be true in order for you to consider this epic complete there are diagrams that depict the following cluster topology worker nodes auto scaling group subnets cni aws resources that together make up the cluster service topology for deployment cluster for tooling utility cluster traefik ingress datadog agents cert manager external dns external secrets metrics server cluster auto scaler automation flow throughout the platform there is operational documentation that explains how to troubleshoot application deployments to do necessary maintenance on the argocd eks etc build deploy upgrade teardown,0 6181,22366462828.0,IssuesEvent,2022-06-16 04:58:59,harvester/harvester,https://api.github.com/repos/harvester/harvester,closed,[FEATURE] Add Harvester backport issue bot,enhancement priority/2 area/automation,"**Is your feature request related to a problem? Please describe.** Add a Harvester bot to auto-create backport issues based on the backport label. **Describe the solution you'd like** - Title: [Backport v1.x] copy-the-title. - Description: backport the issue #link-id - Copy assignees and all labels except `backport-needed`, and add the `not-require/test-plan` label. - Move the issue to the associated milestone and release. 
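A sketch of the bot's core call against the GitHub REST API — the issue-creation endpoint and fields are the standard ones, while the label-matching rule and token handling are assumptions:

```python
import requests

API = "https://api.github.com/repos/harvester/harvester"

def create_backport(issue: dict, version: str, milestone: int, token: str) -> dict:
    """Clone `issue` into a backport issue following the rules above."""
    # Drop the backport trigger label(s); labels like "backport-needed/v1.x"
    # are assumed to share the same prefix.
    labels = [
        label["name"] for label in issue["labels"]
        if not label["name"].startswith("backport-needed")
    ]
    labels.append("not-require/test-plan")
    resp = requests.post(
        f"{API}/issues",
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github+json",
        },
        json={
            "title": f"[Backport {version}] {issue['title']}",
            "body": f"backport the issue #{issue['number']}",
            "labels": labels,
            "assignees": [a["login"] for a in issue["assignees"]],
            "milestone": milestone,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```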
**Describe alternatives you've considered** **Additional context** ",1.0,"[FEATURE] Add Harvester backport issue bot - **Is your feature request related to a problem? Please describe.** Add a Harvester bot to auto-create backport issues based on the backport label. **Describe the solution you'd like** - Title: [Backport v1.x] copy-the-title. - Description: backport the issue #link-id - Copy assignees and all labels except `backport-needed`, and add the `not-require/test-plan` label. - Move the issue to the associated milestone and release. **Describe alternatives you've considered** **Additional context** ",1, feature add harvester backport issue bot is your feature request related to a problem please describe add a harvester bot to auto create backport issues based on the backport label describe the solution you d like title copy the title description backport the issue link id copy assignees and all labels except backport needed and add the not require test plan label move the issue to the associated milestone and release describe alternatives you ve considered additional context ,1 51090,13188098770.0,IssuesEvent,2020-08-13 05:33:08,icecube-trac/tix3,https://api.github.com/repos/icecube-trac/tix3,closed,[MuonGun] Surfaces refactor broke deserialization of pre-IceSim5 S frames (Trac #1956),Migrated from Trac combo core defect,"Trying to deserialize an S frame written with IceSim 4 with current software fails with ```text FATAL (phys-services): Version 117 is from the future (SamplingSurface.cxx:50 in void I3Surfaces::SamplingSurface::serialize(Archive&, unsigned int) [with Archive = icecube::archive::portable_binary_iarchive]) ``` This is probably because the refactor added a new layer in the inheritance tree, the current code tries to read a class ID and version from the stream that are not there. While empty base classes do not take up space in memory, they turn out to matter quite a bit for serialization.
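The failure mode is easy to reproduce with a toy stream format: if every layer of the class hierarchy writes a (class id, version) header, a reader that expects one more layer than the writer emitted will misread payload bytes as a header. A Python sketch of just that idea — nothing here is IceTray's or boost's real wire format:

```python
import io
import struct

def write(stream, hierarchy, payload: bytes) -> None:
    """Each (class_id, version) layer contributes a 2-byte header."""
    for class_id, version in hierarchy:
        stream.write(struct.pack("<BB", class_id, version))
    stream.write(payload)

def read(stream, hierarchy) -> bytes:
    for _, max_version in hierarchy:
        _, version = struct.unpack("<BB", stream.read(2))
        if version > max_version:
            raise ValueError(f"Version {version} is from the future")
    return stream.read()

old_hierarchy = [(1, 0)]          # the class alone, pre-refactor
new_hierarchy = [(2, 0), (1, 0)]  # refactor inserted a new base layer

buf = io.BytesIO()
write(buf, old_hierarchy, bytes([0, 117]) + b"payload")
buf.seek(0)
try:
    read(buf, new_hierarchy)
except ValueError as err:
    print(err)  # -> Version 117 is from the future
```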
Migrated from https://code.icecube.wisc.edu/ticket/1956, reported by jvansanten and owned by

```json { ""status"": ""closed"", ""changetime"": ""2017-03-14T20:23:10"", ""description"": ""Trying to deserialize an S frame written with IceSim 4 with current software fails with \n{{{\nFATAL (phys-services): Version 117 is from the future (SamplingSurface.cxx:50 in void I3Surfaces::SamplingSurface::serialize(Archive&, unsigned int) [with Archive = icecube::archive::portable_binary_iarchive])\n}}}\n\nThis is probably because the refactor added a new layer in the inheritance tree, the current code tries to read a class ID and version from the stream that are not there. While empty base classes do not take up space in memory, they turn out to matter quite a bit for serialization."", ""reporter"": ""jvansanten"", ""cc"": """", ""resolution"": ""invalid"", ""_ts"": ""1489522990898099"", ""component"": ""combo core"", ""summary"": ""[MuonGun] Surfaces refactor broke deserialization of pre-IceSim5 S frames"", ""priority"": ""critical"", ""keywords"": """", ""time"": ""2017-03-14T15:16:30"", ""milestone"": """", ""owner"": """", ""type"": ""defect"" } ```

",1.0,"[MuonGun] Surfaces refactor broke deserialization of pre-IceSim5 S frames (Trac #1956) - Trying to deserialize an S frame written with IceSim 4 with current software fails with ```text FATAL (phys-services): Version 117 is from the future (SamplingSurface.cxx:50 in void I3Surfaces::SamplingSurface::serialize(Archive&, unsigned int) [with Archive = icecube::archive::portable_binary_iarchive]) ``` This is probably because the refactor added a new layer in the inheritance tree, the current code tries to read a class ID and version from the stream that are not there. While empty base classes do not take up space in memory, they turn out to matter quite a bit for serialization.
Migrated from https://code.icecube.wisc.edu/ticket/1956, reported by jvansanten and owned by

```json { ""status"": ""closed"", ""changetime"": ""2017-03-14T20:23:10"", ""description"": ""Trying to deserialize an S frame written with IceSim 4 with current software fails with \n{{{\nFATAL (phys-services): Version 117 is from the future (SamplingSurface.cxx:50 in void I3Surfaces::SamplingSurface::serialize(Archive&, unsigned int) [with Archive = icecube::archive::portable_binary_iarchive])\n}}}\n\nThis is probably because the refactor added a new layer in the inheritance tree, the current code tries to read a class ID and version from the stream that are not there. While empty base classes do not take up space in memory, they turn out to matter quite a bit for serialization."", ""reporter"": ""jvansanten"", ""cc"": """", ""resolution"": ""invalid"", ""_ts"": ""1489522990898099"", ""component"": ""combo core"", ""summary"": ""[MuonGun] Surfaces refactor broke deserialization of pre-IceSim5 S frames"", ""priority"": ""critical"", ""keywords"": """", ""time"": ""2017-03-14T15:16:30"", ""milestone"": """", ""owner"": """", ""type"": ""defect"" } ```

",0, surfaces refactor broke deserialization of pre s frames trac trying to deserialize an s frame written with icesim with current software fails with text fatal phys services version is from the future samplingsurface cxx in void samplingsurface serialize archive unsigned int this is probably because the refactor added a new layer in the inheritance tree the current code tries to read a class id and version from the stream that are not there while empty base classes do not take up space in memory they turn out to matter quite a bit for serialization migrated from json status closed changetime description trying to deserialize an s frame written with icesim with current software fails with n nfatal phys services version is from the future samplingsurface cxx in void samplingsurface serialize archive unsigned int n n nthis is probably because the refactor added a new layer in the inheritance tree the current code tries to read a class id and version from the stream that are not there while empty base classes do not take up space in memory they turn out to matter quite a bit for serialization reporter jvansanten cc resolution invalid ts component combo core summary surfaces refactor broke deserialization of pre s frames priority critical keywords time milestone owner type defect ,0 20295,29517890346.0,IssuesEvent,2023-06-04 18:00:06,SodiumZH/Days-with-Monster-Girls,https://api.github.com/repos/SodiumZH/Days-with-Monster-Girls,closed,Mod does now Allow you to use Quantum Catcher from Forbidden and Arcanus,compatibility," Every time I try to capture the tamed mob to bring somewhere else all I get is a armor placement opening up, or the mob is following or staying text, there needs to be a tool assigned to causing the tamed monster to follow or not, not the hand, as it interferes with other mods",True,"Mod does now Allow you to use Quantum Catcher from Forbidden and Arcanus - Every time I try to capture the tamed mob to bring somewhere else all I get is a armor placement opening up, or the mob is following or staying text, there needs to be a tool assigned to causing the tamed monster to follow or not, not the hand, as it interferes with other mods",0,mod does now allow you to use quantum catcher from forbidden and arcanus img width alt image src every time i try to capture the tamed mob to bring somewhere else all i get is a armor placement opening up or the mob is following or staying text there needs to be a tool assigned to causing the tamed monster to follow or not not the hand as it interferes with other mods,0 220320,16920598199.0,IssuesEvent,2021-06-25 04:41:33,old-rookies/tech-demo-client,https://api.github.com/repos/old-rookies/tech-demo-client,opened,컴포넌트 안의 이벤트,documentation,"https://github.com/old-rookies/tech-demo-client/blob/825a16b6979b8ba3a092954dce37434ead63ffa6/src/scenes/Games/GameScene/index.tsx#L37 컴포넌트 안에서 addEventListener를 사용하게 되면, 컴포넌트가 재 랜더링 될때마다 새로 이벤트 리스닝이 등록될 수 있습니당. 예를들어 만약 해당 파일이 state를 갖게된다면, event가 destroy되지않고 새로 이벤트 함수가 등록되게 되어서 한 이벤트에 대해 처리가 2번씩 일어나는 경우가 생길 수도있는것이져. 그리고 씬이 변경되더라도, window 객체가 새로 reload 되지 않을 수 있기때문에, 이벤트 리스너는 그대로 등록되어 있을수도 있습니당. 씬은 없는데 그 이벤트 리스너는 객체 안에 남아있을 수도 있는 것이져. 그렇기 때문에 만약에 이벤트를 컴포넌트 안에서 등록해야한다면, ```tsx const evtFn = ()=>console.log('some event fired'); useEffect(()=>{ window.addEventListener('EVENT_NAME' ,evtFn); return ()=>{ window.removeEventListener('EVENT_NAME' , evtFn); } },[]); ``` 아래와 같이 해주어야 새로 랜더가 되거나, 데이터가 갱신되어 페이지가 re-render되더라도 이벤트가 중첩 실행 혹은 남아있는 경우를 피할 수 있습니당. 
그냥 그렇다굽쇼 ",1.0,"컴포넌트 안의 이벤트 - https://github.com/old-rookies/tech-demo-client/blob/825a16b6979b8ba3a092954dce37434ead63ffa6/src/scenes/Games/GameScene/index.tsx#L37 컴포넌트 안에서 addEventListener를 사용하게 되면, 컴포넌트가 재 랜더링 될때마다 새로 이벤트 리스닝이 등록될 수 있습니당. 예를들어 만약 해당 파일이 state를 갖게된다면, event가 destroy되지않고 새로 이벤트 함수가 등록되게 되어서 한 이벤트에 대해 처리가 2번씩 일어나는 경우가 생길 수도있는것이져. 그리고 씬이 변경되더라도, window 객체가 새로 reload 되지 않을 수 있기때문에, 이벤트 리스너는 그대로 등록되어 있을수도 있습니당. 씬은 없는데 그 이벤트 리스너는 객체 안에 남아있을 수도 있는 것이져. 그렇기 때문에 만약에 이벤트를 컴포넌트 안에서 등록해야한다면, ```tsx const evtFn = ()=>console.log('some event fired'); useEffect(()=>{ window.addEventListener('EVENT_NAME' ,evtFn); return ()=>{ window.removeEventListener('EVENT_NAME' , evtFn); } },[]); ``` 아래와 같이 해주어야 새로 랜더가 되거나, 데이터가 갱신되어 페이지가 re-render되더라도 이벤트가 중첩 실행 혹은 남아있는 경우를 피할 수 있습니당. 그냥 그렇다굽쇼 ",0,컴포넌트 안의 이벤트 컴포넌트 안에서 addeventlistener를 사용하게 되면 컴포넌트가 재 랜더링 될때마다 새로 이벤트 리스닝이 등록될 수 있습니당 예를들어 만약 해당 파일이 state를 갖게된다면 event가 destroy되지않고 새로 이벤트 함수가 등록되게 되어서 한 이벤트에 대해 처리가 일어나는 경우가 생길 수도있는것이져 그리고 씬이 변경되더라도 window 객체가 새로 reload 되지 않을 수 있기때문에 이벤트 리스너는 그대로 등록되어 있을수도 있습니당 씬은 없는데 그 이벤트 리스너는 객체 안에 남아있을 수도 있는 것이져 그렇기 때문에 만약에 이벤트를 컴포넌트 안에서 등록해야한다면 tsx const evtfn console log some event fired useeffect window addeventlistener event name evtfn return window removeeventlistener event name evtfn 아래와 같이 해주어야 새로 랜더가 되거나 데이터가 갱신되어 페이지가 re render되더라도 이벤트가 중첩 실행 혹은 남아있는 경우를 피할 수 있습니당 그냥 그렇다굽쇼 ,0 9839,30621318059.0,IssuesEvent,2023-07-24 08:28:26,zaproxy/zaproxy,https://api.github.com/repos/zaproxy/zaproxy,closed,maxAlertsPerRule for activeScan job,enhancement add-on in:automation,"### Is your feature request related to a problem? Please describe. Because you can not set maxAlertsPerRule in activeScan, the reports are very bloated. ### Describe the solution you'd like I would like to have a possibilty to configure maxAlertsPerRule also for the activeScan like it is possible for passiveScan ### Describe alternatives you've considered unfortunately there is no alternative ### Screenshots _No response_ ### Additional context _No response_ ### Would you like to help fix this issue? - [ ] Yes",1.0,"maxAlertsPerRule for activeScan job - ### Is your feature request related to a problem? Please describe. Because you can not set maxAlertsPerRule in activeScan, the reports are very bloated. ### Describe the solution you'd like I would like to have a possibilty to configure maxAlertsPerRule also for the activeScan like it is possible for passiveScan ### Describe alternatives you've considered unfortunately there is no alternative ### Screenshots _No response_ ### Additional context _No response_ ### Would you like to help fix this issue? 
- [ ] Yes",1,maxalertsperrule for activescan job is your feature request related to a problem please describe because you can not set maxalertsperrule in activescan the reports are very bloated describe the solution you d like i would like to have a possibilty to configure maxalertsperrule also for the activescan like it is possible for passivescan describe alternatives you ve considered unfortunately there is no alternative screenshots no response additional context no response would you like to help fix this issue yes,1 8761,27172219204.0,IssuesEvent,2023-02-17 20:33:54,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,What is the best way to specify fileSystemInfo.createdDateTime on Linux?,type:question automation:Closed,"#### Category - [x] Question - [ ] Documentation issue - [ ] Bug [GNU stat](https://www.gnu.org/software/coreutils/manual/html_node/stat-invocation.html) has 'birth time'. but it's not standard (https://unix.stackexchange.com/a/67895) What should I set for fileSystemInfo.createdDateTime? ",1.0,"What is the best way to specify fileSystemInfo.createdDateTime on Linux? - #### Category - [x] Question - [ ] Documentation issue - [ ] Bug [GNU stat](https://www.gnu.org/software/coreutils/manual/html_node/stat-invocation.html) has 'birth time'. but it's not standard (https://unix.stackexchange.com/a/67895) What should I set for fileSystemInfo.createdDateTime? ",1,what is the best way to specify filesysteminfo createddatetime on linux category question documentation issue bug has birth time but it s not standard what should i set for filesysteminfo createddatetime ,1 424136,12306281436.0,IssuesEvent,2020-05-12 00:58:30,LLNL/PyDV,https://api.github.com/repos/LLNL/PyDV,opened,Add filter command,Low Priority enhancement,"Procedure: Remove points from the curves that fail the specified domain predicate or range predicate. The predicates must be procedures that return true or false when applied to elements of a domain or range. Usage: filter curve-list domain-predicate range-predicate",1.0,"Add filter command - Procedure: Remove points from the curves that fail the specified domain predicate or range predicate. The predicates must be procedures that return true or false when applied to elements of a domain or range. Usage: filter curve-list domain-predicate range-predicate",0,add filter command procedure remove points from the curves that fail the specified domain predicate or range predicate the predicates must be procedures that return true or false when applied to elements of a domain or range usage filter curve list domain predicate range predicate,0 5560,20103602512.0,IssuesEvent,2022-02-07 08:15:36,pingcap/tidb,https://api.github.com/repos/pingcap/tidb,closed,"br: after br restore, tikv used storage is not balance ",type/bug severity/major component/br found/automation affects-5.3 affects-5.4,"## Bug Report Please answer these questions before submitting your issue. Thanks! ### 1. Minimal reproduce step (Required) run oltp_fun_001 ### 2. What did you expect to see? (Required) Restore finished at 18:26. Tikv used should be balance in all nodes. ### 3. What did you see instead (Required) ![image](https://user-images.githubusercontent.com/9443637/147431117-e008a9be-90de-43b5-823b-f50facb886b6.png) ### 4. What is your TiDB version? 
(Required) / # /br -V Release Version: v5.4.0-nightly Git Commit Hash: 76aae0d5c594f538af62caa883c73188a44170c4 Git Branch: heads/refs/tags/v5.4.0-nightly Go Version: go1.16.4 UTC Build Time: 2021-12-26 08:07:37 Race Enabled: false / # /tidb-server -V Release Version: v5.4.0-nightly Edition: Community Git Commit Hash: 76aae0d5c594f538af62caa883c73188a44170c4 Git Branch: heads/refs/tags/v5.4.0-nightly UTC Build Time: 2021-12-26 08:09:11 GoVersion: go1.16.4 Race Enabled: false TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306 Check Table Before Drop: false Logs and monitoring data can be fetched from minio using the following testbed name. endless-oltp--tps-542284-1-875",1.0,"br: after br restore, tikv used storage is not balance - ## Bug Report Please answer these questions before submitting your issue. Thanks! ### 1. Minimal reproduce step (Required) run oltp_fun_001 ### 2. What did you expect to see? (Required) Restore finished at 18:26. Tikv used should be balanced in all nodes. ### 3. What did you see instead (Required) ![image](https://user-images.githubusercontent.com/9443637/147431117-e008a9be-90de-43b5-823b-f50facb886b6.png) ### 4. What is your TiDB version? (Required) / # /br -V Release Version: v5.4.0-nightly Git Commit Hash: 76aae0d5c594f538af62caa883c73188a44170c4 Git Branch: heads/refs/tags/v5.4.0-nightly Go Version: go1.16.4 UTC Build Time: 2021-12-26 08:07:37 Race Enabled: false / # /tidb-server -V Release Version: v5.4.0-nightly Edition: Community Git Commit Hash: 76aae0d5c594f538af62caa883c73188a44170c4 Git Branch: heads/refs/tags/v5.4.0-nightly UTC Build Time: 2021-12-26 08:09:11 GoVersion: go1.16.4 Race Enabled: false TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306 Check Table Before Drop: false Logs and monitoring data can be fetched from minio using the following testbed name.
endless-oltp--tps-542284-1-875",1,br after br restore tikv used storage is not balance bug report please answer these questions before submitting your issue thanks minimal reproduce step required run oltp fun what did you expect to see required restore finished at tikv used should be balance in all nodes what did you see instead required what is your tidb version required br v release version nightly git commit hash git branch heads refs tags nightly go version utc build time race enabled false tidb server v release version nightly edition community git commit hash git branch heads refs tags nightly utc build time goversion race enabled false tikv min version check table before drop false logs and monitor can be get from minio using following testbed name endless oltp tps ,1 20993,16396613538.0,IssuesEvent,2021-05-18 01:11:59,mkrumholz/relational_rails,https://api.github.com/repos/mkrumholz/relational_rails,opened,Ability to Delete Plot from Plots Index,enhancement iteration 3 usability,"User Story 23, Child Delete From Childs Index Page (x1) As a visitor When I visit the `child_table_name` index page or a parent `child_table_name` index page Next to every child, I see a link to delete that child When I click the link I should be taken to the `child_table_name` index page where I no longer see that child",True,"Ability to Delete Plot from Plots Index - User Story 23, Child Delete From Childs Index Page (x1) As a visitor When I visit the `child_table_name` index page or a parent `child_table_name` index page Next to every child, I see a link to delete that child When I click the link I should be taken to the `child_table_name` index page where I no longer see that child",0,ability to delete plot from plots index user story child delete from childs index page as a visitor when i visit the child table name index page or a parent child table name index page next to every child i see a link to delete that child when i click the link i should be taken to the child table name index page where i no longer see that child,0 211288,7200024930.0,IssuesEvent,2018-02-05 17:40:16,robotology-playground/wholeBodyControllers,https://api.github.com/repos/robotology-playground/wholeBodyControllers,opened,Check mex-wholebodymodel status and eventually port the code into wholeBodyControllers,priority: high,[mex-wholeBodyModel](https://github.com/robotology/mex-wholebodymodel) will be used by @ahmadgazar for simulating iCub and Walkman with SEA. It is therefore necessary to check if the code still compiles and eventually port it into this repo.,1.0,Check mex-wholebodymodel status and eventually port the code into wholeBodyControllers - [mex-wholeBodyModel](https://github.com/robotology/mex-wholebodymodel) will be used by @ahmadgazar for simulating iCub and Walkman with SEA. 
It is therefore necessary to check if the code still compiles and eventually port it into this repo.,0,check mex wholebodymodel status and eventually port the code into wholebodycontrollers will be used by ahmadgazar for simulating icub and walkman with sea it is therefore necessary to check if the code still compiles and eventually port it into this repo ,0 6941,24042230448.0,IssuesEvent,2022-09-16 03:45:58,AdamXweb/awesome-aussie,https://api.github.com/repos/AdamXweb/awesome-aussie,opened,[ADDITION] AmazingCo,Awaiting Review Added to Airtable Automation from Airtable,"### Category Other ### Software to be added AmazingCo ### Supporting Material URL: https://www.amazingco.me Description: AmazingCo is an experiences and activities creator, helping people all around the world enjoy better real-world experiences. Size: HQ: Melbourne LinkedIn: https://www.linkedin.com/company/amazingco/ #### See Record on Airtable: https://airtable.com/app0Ox7pXdrBUIn23/tblYbuZoILuVA0X3L/rec9v3eGQqJQlxlVD",1.0,"[ADDITION] AmazingCo - ### Category Other ### Software to be added AmazingCo ### Supporting Material URL: https://www.amazingco.me Description: AmazingCo is an experiences and activities creator, helping people all around the world enjoy better real-world experiences. Size: HQ: Melbourne LinkedIn: https://www.linkedin.com/company/amazingco/ #### See Record on Airtable: https://airtable.com/app0Ox7pXdrBUIn23/tblYbuZoILuVA0X3L/rec9v3eGQqJQlxlVD",1, amazingco category other software to be added amazingco supporting material url description amazingco is an experiences and activities creator helping people all around the world enjoy better real world experiences size hq melbourne linkedin see record on airtable ,1 3718,14406688969.0,IssuesEvent,2020-12-03 20:36:48,SynBioDex/SBOL-visual,https://api.github.com/repos/SynBioDex/SBOL-visual,closed,Website should implicitly link to latest release,automation,"The website currently has to be manually updated for each release. Once we generate release artifacts automatically (#119), the website can instead point all of its links just to ""latest release"" URLs in GitHub, such that when we make a new release the website will be mostly automatically updated. We can also stop linking old release material on the website, and just give an ""old releases here"" pointer to the release collection on GitHub.",1.0,"Website should implicitly link to latest release - The website currently has to be manually updated for each release. Once we generate release artifacts automatically (#119), the website can instead point all of its links just to ""latest release"" URLs in GitHub, such that when we make a new release the website will be mostly automatically updated. 
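As a hedged illustration of the "latest release" approach mentioned above (the asset name is a hypothetical placeholder), GitHub's releases/latest redirect is what keeps such links current:

```python
# Illustration only: GitHub's /releases/latest/ URLs redirect to the newest
# tag, so a link like this never needs a manual update per release.
import urllib.request

url = ("https://github.com/SynBioDex/SBOL-visual/releases/latest/"
       "download/SBOLVisual.pdf")  # asset name is a placeholder
with urllib.request.urlopen(url) as resp:
    print(resp.status, resp.url)  # the resolved URL reveals the concrete tag
```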
We can also stop linking old release material on the website, and just give an ""old releases here"" pointer to the release collection on GitHub.",1,website should implicitly link to latest release the website currently has to be manually updated for each release once we generate release artifacts automatically the website can instead point all of its links just to latest release urls in github such that when we make a new release the website will be mostly automatically updated we can also stop linking old release material on the website and just give an old releases here pointer to the release collection on github ,1 1566,10343286757.0,IssuesEvent,2019-09-04 08:37:58,elastic/apm-agent-nodejs,https://api.github.com/repos/elastic/apm-agent-nodejs,closed,Jenkins doesn't detect invalid commit messages,[zube]: Inbox automation ci,"We are linting the PR commit messages as part of our Jenkins pipeline, but as seen in #1312, it somehow doesn't work and simply just marks commits as ok even though they are not.",1.0,"Jenkins doesn't detect invalid commit messages - We are linting the PR commit messages as part of our Jenkins pipeline, but as seen in #1312, it somehow doesn't work and simply just marks commits as ok even though they are not.",1,jenkins doesn t detect invalid commit messages we are linting the pr commit messages as part of our jenkins pipeline but as seen in it somehow doesn t work and simply just marks commits as ok even though they are not ,1 492027,14175404073.0,IssuesEvent,2020-11-12 21:32:19,rtCamp/web-stories-wp,https://api.github.com/repos/rtCamp/web-stories-wp,opened,Lightbox Effect - Close Button Issue,priority:high type:bug,"If the cover image option is disabled and a user clicks on one of the stories, the close option does not appear when the lightbox effect is triggered. ",1.0,"Lightbox Effect - Close Button Issue - If the cover image option is disabled and a user clicks on one of the stories, the close option does not appear when the lightbox effect is triggered. ",0,lightbox effect close button issue if the cover image option is disabled and a user clicks on one of the stories the close option does not appear when the lightbox effect is triggered ,0 6442,23152072270.0,IssuesEvent,2022-07-29 09:17:51,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,closed,[BUG] The last healthy replica may be evicted or removed,kind/bug area/manager severity/1 require/automation-e2e kind/regression feature/scheduling backport-needed/1.2.5 backport-needed/1.3.1,"## Describe the bug `test_disk_eviction_with_node_level_soft_anti_affinity_disabled` failed in master-head [edc1b83](https://github.com/longhorn/longhorn/commit/edc1b83c5fe906b1ec4248c0b1279cb13813bda9) Double verified in release version, the fail situation not happen on V1.3.0 ## To Reproduce Steps to reproduce the behavior: 1. Setup longhorn with 3 nodes 2. Deploy longhorn-test 3. Run `test_disk_eviction_with_node_level_soft_anti_affinity_disabled` 4. 
After test [steps 6](https://github.com/longhorn/longhorn-tests/blob/18435ee9f786477c5ee1734d4047a47bd1f2e31e/manager/integration/tests/test_node.py#L2600), volume will keep in attaching state and no replica exist ## Expected behavior Test case should pass ## Log or Support bundle [longhorn-support-bundle_35fabdcc-d73a-4168-a2dd-65c2298709b1_2022-07-15T06-48-21Z.zip](https://github.com/longhorn/longhorn/files/9118546/longhorn-support-bundle_35fabdcc-d73a-4168-a2dd-65c2298709b1_2022-07-15T06-48-21Z.zip) ## Environment - Longhorn version: edc1b83 - Installation method (e.g. Rancher Catalog App/Helm/Kubectl): kubectl - Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: k3s - Number of management node in the cluster: 1 - Number of worker node in the cluster: 3 - Node config - OS type and version: Ubuntu 20.04 ## Additional context https://ci.longhorn.io/job/public/job/master/job/sles/job/amd64/job/longhorn-tests-sles-amd64/186/testReport/junit/tests/test_node/test_disk_eviction_with_node_level_soft_anti_affinity_disabled/ ",1.0,"[BUG] The last healthy replica may be evicted or removed - ## Describe the bug `test_disk_eviction_with_node_level_soft_anti_affinity_disabled` failed in master-head [edc1b83](https://github.com/longhorn/longhorn/commit/edc1b83c5fe906b1ec4248c0b1279cb13813bda9) Double verified in release version, the fail situation not happen on V1.3.0 ## To Reproduce Steps to reproduce the behavior: 1. Setup longhorn with 3 nodes 2. Deploy longhorn-test 3. Run `test_disk_eviction_with_node_level_soft_anti_affinity_disabled` 4. After test [steps 6](https://github.com/longhorn/longhorn-tests/blob/18435ee9f786477c5ee1734d4047a47bd1f2e31e/manager/integration/tests/test_node.py#L2600), volume will keep in attaching state and no replica exist ## Expected behavior Test case should pass ## Log or Support bundle [longhorn-support-bundle_35fabdcc-d73a-4168-a2dd-65c2298709b1_2022-07-15T06-48-21Z.zip](https://github.com/longhorn/longhorn/files/9118546/longhorn-support-bundle_35fabdcc-d73a-4168-a2dd-65c2298709b1_2022-07-15T06-48-21Z.zip) ## Environment - Longhorn version: edc1b83 - Installation method (e.g. Rancher Catalog App/Helm/Kubectl): kubectl - Kubernetes distro (e.g. 
RKE/K3s/EKS/OpenShift) and version: k3s - Number of management node in the cluster: 1 - Number of worker node in the cluster: 3 - Node config - OS type and version: Ubuntu 20.04 ## Additional context https://ci.longhorn.io/job/public/job/master/job/sles/job/amd64/job/longhorn-tests-sles-amd64/186/testReport/junit/tests/test_node/test_disk_eviction_with_node_level_soft_anti_affinity_disabled/ ",1, the last healthy replica may be evicted or removed describe the bug test disk eviction with node level soft anti affinity disabled failed in master head double verified in release version the fail situation not happen on to reproduce steps to reproduce the behavior setup longhorn with nodes deploy longhorn test run test disk eviction with node level soft anti affinity disabled after test volume will keep in attaching state and no replica exist expected behavior test case should pass log or support bundle environment longhorn version installation method e g rancher catalog app helm kubectl kubectl kubernetes distro e g rke eks openshift and version number of management node in the cluster number of worker node in the cluster node config os type and version ubuntu additional context ,1 3975,15054922549.0,IssuesEvent,2021-02-03 18:04:22,IBM/FHIR,https://api.github.com/repos/IBM/FHIR,opened,Migrate from Bintray,automation,"JFrog is sunsetting their bintray offering. A replacement is needed Consider migration to Artifactory or GitHub Packages. Must complete by May 2021.",1.0,"Migrate from Bintray - JFrog is sunsetting their bintray offering. A replacement is needed Consider migration to Artifactory or GitHub Packages. Must complete by May 2021.",1,migrate from bintray jfrog is sunsetting their bintray offering a replacement is needed consider migration to artifactory or github packages must complete by may ,1 4820,17645936105.0,IssuesEvent,2021-08-20 06:03:35,keptn/keptn,https://api.github.com/repos/keptn/keptn,opened,[doc] Automation for creating documentation for a new release and versioning of release docu,doc research ready-for-refinement release-automation,"## User story When releasing a new version of Keptn, the documentation is also released based on the same release tag. As a user, I can switch between the release versions, while the latest stable version is shown by default. ### Details * Using tagging / branching to create release documentation in https://github.com/keptn/keptn.github.io * On the page, I can switch between the release docu. For example, see istio.io: ![image](https://user-images.githubusercontent.com/729071/130186885-b185163d-fb99-4ee1-b707-df6fd455b12e.png) * When switching to an older release, the release version is reflected in the URL: https://istio.io/v1.9/ * Consequently, we should not show the documentation for previous releases, but rather the release docu the user has selected: ![image](https://user-images.githubusercontent.com/729071/130187166-76a3fc25-d684-487d-b0dc-4637b456feb7.png) ### Advantage * By applying this approach, it becomes obsolete to duplicate the docu for each release in: https://github.com/keptn/keptn.github.io/tree/master/content/docs",1.0,"[doc] Automation for creating documentation for a new release and versioning of release docu - ## User story When releasing a new version of Keptn, the documentation is also released based on the same release tag. As a user, I can switch between the release versions, while the latest stable version is shown by default. 
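One possible shape for that automation — a sketch only, assuming a Hugo-built site and hypothetical paths, not Keptn's actual tooling — is to build the docs once per release tag:

```python
# Sketch: build one output tree per release tag so every version stays
# reachable under its own /vX.Y/ path, mirroring the istio.io layout.
import subprocess

tags = subprocess.run(["git", "tag", "--list", "v*"],
                      capture_output=True, text=True, check=True).stdout.split()
for tag in tags:
    # check out each tag into its own worktree, then build it separately
    subprocess.run(["git", "worktree", "add", f"/tmp/docs-{tag}", tag], check=True)
    subprocess.run(["hugo", "--source", f"/tmp/docs-{tag}",
                    "--destination", f"/tmp/site/{tag}"], check=True)
```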
### Details * Using tagging / branching to create release documentation in https://github.com/keptn/keptn.github.io * On the page, I can switch between the release docu. For example, see istio.io: ![image](https://user-images.githubusercontent.com/729071/130186885-b185163d-fb99-4ee1-b707-df6fd455b12e.png) * When switching to an older release, the release version is reflected in the URL: https://istio.io/v1.9/ * Consequently, we should not show the documentation for previous releases, but rather the release docu the user has selected: ![image](https://user-images.githubusercontent.com/729071/130187166-76a3fc25-d684-487d-b0dc-4637b456feb7.png) ### Advantage * By applying this approach, it becomes obsolete to duplicate the docu for each release in: https://github.com/keptn/keptn.github.io/tree/master/content/docs",1, automation for creating documentation for a new release and versioning of release docu user story when releasing a new version of keptn the documentation is also released based on the same release tag as a user i can switch between the release versions while the latest stable version is shown by default details using tagging branching to create release documentation in on the page i can switch between the release docu for example see istio io when switching to an older release the release version is reflected in the url consequently we should not show the documentation for previous releases but rather the release docu the user has selected advantage by applying this approach it becomes obsolete to duplicate the docu for each release in ,1 8885,3010710554.0,IssuesEvent,2015-07-28 14:32:48,joe-bader/test-repo,https://api.github.com/repos/joe-bader/test-repo,opened,"[CNVERG-54] iPhone 6, iPad 3 mini. Space area: Context Menu: Draw on Canvas: when an user draws something the image isn't displayed ",Crossplatform Mobile Testing QA,"[reporter=""a.shemerey"", created=""Wed, 22 Jul 2015 15:45:05 +0300""]
  1. Log in as a user
  2. Go to a Space area
  3. Open context menu
  4. Click 'Draw on Canvas'
  5. Draw something

Result: when a user draws something, the image isn't displayed on the screen, but I can see what I have drawn on this space in any other browser / device

",1.0,"[CNVERG-54] iPhone 6, iPad 3 mini. Space area: Context Menu: Draw on Canvas: when an user draws something the image isn't displayed - [reporter=""a.shemerey"", created=""Wed, 22 Jul 2015 15:45:05 +0300""]
  1. Log in as a user
  2. Go to a Space area
  3. Open context menu
  4. Click 'Draw on Canvas'
  5. Draw something

Result: when a user draws something, the image isn't displayed on the screen, but I can see what I have drawn on this space in any other browser / device

",0, iphone ipad mini space area context menu draw on canvas when an user draws something the image isn t displayed log in like an user go to a space area open context menu click draw on canvas draw something result when an user draws something the image isn t displayed on the screen but i can see what i have drown on this space in any other browser device ,0 344073,24796741785.0,IssuesEvent,2022-10-24 17:57:18,Eleanorgruth/whats-cookin,https://api.github.com/repos/Eleanorgruth/whats-cookin,closed,README.md,documentation,"- [x] Overview of project and goals - [x] Overview of technologies used, challenges, wins, and any other reflections - [x] Screenshots/gifs of your app - [x] List of contributors",1.0,"README.md - - [x] Overview of project and goals - [x] Overview of technologies used, challenges, wins, and any other reflections - [x] Screenshots/gifs of your app - [x] List of contributors",0,readme md overview of project and goals overview of technologies used challenges wins and any other reflections screenshots gifs of your app list of contributors,0 8766,27172225269.0,IssuesEvent,2023-02-17 20:34:15,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,App registration is not up to date,area:Docs automation:Closed,"App registration has moved in to Azure. Corresponding settings that should be done under Platforms header is hard to find. //Olof --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: f02010cc-2715-86ca-b3fe-e4f92e934fdb * Version Independent ID: 27b3e16e-f80f-32f4-0e4f-69cfcd1cf769 * Content: [Create an app with Microsoft Graph - OneDrive API - OneDrive dev center](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/app-registration?view=odsp-graph-online#feedback) * Content Source: [docs/rest-api/getting-started/app-registration.md](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/rest-api/getting-started/app-registration.md) * Product: **onedrive** * GitHub Login: @rgregg * Microsoft Alias: **rgregg**",1.0,"App registration is not up to date - App registration has moved in to Azure. Corresponding settings that should be done under Platforms header is hard to find. //Olof --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: f02010cc-2715-86ca-b3fe-e4f92e934fdb * Version Independent ID: 27b3e16e-f80f-32f4-0e4f-69cfcd1cf769 * Content: [Create an app with Microsoft Graph - OneDrive API - OneDrive dev center](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/app-registration?view=odsp-graph-online#feedback) * Content Source: [docs/rest-api/getting-started/app-registration.md](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/rest-api/getting-started/app-registration.md) * Product: **onedrive** * GitHub Login: @rgregg * Microsoft Alias: **rgregg**",1,app registration is not up to date app registration has moved in to azure corresponding settings that should be done under platforms header is hard to find olof document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product onedrive github login rgregg microsoft alias rgregg ,1 4203,15797217504.0,IssuesEvent,2021-04-02 16:18:06,uiowa/uiowa,https://api.github.com/repos/uiowa/uiowa,closed,Replace broken links in admissions AOS miration postImportProcess.,5 points admissions.uiowa.edu automation,"There are 1500+ broken links in the AOS migration. The report is happening in the `postImportProcess` method now. I think we can stick with that and expand it. We can replace some using the migration map and others using a manual map of source -> destination NIDs that admissions created. We should double check all fields are being scanned that need to be.",1.0,"Replace broken links in admissions AOS miration postImportProcess. - There are 1500+ broken links in the AOS migration. The report is happening in the `postImportProcess` method now. I think we can stick with that and expand it. We can replace some using the migration map and others using a manual map of source -> destination NIDs that admissions created. We should double check all fields are being scanned that need to be.",1,replace broken links in admissions aos miration postimportprocess there are broken links in the aos migration the report is happening in the postimportprocess method now i think we can stick with that and expand it we can replace some using the migration map and others using a manual map of source destination nids that admissions created we should double check all fields are being scanned that need to be ,1 1713,10595012391.0,IssuesEvent,2019-10-09 18:02:27,IBM/ibm-spectrum-scale-csi-operator,https://api.github.com/repos/IBM/ibm-spectrum-scale-csi-operator,closed,Convert Operator deployment to playbook,Component: Automation Component: Bundling Phase: Development Severity: 2 Type: Enhancement Type: wontfix,The Operator deployment shouldn't be Bash. The original bash scripts were written when I wasn't as Ansible literate. I believe a playbook would be easier to comprehend and could ensure stateful information before the operator even triggers,1.0,Convert Operator deployment to playbook - The Operator deployment shouldn't be Bash. The original bash scripts were written when I wasn't as Ansible literate. 
I believe a playbook would be easier to comprehend and could ensure stateful information before the operator even triggers,1,convert operator deployment to playbook the operator deployment shouldn t be bash the original bash scripts were written when i wasn t as ansible literate i believe a playbook would be easier to comprehend and could ensure stateful information before the operator even triggers,1 978,8953064301.0,IssuesEvent,2019-01-25 18:20:05,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,closed,Setup testing track on Google Play,🤖 automation,"Build, sign and upload Nightly builds to Google Play testing track.",1.0,"Setup testing track on Google Play - Build, sign and upload Nightly builds to Google Play testing track.",1,setup testing track on google play build sign and upload nightly builds to google play testing track ,1 88910,3787374593.0,IssuesEvent,2016-03-21 10:21:18,HubTurbo/HubTurbo,https://api.github.com/repos/HubTurbo/HubTurbo,closed,CONTRIBUTING.md should also reference process.md,aspect-devops forFirstTimers priority.medium,"Currently only points to [dev guide](https://github.com/HubTurbo/HubTurbo/blob/master/docs/developerGuide.md) and [workflow.md](https://github.com/HubTurbo/HubTurbo/blob/master/docs/workflow.md). Making [**process.md**](https://github.com/HubTurbo/HubTurbo/blob/master/docs/process.md) immediately visible from [CONTRIBUTING.md](https://github.com/HubTurbo/HubTurbo/blob/master/CONTRIBUTING.md) means that new contributors will be handed the following information on a silver platter: 1. The guidelines and conventions for submitting pull requests 2. Exposes how pull requests are approved for merging. This change should (hopefully) let new contributors catch simple problems in their PRs without a dev having to step in.",1.0,"CONTRIBUTING.md should also reference process.md - Currently only points to [dev guide](https://github.com/HubTurbo/HubTurbo/blob/master/docs/developerGuide.md) and [workflow.md](https://github.com/HubTurbo/HubTurbo/blob/master/docs/workflow.md). Making [**process.md**](https://github.com/HubTurbo/HubTurbo/blob/master/docs/process.md) immediately visible from [CONTRIBUTING.md](https://github.com/HubTurbo/HubTurbo/blob/master/CONTRIBUTING.md) means that new contributors will be handed the following information on a silver platter: 1. The guidelines and conventions for submitting pull requests 2. Exposes how pull requests are approved for merging. This change should (hopefully) let new contributors catch simple problems in their PRs without a dev having to step in.",0,contributing md should also reference process md currently only points to and making immediately visible from means that new contributors will be handed the following information on a silver platter the guidelines and conventions for submitting pull requests exposes how pull requests are approved for merging this change should hopefully let new contributors catch simple problems in their prs without a dev having to step in ,0 335546,10155142005.0,IssuesEvent,2019-08-06 09:38:01,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,m.flickr.com - see bug description,browser-firefox-mobile engine-gecko priority-important," **URL**: https://m.flickr.com/#/photos/65665666@N06/8679478921 **Browser / Version**: Firefox Mobile 68.0 **Operating System**: Android **Tested Another Browser**: No **Problem type**: Something else **Description**: content not visible **Steps to Reproduce**: It is not possible to see pictures posted on this site
Browser Configuration
  • None
_From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"m.flickr.com - see bug description - **URL**: https://m.flickr.com/#/photos/65665666@N06/8679478921 **Browser / Version**: Firefox Mobile 68.0 **Operating System**: Android **Tested Another Browser**: No **Problem type**: Something else **Description**: content not visible **Steps to Reproduce**: It is not possible to see pictures posted on this site
Browser Configuration
  • None
_From [webcompat.com](https://webcompat.com/) with ❤️_",0,m flickr com see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description content not visible steps to reproduce it is not possible to see pictures posted on this site browser configuration none from with ❤️ ,0 4783,17468711587.0,IssuesEvent,2021-08-06 21:19:19,dotnet/arcade,https://api.github.com/repos/dotnet/arcade,closed,http client timeouts when uploading blobs to storage account during publishing,First Responder Detected By - Automation," - [ ] This issue is blocking - [X] This issue is causing unreasonable pain We're seeing some failed publishing jobs where we fail to upload a blob because we hit the default 100 second HttpClient timeout. We should see if we can increase the timeout in the Azure libraries, and whether that helps or not. Some example failures: * https://dev.azure.com/dnceng/internal/_build/results?buildId=1265525&view=logs&j=ba23343f-f710-5af9-782d-5bd26b102304&t=6e277ba4-1c1e-552d-b96f-db0aeb4be20a * https://dev.azure.com/dnceng/internal/_build/results?buildId=1264791&view=logs&j=ba23343f-f710-5af9-782d-5bd26b102304&t=6e277ba4-1c1e-552d-b96f-db0aeb4be20a * https://dev.azure.com/dnceng/internal/_build/results?buildId=1265145&view=logs&j=ba23343f-f710-5af9-782d-5bd26b102304&t=6e277ba4-1c1e-552d-b96f-db0aeb4be20a The operation that is failing is here: https://github.com/dotnet/arcade/blob/b038a54d9137901e22868692b3d1f5c050e968c8/src/Microsoft.DotNet.Build.Tasks.Feed/src/common/AzureStorageUtils.cs#L85 ",1.0,"http client timeouts when uploading blobs to storage account during publishing - - [ ] This issue is blocking - [X] This issue is causing unreasonable pain We're seeing some failed publishing jobs where we fail to upload a blob because we hit the default 100 second HttpClient timeout. We should see if we can increase the timeout in the Azure libraries, and whether that helps or not. 
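For reference, raising the client-side timeout looks roughly like this — sketched with the Python azure-storage-blob SDK rather than the .NET client the Feed task actually uses, with placeholder connection details:

```python
# Rough illustration only; connection string, container, and blob names
# are placeholders, not values from this issue.
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    conn_str="<storage-connection-string>",
    container_name="assets",
    blob_name="packages/example.nupkg",
)
with open("example.nupkg", "rb") as data:
    # connection_timeout is the per-request transport timeout in seconds,
    # so a large upload is not cut off by a small default.
    blob.upload_blob(data, overwrite=True, connection_timeout=600)
```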
Some example failures: * https://dev.azure.com/dnceng/internal/_build/results?buildId=1265525&view=logs&j=ba23343f-f710-5af9-782d-5bd26b102304&t=6e277ba4-1c1e-552d-b96f-db0aeb4be20a * https://dev.azure.com/dnceng/internal/_build/results?buildId=1264791&view=logs&j=ba23343f-f710-5af9-782d-5bd26b102304&t=6e277ba4-1c1e-552d-b96f-db0aeb4be20a * https://dev.azure.com/dnceng/internal/_build/results?buildId=1265145&view=logs&j=ba23343f-f710-5af9-782d-5bd26b102304&t=6e277ba4-1c1e-552d-b96f-db0aeb4be20a The operation that is failing is here: https://github.com/dotnet/arcade/blob/b038a54d9137901e22868692b3d1f5c050e968c8/src/Microsoft.DotNet.Build.Tasks.Feed/src/common/AzureStorageUtils.cs#L85 ",1,http client timeouts when uploading blobs to storage account during publishing this issue is blocking this issue is causing unreasonable pain we re seeing some failed publishing jobs where we fail to upload a blob because we hit the default second httpclient timeout we should see if we can increase the timeout in the azure libraries and whether that helps or not some example failures the operation that is failing is here ,1 7766,25568323409.0,IssuesEvent,2022-11-30 15:48:12,hackforla/website,https://api.github.com/repos/hackforla/website,opened,ER: github-actions bot removing Draft label,Size: Large Feature: Board/GitHub Maintenance automation role: dev leads Draft size: 0.25pt,"### Emergent Requirement - Problem The github-actions bot is removing the `Draft` label ### Issue you discovered this emergent requirement in - # ### Date discovered ### Did you have to do something temporarily - [ ] YES - [ ] NO ### Who was involved @ ### What happens if this is not addressed ### Resources ### Recommended Action Items - [ ] Make a new issue - [ ] Discuss with team - [ ] Let a Team Lead know ### Potential solutions [draft] ",1.0,"ER: github-actions bot removing Draft label - ### Emergent Requirement - Problem The github-actions bot is removing the `Draft` label ### Issue you discovered this emergent requirement in - # ### Date discovered ### Did you have to do something temporarily - [ ] YES - [ ] NO ### Who was involved @ ### What happens if this is not addressed ### Resources ### Recommended Action Items - [ ] Make a new issue - [ ] Discuss with team - [ ] Let a Team Lead know ### Potential solutions [draft] ",1,er github actions bot removing draft label emergent requirement problem the github actions bot is removing the draft label issue you discovered this emergent requirement in date discovered did you have to do something temporarily yes no who was involved what happens if this is not addressed resources recommended action items make a new issue discuss with team let a team lead know potential solutions ,1 4261,15893773621.0,IssuesEvent,2021-04-11 07:31:49,openhab/openhab-core,https://api.github.com/repos/openhab/openhab-core,closed,[automation] Schedule shows disabled rules,PR pending UI automation,The schedule displays all rules - independent of the state. Event when rules are disabled by user they will be displayed in the schedule - this is really confusing.,1.0,[automation] Schedule shows disabled rules - The schedule displays all rules - independent of the state. 
Even when rules are disabled by the user they will be displayed in the schedule - this is really confusing.,1, schedule shows disabled rules the schedule displays all rules independent of the state even when rules are disabled by the user they will be displayed in the schedule this is really confusing ,1 119473,10054324421.0,IssuesEvent,2019-07-22 00:38:27,wesnoth/wesnoth,https://api.github.com/repos/wesnoth/wesnoth,closed,Add the asymmetric theme?,Enhancement Ready for testing UI,"Since gloccusv posted his [asymmetric theme](https://forums.wesnoth.org/viewtopic.php?f=6&t=41065&start=15), I've been using a modified version of it ([code](http://sprunge.us/uLV6Pj), [screenshot](https://forums.wesnoth.org/download/file.php?id=83887&mode=view)). My version works best on master (because it uses some of the features from #3852). I've considered packaging it [as an add-on](https://forums.wesnoth.org/viewtopic.php?f=21&t=50213) but I wonder if it'll be easier to just add it to mainline? *edit* That code patch is just what I'm using right now in my personal branch.
I am **not** proposing to just apply that to master as-is; if the concept is acceptable, I'll clean the patch up before merging it. - [ ] On 2560x1440 the left bar is 1106 pixels high, not full length.",0,add the asymmetric theme since gloccusv posted his i ve been using a modified version of it my version works best on master because it uses some of the features from i ve considered packaging it but i wonder if it ll be easier to just add it to mainline edit that code patch is just what i m using right now in my personal branch i am not proposing to just apply that to master as is if the concept is acceptable i ll clean the patch up before merging it on the left bar is pixels high not full length ,0 4067,15345054762.0,IssuesEvent,2021-02-28 04:52:07,pc2ccs/pc2v9,https://api.github.com/repos/pc2ccs/pc2v9,closed,Load reject.ini from CDP config directory.,automation enhancement,"**Is your feature request related to a problem?** During a contest/test on 2/13/2021 it was clear that the reject.ini could not be loaded from a CDP. This feature will automate the loading of judgements from a CDP config directory. This allows the judgements to be specified in a CDP, otherwise the reject.ini must be copied to wherever the pc2 server is installed. **Feature Description**: When contest.yaml is loaded and if a reject.ini is present in that same directory - load judgements from that reject.ini Precedence would be: 1 - Load judgements from CDP/config/reject.ini 2 - Load judgements from reject.ini in directory where server started 3 - Load default judgements Add/Implement for: -load option - load yaml on the Admin - export/write contest yaml **Have you considered other ways to accomplish the same thing?** Yes. The reject.ini must be copied to wherever the pc2 server is installed. **Additional context**: none.",1.0,"Load reject.ini from CDP config directory. - **Is your feature request related to a problem?** During a contest/test on 2/13/2021 it was clear that the reject.ini could not be loaded from a CDP. This feature will automate the loading of judgements from a CDP config directory. This allows the judgements to be specified in a CDP, otherwise the reject.ini must be copied to wherever the pc2 server is installed. **Feature Description**: When contest.yaml is loaded and if a reject.ini is present in that same directory - load judgements from that reject.ini Precedence would be: 1 - Load judgements from CDP/config/reject.ini 2 - Load judgements from reject.ini in directory where server started 3 - Load default judgements Add/Implement for: -load option - load yaml on the Admin - export/write contest yaml **Have you considered other ways to accomplish the same thing?** Yes. The reject.ini must be copied to wherever the pc2 server is installed. 
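The precedence described above is simple enough to sketch — illustrative Python with hypothetical paths, not pc2's actual Java implementation:

```python
# Lookup order from the feature description: 1) reject.ini in the CDP config
# directory next to contest.yaml, 2) reject.ini in the server's start
# directory, 3) built-in default judgements.
from pathlib import Path

def find_reject_ini(cdp_config_dir: Path) -> Path | None:
    for candidate in (cdp_config_dir / "reject.ini", Path.cwd() / "reject.ini"):
        if candidate.is_file():
            return candidate
    return None  # caller falls back to the default judgements
```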
**Additional context**: none.",1,load reject ini from cdp config directory is your feature request related to a problem during a contest test on it was clear that the reject ini could not be loaded from a cdp this feature will automate the loading of judgements from a cdp config directory this allows the judgements to be specified in a cdp otherwise the reject ini must be copied to wherever the server is installed feature description when contest yaml is loaded and if a reject ini is present in that same directory load judgements from that reject ini precedence would be load judgements from cdp config reject ini load judgements from reject ini in directory where server started load default judgements add implement for load option load yaml on the admin export write contest yaml have you considered other ways to accomplish the same thing yes the reject ini must be copied to wherever the server is installed additional context none ,1 6526,23344694284.0,IssuesEvent,2022-08-09 16:48:14,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,FAILED: Automated Tests(10),automation,"Stats: { ""suites"": 40, ""tests"": 302, ""passes"": 292, ""pending"": 0, ""failures"": 10, ""start"": ""2022-08-08T20:21:52.637Z"", ""end"": ""2022-08-08T20:38:05.261Z"", ""duration"": 671495, ""testsRegistered"": 302, ""passPercent"": 96.6887417218543, ""pendingPercent"": 0, ""other"": 0, ""hasOther"": false, ""skipped"": 0, ""hasSkipped"": false } Failed Tests: ""Verify that the option to approve request is displayed"" ""Grant only \""Namespace.Manage\"" permission to Wendy"" ""Verify that all the namespace options and activities are displayed"" ""Grant only \""CredentialIssuer.Admin\"" access to Wendy (access manager)"" ""Verify that only Authorization Profile option is displayed in Namespace page"" ""Grant only \""Namespace.View\"" permission to Mark"" ""Verify that the option to approve request is not displayed"" ""Verify that service accounts are not created"" ""Grant \""GatewayConfig.Publish\"" and \""Namespace.View\"" access to Wendy (access manager)"" ""Verify that GWA API allows user to publish the API to Kong gateway"" Run Link: https://github.com/bcgov/api-services-portal/actions/runs/2820624293",1.0,"FAILED: Automated Tests(10) - Stats: { ""suites"": 40, ""tests"": 302, ""passes"": 292, ""pending"": 0, ""failures"": 10, ""start"": ""2022-08-08T20:21:52.637Z"", ""end"": ""2022-08-08T20:38:05.261Z"", ""duration"": 671495, ""testsRegistered"": 302, ""passPercent"": 96.6887417218543, ""pendingPercent"": 0, ""other"": 0, ""hasOther"": false, ""skipped"": 0, ""hasSkipped"": false } Failed Tests: ""Verify that the option to approve request is displayed"" ""Grant only \""Namespace.Manage\"" permission to Wendy"" ""Verify that all the namespace options and activities are displayed"" ""Grant only \""CredentialIssuer.Admin\"" access to Wendy (access manager)"" ""Verify that only Authorization Profile option is displayed in Namespace page"" ""Grant only \""Namespace.View\"" permission to Mark"" ""Verify that the option to approve request is not displayed"" ""Verify that service accounts are not created"" ""Grant \""GatewayConfig.Publish\"" and \""Namespace.View\"" access to Wendy (access manager)"" ""Verify that GWA API allows user to publish the API to Kong gateway"" Run Link: https://github.com/bcgov/api-services-portal/actions/runs/2820624293",1,failed automated tests stats suites tests passes pending failures start end duration testsregistered passpercent pendingpercent other 
hasother false skipped hasskipped false failed tests verify that the option to approve request is displayed grant only namespace manage permission to wendy verify that all the namespace options and activities are displayed grant only credentialissuer admin access to wendy access manager verify that only authorization profile option is displayed in namespace page grant only namespace view permission to mark verify that the option to approve request is not displayed verify that service accounts are not created grant gatewayconfig publish and namespace view access to wendy access manager verify that gwa api allows user to publish the api to kong gateway run link ,1 208588,15895752566.0,IssuesEvent,2021-04-11 15:04:51,lewiswatson55/SEM-Group19,https://api.github.com/repos/lewiswatson55/SEM-Group19,closed,ToString Unit Test (For All Classes),Testing Missing,Working example done for language class in Unit Test File.,1.0,ToString Unit Test (For All Classes) - Working example done for language class in Unit Test File.,0,tostring unit test for all classes working example done for language class in unit test file ,0 8907,27194445312.0,IssuesEvent,2023-02-20 03:06:07,AnthonyMonterrosa/C-sharp-service-stack,https://api.github.com/repos/AnthonyMonterrosa/C-sharp-service-stack,closed,Only Allow PRs That Follow Environment Pipeline.,automation enhancement,"Currently, any branch can have a PR merged into `test` and `production`, but we only want `main`->`test` and `test`->`production` to be possible. I believe this can be enforced with a GitHub Action that fails depending on the branch name and the branch to merge into which are both available from github via `github.head_ref` and `github.base_ref`, respectively. https://stackoverflow.com/questions/58033366/how-to-get-the-current-branch-within-github-actions",1.0,"Only Allow PRs That Follow Environment Pipeline. - Currently, any branch can have a PR merged into `test` and `production`, but we only want `main`->`test` and `test`->`production` to be possible. I believe this can be enforced with a GitHub Action that fails depending on the branch name and the branch to merge into which are both available from github via `github.head_ref` and `github.base_ref`, respectively. https://stackoverflow.com/questions/58033366/how-to-get-the-current-branch-within-github-actions",1,only allow prs that follow environment pipeline currently any branch can have a pr merged into test and production but we only want main test and test production to be possible i believe this can be enforced with a github action that fails depending on the branch name and the branch to merge into which are both available from github via github head ref and github base ref respectively ,1 10059,31468842600.0,IssuesEvent,2023-08-30 05:43:52,ntut-open-source-club/practical-tools-for-simple-design,https://api.github.com/repos/ntut-open-source-club/practical-tools-for-simple-design,closed,Fix lint/format warnings and `-Werror`,refactoring automation,"1. [ ] Fix clang-tidy and clang-format warnings 2. [ ] Add `-Werror` in Github Action",1.0,"Fix lint/format warnings and `-Werror` - 1. [ ] Fix clang-tidy and clang-format warnings 2. 
[ ] Add `-Werror` in Github Action",1,fix lint format warnings and werror fix clang tidy and clang format warnings add werror in github action,1 5780,21076860114.0,IssuesEvent,2022-04-02 09:13:07,SmartDataAnalytics/OpenResearch,https://api.github.com/repos/SmartDataAnalytics/OpenResearch,closed,Duplicate acronyms,WP 2.7 Hosting automation Migration needs manual fixing,"``` Exception: INSERT INTO Event (acronym,ordinal,homepage,title,startDate,endDate) values (:acronym,:ordinal,:homepage,:title,:startDate,:endDate) failed:UNIQUE constraint failed: Event.acronym record #281 ``` ",1.0,"Duplicate acronyms - ``` Exception: INSERT INTO Event (acronym,ordinal,homepage,title,startDate,endDate) values (:acronym,:ordinal,:homepage,:title,:startDate,:endDate) failed:UNIQUE constraint failed: Event.acronym record #281 ``` ",1,duplicate acronyms exception insert into event acronym ordinal homepage title startdate enddate values acronym ordinal homepage title startdate enddate failed unique constraint failed event acronym record ,1 7172,24345571343.0,IssuesEvent,2022-10-02 09:06:41,home-assistant/core,https://api.github.com/repos/home-assistant/core,closed,"Invalid Automation added to automations.yaml, not showing up on the automations list",integration: automation stale,"### The problem I'm working on a blueprint for automating my lights. I obviously have an issue with it which I'm still figuring out. The problem is that when I add a new automation based on the broken blueprint NOTHING shows up in the automation list (since it's broken), however it still gets added to the `automations.yaml` file and still tries to run, even if the blueprint is further changed. This causes errors while restarting since HA detects that there is an invalid automation. This is expected, sure. However there is no indication on the Automations screen that the broken automation is active. I spent 30 minutes changing stuff, removing files, tried to restart HA only to get an error over and over again. Finally I opened the `automations.yaml` file, where I assumed I have not made any changes (I only tinkered with the blueprint and I had no new automations in the UI). There I found like 4-5 automations based on the blueprint that were added (one for each time I attempted to create an automation from the blueprint). My request would be to display all automations from the `automations.yaml` file, even the ones that have invalid configuration/blueprint, but mark them as disabled and give the user the option to delete them. Since there is obviously a logic for filtering invalid automations, I'm hoping it would be easy to instead display them and provide more visibility for the user. ### What version of Home Assistant Core has the issue? 2022.8.6 ### What was the last working version of Home Assistant Core? _No response_ ### What type of installation are you running? 
Home Assistant OS ### Integration causing the issue Automations ### Link to integration documentation on our website https://www.home-assistant.io/docs/automation/ ### Diagnostics information _No response_ ### Example YAML snippet ```yaml blueprint: name: Motion Light with Override description: Turn a light on based on detected motion domain: automation input: input_motion_sensor_entity: name: Motion Sensor selector: entity: domain: binary_sensor device_class: door # device_class: motion input_light_switch_entity: name: Light Switch Entity selector: entity: domain: binary_sensor device_class: plug # device_class: power cooldown_period: name: Cooldown Period selector: number: min: 0 max: 1800 step: 1 mode: box unit_of_measurement: seconds target_light_entity: name: The light to control selector: entity: domain: light alias: Test Light Motion (Duplicate) description: """" trigger: - platform: state entity_id: - !input input_motion_sensor_entity id: Motion Triggered from: ""off"" to: ""on"" - platform: state entity_id: - !input input_motion_sensor_entity id: Motion Cleared from: ""on"" to: ""off"" condition: [] # variables: # lightSwitchEntity: !input input_light_switch_entity # cooldownPeriod: !input cooldown_period action: - choose: - conditions: - condition: and conditions: - condition: trigger id: Motion Triggered - condition: state entity_id: !input input_light_switch_entity state: ""off"" - condition: template value_template: >- {{ as_timestamp(now()) - as_timestamp(states[lightSwitchEntity].last_changed) > 10 }} sequence: - type: turn_on entity_id: !input target_light_entity domain: light - conditions: - condition: and conditions: - condition: trigger id: Motion Cleared - condition: state entity_id: !input input_light_switch_entity state: ""off"" - condition: template value_template: >- {{ as_timestamp(now()) - as_timestamp(states[lightSwitchEntity].last_changed) > 10 }} sequence: - type: turn_off entity_id: !input target_light_entity domain: light default: [] mode: parallel max: 3 ``` ### Anything in the logs that might be useful for us? ```txt 2022-08-26 09:07:38.625 ERROR (MainThread) [homeassistant.components.automation] Blueprint Motion Light with Override generated invalid automation with inputs OrderedDict([('input_motion_sensor_entity', 'binary_sensor.washing_machine_door_sensor_contact'), ('input_light_switch_entity', 'binary_sensor.p30pro_is_charging'), ('cooldown_period', 10), ('target_light_entity', 'light.zigbee_bulb')]): Unable to determine action @ data['action'][0]['choose'][0]['sequence'][0]. Got None ``` ### Additional information _No response_",1.0,"Invalid Automation added to automations.yaml, not showing up on the automations list - ### The problem I'm working on a blueprint for automating my lights. I obviously have an issue with it which I'm still figuring out. The problem is that when I add a new automation based on the broken blueprint NOTHING shows up in the automation list (since it's broken), however it still gets added to the `automations.yaml` file and still tries to run, even if the blueprint is further changed. This causes errors while restarting since HA detects that there is an invalid automation. This is expected, sure. However there is no indication on the Automations screen that the broken automation is active. I spent 30 minutes changing stuff, removing files, tried to restart HA only to get an error over and over again. 
Finally I opened the `automations.yaml` file, where I assumed I have not made any changes (I only tinkered with the blueprint and I had no new automations in the UI). There I found like 4-5 automations based on the blueprint that were added (one for each time I attempted to create an automation from the blueprint). My request would be to display all automations from the `automations.yaml` file, even the ones that have invalid configuration/blueprint, but mark them as disabled and give the user the option to delete them. Since there is obviously a logic for filtering invalid automations, I'm hoping it would be easy to instead display them and provide more visibility for the user. ### What version of Home Assistant Core has the issue? 2022.8.6 ### What was the last working version of Home Assistant Core? _No response_ ### What type of installation are you running? Home Assistant OS ### Integration causing the issue Automations ### Link to integration documentation on our website https://www.home-assistant.io/docs/automation/ ### Diagnostics information _No response_ ### Example YAML snippet ```yaml blueprint: name: Motion Light with Override description: Turn a light on based on detected motion domain: automation input: input_motion_sensor_entity: name: Motion Sensor selector: entity: domain: binary_sensor device_class: door # device_class: motion input_light_switch_entity: name: Light Switch Entity selector: entity: domain: binary_sensor device_class: plug # device_class: power cooldown_period: name: Cooldown Period selector: number: min: 0 max: 1800 step: 1 mode: box unit_of_measurement: seconds target_light_entity: name: The light to control selector: entity: domain: light alias: Test Light Motion (Duplicate) description: """" trigger: - platform: state entity_id: - !input input_motion_sensor_entity id: Motion Triggered from: ""off"" to: ""on"" - platform: state entity_id: - !input input_motion_sensor_entity id: Motion Cleared from: ""on"" to: ""off"" condition: [] # variables: # lightSwitchEntity: !input input_light_switch_entity # cooldownPeriod: !input cooldown_period action: - choose: - conditions: - condition: and conditions: - condition: trigger id: Motion Triggered - condition: state entity_id: !input input_light_switch_entity state: ""off"" - condition: template value_template: >- {{ as_timestamp(now()) - as_timestamp(states[lightSwitchEntity].last_changed) > 10 }} sequence: - type: turn_on entity_id: !input target_light_entity domain: light - conditions: - condition: and conditions: - condition: trigger id: Motion Cleared - condition: state entity_id: !input input_light_switch_entity state: ""off"" - condition: template value_template: >- {{ as_timestamp(now()) - as_timestamp(states[lightSwitchEntity].last_changed) > 10 }} sequence: - type: turn_off entity_id: !input target_light_entity domain: light default: [] mode: parallel max: 3 ``` ### Anything in the logs that might be useful for us? ```txt 2022-08-26 09:07:38.625 ERROR (MainThread) [homeassistant.components.automation] Blueprint Motion Light with Override generated invalid automation with inputs OrderedDict([('input_motion_sensor_entity', 'binary_sensor.washing_machine_door_sensor_contact'), ('input_light_switch_entity', 'binary_sensor.p30pro_is_charging'), ('cooldown_period', 10), ('target_light_entity', 'light.zigbee_bulb')]): Unable to determine action @ data['action'][0]['choose'][0]['sequence'][0]. 
Got None ``` ### Additional information _No response_",1,invalid automation added to automations yaml not showing up on the automations list the problem i m working on a blueprint for automating my lights i obviously have an issue with it which i m still figuring out the problem is that when i add a new automation based on the broken blueprint nothing shows up in the automation list since it s broken however it still gets added to the automations yaml file and still tries to run even if the blueprint is further changed this causes errors while restarting since ha detects that there is an invalid automation this is expected sure however there is no indication on the automations screen that the broken automation is active i spent minutes changing stuff removing files tried to restart ha only to get an error over and over again finally i opened the automations yaml file where i assumed i have not made any changes i only tinkered with the blueprint and i had no new automations in the ui there i found like automations based on the blueprint that were added one for each time i attempted to create an automation from the blueprint my request would be to display all automations from the automations yaml file even the ones that have invalid configuration blueprint but mark them as disabled and give the user the option to delete them since there is obviously a logic for filtering invalid automations i m hoping it would be easy to instead display them and provide more visibility for the user what version of home assistant core has the issue what was the last working version of home assistant core no response what type of installation are you running home assistant os integration causing the issue automations link to integration documentation on our website diagnostics information no response example yaml snippet yaml blueprint name motion light with override description turn a light on based on detected motion domain automation input input motion sensor entity name motion sensor selector entity domain binary sensor device class door device class motion input light switch entity name light switch entity selector entity domain binary sensor device class plug device class power cooldown period name cooldown period selector number min max step mode box unit of measurement seconds target light entity name the light to control selector entity domain light alias test light motion duplicate description trigger platform state entity id input input motion sensor entity id motion triggered from off to on platform state entity id input input motion sensor entity id motion cleared from on to off condition variables lightswitchentity input input light switch entity cooldownperiod input cooldown period action choose conditions condition and conditions condition trigger id motion triggered condition state entity id input input light switch entity state off condition template value template as timestamp now as timestamp states last changed sequence type turn on entity id input target light entity domain light conditions condition and conditions condition trigger id motion cleared condition state entity id input input light switch entity state off condition template value template as timestamp now as timestamp states last changed sequence type turn off entity id input target light entity domain light default mode parallel max anything in the logs that might be useful for us txt error mainthread blueprint motion light with override generated invalid automation with inputs ordereddict unable to determine action data got 
none additional information no response ,1 1840,10920483976.0,IssuesEvent,2019-11-21 21:22:37,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,opened,a8n: Align API for previews and external changesets,automation,"I know I signed this off earlier, but it turns out to be quite a hassle that `ChangesetPlan` and `ExternalChangeset` are not very similar in their structure. It's just really confusing to use and if we get stuck with it during WebApp development already, how should our customers understand it. ```graphql type ChangesetPlan { repository: Repository! fileDiffs(first: Int): PreviewFileDiffConnection! } type PreviewFileDiffConnection { nodes: [PreviewFileDiff!]! totalCount: Int pageInfo: PageInfo! diffStat: DiffStat! rawDiff: String! } type PreviewFileDiff { oldPath: String oldFile: File2 newPath: String hunks: [FileDiffHunk!]! stat: DiffStat! internalID: String! } ``` ```graphql type ExternalChangeset { repository: Repository! diff: RepositoryComparison ... others } type RepositoryComparison { baseRepository: Repository! headRepository: Repository! range: GitRevisionRange! commits(first: Int): GitCommitConnection! fileDiffs(first: Int): FileDiffConnection! } type FileDiffConnection { nodes: [FileDiff!]! totalCount: Int pageInfo: PageInfo! diffStat: DiffStat! rawDiff: String! } type FileDiff { oldPath: String oldFile: File2 newPath: String + newFile: File2 + mostRelevantFile: File2! hunks: [FileDiffHunk!]! stat: DiffStat! internalID: String! } ``` If we would reintroduce the `RepositoryComparison` on the preview level, the structure would be more aligned and it would also lay the foundation for codeintel on previews, where ultimately it would be useful to have equal `FileDiff` and `PreviewFileDiff`, so the connections can also be the same. the only thing different would then be the `RepositoryComparison`, and the frontend would use the file at the base and apply the patch of the changesetplan to return the `File2` from a ""virtual file system"" to be provided to the hover providers. So I'm suggesting here that we reintroduce ``` type PreviewRepositoryComparison { baseRepository: Repository! fileDiffs(first: Int): PreviewFileDiffConnection! } ```",1.0,"a8n: Align API for previews and external changesets - I know I signed this off earlier, but it turns out to be quite a hassle that `ChangesetPlan` and `ExternalChangeset` are not very similar in their structure. It's just really confusing to use and if we get stuck with it during WebApp development already, how should our customers understand it. ```graphql type ChangesetPlan { repository: Repository! fileDiffs(first: Int): PreviewFileDiffConnection! } type PreviewFileDiffConnection { nodes: [PreviewFileDiff!]! totalCount: Int pageInfo: PageInfo! diffStat: DiffStat! rawDiff: String! } type PreviewFileDiff { oldPath: String oldFile: File2 newPath: String hunks: [FileDiffHunk!]! stat: DiffStat! internalID: String! } ``` ```graphql type ExternalChangeset { repository: Repository! diff: RepositoryComparison ... others } type RepositoryComparison { baseRepository: Repository! headRepository: Repository! range: GitRevisionRange! commits(first: Int): GitCommitConnection! fileDiffs(first: Int): FileDiffConnection! } type FileDiffConnection { nodes: [FileDiff!]! totalCount: Int pageInfo: PageInfo! diffStat: DiffStat! rawDiff: String! } type FileDiff { oldPath: String oldFile: File2 newPath: String + newFile: File2 + mostRelevantFile: File2! hunks: [FileDiffHunk!]! stat: DiffStat! internalID: String! 
} ``` If we would reintroduce the `RepositoryComparison` on the preview level, the structure would be more aligned and it would also lay the foundation for codeintel on previews, where ultimately it would be useful to have equal `FileDiff` and `PreviewFileDiff`, so the connections can also be the same. the only thing different would then be the `RepositoryComparison`, and the frontend would use the file at the base and apply the patch of the changesetplan to return the `File2` from a ""virtual file system"" to be provided to the hover providers. So I'm suggesting here that we reintroduce ``` type PreviewRepositoryComparison { baseRepository: Repository! fileDiffs(first: Int): PreviewFileDiffConnection! } ```",1, align api for previews and external changesets i know i signed this off earlier but it turns out to be quite a hassle that changesetplan and externalchangeset are not very similar in their structure it s just really confusing to use and if we get stuck with it during webapp development already how should our customers understand it graphql type changesetplan repository repository filediffs first int previewfilediffconnection type previewfilediffconnection nodes totalcount int pageinfo pageinfo diffstat diffstat rawdiff string type previewfilediff oldpath string oldfile newpath string hunks stat diffstat internalid string graphql type externalchangeset repository repository diff repositorycomparison others type repositorycomparison baserepository repository headrepository repository range gitrevisionrange commits first int gitcommitconnection filediffs first int filediffconnection type filediffconnection nodes totalcount int pageinfo pageinfo diffstat diffstat rawdiff string type filediff oldpath string oldfile newpath string newfile mostrelevantfile hunks stat diffstat internalid string if we would reintroduce the repositorycomparison on the preview level the structure would be more aligned and it would also lay the foundation for codeintel on previews where ultimately it would be useful to have equal filediff and previewfilediff so the connections can also be the same the only thing different would then be the repositorycomparison and the frontend would use the file at the base and apply the patch of the changesetplan to return the from a virtual file system to be provided to the hover providers so i m suggesting here that we reintroduce type previewrepositorycomparison baserepository repository filediffs first int previewfilediffconnection ,1 5720,20841255977.0,IssuesEvent,2022-03-21 00:07:38,theglus/Home-Assistant-Config,https://api.github.com/repos/theglus/Home-Assistant-Config,closed,Setup PC Switchbot,integration automation desktop,"# Requirements - [x] Create automation to switch KVM when WoL. - [x] Create automation to switch KVM when Shutdown is triggered. # Resources",1.0,"Setup PC Switchbot - # Requirements - [x] Create automation to switch KVM when WoL. - [x] Create automation to switch KVM when Shutdown is triggered. 
# Resources",1,setup pc switchbot requirements create automation to switch kvm when wol create automation to switch kvm when shutdown is triggered resources,1 611358,18953076108.0,IssuesEvent,2021-11-18 17:02:01,unicode-org/icu4x,https://api.github.com/repos/unicode-org/icu4x,opened,Figure out plan for constructing DateTimeFormat for different calendars,C-datetime discuss-priority,"Part of https://github.com/unicode-org/icu4x/issues/1115 The status quo of calendar support is that: - We support `Date` for different `Calendar`s `C` (there's an `AsCalendar` trait in there but we can mostly ignore it). Dates are strongly typed. - At _some point_ we would like to add support for `ErasedCalendar` which can contain dates from any calendar. This does not currently exist, but one can imagine it as an enum of calendar values that raises errors when calendars are mixed. - DateTimeFormat accepts `DateInput` objects. Currently only `Date` supports being used as a `DateInput`. Of course, we want to change that - DTF data is split by variant; so you have to specify `variant: buddhist` (etc) when loading DTF data - `DateTimeFormat::try_new()` loads data at construction time, so it too must specify a variant at construction time. It _can_ load multiple variants at once if desired. We would like to add support for formatting non gregorian calendars with DTF. Some preexisting requirements are: - **Architectural**: We have an existing architectural decision that data loading should be independent of formatting: You should walk into formatting with the appropriate data loaded already. - **Performance**: We would strongly prefer to not unconditionally load all calendar data at once ## Option 1: Type parameter on DTF (compile time checks) ```rust struct DateTimeFormat {...} impl DateTimeFormat { fn try_new(...) -> Result } trait DateInput { ... } ``` DTF is parametrized on the calendar type, so at compile time, one must choose to construct `DateTimeFormat` or `DateTimeFormat`. `DateTimeFormat` will only accept `DateInput`s with the same calendar, enforced at compile time. `DateTimeFormat` will load all calendar data at once. If you wish to format values from multiple calendars, you have two options: - At compile time: you can construct a DTF for each calendar you're going to be formatting; given that the dates for different calendars have different types anyway - At runtime: You can construct a `DTF`, which will accept `Date` as well as specific calendars like `Date` (etc). Note that the naïve way of writing this can lead to code bloat: Given that the calendar type is only needed at construction time, the way to write this would be to write `DTFInner` which has a `try_new()` that takes in a string or enum value for calendar type, and wrap it in a `DTF` that is a thin wrapper. Otherwise Rust is likely to generate multiple copies of the mostly-identical functions. For `DTF` to work, `DTF` will need to be able to store a map of calendar data at once. I do not plan to do this immediately, but it's something that can be done when we add support for erased calendars. This method does not really leave space open for dynamic data loading though I guess that can be achieved on `DTF`. 
## Option 2: Runtime option ```rust struct DateTimeFormat {} enum CalendarType { // This can also be a full enum with variants like Gregorian/Buddhist/etc BCP(&'static str), All } impl DateTimeFormat { fn try_new(..., c: CalendarType) -> Result {} // OR // This is essentially a fancier way of writing the above function // without requiring an additional parameter fn try_new(...) -> Result {} } trait DateInput { const NeededCalendarType: CalendarType; ... } ``` Here we specify the calendar we need at data load time, and DTF will attempt to load this data. If you attempt to format a date that uses a different calendar, DTF will error at runtime. Similarly to the previous option, if and when we add support for `Erased` calendars and/or `CalendarType::All`, we'll need to have this contain some kind of map from calendar type to loaded data. I do not intend to do this immediately but I want to plan for it. The nice thing is that this can be extended to support dynamic data loading in a much cleaner way (see below section). Pros: - More flexible at runtime - Allows for dynamic data loading Cons: - Will error at runtime, not compile time ### Option 2 extension: dynamic data loading This can work on Option 1 (`impl DateTimeFormat`) as well, but it's cleaner with Option 2. We can add dynamic data loading of the form ```rust impl DateTimeFormat { fn load_data_for::(&mut self); // or, for convenience fn load_data_for_date::(&mut self, d: &D); } ``` that allows users to dynamically stuff more data into the DTF as needed. ## Option 3: Give up on a requirement We can also give up on either the **Architectural** or **Performance** constraints as given above. I'm not super happy with doing this, but it's worth listing as an option. Thoughts? Input requested from: - [ ] @zbraniecki - [ ] @gregtatum - [ ] @nordzilla - [ ] @sffc ",1.0,"Figure out plan for constructing DateTimeFormat for different calendars - Part of https://github.com/unicode-org/icu4x/issues/1115 The status quo of calendar support is that: - We support `Date` for different `Calendar`s `C` (there's an `AsCalendar` trait in there but we can mostly ignore it). Dates are strongly typed. - At _some point_ we would like to add support for `ErasedCalendar` which can contain dates from any calendar. This does not currently exist, but one can imagine it as an enum of calendar values that raises errors when calendars are mixed. - DateTimeFormat accepts `DateInput` objects. Currently only `Date` supports being used as a `DateInput`. Of course, we want to change that - DTF data is split by variant; so you have to specify `variant: buddhist` (etc) when loading DTF data - `DateTimeFormat::try_new()` loads data at construction time, so it too must specify a variant at construction time. It _can_ load multiple variants at once if desired. We would like to add support for formatting non gregorian calendars with DTF. Some preexisting requirements are: - **Architectural**: We have an existing architectural decision that data loading should be independent of formatting: You should walk into formatting with the appropriate data loaded already. - **Performance**: We would strongly prefer to not unconditionally load all calendar data at once ## Option 1: Type parameter on DTF (compile time checks) ```rust struct DateTimeFormat {...} impl DateTimeFormat { fn try_new(...) -> Result } trait DateInput { ... } ``` DTF is parametrized on the calendar type, so at compile time, one must choose to construct `DateTimeFormat` or `DateTimeFormat`. 
`DateTimeFormat` will only accept `DateInput`s with the same calendar, enforced at compile time. `DateTimeFormat` will load all calendar data at once. If you wish to format values from multiple calendars, you have two options: - At compile time: you can construct a DTF for each calendar you're going to be formatting; given that the dates for different calendars have different types anyway - At runtime: You can construct a `DTF`, which will accept `Date` as well as specific calendars like `Date` (etc). Note that the naïve way of writing this can lead to code bloat: Given that the calendar type is only needed at construction time, the way to write this would be to write `DTFInner` which has a `try_new()` that takes in a string or enum value for calendar type, and wrap it in a `DTF` that is a thin wrapper. Otherwise Rust is likely to generate multiple copies of the mostly-identical functions. For `DTF` to work, `DTF` will need to be able to store a map of calendar data at once. I do not plan to do this immediately, but it's something that can be done when we add support for erased calendars. This method does not really leave space open for dynamic data loading though I guess that can be achieved on `DTF`. ## Option 2: Runtime option ```rust struct DateTimeFormat {} enum CalendarType { // This can also be a full enum with variants like Gregorian/Buddhist/etc BCP(&'static str), All } impl DateTimeFormat { fn try_new(..., c: CalendarType) -> Result {} // OR // This is essentially a fancier way of writing the above function // without requiring an additional parameter fn try_new(...) -> Result {} } trait DateInput { const NeededCalendarType: CalendarType; ... } ``` Here we specify the calendar we need at data load time, and DTF will attempt to load this data. If you attempt to format a date that uses a different calendar, DTF will error at runtime. Similarly to the previous option, if and when we add support for `Erased` calendars and/or `CalendarType::All`, we'll need to have this contain some kind of map from calendar type to loaded data. I do not intend to do this immediately but I want to plan for it. The nice thing is that this can be extended to support dynamic data loading in a much cleaner way (see below section). Pros: - More flexible at runtime - Allows for dynamic data loading Cons: - Will error at runtime, not compile time ### Option 2 extension: dynamic data loading This can work on Option 1 (`impl DateTimeFormat`) as well, but it's cleaner with Option 2. We can add dynamic data loading of the form ```rust impl DateTimeFormat { fn load_data_for::(&mut self); // or, for convenience fn load_data_for_date::(&mut self, d: &D); } ``` that allows users to dynamically stuff more data into the DTF as needed. ## Option 3: Give up on a requirement We can also give up on either the **Architectural** or **Performance** constraints as given above. I'm not super happy with doing this, but it's worth listing as an option. Thoughts? 
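For contrast, a minimal sketch of Option 2's runtime-checked shape, again with hypothetical plumbing (the `CalendarType` variants mirror the issue's sketch; the error handling is an assumption):

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum CalendarType {
    // Mirrors the issue's `BCP(&'static str)` / `All` sketch.
    Bcp(&'static str),
    All,
}

// Option 2: no type parameter; the calendar is picked at data-load time,
// so a mismatched date is only caught at runtime.
struct DateTimeFormat {
    loaded: CalendarType,
}

impl DateTimeFormat {
    fn try_new(c: CalendarType) -> Result<Self, String> {
        // A real implementation would load the data for `c` here
        // (or every variant when `CalendarType::All` is requested).
        Ok(DateTimeFormat { loaded: c })
    }

    fn format(&self, date_calendar: CalendarType) -> Result<String, String> {
        if self.loaded == CalendarType::All || self.loaded == date_calendar {
            Ok(format!("formatted a {:?} date", date_calendar))
        } else {
            Err(format!("no data loaded for {:?}", date_calendar))
        }
    }
}

fn main() {
    let dtf = DateTimeFormat::try_new(CalendarType::Bcp("gregory")).unwrap();
    assert!(dtf.format(CalendarType::Bcp("gregory")).is_ok());
    // The mismatch Option 1 rejects at compile time surfaces here as an Err.
    assert!(dtf.format(CalendarType::Bcp("buddhist")).is_err());
}
```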
Input requested from: - [ ] @zbraniecki - [ ] @gregtatum - [ ] @nordzilla - [ ] @sffc ",0,figure out plan for constructing datetimeformat for different calendars part of the status quo of calendar support is that we support date for different calendar s c there s an ascalendar trait in there but we can mostly ignore it dates are strongly typed at some point we would like to add support for erasedcalendar which can contain dates from any calendar this does not currently exist but one can imagine it as an enum of calendar values that raises errors when calendars are mixed datetimeformat accepts dateinput objects currently only date supports being used as a dateinput of course we want to change that dtf data is split by variant so you have to specify variant buddhist etc when loading dtf data datetimeformat try new loads data at construction time so it too must specify a variant at construction time it can load multiple variants at once if desired we would like to add support for formatting non gregorian calendars with dtf some preexisting requirements are architectural we have an existing architectural decision that data loading should be independent of formatting you should walk into formatting with the appropriate data loaded already performance we would strongly prefer to not unconditionally load all calendar data at once option type parameter on dtf compile time checks rust struct datetimeformat impl datetimeformat fn try new result trait dateinput dtf is parametrized on the calendar type so at compile time one must choose to construct datetimeformat or datetimeformat datetimeformat will only accept dateinput s with the same calendar enforced at compile time datetimeformat will load all calendar data at once if you wish to format values from multiple calendars you have two options at compile time you can construct a dtf for each calendar you re going to be formatting given that the dates for different calendars have different types anyway at runtime you can construct a dtf which will accept date as well as specific calendars like date etc note that the naïve way of writing this can lead to code bloat given that the calendar type is only needed at construction time the way to write this would be to write dtfinner which has a try new that takes in a string or enum value for calendar type and wrap it in a dtf that is a thin wrapper otherwise rust is likely to generate multiple copies of the mostly identical functions for dtf to work dtf will need to be able to store a map of calendar data at once i do not plan to do this immediately but it s something that can be done when we add support for erased calendars this method does not really leave space open for dynamic data loading though i guess that can be achieved on dtf option runtime option rust struct datetimeformat enum calendartype this can also be a full enum with variants like gregorian buddhist etc bcp static str all impl datetimeformat fn try new c calendartype result or this is essentially a fancier way of writing the above function without requiring an additional parameter fn try new result trait dateinput const neededcalendartype calendartype here we specify the calendar we need at data load time and dtf will attempt to load this data if you attempt to format a date that uses a different calendar dtf will error at runtime similarly to the previous option if and when we add support for erased calendars and or calendartype all we ll need to have this contain some kind of map from calendar type to loaded data i do not intend to do 
this immediately but i want to plan for it the nice thing is that this can be extended to support dynamic data loading in a much cleaner way see below section pros more flexible at runtime allows for dynamic data loading cons will error at runtime not compile time option extension dynamic data loading this can work on option impl datetimeformat as well but it s cleaner with option we can add dynamic data loading of the form rust impl datetimeformat fn load data for mut self or for convenience fn load data for date mut self d d that allows users to dynamically stuff more data into the dtf as needed option give up on a requirement we can also give up on either the architectural or performance constraints as given above i m not super happy with doing this but it s worth listing as an option thoughts input requested from zbraniecki gregtatum nordzilla sffc ,1 7268,24542176241.0,IssuesEvent,2022-10-12 05:25:45,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,Inform a user that a click was made on a wrong element,TYPE: enhancement SYSTEM: automations,"When testcafe clicks an element which is overlapped by the second element, it waits for the first element to appear. If the first element does not appear during the selector timeout period, then it clicks the second element. It would be nice to inform users that a click is made on a wrong element. ```HTML
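<!-- Markup reconstructed from the issue's description: #child1 is overlapped by a
     second absolutely-positioned element; the ids, sizes, and colors here are
     illustrative assumptions, not the reporter's exact values. -->
<div id="parent" style="position: relative; width: 200px; height: 200px; background-color: blue;">
    <div id="child1" style="position: absolute; top: 0; left: 0; width: 100px; height: 100px; background-color: red;"></div>
    <div id="child2" style="position: absolute; top: 0; left: 0; width: 100px; height: 100px; background-color: red;"></div>
</div>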
``` ```JS fixture `fixture` .page `../pages/index.html`; test(`test`, async t => { await t .click('#child1'); }); ```",1.0,"Inform a user that a click was made on a wrong element - When testcafe clicks an element which is overlapped by the second element, it waits for the first element to appear. If the first element does not appear during the selector timeout period, then it clicks the second element. It would be nice to inform users that a click is made on a wrong element. ```HTML
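<!-- Markup reconstructed from the issue's description: #child1 is overlapped by a
     second absolutely-positioned element; the ids, sizes, and colors here are
     illustrative assumptions, not the reporter's exact values. -->
<div id="parent" style="position: relative; width: 200px; height: 200px; background-color: blue;">
    <div id="child1" style="position: absolute; top: 0; left: 0; width: 100px; height: 100px; background-color: red;"></div>
    <div id="child2" style="position: absolute; top: 0; left: 0; width: 100px; height: 100px; background-color: red;"></div>
</div>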
``` ```JS fixture `fixture` .page `../pages/index.html`; test(`test`, async t => { await t .click('#child1'); }); ```",1,inform a user that a click was made on a wrong element when testcafe clicks an element which is overlapped by the second element it waits for the first element to appear if the first element do not appear during selector timeout period then it clicks the second element it would be nice to inform users that a click is made on a wrong element html parent position relative width height background color blue position absolute top left width height background color red position absolute top left js fixture fixture page pages index html test test async t await t click ,1 8164,26354694589.0,IssuesEvent,2023-01-11 08:48:43,nocodb/nocodb,https://api.github.com/repos/nocodb/nocodb,closed,After Update automation has access to old field value,🔦 Type: Feature 🚘 Scope : Automation,"Automation ""After Update"" event should have access to the old value of the field, not only new.",1.0,"After Update automation has access to old field value - Automation ""After Update"" event should have access to the old value of the field, not only new.",1,after update automation has access to old field value automation after update event should have access to the old value of the field not only new ,1 7239,24501184140.0,IssuesEvent,2022-10-10 12:56:06,smcnab1/op-question-mark,https://api.github.com/repos/smcnab1/op-question-mark,opened,[BUG] Reduce Notification Spam,🔬Status: Review Needed 🐛Type: Bug 🏔Priority: High 🚗For: Automations,"## **🐛Bug Report** **Describe the bug** * Often get repeat notifications for same things (Mail, Temp, Alarm Arm/Disarm) --- **To Reproduce** 1. 2. 3. 4. --- **Expected behavior** * Look at possible [solutions](https://www.facebook.com/groups/HomeAssistant/permalink/3334780796793267/) like time period wait --- **Screenshots** --- **Desktop (please complete the following information):** - OS: - Browser - Version **Smartphone (please complete the following information):** - Device: - OS: - Browser - Version --- **Additional context** * ",1.0,"[BUG] Reduce Notification Spam - ## **🐛Bug Report** **Describe the bug** * Often get repeat notifications for same things (Mail, Temp, Alarm Arm/Disarm) --- **To Reproduce** 1. 2. 3. 4. 
--- **Expected behavior** * Look at possible [solutions](https://www.facebook.com/groups/HomeAssistant/permalink/3334780796793267/) like time period wait --- **Screenshots** --- **Desktop (please complete the following information):** - OS: - Browser - Version **Smartphone (please complete the following information):** - Device: - OS: - Browser - Version --- **Additional context** * ",1, reduce notification spam 🐛bug report describe the bug often get repeat notifications for same things mail temp alarm arm disarm to reproduce steps to reproduce the error e g use x argument navigate to fill this information go to see error expected behavior look at possible like time period wait screenshots desktop please complete the following information use all the applicable bulleted list element for this specific issue and remove all the bulleted list elements that are not relevant for this issue os browser version smartphone please complete the following information device os browser version additional context 📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛 oh hi there 😄 to expedite issue processing please search open and closed issues before submitting a new one please read our rules of conduct at this repository s github code of conduct md 📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛 ,1 35886,9671527146.0,IssuesEvent,2019-05-21 23:12:25,hashicorp/packer,https://api.github.com/repos/hashicorp/packer,closed,Issue apparently very similar to 7500 on Windows 10,builder/hyperv duplicate regression,"Megan, I am running into the same issue as described in 7500 but on Windows 10. I pulled the latest build (1.4.1 circa 5.21.2019). The error seems nearly identical so was not sure if it was a platform difference or something else environmental as the last comment in issue 7500 refers to it being fixed in the master. Running with elevated rights on: OS Name Microsoft Windows 10 Pro Version 10.0.17763 Build 17763 User has been added to the Hyper-V Admin Group Log at error: ==> hyperv-iso: Enabling Integration Service... ==> hyperv-iso: PowerShell error: Hyper-V\Add-VMDvdDrive : Failed to add device 'Virtual CD/DVD Disk'. ==> hyperv-iso: Hyper-V Virtual Machine Management service Account does not have permission to open attachment. ==> hyperv-iso: 'dev-hyperv-base_name' failed to add device 'Virtual CD/DVD Disk'. (Virtual machine ID 2EA8890E-AF47-4C06-A9F4-1199E677B42F) ==> hyperv-iso: 'dev-hyperv-base_name': Hyper-V Virtual Machine Management service account does not have permission required to open attachment ==> hyperv-iso: 'C:\Users\xxxx\repos\packer_builds\packer_cache\08478213f4bb76a558776915c085b9de13744f87.iso'. Error: 'General access denied error' (0x80070005). (Virtual ==> hyperv-iso: machine ID 2EA8890E-AF47-4C06-A9F4-1199E677B42F) ==> hyperv-iso: At C:\Users\xxxx\AppData\Local\Temp\powershell294707329.ps1:3 char:18 ==> hyperv-iso: + ... ontroller = Hyper-V\Add-VMDvdDrive -VMName $vmName -path $isoPath -Pa ... ==> hyperv-iso: + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ==> hyperv-iso: + CategoryInfo : PermissionDenied: (:) [Add-VMDvdDrive], VirtualizationException ==> hyperv-iso: + FullyQualifiedErrorId : AccessDenied,Microsoft.HyperV.PowerShell.Commands.AddVMDvdDrive ==> hyperv-iso: Unregistering and deleting virtual machine... ==> hyperv-iso: Deleting output directory... ==> hyperv-iso: Deleting build directory... Build 'hyperv-iso' errored: PowerShell error: Hyper-V\Add-VMDvdDrive : Failed to add device 'Virtual CD/DVD Disk'. Reference: I was finally able to create a repro case for this. 
It turns out that it's already fixed on the master branch, probably by the PR Adrien linked above. If you're in a big rush for a fix you can use the Packer [nightly](https://github.com/hashicorp/packer/releases/tag/nightly) build until we release 1.4.1 tomorrow-ish :) _Originally posted by @SwampDragons in https://github.com/hashicorp/packer/issues/7500#issuecomment-492389793_",1.0,"Issue apparently very similar to 7500 on Windows 10 - Megan, I am running into the same issue as described in 7500 but on Windows 10. I pulled the latest build (1.4.1 circa 5.21.2019). The error seems nearly identical so was not sure if it was a platform difference or something else environmental as the last comment in issue 7500 refers to it being fixed in the master. Running with elevated rights on: OS Name Microsoft Windows 10 Pro Version 10.0.17763 Build 17763 User has been added to the Hyper-V Admin Group Log at error: ==> hyperv-iso: Enabling Integration Service... ==> hyperv-iso: PowerShell error: Hyper-V\Add-VMDvdDrive : Failed to add device 'Virtual CD/DVD Disk'. ==> hyperv-iso: Hyper-V Virtual Machine Management service Account does not have permission to open attachment. ==> hyperv-iso: 'dev-hyperv-base_name' failed to add device 'Virtual CD/DVD Disk'. (Virtual machine ID 2EA8890E-AF47-4C06-A9F4-1199E677B42F) ==> hyperv-iso: 'dev-hyperv-base_name': Hyper-V Virtual Machine Management service account does not have permission required to open attachment ==> hyperv-iso: 'C:\Users\xxxx\repos\packer_builds\packer_cache\08478213f4bb76a558776915c085b9de13744f87.iso'. Error: 'General access denied error' (0x80070005). (Virtual ==> hyperv-iso: machine ID 2EA8890E-AF47-4C06-A9F4-1199E677B42F) ==> hyperv-iso: At C:\Users\xxxx\AppData\Local\Temp\powershell294707329.ps1:3 char:18 ==> hyperv-iso: + ... ontroller = Hyper-V\Add-VMDvdDrive -VMName $vmName -path $isoPath -Pa ... ==> hyperv-iso: + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ==> hyperv-iso: + CategoryInfo : PermissionDenied: (:) [Add-VMDvdDrive], VirtualizationException ==> hyperv-iso: + FullyQualifiedErrorId : AccessDenied,Microsoft.HyperV.PowerShell.Commands.AddVMDvdDrive ==> hyperv-iso: Unregistering and deleting virtual machine... ==> hyperv-iso: Deleting output directory... ==> hyperv-iso: Deleting build directory... Build 'hyperv-iso' errored: PowerShell error: Hyper-V\Add-VMDvdDrive : Failed to add device 'Virtual CD/DVD Disk'. Reference: I was finally able to create a repro case for this. It turns out that it's already fixed on the master branch, probably by the PR Adrien linked above. 
If you're in a big rush for a fix you can use the Packer [nightly](https://github.com/hashicorp/packer/releases/tag/nightly) build until we release 1.4.1 tomorrow-ish :) _Originally posted by @SwampDragons in https://github.com/hashicorp/packer/issues/7500#issuecomment-492389793_",0,issue apparently very similar to on windows megan i am running into the same issue as described in but on windows i pulled the latest build circa the error seems nearly identical so was not sure if it was a platform difference or something else environmental as the last comment in issue refers to it being fixed in the master running with elevated rights on os name microsoft windows pro version build user has been added to the hyper v admin group log at error hyperv iso enabling integration service hyperv iso powershell error hyper v add vmdvddrive failed to add device virtual cd dvd disk hyperv iso hyper v virtual machine management service account does not have permission to open attachment hyperv iso dev hyperv base name failed to add device virtual cd dvd disk virtual machine id hyperv iso dev hyperv base name hyper v virtual machine management service account does not have permission required to open attachment hyperv iso c users xxxx repos packer builds packer cache iso error general access denied error virtual hyperv iso machine id hyperv iso at c users xxxx appdata local temp char hyperv iso ontroller hyper v add vmdvddrive vmname vmname path isopath pa hyperv iso hyperv iso categoryinfo permissiondenied virtualizationexception hyperv iso fullyqualifiederrorid accessdenied microsoft hyperv powershell commands addvmdvddrive hyperv iso unregistering and deleting virtual machine hyperv iso deleting output directory hyperv iso deleting build directory build hyperv iso errored powershell error hyper v add vmdvddrive failed to add device virtual cd dvd disk reference i was finally able to create a repro case for this it turns out that it s already fixed on the master branch probably by the pr adrien linked above if you re in a big rush for a fix you can use the packer build until we release tomorrow ish originally posted by swampdragons in ,0 952,8823562382.0,IssuesEvent,2019-01-02 14:06:54,arcus-azure/arcus.security,https://api.github.com/repos/arcus-azure/arcus.security,closed,Provide a release pipeline for NuGet.org,automation management,"Provide a release pipeline for NuGet.org. ### Checklist - [x] Build the codebase - [x] Run test suite - [x] Tag the codebase on success - [x] Create a GitHub pre-release for a preview releases - [x] Create a GitHub release for full releases - [x] Push all NuGet packages to NuGet.org",1.0,"Provide a release pipeline for NuGet.org - Provide a release pipeline for NuGet.org. 
### Checklist - [x] Build the codebase - [x] Run test suite - [x] Tag the codebase on success - [x] Create a GitHub pre-release for a preview releases - [x] Create a GitHub release for full releases - [x] Push all NuGet packages to NuGet.org",1,provide a release pipeline for nuget org provide a release pipeline for nuget org checklist build the codebase run test suite tag the codebase on success create a github pre release for a preview releases create a github release for full releases push all nuget packages to nuget org,1 1808,10840581476.0,IssuesEvent,2019-11-12 08:40:12,elastic/opbeans-ruby,https://api.github.com/repos/elastic/opbeans-ruby,closed,It fails when you make some HTTP request,[zube]: In Review automation bug,"Opbeand ruby does not respond to any HTTP request and shows this error ``` Puma caught this error: undefined method `set_label' for ElasticAPM:Module (NoMethodError) /app/app/controllers/application_controller.rb:7:in `block in ' /usr/local/bundle/gems/activesupport-5.2.3/lib/active_support/callbacks.rb:426:in `instance_exec' ```",1.0,"It fails when you make some HTTP request - Opbeand ruby does not respond to any HTTP request and shows this error ``` Puma caught this error: undefined method `set_label' for ElasticAPM:Module (NoMethodError) /app/app/controllers/application_controller.rb:7:in `block in ' /usr/local/bundle/gems/activesupport-5.2.3/lib/active_support/callbacks.rb:426:in `instance_exec' ```",1,it fails when you make some http request opbeand ruby does not respond to any http request and shows this error puma caught this error undefined method set label for elasticapm module nomethoderror app app controllers application controller rb in block in usr local bundle gems activesupport lib active support callbacks rb in instance exec ,1 4245,15872537744.0,IssuesEvent,2021-04-09 00:07:18,SmartDataAnalytics/OpenResearch,https://api.github.com/repos/SmartDataAnalytics/OpenResearch,closed,Text Values in Ordinal Field,Migration WP 2.7 Hosting automation bug,Ordinal Field should be strictly numeric but there are text values in it for example 1st.,1.0,Text Values in Ordinal Field - Ordinal Field should be strictly numeric but there are text values in it for example 1st.,1,text values in ordinal field ordinal field should be strictly numeric but there are text values in it for example ,1 31000,25239777109.0,IssuesEvent,2022-11-15 06:06:52,leanprover/vscode-lean4,https://api.github.com/repos/leanprover/vscode-lean4,closed,get es-module-shims from NPM,infrastructure,"See discussion under https://github.com/leanprover/vscode-lean4/pull/167#issuecomment-1171741061, specifically: > I don't remember why I wrote 'it's not on NPM' even though it actually is. I may have just missed it. Note that the [version](https://ga.jspm.io/npm:es-module-shims@1.5.8/dist/es-module-shims.js) recommended in their README is different than [the NPM version](https://unpkg.com/es-module-shims@1.5.8/dist/es-module-shims.js), but it might just be minified. If you can get the NPM one working, it would be better. Since the point is to make Gitpod work, that would be the thing to test.",1.0,"get es-module-shims from NPM - See discussion under https://github.com/leanprover/vscode-lean4/pull/167#issuecomment-1171741061, specifically: > I don't remember why I wrote 'it's not on NPM' even though it actually is. I may have just missed it. 
Note that the [version](https://ga.jspm.io/npm:es-module-shims@1.5.8/dist/es-module-shims.js) recommended in their README is different than [the NPM version](https://unpkg.com/es-module-shims@1.5.8/dist/es-module-shims.js), but it might just be minified. If you can get the NPM one working, it would be better. Since the point is to make Gitpod work, that would be the thing to test.",0,get es module shims from npm see discussion under specifically i don t remember why i wrote it s not on npm even though it actually is i may have just missed it note that the recommended in their readme is different than but it might just be minified if you can get the npm one working it would be better since the point is to make gitpod work that would be the thing to test ,0 4551,16835300645.0,IssuesEvent,2021-06-18 11:11:43,keptn/keptn,https://api.github.com/repos/keptn/keptn,opened,Create list of dependencies of Keptn Core,automation,"## Action Item Create a list of dependencies + their licences of all components of Keptn core and monitoring related services. Only consider **direct** dependencies. ### Details Try using a tool that detects dependencies and their licences **automatically** Possible candidates: * https://github.com/google/go-licenses * https://github.com/ribice/glice * ... ? tba ## Acceptance Criteria - [ ] Easy readable table containing all dependencies of all Keptn components, their version as well as their licence available. - [ ] Nice to have: this analysis is reproducible using a GH action ;) ",1.0,"Create list of dependencies of Keptn Core - ## Action Item Create a list of dependencies + their licences of all components of Keptn core and monitoring related services. Only consider **direct** dependencies. ### Details Try using a tool that detects dependencies and their licences **automatically** Possible candidates: * https://github.com/google/go-licenses * https://github.com/ribice/glice * ... ? tba ## Acceptance Criteria - [ ] Easy readable table containing all dependencies of all Keptn components, their version as well as their licence available. - [ ] Nice to have: this analysis is reproducible using a GH action ;) ",1,create list of dependencies of keptn core action item create a list of dependencies their licences of all components of keptn core and monitoring related services only consider direct dependencies details try using a tool that detects dependencies and their licences automatically possible candidates tba acceptance criteria easy readable table containing all dependencies of all keptn components their version as well as their licence available nice to have this analysis is reproducible using a gh action ,1 571356,17023289524.0,IssuesEvent,2021-07-03 01:15:14,tomhughes/trac-tickets,https://api.github.com/repos/tomhughes/trac-tickets,closed,[PATCH] Expand tables in the properties dock to the available width by default,Component: merkaartor Priority: minor Resolution: fixed Type: enhancement,"**[Submitted to the original trac issue database at 10.29am, Saturday, 30th August 2008]** It would be nice if the tag & role table views in the properties dock automatically expanded to the width of the dock instead of always having to drag them out wider, the attached patch (against current subversion) implements this behavior. 
",1.0,"[PATCH] Expand tables in the properties dock to the available width by default - **[Submitted to the original trac issue database at 10.29am, Saturday, 30th August 2008]** It would be nice if the tag & role table views in the properties dock automatically expanded to the width of the dock instead of always having to drag them out wider, the attached patch (against current subversion) implements this behavior. ",0, expand tables in the properties dock to the available width by default it would be nice if the tag role table views in the properties dock automatically expanded to the width of the dock instead of always having to drag them out wider the attached patch against current subversion implements this behavior ,0 201583,23018614184.0,IssuesEvent,2022-07-22 01:07:45,valdisiljuconoks/MvcAreasForEPiServer,https://api.github.com/repos/valdisiljuconoks/MvcAreasForEPiServer,closed,CVE-2018-0765 (High) detected in system.security.cryptography.xml.4.4.1.nupkg - autoclosed,security vulnerability,"## CVE-2018-0765 - High Severity Vulnerability
Vulnerable Library - system.security.cryptography.xml.4.4.1.nupkg

Provides classes to support the creation and validation of XML digital signatures. The classes in th...

Library home page: https://api.nuget.org/packages/system.security.cryptography.xml.4.4.1.nupkg

Path to dependency file: MvcAreasForEPiServer/src/MvcAreasForEPiServer/MvcAreasForEPiServer.csproj

Path to vulnerable library: /dotnet_FTZGBK/20211118100056/System.Security.Cryptography.Xml.4.4.1/System.Security.Cryptography.Xml.4.4.1.nupkg

Dependency Hierarchy: - :x: **system.security.cryptography.xml.4.4.1.nupkg** (Vulnerable Library)

Found in HEAD commit: 93afd136db816f65690c05bd5f312a9a5c3562fe

Vulnerability Details

A denial of service vulnerability exists when .NET and .NET Core improperly process XML documents, aka "".NET and .NET Core Denial of Service Vulnerability."" This affects Microsoft .NET Framework 2.0, Microsoft .NET Framework 3.0, Microsoft .NET Framework 4.7.1, Microsoft .NET Framework 4.6/4.6.1/4.6.2/4.7/4.7.1, Microsoft .NET Framework 4.5.2, Microsoft .NET Framework 4.7/4.7.1, Microsoft .NET Framework 4.6, Microsoft .NET Framework 3.5, Microsoft .NET Framework 3.5.1, Microsoft .NET Framework 4.6/4.6.1/4.6.2, Microsoft .NET Framework 4.6.2/4.7/4.7.1, .NET Core 2.0, Microsoft .NET Framework 4.7.2.

Publish Date: 2018-05-09

URL: CVE-2018-0765

CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/advisories/GHSA-35hc-x2cw-2j4v

Release Date: 2018-05-09

Fix Resolution: 4.4.2

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2018-0765 (High) detected in system.security.cryptography.xml.4.4.1.nupkg - autoclosed - ## CVE-2018-0765 - High Severity Vulnerability
Vulnerable Library - system.security.cryptography.xml.4.4.1.nupkg

Provides classes to support the creation and validation of XML digital signatures. The classes in th...

Library home page: https://api.nuget.org/packages/system.security.cryptography.xml.4.4.1.nupkg

Path to dependency file: MvcAreasForEPiServer/src/MvcAreasForEPiServer/MvcAreasForEPiServer.csproj

Path to vulnerable library: /dotnet_FTZGBK/20211118100056/System.Security.Cryptography.Xml.4.4.1/System.Security.Cryptography.Xml.4.4.1.nupkg

Dependency Hierarchy:
- :x: **system.security.cryptography.xml.4.4.1.nupkg** (Vulnerable Library)

Found in HEAD commit: 93afd136db816f65690c05bd5f312a9a5c3562fe

Vulnerability Details

A denial of service vulnerability exists when .NET and .NET Core improperly process XML documents, aka "".NET and .NET Core Denial of Service Vulnerability."" This affects Microsoft .NET Framework 2.0, Microsoft .NET Framework 3.0, Microsoft .NET Framework 4.7.1, Microsoft .NET Framework 4.6/4.6.1/4.6.2/4.7/4.7.1, Microsoft .NET Framework 4.5.2, Microsoft .NET Framework 4.7/4.7.1, Microsoft .NET Framework 4.6, Microsoft .NET Framework 3.5, Microsoft .NET Framework 3.5.1, Microsoft .NET Framework 4.6/4.6.1/4.6.2, Microsoft .NET Framework 4.6.2/4.7/4.7.1, .NET Core 2.0, Microsoft .NET Framework 4.7.2.

Publish Date: 2018-05-09

URL: CVE-2018-0765

CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/advisories/GHSA-35hc-x2cw-2j4v

Release Date: 2018-05-09

Fix Resolution: 4.4.2

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in system security cryptography xml nupkg autoclosed cve high severity vulnerability vulnerable library system security cryptography xml nupkg provides classes to support the creation and validation of xml digital signatures the classes in th library home page a href path to dependency file mvcareasforepiserver src mvcareasforepiserver mvcareasforepiserver csproj path to vulnerable library dotnet ftzgbk system security cryptography xml system security cryptography xml nupkg dependency hierarchy x system security cryptography xml nupkg vulnerable library found in head commit a href vulnerability details a denial of service vulnerability exists when net and net core improperly process xml documents aka net and net core denial of service vulnerability this affects microsoft net framework microsoft net framework microsoft net framework microsoft net framework microsoft net framework microsoft net framework microsoft net framework microsoft net framework microsoft net framework microsoft net framework microsoft net framework net core microsoft net framework publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0 7902,4102394750.0,IssuesEvent,2016-06-04 00:50:43,jeff1evesque/machine-learning,https://api.github.com/repos/jeff1evesque/machine-learning,closed,Move arguments in 'setup_tables.py' into 'settings.yaml',build enhancement,"We will move the arguments used for populating `tbl_model_type` into `settings.yaml`. Then, we will respectively reference the yaml attribute within `setup_tables.py`.",1.0,"Move arguments in 'setup_tables.py' into 'settings.yaml' - We will move the arguments used for populating `tbl_model_type` into `settings.yaml`. Then, we will respectively reference the yaml attribute within `setup_tables.py`.",0,move arguments in setup tables py into settings yaml we will move the arguments used for populating tbl model type into settings yaml then we will respectively reference the yaml attribute within setup tables py ,0 672,7752671874.0,IssuesEvent,2018-05-30 21:02:52,Shopify/quilt,https://api.github.com/repos/Shopify/quilt,opened,Set code coverage threshold of 80% for new packages.,automation difficulty: easy polish,"For libraries like these we should expect a high level of coverage. We should formalize this by making our coverage checks require 80%+ coverage and be blocking to deploys. This would help encourage us not to merge untested packages or fix bugs without adding tests for them.",1.0,"Set code coverage threshold of 80% for new packages. - For libraries like these we should expect a high level of coverage. We should formalize this by making our coverage checks require 80%+ coverage and be blocking to deploys. 
This would help encourage us not to merge untested packages or fix bugs without adding tests for them.",1,set code coverage threshold of for new packages for libraries like these we should expect a high level of coverage we should formalize this by making our coverage checks require coverage and be blocking to deploys this would help encourage us not to merge untested packages or fix bugs without adding tests for them ,1 370768,10948562472.0,IssuesEvent,2019-11-26 09:09:16,input-output-hk/jormungandr,https://api.github.com/repos/input-output-hk/jormungandr,closed,`max_number_of_transactions_per_block` does not increase more than 250,Priority - Low enhancement subsys-mempool wontfix,"**Describe the bug** `max_number_of_transactions_per_block` is limited to 250. **Mandatory Information** ``` jcli 0.7.0 (HEAD-a93d4f67, release, windows [x86_64]) - [rustc 1.39.0 (4560ea788 2019-11-04)] jormungandr 0.7.0 (HEAD-a93d4f67, release, windows [x86_64]) - [rustc 1.39.0 (4560ea788 2019-11-04)] ``` **To Reproduce** Steps to reproduce the behavior: 1. start node1 - `jormungandr ---genesis-block block-0.bin --config node_config.yaml --secret node_secret.yaml` 2. start node2 - `jormungandr --config node_config.yaml --secret node_secret.yaml --genesis-block-hash 3bae53f25be7523ce63c1dc09c9d3b3fbf7dac810e095bff1b8f498fa8de4e4d` 3. extract the scripts in the same folder 4. run script - `bash multipleBashScripts.sh 9001 ed25519e_sk18r7nd20gaxjfgmahyqu2vngv98leqefcdcft2nevcakpf999spx55t4ph8ryqslp6ac7uryekjcqsqzl63rjpmh0k92dvquesweq38cc8a0wc` **Expected behavior** ""max_number_of_transactions_per_block"" should respect the set value inside genesis file. **Additional context** - scenario: 2 stake pool nodes connected together - in my genesis file ``` - ""max_number_of_transactions_per_block"": 1000 - ""consensus_genesis_praos_active_slot_coeff"": 0.1 - ""slot_duration"": 2 - ""slots_per_epoch"": 110 ```` - `multipleBashScripts.sh` is creating 10 accounts that will initiate, in parallel, 100 transactions each to a new account each. There will be ~12 txs per second for ~70 seconds. 
- node 1 files [node1.zip](https://github.com/input-output-hk/jormungandr/files/3852402/node1.zip) - node 2 files [node2.zip](https://github.com/input-output-hk/jormungandr/files/3852403/node2.zip) - scripts [scripts.zip](https://github.com/input-output-hk/jormungandr/files/3852401/scripts.zip) - as you can see in the below picture, even there were 530 fragments in Pending, only 250 were included between 2 consecutive blocks ![fragments1](https://user-images.githubusercontent.com/29144964/68960550-0c119d00-07d9-11ea-9efc-f3b8d243037f.png) - using the attached python script we can look also at the fragment counts per blocks --> again there is a maximum of 250 fragments per block ``` D:\iohk\otherProjects\jormungandr\scripts\local_cluster>python logs_analyzer.py -l 9001 -t ================= Node 9001 - Transactions per block/epoch ==================== {'InABlock': {'block': '8e56eebc04fa4434561d5fe59ff18e2000e1c78b15943c681c6f09ea5fd8e8de', 'date': '5377.32'}} 250 {'InABlock': {'block': '68e18da55a6b987e95fba371f15cf94374e34f99f1d57598b3e139420fe92178', 'date': '5377.51'}} 250 {'InABlock': {'block': 'decf16be5221815b2756f71eda3a335673b9c9281cab660175ff928902fbe20a', 'date': '5377.58'}} 202 {'InABlock': {'block': '7a57c648275b42fa400c1c51b402df7c4afd8e74d2d6b00b95d1e536109e3ceb', 'date': '5377.12'}} 1 {'InABlock': {'block': '5822659ffcdd772142b1ccfe63993b03581a0b26d0b295b139c46526f973a32b', 'date': '5375.87'}} 1 {'InABlock': {'block': '8ba4482a1eb75a591fc45eec45c36502992c5ad58e520a8065aa0fc862c58914', 'date': '5376.34'}} 1 {'InABlock': {'block': '460f81e742e4129cc8ffbcb3e34361974fedbfcd08ff5251c00e4a0c34e94038', 'date': '5376.66'}} 1 {'InABlock': {'block': '7bb61722e459e5edd2cffe3cae888fe3d37ddbaab372e8073cc32a08e553dbcf', 'date': '5376.56'}} 1 {'InABlock': {'block': '454d439a6ee02cd99919759ea8f47ab3d143ff2d231f809006334c8bf3ff8e23', 'date': '5376.101'}} 1 {'InABlock': {'block': 'c2a4d9ff9966c6ca7b61cced450dddd4238df50d5e82b44ab7efd52b0fb83127', 'date': '5376.87'}} 1 ``` ",1.0,"`max_number_of_transactions_per_block` does not increase more than 250 - **Describe the bug** `max_number_of_transactions_per_block` is limited to 250. **Mandatory Information** ``` jcli 0.7.0 (HEAD-a93d4f67, release, windows [x86_64]) - [rustc 1.39.0 (4560ea788 2019-11-04)] jormungandr 0.7.0 (HEAD-a93d4f67, release, windows [x86_64]) - [rustc 1.39.0 (4560ea788 2019-11-04)] ``` **To Reproduce** Steps to reproduce the behavior: 1. start node1 - `jormungandr ---genesis-block block-0.bin --config node_config.yaml --secret node_secret.yaml` 2. start node2 - `jormungandr --config node_config.yaml --secret node_secret.yaml --genesis-block-hash 3bae53f25be7523ce63c1dc09c9d3b3fbf7dac810e095bff1b8f498fa8de4e4d` 3. extract the scripts in the same folder 4. run script - `bash multipleBashScripts.sh 9001 ed25519e_sk18r7nd20gaxjfgmahyqu2vngv98leqefcdcft2nevcakpf999spx55t4ph8ryqslp6ac7uryekjcqsqzl63rjpmh0k92dvquesweq38cc8a0wc` **Expected behavior** ""max_number_of_transactions_per_block"" should respect the set value inside genesis file. **Additional context** - scenario: 2 stake pool nodes connected together - in my genesis file ``` - ""max_number_of_transactions_per_block"": 1000 - ""consensus_genesis_praos_active_slot_coeff"": 0.1 - ""slot_duration"": 2 - ""slots_per_epoch"": 110 ```` - `multipleBashScripts.sh` is creating 10 accounts that will initiate, in parallel, 100 transactions each to a new account each. There will be ~12 txs per second for ~70 seconds. 
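For anyone reproducing the count: the per-block totals that logs_analyzer.py prints can be recomputed from fragment statuses of the shape shown in its output. A minimal Python sketch, assuming exactly that input shape:

```python
from collections import Counter

def fragments_per_block(statuses):
    """Tally fragments per block from statuses shaped like the
    logs_analyzer.py output, e.g.
    {'InABlock': {'block': '8e56ee...', 'date': '5377.32'}}."""
    return Counter(
        s["InABlock"]["block"] for s in statuses if "InABlock" in s
    ).most_common()

statuses = [
    {"InABlock": {"block": "8e56ee", "date": "5377.32"}},
    {"InABlock": {"block": "8e56ee", "date": "5377.32"}},
    {"Pending": {}},
]
print(fragments_per_block(statuses))  # [('8e56ee', 2)]
```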
- node 1 files [node1.zip](https://github.com/input-output-hk/jormungandr/files/3852402/node1.zip) - node 2 files [node2.zip](https://github.com/input-output-hk/jormungandr/files/3852403/node2.zip) - scripts [scripts.zip](https://github.com/input-output-hk/jormungandr/files/3852401/scripts.zip) - as you can see in the below picture, even there were 530 fragments in Pending, only 250 were included between 2 consecutive blocks ![fragments1](https://user-images.githubusercontent.com/29144964/68960550-0c119d00-07d9-11ea-9efc-f3b8d243037f.png) - using the attached python script we can look also at the fragment counts per blocks --> again there is a maximum of 250 fragments per block ``` D:\iohk\otherProjects\jormungandr\scripts\local_cluster>python logs_analyzer.py -l 9001 -t ================= Node 9001 - Transactions per block/epoch ==================== {'InABlock': {'block': '8e56eebc04fa4434561d5fe59ff18e2000e1c78b15943c681c6f09ea5fd8e8de', 'date': '5377.32'}} 250 {'InABlock': {'block': '68e18da55a6b987e95fba371f15cf94374e34f99f1d57598b3e139420fe92178', 'date': '5377.51'}} 250 {'InABlock': {'block': 'decf16be5221815b2756f71eda3a335673b9c9281cab660175ff928902fbe20a', 'date': '5377.58'}} 202 {'InABlock': {'block': '7a57c648275b42fa400c1c51b402df7c4afd8e74d2d6b00b95d1e536109e3ceb', 'date': '5377.12'}} 1 {'InABlock': {'block': '5822659ffcdd772142b1ccfe63993b03581a0b26d0b295b139c46526f973a32b', 'date': '5375.87'}} 1 {'InABlock': {'block': '8ba4482a1eb75a591fc45eec45c36502992c5ad58e520a8065aa0fc862c58914', 'date': '5376.34'}} 1 {'InABlock': {'block': '460f81e742e4129cc8ffbcb3e34361974fedbfcd08ff5251c00e4a0c34e94038', 'date': '5376.66'}} 1 {'InABlock': {'block': '7bb61722e459e5edd2cffe3cae888fe3d37ddbaab372e8073cc32a08e553dbcf', 'date': '5376.56'}} 1 {'InABlock': {'block': '454d439a6ee02cd99919759ea8f47ab3d143ff2d231f809006334c8bf3ff8e23', 'date': '5376.101'}} 1 {'InABlock': {'block': 'c2a4d9ff9966c6ca7b61cced450dddd4238df50d5e82b44ab7efd52b0fb83127', 'date': '5376.87'}} 1 ``` ",0, max number of transactions per block does not increase more than describe the bug max number of transactions per block is limited to mandatory information jcli head release windows jormungandr head release windows to reproduce steps to reproduce the behavior start jormungandr genesis block block bin config node config yaml secret node secret yaml start jormungandr config node config yaml secret node secret yaml genesis block hash extract the scripts in the same folder run script bash multiplebashscripts sh expected behavior max number of transactions per block should respect the set value inside genesis file additional context scenario stake pool nodes connected together in my genesis file max number of transactions per block consensus genesis praos active slot coeff slot duration slots per epoch multiplebashscripts sh is creating accounts that will initiate in parallel transactions each to a new account each there will be txs per second for seconds node files node files scripts as you can see in the below picture even there were fragments in pending only were included between consecutive blocks using the attached python script we can look also at the fragment counts per blocks again there is a maximum of fragments per block d iohk otherprojects jormungandr scripts local cluster python logs analyzer py l t node transactions per block epoch inablock block date inablock block date inablock block date inablock block date inablock block date inablock block date inablock block date inablock block date inablock block date 
inablock block date ,0 201399,7031010780.0,IssuesEvent,2017-12-26 14:27:06,andresriancho/w3af,https://api.github.com/repos/andresriancho/w3af,opened,RCE via Spring Engine SSTI,easy improvement plugin priority:low,"It would be nice to have a plugin which tests for this vulnerability! https://hawkinsecurity.com/2017/12/13/rce-via-spring-engine-ssti/",1.0,"RCE via Spring Engine SSTI - It would be nice to have a plugin which tests for this vulnerability! https://hawkinsecurity.com/2017/12/13/rce-via-spring-engine-ssti/",0,rce via spring engine ssti it would be nice to have a plugin which tests for this vulnerability ,0 821,8299107625.0,IssuesEvent,2018-09-21 00:51:32,Azure/azure-powershell,https://api.github.com/repos/Azure/azure-powershell,opened,Register-AzureRmAutomationDscNode relies on Resources module,Automation automation-dsc,"The code is here: https://github.com/Azure/azure-powershell/blob/9241c6a9efba0628a20201af2ed3c627b237b9c9/src/ResourceManager/Automation/Commands.Automation/Common/AutomationClientDSC.cs#L847-L860 As you can see, this cmdlet is actually trying to run the Resources module to deploy a resource group. This is not allowed in any of our modules. Instead, the code needs to use our internal Resources SDK. Create one in your PS Automation client, similar to this: https://github.com/Azure/azure-powershell/blob/9241c6a9efba0628a20201af2ed3c627b237b9c9/src/ResourceManager/RecoveryServices/Commands.RecoveryServices/Common/PSRecoveryServicesClient.cs#L106 Then, you would replace the entire block of creating a PS runspace with a call to that client. It would look similar to this: https://github.com/Azure/azure-powershell/blob/9241c6a9efba0628a20201af2ed3c627b237b9c9/src/ResourceManager/RecoveryServices/Commands.RecoveryServices/Common/PSRecoveryServicesVaultClient.cs#L83-L84 Except, since you want to create a resource group, you would use the `CreateOrUpdateWithHttpMessagesAsync` method. This **needs to be changed** or else this cmdlet does not work independently. Meaning, it only works as part of `AzureRM`. Additionally, this cmdlet will not work in `Az` at all, since the cmdlet name uses *AzureRm* to be called. I'd recommend getting this fixed as soon as possible. If you are aware of this pattern used anywhere else in your cmdlets, those places **must also be fixed**.",2.0,"Register-AzureRmAutomationDscNode relies on Resources module - The code is here: https://github.com/Azure/azure-powershell/blob/9241c6a9efba0628a20201af2ed3c627b237b9c9/src/ResourceManager/Automation/Commands.Automation/Common/AutomationClientDSC.cs#L847-L860 As you can see, this cmdlet is actually trying to run the Resources module to deploy a resource group. This is not allowed in any of our modules. Instead, the code needs to use our internal Resources SDK. Create one in your PS Automation client, similar to this: https://github.com/Azure/azure-powershell/blob/9241c6a9efba0628a20201af2ed3c627b237b9c9/src/ResourceManager/RecoveryServices/Commands.RecoveryServices/Common/PSRecoveryServicesClient.cs#L106 Then, you would replace the entire block of creating a PS runspace with a call to that client. It would look similar to this: https://github.com/Azure/azure-powershell/blob/9241c6a9efba0628a20201af2ed3c627b237b9c9/src/ResourceManager/RecoveryServices/Commands.RecoveryServices/Common/PSRecoveryServicesVaultClient.cs#L83-L84 Except, since you want to create a resource group, you would use the `CreateOrUpdateWithHttpMessagesAsync` method. 
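For illustration of the suggested direction only: "call an SDK client instead of spawning a shell" looks like this in Python with the public Azure SDK. The package choice, names, and parameters here are assumptions for the sketch, not the cmdlet's actual internals (which should use the repository's internal C# Resources client as described above):

```python
# pip install azure-identity azure-mgmt-resource
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

def ensure_resource_group(subscription_id, name, location):
    """Create or update a resource group through an SDK client,
    with no PowerShell runspace involved."""
    client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)
    return client.resource_groups.create_or_update(name, {"location": location})

# Hypothetical usage:
# ensure_resource_group("<subscription-id>", "dsc-staging-rg", "westus2")
```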
This **needs to be changed** or else this cmdlet does not work independently. Meaning, it only works as part of `AzureRM`. Additionally, this cmdlet will not work in `Az` at all, since the cmdlet name uses *AzureRm* to be called. I'd recommend getting this fixed as soon as possible. If you are aware of this pattern used anywhere else in your cmdlets, those places **must also be fixed**.",1,register azurermautomationdscnode relies on resources module the code is here as you can see this cmdlet is actually trying to run the resources module to deploy a resource group this is not allowed in any of our modules instead the code needs to use our internal resources sdk create one in your ps automation client similar to this then you would replace the entire block of creating a ps runspace with a call to that client it would look similar to this except since you want to create a resource group you would use the createorupdatewithhttpmessagesasync method this needs to be changed or else this cmdlet does not work independently meaning it only works as part of azurerm additionally this cmdlet will not work in az at all since the cmdlet name uses azurerm to be called i d recommend getting this fixed as soon as possible if you are aware of this pattern used anywhere else in your cmdlets those places must also be fixed ,1 5442,19604874410.0,IssuesEvent,2022-01-06 08:07:27,tikv/tikv,https://api.github.com/repos/tikv/tikv,closed,tikv have not logs saved in k8s ,type/bug severity/major found/automation,"## Bug Report ### What version of TiKV are you using? / # ./tikv-server -V TiKV Release Version: 5.4.0-alpha Edition: Community Git Commit Hash: 99b3436 Git Commit Branch: heads/refs/tags/v5.4.0-nightly UTC Build Time: 2022-01-04 01:15:55 Rust Version: rustc 1.56.0-nightly (2faabf579 2021-07-27) Enable Features: jemalloc mem-profiling portable sse test-engines-rocksdb cloud-aws cloud-gcp cloud-azure Profile: dist_release ### What operating system and CPU are you using? 8core 16G ### Steps to reproduce no matter ### What did you expect? tikv logs can be saved ### What did happened? tikv have not logs saved in k8s ![image](https://user-images.githubusercontent.com/84712107/148193798-f0491102-200e-4b5e-af83-3e7cf6f1f2b6.png) ",1.0,"tikv have not logs saved in k8s - ## Bug Report ### What version of TiKV are you using? / # ./tikv-server -V TiKV Release Version: 5.4.0-alpha Edition: Community Git Commit Hash: 99b3436 Git Commit Branch: heads/refs/tags/v5.4.0-nightly UTC Build Time: 2022-01-04 01:15:55 Rust Version: rustc 1.56.0-nightly (2faabf579 2021-07-27) Enable Features: jemalloc mem-profiling portable sse test-engines-rocksdb cloud-aws cloud-gcp cloud-azure Profile: dist_release ### What operating system and CPU are you using? 8core 16G ### Steps to reproduce no matter ### What did you expect? tikv logs can be saved ### What did happened? 
tikv have not logs saved in k8s ![image](https://user-images.githubusercontent.com/84712107/148193798-f0491102-200e-4b5e-af83-3e7cf6f1f2b6.png) ",1,tikv have not logs saved in bug report what version of tikv are you using tikv server v tikv release version alpha edition community git commit hash git commit branch heads refs tags nightly utc build time rust version rustc nightly enable features jemalloc mem profiling portable sse test engines rocksdb cloud aws cloud gcp cloud azure profile dist release what operating system and cpu are you using steps to reproduce no matter what did you expect tikv logs can be saved what did happened tikv have not logs saved in ,1 9197,27712655066.0,IssuesEvent,2023-03-14 15:07:10,githubcustomers/discovery.co.za,https://api.github.com/repos/githubcustomers/discovery.co.za,opened,Task One: Getting Started,ghas-trial automation Important,"# Task One: Getting Started Before following these steps, make sure you have understood and are happy with all the pre-requisites that need to be completed within the pre-requisites section of the project board. Once happy carry on below. Below you will find some helpful links for getting started with your GitHub Advanced Security Proof of Concept. :fireworks: - [Configuring CodeQL](https://docs.github.com/en/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-codeql-code-scanning-in-your-ci-system) - [Running additional queries](https://docs.github.com/en/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#running-additional-queries) - [CodeQL CLI Docs](https://codeql.github.com/docs/codeql-cli/getting-started-with-the-codeql-cli) - [Integrating other tools with GHAS](https://docs.github.com/en/enterprise-cloud@latest/code-security/code-scanning/integrating-with-code-scanning/about-integration-with-code-scanning) - [Running in your CI System](https://docs.github.com/en/github/finding-security-vulnerabilities-and-errors-in-your-code/running-codeql-code-scanning-in-your-ci-system) - [GitHub/Microsoft Queries for Solarigate](https://www.microsoft.com/security/blog/2021/02/25/microsoft-open-sources-codeql-queries-used-to-hunt-for-solorigate-activity) - [CWE Query Mapping Documentation](https://codeql.github.com/codeql-query-help/codeql-cwe-coverage) Multiple issues have been created to help guide you along with this POC. This issue should align with the strategic goals you made as part of the pre-req. It helps to run some of these tasks in order; we recommend you follow the below: - Task One: Enabling Code Scanning and Secret Scanning - Task Two: Run default code-scanning queries - Task Three: Run additional code-scanning queries - Task Four: Configuring CodeQL Scans - Task Five: Establish Continuous Application Security Scanning - Task Six: Render results of other SARIF-based SAST tools directly within the GitHub UI (If Required) - Task Seven: Compare Other SAST and CodeQL Results - Task Eight: Bulk Enabling Code Scanning across multiple Repositories Quickly - Task Nine: Developer Experience Task - Task Ten: Core Language Support for your Organisation - Task Eleven: Parallel scans - Task Twelve: Detection of secret keys from known token formats committed to private repositories - Task Thirteen: Secret Scanning Integration - Task Fourteen: Test Custom Token Expressions The final task, collect some informal feedback. This is great to help understand how developers have found using the tool during the PoC. 
Information on this task can be found here: - Task Fifteen: Capture discussion about secure code development decisions ",1.0,"Task One: Getting Started - # Task One: Getting Started Before following these steps, make sure you have understood and are happy with all the pre-requisites that need to be completed within the pre-requisites section of the project board. Once happy carry on below. Below you will find some helpful links for getting started with your GitHub Advanced Security Proof of Concept. :fireworks: - [Configuring CodeQL](https://docs.github.com/en/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-codeql-code-scanning-in-your-ci-system) - [Running additional queries](https://docs.github.com/en/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#running-additional-queries) - [CodeQL CLI Docs](https://codeql.github.com/docs/codeql-cli/getting-started-with-the-codeql-cli) - [Integrating other tools with GHAS](https://docs.github.com/en/enterprise-cloud@latest/code-security/code-scanning/integrating-with-code-scanning/about-integration-with-code-scanning) - [Running in your CI System](https://docs.github.com/en/github/finding-security-vulnerabilities-and-errors-in-your-code/running-codeql-code-scanning-in-your-ci-system) - [GitHub/Microsoft Queries for Solarigate](https://www.microsoft.com/security/blog/2021/02/25/microsoft-open-sources-codeql-queries-used-to-hunt-for-solorigate-activity) - [CWE Query Mapping Documentation](https://codeql.github.com/codeql-query-help/codeql-cwe-coverage) Multiple issues have been created to help guide you along with this POC. This issue should align with the strategic goals you made as part of the pre-req. It helps to run some of these tasks in order; we recommend you follow the below: - Task One: Enabling Code Scanning and Secret Scanning - Task Two: Run default code-scanning queries - Task Three: Run additional code-scanning queries - Task Four: Configuring CodeQL Scans - Task Five: Establish Continuous Application Security Scanning - Task Six: Render results of other SARIF-based SAST tools directly within the GitHub UI (If Required) - Task Seven: Compare Other SAST and CodeQL Results - Task Eight: Bulk Enabling Code Scanning across multiple Repositories Quickly - Task Nine: Developer Experience Task - Task Ten: Core Language Support for your Organisation - Task Eleven: Parallel scans - Task Twelve: Detection of secret keys from known token formats committed to private repositories - Task Thirteen: Secret Scanning Integration - Task Fourteen: Test Custom Token Expressions The final task, collect some informal feedback. This is great to help understand how developers have found using the tool during the PoC. 
Information on this task can be found here: - Task Fifteen: Capture discussion about secure code development decisions ",1,task one getting started task one getting started before following these steps make sure you have understood and are happy with all the pre requisites that need to be completed within the pre requisites section of the project board once happy carry on below below you will find some helpful links for getting started with your github advanced security proof of concept fireworks multiple issues have been created to help guide you along with this poc this issue should align with the strategic goals you made as part of the pre req it helps to run some of these tasks in order we recommend you follow the below task one enabling code scanning and secret scanning task two run default code scanning queries task three run additional code scanning queries task four configuring codeql scans task five establish continuous application security scanning task six render results of other sarif based sast tools directly within the github ui if required task seven compare other sast and codeql results task eight bulk enabling code scanning across multiple repositories quickly task nine developer experience task task ten core language support for your organisation task eleven parallel scans task twelve detection of secret keys from known token formats committed to private repositories task thirteen secret scanning integration task fourteen test custom token expressions the final task collect some informal feedback this is great to help understand how developers have found using the tool during the poc information on this task can be found here task fifteen capture discussion about secure code development decisions ,1 8123,26214328753.0,IssuesEvent,2023-01-04 09:41:43,apimatic/core-interfaces-python,https://api.github.com/repos/apimatic/core-interfaces-python,closed,Update PYPI package deployment script,automation,The task is to update the PYPI package deployment script in order to use an environment and also automate the tag and changelog creation. ,1.0,Update PYPI package deployment script - The task is to update the PYPI package deployment script in order to use an environment and also automate the tag and changelog creation. 
,1,update pypi package deployment script the task is to update the pypi package deployment script in order to use an environment and also automate the tag and changelog creation ,1 8826,27172301712.0,IssuesEvent,2023-02-17 20:39:09,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Graph API does not give updated lastModifiedTime for few events ,Needs: Triage :mag: area:Scan Guidance automation:Closed," #### Category - [ ] Question - [ ] Documentation issue - [ ] **Bug** When i create or update a document in Onedrive, i do get an updated lastModifiedTime every time but when i share/unshare file and call API, lastModifiedTime does not change and remain same as last one Steps to Reproduce Call https://graph.microsoft.com/v1.0/users/{user_id}drive/root/delta and move until delta url found 1- Upload a file call above API with delta link and notice lastModifiedTime 2- share same file after few seconds or minutes call above API with delta link and notice lastModifiedTime (lastModifiedTime does not change and remain same as last event) [ ]: http://aka.ms/onedrive-api-issues [x]: http://aka.ms/onedrive-api-issues",1.0,"Graph API does not give updated lastModifiedTime for few events - #### Category - [ ] Question - [ ] Documentation issue - [ ] **Bug** When i create or update a document in Onedrive, i do get an updated lastModifiedTime every time but when i share/unshare file and call API, lastModifiedTime does not change and remain same as last one Steps to Reproduce Call https://graph.microsoft.com/v1.0/users/{user_id}drive/root/delta and move until delta url found 1- Upload a file call above API with delta link and notice lastModifiedTime 2- share same file after few seconds or minutes call above API with delta link and notice lastModifiedTime (lastModifiedTime does not change and remain same as last event) [ ]: http://aka.ms/onedrive-api-issues [x]: http://aka.ms/onedrive-api-issues",1,graph api does not give updated lastmodifiedtime for few events category question documentation issue bug when i create or update a document in onedrive i do get an updated lastmodifiedtime every time but when i share unshare file and call api lastmodifiedtime does not change and remain same as last one steps to reproduce call and move until delta url found upload a file call above api with delta link and notice lastmodifiedtime share same file after few seconds or minutes call above api with delta link and notice lastmodifiedtime lastmodifiedtime does not change and remain same as last event ,1 1552,10325717937.0,IssuesEvent,2019-09-01 19:43:18,a-t-0/Taskwarrior-installation,https://api.github.com/repos/a-t-0/Taskwarrior-installation,opened,Replace hardCoded current path in `TaskwarriorInstaller.ps1`,Automation Quality Robustness bug,"Currently the path:`/mnt/c/twInstall/Taskwarrior-installation/AutoInstallTaskwarrior/src/main/resources/autoinstalltaskwarrior/` is still `hardCoded`. Replace this with a variable, based on the current path as given by a powershell command. ",1.0,"Replace hardCoded current path in `TaskwarriorInstaller.ps1` - Currently the path:`/mnt/c/twInstall/Taskwarrior-installation/AutoInstallTaskwarrior/src/main/resources/autoinstalltaskwarrior/` is still `hardCoded`. Replace this with a variable, based on the current path as given by a powershell command. 
",1,replace hardcoded current path in taskwarriorinstaller currently the path mnt c twinstall taskwarrior installation autoinstalltaskwarrior src main resources autoinstalltaskwarrior is still hardcoded replace this with a variable based on the current path as given by a powershell command ,1 144141,11596117744.0,IssuesEvent,2020-02-24 18:19:12,warfare-plugins/social-warfare,https://api.github.com/repos/warfare-plugins/social-warfare,reopened,Clean Out Pin Buttons wraps content in DOCTYPE/HTML wrapper,COMPLETE: Needs Tested ROUTINE: Maintenance,"Reported at: https://wordpress.org/support/topic/clean-out-pin-buttons-wraps-content-in-doctype-html-wrapper/ TL;DR – your clean_out_pin_buttons() function in lib/utilities/SWP_Compatibility.php needs to be updated so it doesn’t wrap ‘the_content’ in DOCTYPE and HTML tags. Change your call to loadHTML() so that it uses the LIBXML_HTML_NOIMPLIED and LIBXML_HTML_NODEFDTD options. I was troubleshooting various issues with a site today where the DIVI mobile menu wouldn’t work on Chrome (did work on FireFox) and it appeared like some scripts and styles were being duplicated. The site is using WP Rocket and when I disabled WP Rocket the issues went away. First I thought it was a javascript combining/minification issue and spent hours looking at that side of it. Nothing seemed to fix the problem, except if I disabled Social Warfare. So that got me looking at the interaction between Social Warfare and WP Rocket. When WP Rocket is enabled, it combines/minifies the javascript and appends it to the content just before the closing ‘’ tag. When I looked at the page HTML, I found that WP Rocket was including the combined/minified script TWICE in the file. Looking closer, I noticed a stray ‘’ in the middle of the content. Tracing that backup I noticed the content was wrapped in: ... Digging through the code further, I discovered that in May 2019 you added a function, clean_out_pin_buttons() that parses the content using the PHP DOMDocument. You do your parsing and then you call saveHTML() which saves the content as a valid HTML document, including the full DOCTYPE and HTML/Body wrappers. This of course leads to invalid HTML and screws up the minification process for WP Rocket. Please look at the documentation for loadHTML() and make use of the LIBXML_HTML_NOIMPLIED and LIBXML_HTML_NODEFDTD options. This should avoid the output being wrapped in these extra tags",1.0,"Clean Out Pin Buttons wraps content in DOCTYPE/HTML wrapper - Reported at: https://wordpress.org/support/topic/clean-out-pin-buttons-wraps-content-in-doctype-html-wrapper/ TL;DR – your clean_out_pin_buttons() function in lib/utilities/SWP_Compatibility.php needs to be updated so it doesn’t wrap ‘the_content’ in DOCTYPE and HTML tags. Change your call to loadHTML() so that it uses the LIBXML_HTML_NOIMPLIED and LIBXML_HTML_NODEFDTD options. I was troubleshooting various issues with a site today where the DIVI mobile menu wouldn’t work on Chrome (did work on FireFox) and it appeared like some scripts and styles were being duplicated. The site is using WP Rocket and when I disabled WP Rocket the issues went away. First I thought it was a javascript combining/minification issue and spent hours looking at that side of it. Nothing seemed to fix the problem, except if I disabled Social Warfare. So that got me looking at the interaction between Social Warfare and WP Rocket. When WP Rocket is enabled, it combines/minifies the javascript and appends it to the content just before the closing ‘’ tag. 
When I looked at the page HTML, I found that WP Rocket was including the combined/minified script TWICE in the file. Looking closer, I noticed a stray ‘’ in the middle of the content. Tracing that backup I noticed the content was wrapped in: ... Digging through the code further, I discovered that in May 2019 you added a function, clean_out_pin_buttons() that parses the content using the PHP DOMDocument. You do your parsing and then you call saveHTML() which saves the content as a valid HTML document, including the full DOCTYPE and HTML/Body wrappers. This of course leads to invalid HTML and screws up the minification process for WP Rocket. Please look at the documentation for loadHTML() and make use of the LIBXML_HTML_NOIMPLIED and LIBXML_HTML_NODEFDTD options. This should avoid the output being wrapped in these extra tags",0,clean out pin buttons wraps content in doctype html wrapper reported at tl dr – your clean out pin buttons function in lib utilities swp compatibility php needs to be updated so it doesn’t wrap ‘the content’ in doctype and html tags change your call to loadhtml so that it uses the libxml html noimplied and libxml html nodefdtd options i was troubleshooting various issues with a site today where the divi mobile menu wouldn’t work on chrome did work on firefox and it appeared like some scripts and styles were being duplicated the site is using wp rocket and when i disabled wp rocket the issues went away first i thought it was a javascript combining minification issue and spent hours looking at that side of it nothing seemed to fix the problem except if i disabled social warfare so that got me looking at the interaction between social warfare and wp rocket when wp rocket is enabled it combines minifies the javascript and appends it to the content just before the closing ‘ ’ tag when i looked at the page html i found that wp rocket was including the combined minified script twice in the file looking closer i noticed a stray ‘ ’ in the middle of the content tracing that backup i noticed the content was wrapped in doctype html public dtd html transitional en digging through the code further i discovered that in may you added a function clean out pin buttons that parses the content using the php domdocument you do your parsing and then you call savehtml which saves the content as a valid html document including the full doctype and html body wrappers this of course leads to invalid html and screws up the minification process for wp rocket please look at the documentation for loadhtml and make use of the libxml html noimplied and libxml html nodefdtd options this should avoid the output being wrapped in these extra tags,0 9771,7811336226.0,IssuesEvent,2018-06-12 09:47:35,core-wg/oscoap,https://api.github.com/repos/core-wg/oscoap,closed,Separate header parameter for transporting Master Salt,core-object-security-12,"The kid_context has been used in different applications either as an extended identifier of the client, or as a container for a Master Salt. You may want to use these independently, e.g. in order to reuse a master secret in a setting where both sender ID and identifier of the client are fixed. The proposed solution is to have a separate header parameter (flag bit 6) to indicate the presence of a Master Salt. 
@jimsch: Could we use the COSE parameter -20 for this or do we need to define a new common header parameter?",True,"Separate header parameter for transporting Master Salt - The kid_context has been used in different applications either as an extended identifier of the client, or as a container for a Master Salt. You may want to use these independently, e.g. in order to reuse a master secret in a setting where both sender ID and identifier of the client are fixed. The proposed solution is to have a separate header parameter (flag bit 6) to indicate the presence of a Master Salt. @jimsch: Could we use the COSE parameter -20 for this or do we need to define a new common header parameter?",0,separate header parameter for transporting master salt the kid context has been used in different applications either as an extended identifier of the client or as a container for a master salt you may want to use these independently e g in order to reuse a master secret in a setting where both sender id and identifier of the client are fixed the proposed solution is to have a separate header parameter flag bit to indicate the presence of a master salt jimsch could we use the cose parameter for this or do we need to define a new common header parameter ,0 5209,26464332116.0,IssuesEvent,2023-01-16 21:17:43,bazelbuild/intellij,https://api.github.com/repos/bazelbuild/intellij,closed,Flag --incompatible_disable_starlark_host_transitions will break IntelliJ UE Plugin Google in Bazel 7.0,type: bug product: IntelliJ topic: bazel awaiting-maintainer,"Incompatible flag `--incompatible_disable_starlark_host_transitions` will be enabled by default in the next major release (Bazel 7.0), thus breaking IntelliJ UE Plugin Google. Please migrate to fix this and unblock the flip of this flag. The flag is documented here: [bazelbuild/bazel#17032](https://github.com/bazelbuild/bazel/issues/17032). Please check the following CI builds for build and test results: - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f8a-4362-8723-a4a3623eea43) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f86-4245-92df-9232ddf91098) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f95-4b12-a5b0-5b835e6d7624) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f91-4ff0-9e9e-12f7b44155d6) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f98-44a8-9f91-f363f6c96d5e) Never heard of incompatible flags before? We have [documentation](https://docs.bazel.build/versions/master/backward-compatibility.html) that explains everything. If you have any questions, please file an issue in https://github.com/bazelbuild/continuous-integration.",True,"Flag --incompatible_disable_starlark_host_transitions will break IntelliJ UE Plugin Google in Bazel 7.0 - Incompatible flag `--incompatible_disable_starlark_host_transitions` will be enabled by default in the next major release (Bazel 7.0), thus breaking IntelliJ UE Plugin Google. Please migrate to fix this and unblock the flip of this flag. The flag is documented here: [bazelbuild/bazel#17032](https://github.com/bazelbuild/bazel/issues/17032). 
Please check the following CI builds for build and test results: - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f8a-4362-8723-a4a3623eea43) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f86-4245-92df-9232ddf91098) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f95-4b12-a5b0-5b835e6d7624) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f91-4ff0-9e9e-12f7b44155d6) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f98-44a8-9f91-f363f6c96d5e) Never heard of incompatible flags before? We have [documentation](https://docs.bazel.build/versions/master/backward-compatibility.html) that explains everything. If you have any questions, please file an issue in https://github.com/bazelbuild/continuous-integration.",0,flag incompatible disable starlark host transitions will break intellij ue plugin google in bazel incompatible flag incompatible disable starlark host transitions will be enabled by default in the next major release bazel thus breaking intellij ue plugin google please migrate to fix this and unblock the flip of this flag the flag is documented here please check the following ci builds for build and test results never heard of incompatible flags before we have that explains everything if you have any questions please file an issue in ,0 1813,10852479242.0,IssuesEvent,2019-11-13 12:53:54,elastic/apm-agent-go,https://api.github.com/repos/elastic/apm-agent-go,closed,Review which tags we should build,[zube]: In Progress automation ci,"right now, we build every tag is in the repository, but maybe it is not needed there are a bunch of tags like `module/*` that probably can be skipped.",1.0,"Review which tags we should build - right now, we build every tag is in the repository, but maybe it is not needed there are a bunch of tags like `module/*` that probably can be skipped.",1,review which tags we should build right now we build every tag is in the repository but maybe it is not needed there are a bunch of tags like module that probably can be skipped ,1 8655,27172046127.0,IssuesEvent,2023-02-17 20:24:24,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Receiving error 412 (Precondition Failed) when trying to update the metadata of an image with OneDrive Personal,status:investigating automation:Closed,"I randomly receive an error 412 when updating the FileSystemInfo facet just after uploading an image. The error resource is always like this: ``` ""error"": { ""code"": ""resourceModified"", ""innerError"": { ""date"": """", ""request-id"": """" }, ""message"": ""ETag does not match current item's value"" } ``` The error is triggered when uploading many images in sequence using simple upload. The operations performed are: 1. PUT image content 2. PATCH using id and `if-match: ` returned by PUT 3. 
repeat with the next image

I did some tests, and this is what I discovered:

- a subsequent GET confirmed that the cTag stays constant on the file
- the eTag always changes a short time after the upload
- repeating the PATCH operation after some time does not trigger the error again, and the operation completes correctly
- not setting the `if-match` header in the PATCH operation still triggers the error, but the status code changes to HTTP 409, exactly as described in issue #767

I suspect there is a race condition that makes the PATCH operation fail, as @ificator hypothesizes in #767.
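Until the service behavior changes, a client-side workaround consistent with these observations is to retry the PATCH with a freshly fetched eTag after a short backoff. A hedged Python sketch (the item URL, token handling, and retry policy are all assumptions):

```python
import time
import requests

def patch_item_with_retry(item_url, token, payload, attempts=5):
    """PATCH a drive item with If-Match, retrying on 409/412 with a
    refreshed eTag, since the eTag seems to settle shortly after upload."""
    auth = {"Authorization": f"Bearer {token}"}
    etag = requests.get(item_url, headers=auth).json()["eTag"]
    resp = None
    for attempt in range(attempts):
        resp = requests.patch(item_url, headers={**auth, "If-Match": etag}, json=payload)
        if resp.status_code not in (409, 412):
            return resp
        time.sleep(2 ** attempt)  # back off, then re-read the eTag
        etag = requests.get(item_url, headers=auth).json()["eTag"]
    return resp
```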
If that is the case and the conflict cannot be solved, I think at least the error should be changed to something that suggest to retry later like code 503 (Service Unavailable).",1,receiving error precondition failed when trying to update the metadata of an image with onedrive personal i randomly receive an error when updating the filesysteminfo facet just after uploading an image the error resource is always like this error code resourcemodified innererror date request id message etag does not match current item s value the error is triggered when uploading many images in sequence using simple upload the operations performed are put image content patch using id and if match returned by put repeat with next image i did some tests and this is what i discovered subsequent get confirmed that ctag stays constant on the file etag always changes after a small time after the upload repeating the patch operation after some time does not trigger the error again and the operation is performed correctly not setting the if match header in the patch operation still trigger the error but the status code changes to http exactly as described in the issue i suspect there is a race condition that makes the patch operation fail as ificator hypothesizes in if that is the case and the conflict cannot be solved i think at least the error should be changed to something that suggest to retry later like code service unavailable ,1 26496,11307707335.0,IssuesEvent,2020-01-18 22:55:51,NixOS/nixpkgs,https://api.github.com/repos/NixOS/nixpkgs,closed,Vulnerability roundup 79: gstreamer-0.10.36: 23 advisories,1.severity: security,"[search](https://search.nix.gsc.io/?q=gstreamer&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=gstreamer+in%3Apath&type=Code) * [ ] [CVE-2016-9634](https://nvd.nist.gov/vuln/detail/CVE-2016-9634) CVSSv3=9.8 (nixos-19.03) * [ ] [CVE-2016-9635](https://nvd.nist.gov/vuln/detail/CVE-2016-9635) CVSSv3=9.8 (nixos-19.03) * [ ] [CVE-2016-9636](https://nvd.nist.gov/vuln/detail/CVE-2016-9636) CVSSv3=9.8 (nixos-19.03) * [ ] [CVE-2016-9809](https://nvd.nist.gov/vuln/detail/CVE-2016-9809) CVSSv3=7.8 (nixos-19.03) * [ ] [CVE-2016-9808](https://nvd.nist.gov/vuln/detail/CVE-2016-9808) CVSSv3=7.5 (nixos-19.03) * [ ] [CVE-2016-9812](https://nvd.nist.gov/vuln/detail/CVE-2016-9812) CVSSv3=7.5 (nixos-19.03) * [ ] [CVE-2016-10199](https://nvd.nist.gov/vuln/detail/CVE-2016-10199) CVSSv3=7.5 (nixos-19.03) * [ ] [CVE-2017-5838](https://nvd.nist.gov/vuln/detail/CVE-2017-5838) CVSSv3=7.5 (nixos-19.03) * [ ] [CVE-2017-5839](https://nvd.nist.gov/vuln/detail/CVE-2017-5839) CVSSv3=7.5 (nixos-19.03) * [ ] [CVE-2017-5840](https://nvd.nist.gov/vuln/detail/CVE-2017-5840) CVSSv3=7.5 (nixos-19.03) * [ ] [CVE-2017-5841](https://nvd.nist.gov/vuln/detail/CVE-2017-5841) CVSSv3=7.5 (nixos-19.03) * [ ] [CVE-2017-5843](https://nvd.nist.gov/vuln/detail/CVE-2017-5843) CVSSv3=7.5 (nixos-19.03) * [ ] [CVE-2017-5845](https://nvd.nist.gov/vuln/detail/CVE-2017-5845) CVSSv3=7.5 (nixos-19.03) * [ ] [CVE-2016-9807](https://nvd.nist.gov/vuln/detail/CVE-2016-9807) CVSSv3=5.5 (nixos-19.03) * [ ] [CVE-2016-9810](https://nvd.nist.gov/vuln/detail/CVE-2016-9810) CVSSv3=5.5 (nixos-19.03) * [ ] [CVE-2016-9813](https://nvd.nist.gov/vuln/detail/CVE-2016-9813) CVSSv3=5.5 (nixos-19.03) * [ ] [CVE-2016-10198](https://nvd.nist.gov/vuln/detail/CVE-2016-10198) CVSSv3=5.5 (nixos-19.03) * [ ] [CVE-2017-5837](https://nvd.nist.gov/vuln/detail/CVE-2017-5837) CVSSv3=5.5 (nixos-19.03) * [ ] 
[CVE-2017-5842](https://nvd.nist.gov/vuln/detail/CVE-2017-5842) CVSSv3=5.5 (nixos-19.03) * [ ] [CVE-2017-5844](https://nvd.nist.gov/vuln/detail/CVE-2017-5844) CVSSv3=5.5 (nixos-19.03) * [ ] [CVE-2017-5846](https://nvd.nist.gov/vuln/detail/CVE-2017-5846) CVSSv3=5.5 (nixos-19.03) * [ ] [CVE-2016-9811](https://nvd.nist.gov/vuln/detail/CVE-2016-9811) CVSSv3=4.7 (nixos-19.03) * [ ] [CVE-2015-0797](https://nvd.nist.gov/vuln/detail/CVE-2015-0797) (nixos-19.03) Scanned versions: nixos-19.03: d1dff0bcd9f. May contain false positives. ",True,"Vulnerability roundup 79: gstreamer-0.10.36: 23 advisories - [search](https://search.nix.gsc.io/?q=gstreamer&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=gstreamer+in%3Apath&type=Code) * [ ] [CVE-2016-9634](https://nvd.nist.gov/vuln/detail/CVE-2016-9634) CVSSv3=9.8 (nixos-19.03) * [ ] [CVE-2016-9635](https://nvd.nist.gov/vuln/detail/CVE-2016-9635) CVSSv3=9.8 (nixos-19.03) * [ ] [CVE-2016-9636](https://nvd.nist.gov/vuln/detail/CVE-2016-9636) CVSSv3=9.8 (nixos-19.03) * [ ] [CVE-2016-9809](https://nvd.nist.gov/vuln/detail/CVE-2016-9809) CVSSv3=7.8 (nixos-19.03) * [ ] [CVE-2016-9808](https://nvd.nist.gov/vuln/detail/CVE-2016-9808) CVSSv3=7.5 (nixos-19.03) * [ ] [CVE-2016-9812](https://nvd.nist.gov/vuln/detail/CVE-2016-9812) CVSSv3=7.5 (nixos-19.03) * [ ] [CVE-2016-10199](https://nvd.nist.gov/vuln/detail/CVE-2016-10199) CVSSv3=7.5 (nixos-19.03) * [ ] [CVE-2017-5838](https://nvd.nist.gov/vuln/detail/CVE-2017-5838) CVSSv3=7.5 (nixos-19.03) * [ ] [CVE-2017-5839](https://nvd.nist.gov/vuln/detail/CVE-2017-5839) CVSSv3=7.5 (nixos-19.03) * [ ] [CVE-2017-5840](https://nvd.nist.gov/vuln/detail/CVE-2017-5840) CVSSv3=7.5 (nixos-19.03) * [ ] [CVE-2017-5841](https://nvd.nist.gov/vuln/detail/CVE-2017-5841) CVSSv3=7.5 (nixos-19.03) * [ ] [CVE-2017-5843](https://nvd.nist.gov/vuln/detail/CVE-2017-5843) CVSSv3=7.5 (nixos-19.03) * [ ] [CVE-2017-5845](https://nvd.nist.gov/vuln/detail/CVE-2017-5845) CVSSv3=7.5 (nixos-19.03) * [ ] [CVE-2016-9807](https://nvd.nist.gov/vuln/detail/CVE-2016-9807) CVSSv3=5.5 (nixos-19.03) * [ ] [CVE-2016-9810](https://nvd.nist.gov/vuln/detail/CVE-2016-9810) CVSSv3=5.5 (nixos-19.03) * [ ] [CVE-2016-9813](https://nvd.nist.gov/vuln/detail/CVE-2016-9813) CVSSv3=5.5 (nixos-19.03) * [ ] [CVE-2016-10198](https://nvd.nist.gov/vuln/detail/CVE-2016-10198) CVSSv3=5.5 (nixos-19.03) * [ ] [CVE-2017-5837](https://nvd.nist.gov/vuln/detail/CVE-2017-5837) CVSSv3=5.5 (nixos-19.03) * [ ] [CVE-2017-5842](https://nvd.nist.gov/vuln/detail/CVE-2017-5842) CVSSv3=5.5 (nixos-19.03) * [ ] [CVE-2017-5844](https://nvd.nist.gov/vuln/detail/CVE-2017-5844) CVSSv3=5.5 (nixos-19.03) * [ ] [CVE-2017-5846](https://nvd.nist.gov/vuln/detail/CVE-2017-5846) CVSSv3=5.5 (nixos-19.03) * [ ] [CVE-2016-9811](https://nvd.nist.gov/vuln/detail/CVE-2016-9811) CVSSv3=4.7 (nixos-19.03) * [ ] [CVE-2015-0797](https://nvd.nist.gov/vuln/detail/CVE-2015-0797) (nixos-19.03) Scanned versions: nixos-19.03: d1dff0bcd9f. May contain false positives. ",0,vulnerability roundup gstreamer advisories nixos nixos nixos nixos nixos nixos nixos nixos nixos nixos nixos nixos nixos nixos nixos nixos nixos nixos nixos nixos nixos nixos nixos scanned versions nixos may contain false positives ,0 56402,11579155047.0,IssuesEvent,2020-02-21 17:18:34,Pressio/pressio,https://api.github.com/repos/Pressio/pressio,closed,query FOM for the time-discrete residual,API algorithmic code design experimental general new feature meta,"This feature seems to be useful for a few cases. 
- [x] List feasibility and limitations (e.g does this seamlessly apply to ST, TC-LSRM and other ROMs?) - [x] create/finalize a tentative design plan - [x] identify which steps are the main bottlenecks - [x] identify main challenges of integrating the design into pressio - [x] draft an implementation inside an `experimental` namespace inside pressio/rom - [x] create unit tests - [x] create regular tests - [x] move functionality into main namespace ",1.0,"query FOM for the time-discrete residual - This feature seems to be useful for a few cases. - [x] List feasibility and limitations (e.g does this seamlessly apply to ST, TC-LSRM and other ROMs?) - [x] create/finalize a tentative design plan - [x] identify which steps are the main bottlenecks - [x] identify main challenges of integrating the design into pressio - [x] draft an implementation inside an `experimental` namespace inside pressio/rom - [x] create unit tests - [x] create regular tests - [x] move functionality into main namespace ",0,query fom for the time discrete residual this feature seems to be useful for a few cases list feasibility and limitations e g does this seamlessly apply to st tc lsrm and other roms create finalize a tentative design plan identify which steps are the main bottlenecks identify main challenges of integrating the design into pressio draft an implementation inside an experimental namespace inside pressio rom create unit tests create regular tests move functionality into main namespace ,0 313703,23488927826.0,IssuesEvent,2022-08-17 16:41:19,Anglepi/My-Many-Reads,https://api.github.com/repos/Anglepi/My-Many-Reads,closed,Define the methodology used in this project.,documentation,"- Why and how am I using the issues and milestones? - How will I divide the work? Basically, everything that needs clarification, such as using labels in the issues, what types of labels, a classification of what can be represented with an issue, the meaning of a milestone. Also, everything related to the development of the project, such as agile methods, sprint duration, etc. Depends on #4 ",1.0,"Define the methodology used in this project. - - Why and how am I using the issues and milestones? - How will I divide the work? Basically, everything that needs clarification, such as using labels in the issues, what types of labels, a classification of what can be represented with an issue, the meaning of a milestone. Also, everything related to the development of the project, such as agile methods, sprint duration, etc. Depends on #4 ",0,define the methodology used in this project why and how am i using the issues and milestones how will i divide the work basically everything that needs clarification such as using labels in the issues what types of labels a classification of what can be represented with an issue the meaning of a milestone also everything related to the development of the project such as agile methods sprint duration etc depends on ,0 68383,28376817908.0,IssuesEvent,2023-04-12 21:36:22,BCDevOps/developer-experience,https://api.github.com/repos/BCDevOps/developer-experience,opened,PUT - finalize status page on uptime.com,*team/ ops and shared services*,"**Describe the issue** To show the PUT status on a page via uptime.com. This is a first-try ticket for Artem and Billy to pick up uptime.com service as service lead! **Definition of done** - [ ] create check components - [ ] improve status page ",1.0,"PUT - finalize status page on uptime.com - **Describe the issue** To show the PUT status on a page via uptime.com. 
This is a first-try ticket for Artem and Billy to pick up the uptime.com service as service leads! **Definition of done** - [ ] create check components - [ ] improve status page ",1.0,"PUT - finalize status page on uptime.com - **Describe the issue** To show the PUT status on a page via uptime.com. This is a first-try ticket for Artem and Billy to pick up the uptime.com service as service leads! **Definition of done** - [ ] create check components - [ ] improve status page ",0,put finalize status page on uptime com describe the issue to show the put status on a page via uptime com this is a first try ticket for artem and billy to pick up the uptime com service as service leads definition of done create check components improve status page ,0 261344,8229966787.0,IssuesEvent,2018-09-07 11:12:08,VirtoCommerce/vc-module-catalog,https://api.github.com/repos/VirtoCommerce/vc-module-catalog,closed,Allow to search by product type in indexed search,High priority bug client request,"Currently only the criteria for database search has a ProductType property. Indexed search doesn't support it ",1.0,"Allow to search by product type in indexed search - Currently only the criteria for database search has a ProductType property. Indexed search doesn't support it ",0,allow to search by product type in indexed search currently only the criteria for database search has a producttype property indexed search doesn t support it ,0 153059,5874167387.0,IssuesEvent,2017-05-15 15:29:32,graphcool/console,https://api.github.com/repos/graphcool/console,closed,Function: overview should contain model name and [CREATE/UPDATE/DELETE] for RP,area/functions priority/high,"Function: overview should contain model name and [CREATE/UPDATE/DELETE] for RP ![image](https://cloud.githubusercontent.com/assets/281337/26028695/221d7586-3826-11e7-91fd-4e0ddb1188bb.png) ",1.0,"Function: overview should contain model name and [CREATE/UPDATE/DELETE] for RP - Function: overview should contain model name and [CREATE/UPDATE/DELETE] for RP ![image](https://cloud.githubusercontent.com/assets/281337/26028695/221d7586-3826-11e7-91fd-4e0ddb1188bb.png) ",0,function overview should contain model name and for rp function overview should contain model name and for rp ,0 57282,15729322995.0,IssuesEvent,2021-03-29 14:44:25,danmar/testissues,https://api.github.com/repos/danmar/testissues,opened,"False positive, pointer to local array variable (std::string) (Trac #255)",False positive Incomplete Migration Migrated from Trac defect hyd_danmar,"Migrated from https://trac.cppcheck.net/ticket/255 ```json { ""status"": ""closed"", ""changetime"": ""2009-04-19T14:48:56"", ""description"": ""False positive, f() is returning object std::string, which is created from the local buffer. The local buffer itself is not returned. 
\n\n{{{\nstd::string f()\n{\n char buf[77];\n return buf;\n}\n}}}\n\n{{{\n[bb.cpp:4]: (error) Returning pointer to local array variable\n}}}\n"", ""reporter"": ""aggro80"", ""cc"": ""sigra"", ""resolution"": ""fixed"", ""_ts"": ""1240152536000000"", ""component"": ""False positive"", ""summary"": ""False positive, pointer to local array variable (std::string)"", ""priority"": """", ""keywords"": """", ""time"": ""2009-04-13T20:34:30"", ""milestone"": ""1.32"", ""owner"": ""hyd_danmar"", ""type"": ""defect"" } ``` ",0,false positive pointer to local array variable std string trac migrated from json status closed changetime description false positive f is returning object std string which is created from the local buffer the local buffer itself is not returned n n nstd string f n n char buf n return buf n n n n n error returning pointer to local array variable n n reporter cc sigra resolution fixed ts component false positive summary false positive pointer to local array variable std string priority keywords time milestone owner hyd danmar type defect ,0 284,5167602138.0,IssuesEvent,2017-01-17 19:15:22,brave/browser-laptop,https://api.github.com/repos/brave/browser-laptop,closed,Add error checking to buildPackage and buildInstaller,automation QA/no qa needed release-notes/exclude,Currently errors are ignored if they happen in one of those 2 scripts.,1.0,Add error checking to buildPackage and buildInstaller - Currently errors are ignored if they happen in one of those 2 scripts.,1,add error checking to buildpackage and buildinstaller currently errors are ignored if they happen in one of those scripts ,1 3839,14688515947.0,IssuesEvent,2021-01-02 03:17:09,akail/homeassistant,https://api.github.com/repos/akail/homeassistant,closed,Blink Automation isn't working,Automation bug,For some reason the blink automation for coming home and leaving isn't working. The automation should be turning on and off the motion detection for the blink camera's.,1.0,Blink Automation isn't working - For some reason the blink automation for coming home and leaving isn't working. The automation should be turning on and off the motion detection for the blink camera's.,1,blink automation isn t working for some reason the blink automation for coming home and leaving isn t working the automation should be turning on and off the motion detection for the blink camera s ,1 9913,30747110100.0,IssuesEvent,2023-07-28 15:52:10,aws-samples/eks-workshop-v2,https://api.github.com/repos/aws-samples/eks-workshop-v2,closed,Have flux already installed on the cluster,enhancement content/automation,"### What would you like to be added? Having flux already installed in the cluster it simplifies and make it consistent with other labs like argocd Use helm to install https://github.com/fluxcd-community/helm-charts ### Why is this needed? The terraform setup of the cluster would need to be updated to install the helm chart of flux. Investigate if bootstrap is need it or it can also be done as part of terraform setup. If its simple do this changes we want to do it now, if becomes too complex then will wait for the new architecture of workshop that each lab/module would have their own setup/start and end scripts. cc @niallthomson ",1.0,"Have flux already installed on the cluster - ### What would you like to be added? Having flux already installed in the cluster it simplifies and make it consistent with other labs like argocd Use helm to install https://github.com/fluxcd-community/helm-charts ### Why is this needed? 
The terraform setup of the cluster would need to be updated to install the helm chart of flux. Investigate whether bootstrap is needed or whether it can also be done as part of the terraform setup. If these changes are simple, we want to do them now; if it becomes too complex, we will wait for the new workshop architecture where each lab/module has its own setup/start and end scripts. cc @niallthomson ",1.0,"Have flux already installed on the cluster - ### What would you like to be added? Having flux already installed in the cluster it simplifies and make it consistent with other labs like argocd Use helm to install https://github.com/fluxcd-community/helm-charts ### Why is this needed? The terraform setup of the cluster would need to be updated to install the helm chart of flux. Investigate whether bootstrap is needed or whether it can also be done as part of the terraform setup. If these changes are simple, we want to do them now; if it becomes too complex, we will wait for the new workshop architecture where each lab/module has its own setup/start and end scripts. cc @niallthomson ",1,have flux already installed on the cluster what would you like to be added having flux already installed in the cluster it simplifies and make it consistent with other labs like argocd use helm to install why is this needed the terraform setup of the cluster would need to be updated to install the helm chart of flux investigate whether bootstrap is needed or whether it can also be done as part of the terraform setup if these changes are simple we want to do them now if it becomes too complex we will wait for the new workshop architecture where each lab module has its own setup start and end scripts cc niallthomson ,1 126410,17891433935.0,IssuesEvent,2021-09-08 01:08:28,turkdevops/deploy-sourcegraph,https://api.github.com/repos/turkdevops/deploy-sourcegraph,opened,CVE-2020-14040 (High) detected in github.com/golang/text-v0.3.2,security vulnerability,"## CVE-2020-14040 - High Severity Vulnerability
Vulnerable Library - github.com/golang/text-v0.3.2

[mirror] Go text processing support

Dependency Hierarchy: - github.com/pulumi/pulumi-v1.12.0 (Root Library) - github.com/golang/net-ca1201d0de80cfde86cb01aea620983605dfe99b - :x: **github.com/golang/text-v0.3.2** (Vulnerable Library)

Found in HEAD commit: 5229a328bacd506bfc40230e7e5c497b323e9b8f

Found in base branch: pipeline/skip-integration-tests-for-renovate-insiders

Vulnerability Details

The x/text package before 0.3.3 for Go has a vulnerability in encoding/unicode that could lead to the UTF-16 decoder entering an infinite loop, causing the program to crash or run out of memory. An attacker could provide a single byte to a UTF16 decoder instantiated with UseBOM or ExpectBOM to trigger an infinite loop if the String function on the Decoder is called, or the Decoder is passed to golang.org/x/text/transform.String.

Publish Date: 2020-06-17

URL: CVE-2020-14040

CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://osv.dev/vulnerability/GO-2020-0015

Release Date: 2020-06-17

Fix Resolution: v0.3.3

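To make the affected range above concrete: any golang.org/x/text release earlier than the fix resolution is in the vulnerable range. A minimal Python sketch of that version check (an editorial illustration, not part of the original advisory; it assumes the `packaging` library is available):

```python
# Sketch of the version comparison behind this advisory (illustrative only).
from packaging.version import Version

FIX_RESOLUTION = Version("0.3.3")  # fix resolution reported above

def is_vulnerable(detected: str) -> bool:
    """golang.org/x/text releases before 0.3.3 are in the affected range."""
    return Version(detected.lstrip("v")) < FIX_RESOLUTION

print(is_vulnerable("v0.3.2"))  # True  -> the version flagged in this scan
print(is_vulnerable("v0.3.3"))  # False -> patched
```
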
*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2020-14040 (High) detected in github.com/golang/text-v0.3.2 - ## CVE-2020-14040 - High Severity Vulnerability
Vulnerable Library - github.com/golang/text-v0.3.2

[mirror] Go text processing support

Dependency Hierarchy: - github.com/pulumi/pulumi-v1.12.0 (Root Library) - github.com/golang/net-ca1201d0de80cfde86cb01aea620983605dfe99b - :x: **github.com/golang/text-v0.3.2** (Vulnerable Library)

Found in HEAD commit: 5229a328bacd506bfc40230e7e5c497b323e9b8f

Found in base branch: pipeline/skip-integration-tests-for-renovate-insiders

Vulnerability Details

The x/text package before 0.3.3 for Go has a vulnerability in encoding/unicode that could lead to the UTF-16 decoder entering an infinite loop, causing the program to crash or run out of memory. An attacker could provide a single byte to a UTF16 decoder instantiated with UseBOM or ExpectBOM to trigger an infinite loop if the String function on the Decoder is called, or the Decoder is passed to golang.org/x/text/transform.String.

Publish Date: 2020-06-17

URL: CVE-2020-14040

CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://osv.dev/vulnerability/GO-2020-0015

Release Date: 2020-06-17

Fix Resolution: v0.3.3

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in github com golang text cve high severity vulnerability vulnerable library github com golang text go text processing support dependency hierarchy github com pulumi pulumi root library github com golang net x github com golang text vulnerable library found in head commit a href found in base branch pipeline skip integration tests for renovate insiders vulnerability details the x text package before for go has a vulnerability in encoding unicode that could lead to the utf decoder entering an infinite loop causing the program to crash or run out of memory an attacker could provide a single byte to a decoder instantiated with usebom or expectbom to trigger an infinite loop if the string function on the decoder is called or the decoder is passed to golang org x text transform string publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0 10238,32037705777.0,IssuesEvent,2023-09-22 16:36:40,vegaprotocol/token-frontend,https://api.github.com/repos/vegaprotocol/token-frontend,closed,Add extended tests for rewards page,TFE-Automation Token Frontend,"- Check rewards table is displayed with amount of reward user has accrued - Check that the user is reward per epoch and their reward count is increased per epoch",1.0,"Add extended tests for rewards page - - Check rewards table is displayed with amount of reward user has accrued - Check that the user is reward per epoch and their reward count is increased per epoch",1,add extended tests for rewards page check rewards table is displayed with amount of reward user has accrued check that the user is reward per epoch and their reward count is increased per epoch,1 5382,19402689163.0,IssuesEvent,2021-12-19 13:20:53,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,closed,When I use automation API via Django GCP instances do not deploy correctly.,kind/bug awaiting-feedback area/automation-api resolution/no-repro awaiting-feedback-stale,"## Hello! 
- Vote on this issue by adding a 👍 reaction - To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already) ## Issue details I set up a class for easy instantiation and deployment of a stack like so in `deployments.py`: ``` class DeployVmInstance(BaseStack): def __init__(self, project_id, subnet_name, stack_name, stack_region, stack_zone, instance_tags, image_name, disk_size=DEFAULT_DISK_SIZE, machine_type=DEFAULT_VM_HOST_MACHINE_TYPE, timeouts=DEFAULT_TIMEOUTS, desired_status=DEFAULT_VM_DESIRED_STATUS): self.subnet_name = subnet_name self.stack_name = stack_name self.project_id = project_id self.project_name = f'{self.stack_name}-vm' self.vm_name = self.project_name self.stack_region = stack_region self.stack_zone = stack_zone self.machine_type = machine_type self.instance_tags = instance_tags self.disk_size = disk_size self.timeouts = timeouts self.image_name = image_name self.desired_status = desired_status def set_instance_status(self, status): self.desired_status = status def create_stack(self): ''' Create instance ''' subnet = compute.get_subnetwork(name=self.subnet_name, region=self.stack_region) vm_instance = compute.Instance(self.vm_name, desired_status=self.desired_status, machine_type=self.machine_type, advanced_machine_features=FEATURE_ARGS(enable_nested_virtualization=True), # noqa boot_disk=BOOT_DISK_ARGS( initialize_params=BOOT_INIT_ARGS( image=self.image_name, size=self.disk_size ) ), network_interfaces=[NETWORK_ARGS(subnetwork=subnet.id)], # noqa tags=self.instance_tags, zone=self.stack_zone) network_ip = vm_instance.network_interfaces[0]['network_ip'] pulumi.export('network_ip', network_ip) pulumi.export('boot_disk', vm_instance.boot_disk.source) pulumi.export('boot_disk_name', vm_instance.boot_disk.device_name) def deploy_stack(self): ''' Deploy stack ''' stack = auto.create_or_select_stack(stack_name=self.stack_name, project_name=self.project_name, program=self.create_stack) stack.set_config(""gcp:project"", auto.ConfigValue(self.project_id)) up_res = stack.up(on_output=print) return up_res.outputs ``` To test I added to a file called `test_deployment.py` and deployment worked fine: ``` from deployments import DeployVmInstance PROJECT_ID = '***************' DEFAULT_REGION = 'us-central1' new_vm_tags = ['test-vm'] new_vm = DeployVmInstance(project_id=PROJECT_ID, subnet_name='testvm-dimcorp-subnet', stack_name='test-created-vm', stack_region=DEFAULT_REGION, stack_zone='us-central1-b', instance_tags = new_vm_tags, image_name=""****/global/images/ubuntu"") new_vm.deploy_stack() ``` After The test worked I created a Django POST endpoint which did not work. If stack existed accessing the endpoint would result in the existing instance being deleted. If the stack did not exist the stack would run but no instance would be deployed. `post_view.py`: ``` from deployments import DeployVmInstance PROJECT_ID = '***************' DEFAULT_REGION = 'us-central1' class ImageLaunchView(APIView): def post(self, request, pk, format=None): new_vm_tags = ['test-vm'] new_vm = DeployVmInstance(project_id=PROJECT_ID, subnet_name='testvm-dimcorp-subnet', stack_name='test-created-vm', stack_region=DEFAULT_REGION, stack_zone='us-central1-b', instance_tags = new_vm_tags, image_name=""****/global/images/ubuntu"") new_vm.deploy_stack() ``` ### Steps to reproduce 1. test by running `python3 test_deployment.py` (everything works fine) 2. 
curl django endpoint curl --request POST https:/new-server/launch (does not work) Expected: Expected same deployment behavior from both `test_deployment.py` and Django endpoint Actual: `test_deployment.py` script deploys GCP instance just fine, however the same code run from Django cause weird behavior. If an instance already exists the running Django endpoint will result in the instance being deleted. If there is no existin stack or instance then the stack will run but no instance will be deployed. ",1.0,"When I use automation API via Django GCP instances do not deploy correctly. - ## Hello! - Vote on this issue by adding a 👍 reaction - To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already) ## Issue details I set up a class for easy instantiation and deployment of a stack like so in `deployments.py`: ``` class DeployVmInstance(BaseStack): def __init__(self, project_id, subnet_name, stack_name, stack_region, stack_zone, instance_tags, image_name, disk_size=DEFAULT_DISK_SIZE, machine_type=DEFAULT_VM_HOST_MACHINE_TYPE, timeouts=DEFAULT_TIMEOUTS, desired_status=DEFAULT_VM_DESIRED_STATUS): self.subnet_name = subnet_name self.stack_name = stack_name self.project_id = project_id self.project_name = f'{self.stack_name}-vm' self.vm_name = self.project_name self.stack_region = stack_region self.stack_zone = stack_zone self.machine_type = machine_type self.instance_tags = instance_tags self.disk_size = disk_size self.timeouts = timeouts self.image_name = image_name self.desired_status = desired_status def set_instance_status(self, status): self.desired_status = status def create_stack(self): ''' Create instance ''' subnet = compute.get_subnetwork(name=self.subnet_name, region=self.stack_region) vm_instance = compute.Instance(self.vm_name, desired_status=self.desired_status, machine_type=self.machine_type, advanced_machine_features=FEATURE_ARGS(enable_nested_virtualization=True), # noqa boot_disk=BOOT_DISK_ARGS( initialize_params=BOOT_INIT_ARGS( image=self.image_name, size=self.disk_size ) ), network_interfaces=[NETWORK_ARGS(subnetwork=subnet.id)], # noqa tags=self.instance_tags, zone=self.stack_zone) network_ip = vm_instance.network_interfaces[0]['network_ip'] pulumi.export('network_ip', network_ip) pulumi.export('boot_disk', vm_instance.boot_disk.source) pulumi.export('boot_disk_name', vm_instance.boot_disk.device_name) def deploy_stack(self): ''' Deploy stack ''' stack = auto.create_or_select_stack(stack_name=self.stack_name, project_name=self.project_name, program=self.create_stack) stack.set_config(""gcp:project"", auto.ConfigValue(self.project_id)) up_res = stack.up(on_output=print) return up_res.outputs ``` To test I added to a file called `test_deployment.py` and deployment worked fine: ``` from deployments import DeployVmInstance PROJECT_ID = '***************' DEFAULT_REGION = 'us-central1' new_vm_tags = ['test-vm'] new_vm = DeployVmInstance(project_id=PROJECT_ID, subnet_name='testvm-dimcorp-subnet', stack_name='test-created-vm', stack_region=DEFAULT_REGION, stack_zone='us-central1-b', instance_tags = new_vm_tags, image_name=""****/global/images/ubuntu"") new_vm.deploy_stack() ``` After The test worked I created a Django POST endpoint which did not work. If stack existed accessing the endpoint would result in the existing instance being deleted. If the stack did not exist the stack would run but no instance would be deployed. 
`post_view.py`: ``` from deployments import DeployVmInstance PROJECT_ID = '***************' DEFAULT_REGION = 'us-central1' class ImageLaunchView(APIView): def post(self, request, pk, format=None): new_vm_tags = ['test-vm'] new_vm = DeployVmInstance(project_id=PROJECT_ID, subnet_name='testvm-dimcorp-subnet', stack_name='test-created-vm', stack_region=DEFAULT_REGION, stack_zone='us-central1-b', instance_tags = new_vm_tags, image_name=""****/global/images/ubuntu"") new_vm.deploy_stack() ``` ### Steps to reproduce 1. test by running `python3 test_deployment.py` (everything works fine) 2. curl django endpoint curl --request POST https:/new-server/launch (does not work) Expected: Expected same deployment behavior from both `test_deployment.py` and Django endpoint Actual: `test_deployment.py` script deploys GCP instance just fine, however the same code run from Django cause weird behavior. If an instance already exists the running Django endpoint will result in the instance being deleted. If there is no existin stack or instance then the stack will run but no instance will be deployed. ",1,when i use automation api via django gcp instances do not deploy correctly hello vote on this issue by adding a 👍 reaction to contribute a fix for this issue leave a comment and link to your pull request if you ve opened one already issue details i set up a class for easy instantiation and deployment of a stack like so in deployments py class deployvminstance basestack def init self project id subnet name stack name stack region stack zone instance tags image name disk size default disk size machine type default vm host machine type timeouts default timeouts desired status default vm desired status self subnet name subnet name self stack name stack name self project id project id self project name f self stack name vm self vm name self project name self stack region stack region self stack zone stack zone self machine type machine type self instance tags instance tags self disk size disk size self timeouts timeouts self image name image name self desired status desired status def set instance status self status self desired status status def create stack self create instance subnet compute get subnetwork name self subnet name region self stack region vm instance compute instance self vm name desired status self desired status machine type self machine type advanced machine features feature args enable nested virtualization true noqa boot disk boot disk args initialize params boot init args image self image name size self disk size network interfaces noqa tags self instance tags zone self stack zone network ip vm instance network interfaces pulumi export network ip network ip pulumi export boot disk vm instance boot disk source pulumi export boot disk name vm instance boot disk device name def deploy stack self deploy stack stack auto create or select stack stack name self stack name project name self project name program self create stack stack set config gcp project auto configvalue self project id up res stack up on output print return up res outputs to test i added to a file called test deployment py and deployment worked fine from deployments import deployvminstance project id default region us new vm tags new vm deployvminstance project id project id subnet name testvm dimcorp subnet stack name test created vm stack region default region stack zone us b instance tags new vm tags image name global images ubuntu new vm deploy stack after the test worked i created a django post endpoint which did not work if 
stack existed accessing the endpoint would result in the existing instance being deleted if the stack did not exist the stack would run but no instance would be deployed post view py from deployments import deployvminstance project id default region us class imagelaunchview apiview def post self request pk format none new vm tags new vm deployvminstance project id project id subnet name testvm dimcorp subnet stack name test created vm stack region default region stack zone us b instance tags new vm tags image name global images ubuntu new vm deploy stack steps to reproduce test by running test deployment py everything works fine curl django endpoint curl request post https new server launch does not work expected expected same deployment behavior from both test deployment py and django endpoint actual test deployment py script deploys gcp instance just fine however the same code run from django cause weird behavior if an instance already exists the running django endpoint will result in the instance being deleted if there is no existin stack or instance then the stack will run but no instance will be deployed ,1 368842,25810822519.0,IssuesEvent,2022-12-11 20:52:32,speedb-io/speedb,https://api.github.com/repos/speedb-io/speedb,opened,OSS documentation for the generic switch memtable,documentation,"**Describe the bug** A clear and concise description of what the bug is. **To Reproduce** Steps to reproduce the behavior: 1. 2. 3. 4. **Expected behavior** A clear and concise description of what you expected to happen. **System (please complete the following information):** - OS: [e.g. RHEL8.6] - Hardware [e.g. Intel Xeon Ice Lake, 64GB, NVMe] **Additional context** Add any other context about the problem here. ",1.0,"OSS documentation for the generic switch memtable - **Describe the bug** A clear and concise description of what the bug is. **To Reproduce** Steps to reproduce the behavior: 1. 2. 3. 4. **Expected behavior** A clear and concise description of what you expected to happen. **System (please complete the following information):** - OS: [e.g. RHEL8.6] - Hardware [e.g. Intel Xeon Ice Lake, 64GB, NVMe] **Additional context** Add any other context about the problem here. ",0,oss documentation for the generic switch memtable describe the bug a clear and concise description of what the bug is to reproduce steps to reproduce the behavior expected behavior a clear and concise description of what you expected to happen system please complete the following information os hardware additional context add any other context about the problem here ,0 218545,16996317488.0,IssuesEvent,2021-07-01 06:59:32,aimakerspace/PeekingDuck,https://api.github.com/repos/aimakerspace/PeekingDuck,closed,pylint began surfacing no-member error on predict_on_batch for efficientdet.,bug testing,"Pylint started failing due to no member in efficientdet detector.py code, which didn't happened previously...",1.0,"pylint began surfacing no-member error on predict_on_batch for efficientdet. 
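For readers unfamiliar with the automation API flow used in the Pulumi report above, here is a minimal standalone sketch of the same pattern (inline program, `create_or_select_stack`, then `up`); the project name and config values are placeholders, not the reporter's:

```python
import pulumi
import pulumi_gcp as gcp
from pulumi import automation as auto

def program():
    # Inline Pulumi program: resources declared here are created on stack.up().
    addr = gcp.compute.Address("demo-address", region="us-central1")
    pulumi.export("address_ip", addr.address)

stack = auto.create_or_select_stack(
    stack_name="dev",
    project_name="demo-project",  # placeholder project name
    program=program,
)
stack.set_config("gcp:project", auto.ConfigValue("my-gcp-project"))  # placeholder
up_result = stack.up(on_output=print)  # same call pattern as deploy_stack() above
print(up_result.outputs)
```

Comparing the engine output streamed by `on_output` between a plain script run and a run inside the Django process would be a reasonable first diagnostic for the behavior difference described in the report.
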
- Pylint started failing due to no member in efficientdet detector.py code, which didn't happen previously...",0,pylint began surfacing no member error on predict on batch for efficientdet pylint started failing due to no member in efficientdet detector py code which didn t happen previously ,0 8433,26965634472.0,IssuesEvent,2023-02-08 22:05:13,influxdata/ui,https://api.github.com/repos/influxdata/ui,closed,Data Explorer Time selection always assumes Local Time,kind/bug team/ui team/automation,"The time selection widget always assumes the time you select is in local time and then the `Local`/`UTC` toggle converts the time. This means it is impossible to select a time in UTC, you must know the Local time you want to select. This is counter productive because the reason I want to work in UTC is because I want to ignore timezones. For example say I am working in a team that spans multiple timezones. Its common for one person to find some interesting data and share the UTC time for that data. So now I want to go see that same data. I want to be able to simply put in that UTC time into the date selector and query the data. Instead I am forced to first convert that UTC time I know into a local time so that the UI can convert it back to UTC. 1. Open time selection box 2. Make sure the UTC toggle is selected 3. Attempt to put in this time range 2022-10-18T12:00:00Z to 2022-10-18T17:00:00Z 4. 
You can select the day easily i.e the 18th but even typing in the box its always interpreted as a local time **Expected behavior:** When the Local vs UTC is selected as UTC the text in the time boxes is interpreted as a UTC time **Actual behavior:** When the Local vs UTC is selected as UTC the text in the time boxes is interpreted as a Local time **Visual Proof:** ![image](https://user-images.githubusercontent.com/3771906/197019886-81df209f-89b5-4168-9af4-cf88125cbecc.png) Note that the start time is written as `2022-10-18 12:00` but if you look at the actual time range selected its ` 2022-10-18T18:00:00Z`. I am currently 6 hours west of UTC. Same thing is true for the stop time, written as `2022-10-18 17:00` but the actual time range is ` 2022-10-18T23:00:00Z` ## About your environment Cloud ",1,data explorer time selection always assumes local time the time selection widget always the time you select is in local time and then the local utc toggle converts the time this means it is impossible to select a time in utc you must know the local time you want to select this is counter productive because the reason i want to work in utc is because i want to ignore timezones for example say i am working in a team that spans multiple timezones its common for one person to find some interesting data and share the utc time for that data so now i want to go see that same data i want to be able to simply put in that utc time into the date selector and query the data instead i am forced to first convert that utc time i know into a local time so that the ui can convert it back to utc open time selection box make sure the utc toggle is selected attempt to put in this time range to you can select the day easily i e the but even typing in the box its always interpreted as a local time expected behavior when the local vs utc is selected as utc the text in the time boxes is interpreted as a utc time actual behavior when the local vs utc is selected as utc the text in the time boxes is interpreted as a local time visual proof note that the start time is written as but if you look at the actual time range selected its i am currently hours west of utc same thing is true for the stop time written as but the actual time range is about your environment cloud ,1 349979,24964034624.0,IssuesEvent,2022-11-01 17:49:43,AY2223S1-CS2103T-T11-1/tp,https://api.github.com/repos/AY2223S1-CS2103T-T11-1/tp,closed,[PE-D][Tester B] Not sure how to use seq command,documentation bug,"Couldn't really make sense of it as it is not documented. ![image.png](https://raw.githubusercontent.com/ThomasHoooo/ped/main/files/f9f54700-d624-47b9-8c3b-dfbc518fde98.png) userguide: ![image.png](https://raw.githubusercontent.com/ThomasHoooo/ped/main/files/1d274a84-a1b9-43aa-a09c-46fbc9e88b8d.png) ------------- Labels: `severity.VeryLow` `type.DocumentationBug` original: ThomasHoooo/ped#19",1.0,"[PE-D][Tester B] Not sure how to use seq command - Couldn't really make sense of it as it is not documented. 
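The local-versus-UTC interpretation described in the Data Explorer report above is easy to reproduce outside the UI. A small Python sketch (editorial illustration; the fixed UTC-6 offset matches the reporter's stated location):

```python
from datetime import datetime, timezone, timedelta

typed = "2022-10-18 12:00"  # what the user typed with the UTC toggle on
naive = datetime.strptime(typed, "%Y-%m-%d %H:%M")

# Observed behavior: the naive value is pinned to the user's local zone
# (UTC-6 in the report) and only then converted, shifting it six hours.
as_local = naive.replace(tzinfo=timezone(timedelta(hours=-6)))
print(as_local.astimezone(timezone.utc).isoformat())  # 2022-10-18T18:00:00+00:00

# Expected behavior: with the UTC toggle on, the naive value is already UTC.
as_utc = naive.replace(tzinfo=timezone.utc)
print(as_utc.isoformat())                             # 2022-10-18T12:00:00+00:00
```
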
![image.png](https://raw.githubusercontent.com/ThomasHoooo/ped/main/files/f9f54700-d624-47b9-8c3b-dfbc518fde98.png) userguide: ![image.png](https://raw.githubusercontent.com/ThomasHoooo/ped/main/files/1d274a84-a1b9-43aa-a09c-46fbc9e88b8d.png) ------------- Labels: `severity.VeryLow` `type.DocumentationBug` original: ThomasHoooo/ped#19",0, not sure how to use seq command couldn t really make sense of it as it is not documented userguide labels severity verylow type documentationbug original thomashoooo ped ,0 3617,14147820272.0,IssuesEvent,2020-11-10 21:26:59,BCDevOps/OpenShift4-RollOut,https://api.github.com/repos/BCDevOps/OpenShift4-RollOut,closed,CCM Resource Patching,tech/automation,"By default ArgoCD is meant to ensure a resource matches what's inside of the monitored GitHub repository. This does not work for patching existing resources such as the OpenShift Image Registry Operator. The recommended solutions are to use one of the following: - Operator - Continuously ensures resource state - **Current Solution** - K8s Job on Pre or Post ArgoCD Application Sync - This is good but limits what we can do and I'm not sure spawning a bunch of pods during every sync is something we want to do - Perform Actions within a pipeline - Since we're using github actions and many of these resources need cluster admin access I would prefer to avoid this solution unless we're going to move our pipelines back inside the cluster. Definition of Done: - [x] Operator implemented with support for patching resources using Ansible style to reduce effort for playbook migrations - [x] Operator documented - [x] Peer-Reviewed",1.0,"CCM Resource Patching - By default ArgoCD is meant to ensure a resource matches what's inside of the monitored GitHub repository. This does not work for patching existing resources such as the OpenShift Image Registry Operator. The recommended solutions are to use one of the following: - Operator - Continuously ensures resource state - **Current Solution** - K8s Job on Pre or Post ArgoCD Application Sync - This is good but limits what we can do and I'm not sure spawning a bunch of pods during every sync is something we want to do - Perform Actions within a pipeline - Since we're using github actions and many of these resources need cluster admin access I would prefer to avoid this solution unless we're going to move our pipelines back inside the cluster. 
Definition of Done: - [x] Operator implemented with support for patching resources using Ansible style to reduce effort for playbook migrations - [x] Operator documented - [x] Peer-Reviewed",1,ccm resource patching by default argocd is meant to ensure a resource matches what s inside of the monitored github repository this does not work for patching existing resources such as the openshift image registry operator the recommended solutions are to use one of the following operator continuously ensures resource state current solution job on pre or post argocd application sync this is good but limits what we can do and i m not sure spawning a bunch of pods during every sync is something we want to do perform actions within a pipeline since we re using github actions and many of these resources need cluster admin access i would prefer to avoid this solution unless we re going to move our pipelines back inside the cluster definition of done operator implemented with support for patching resources using ansible style to reduce effort for playbook migrations operator documented peer reviewed,1 10208,31929771628.0,IssuesEvent,2023-09-19 06:29:00,red-hat-storage/ocs-ci,https://api.github.com/repos/red-hat-storage/ocs-ci,closed,make test_dashboard_validation_ui collect all checks and fail on any failure,ui_automation lifecycle/stale Squad/Black,"It was found during test run on local machine that not every test check fails the test. For example: ``` ... 17:02:36 - MainThread - ocs_ci.ocs.ui.base_ui - WARNING - Locator xpath //div[contains(text(),'System Capacity')] did not find text System Capacity 17:02:36 - MainThread - ocs_ci.ocs.ui.validation_ui - CRITICAL - System Capacity Card not found on OpenShift Data Foundation Overview page ... 17:07:08 - MainThread - ocs_ci.ocs.ui.validation_ui - INFO - Successfully navigated back to ODF tab under Storage, test successful! PASSED ------------------------------------------------------------------------- live log teardown ------------------------------------------------------------------------- 17:07:08 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - Close browser ... ``` In order to gat results on any check failure we may want to add check results with test failure text to a dictionary and assert that all the checks passed (Complex test approach -slide 8 https://docs.google.com/presentation/d/1VnUJv4iaePcs8AME06GB_139gnQsGTrtWT2itYxcQgo/edit?usp=sharing). ",1.0,"make test_dashboard_validation_ui collect all checks and fail on any failure - It was found during test run on local machine that not every test check fails the test. For example: ``` ... 17:02:36 - MainThread - ocs_ci.ocs.ui.base_ui - WARNING - Locator xpath //div[contains(text(),'System Capacity')] did not find text System Capacity 17:02:36 - MainThread - ocs_ci.ocs.ui.validation_ui - CRITICAL - System Capacity Card not found on OpenShift Data Foundation Overview page ... 17:07:08 - MainThread - ocs_ci.ocs.ui.validation_ui - INFO - Successfully navigated back to ODF tab under Storage, test successful! PASSED ------------------------------------------------------------------------- live log teardown ------------------------------------------------------------------------- 17:07:08 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - Close browser ... 
``` In order to get results on any check failure we may want to add check results with test failure text to a dictionary and assert that all the checks passed (Complex test approach -slide 8 https://docs.google.com/presentation/d/1VnUJv4iaePcs8AME06GB_139gnQsGTrtWT2itYxcQgo/edit?usp=sharing). ",1.0,"make test_dashboard_validation_ui collect all checks and fail on any failure - It was found during test run on local machine that not every test check fails the test. For example: ``` ... 17:02:36 - MainThread - ocs_ci.ocs.ui.base_ui - WARNING - Locator xpath //div[contains(text(),'System Capacity')] did not find text System Capacity 17:02:36 - MainThread - ocs_ci.ocs.ui.validation_ui - CRITICAL - System Capacity Card not found on OpenShift Data Foundation Overview page ... 17:07:08 - MainThread - ocs_ci.ocs.ui.validation_ui - INFO - Successfully navigated back to ODF tab under Storage, test successful! PASSED ------------------------------------------------------------------------- live log teardown ------------------------------------------------------------------------- 17:07:08 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - Close browser ... ``` In order to get results on any check failure we may want to add check results with test failure text to a dictionary and assert that all the checks passed (Complex test approach -slide 8 https://docs.google.com/presentation/d/1VnUJv4iaePcs8AME06GB_139gnQsGTrtWT2itYxcQgo/edit?usp=sharing). ",1,make test dashboard validation ui collect all checks and fail on any failure it was found during test run on local machine that not every test check fails the test for example mainthread ocs ci ocs ui base ui warning locator xpath div did not find text system capacity mainthread ocs ci ocs ui validation ui critical system capacity card not found on openshift data foundation overview page mainthread ocs ci ocs ui validation ui info successfully navigated back to odf tab under storage test successful passed live log teardown mainthread ocs ci ocs ui base ui info close browser in order to get results on any check failure we may want to add check results with test failure text to a dictionary and assert that all the checks passed complex test approach slide ,1 162865,25707221824.0,IssuesEvent,2022-12-07 02:08:22,CSC207-2022F-UofT/course-project-team-tree-dog,https://api.github.com/repos/CSC207-2022F-UofT/course-project-team-tree-dog,closed,"[Design] Game Statistics, Summary Screen, Story & Stat Repo Saving, Blog",design,"Core Directives: - Design work will be done in the clean arch doc - Additional logic written in existing class boxes must be in the designated color - Removing logic in existing class boxes must be done by 
putting a strikethrough and changing to the designated color - Any new class boxes added must have a min 2px outline in the designated color. The text inside this box doesn’t need to be in the designated color. - Any other complex modifications (such as moving or removing existing arrows) should not be done. Instead, make a text box with the designated color outline and write these desired modifications there
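Returning to the ocs-ci proposal above (collect every check result, assert once at the end): a minimal sketch of that collect-then-assert pattern, with the two check helpers as hypothetical stand-ins for the real page checks:

```python
def card_is_visible(title: str) -> bool:
    # Hypothetical stand-in for the real Selenium page check.
    return False

def odf_tab_navigates_back() -> bool:
    # Hypothetical stand-in for the real navigation check.
    return True

def test_dashboard_validation_ui():
    failures = []

    def check(passed: bool, message: str) -> None:
        # Record the failure and keep going instead of raising immediately.
        if not passed:
            failures.append(message)

    check(card_is_visible("System Capacity"),
          "System Capacity Card not found on Overview page")
    check(odf_tab_navigates_back(),
          "Could not navigate back to ODF tab under Storage")

    # One assert at the end fails the test on any failure and
    # reports every check that did not pass.
    assert not failures, "UI validation failed:\n" + "\n".join(failures)
```
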
[2] Game Statistics: Track, gather and pass some interesting post-game stats to PGE [3] Game Summary Screen: An additional client side screen when game ends, a player in the game will be able to view certain data passed by “Game End”, statistics for instance. When a player didn’t disconnect and the client received information that the game has ended, take client to the game summary screen and display PgeOutputData as passed into the view model by PGE. [See Note#2] [4] Repository Story and Stat Saving: Saving finished game statistics and stories to a repository with the display names of all the players who contributed Story statistics should be tracked, such as most used word, most used letter, each player’s word contributions, authors, etc. Design this with Open/Closed principle in mind so that new statistic measures can be added easily [8] Blog to view stories which have been previously written by other players

Color:

![Image](https://user-images.githubusercontent.com/47086586/202567329-4b6ae3ab-a39d-4937-85af-1370cb39754b.png)",1.0,"[Design] Game Statistics, Summary Screen, Story & Stat Repo Saving, Blog - Core Directives: - Design work will be done in the clean arch doc - Additional logic written in existing class boxes must be in the designated color - Removing logic in existing class boxes must be done by putting a strikethrough and changing to the designated color - Any new class boxes added must have a min 2px outline in the designated color. The text inside this box doesn’t need to be in the designated color. - Any other complex modifications (such as moving or removing existing arrows) should not be done. Instead, make a text box with the designated color outline and write these desired modifications there
[2] Game Statistics: Track, gather and pass some interesting post-game stats to PGE [3] Game Summary Screen: An additional client side screen when game ends, a player in the game will be able to view certain data passed by “Game End”, statistics for instance. When a player didn’t disconnect and the client received information that the game has ended, take client to the game summary screen and display PgeOutputData as passed into the view model by PGE. [See Note#2] [4] Repository Story and Stat Saving: Saving finished game statistics and stories to a repository with the display names of all the players who contributed Story statistics should be tracked, such as most used word, most used letter, each player’s word contributions, authors, etc. Design this with Open/Closed principle in mind so that new statistic measures can be added easily [8] Blog to view stories which have been previously written by other players

Color:

![Image](https://user-images.githubusercontent.com/47086586/202567329-4b6ae3ab-a39d-4937-85af-1370cb39754b.png)",0, game statistics summary screen story stat repo saving blog core directives design work will be done in the clean arch doc additional logic written in existing class boxes must be in the designated color removing logic in existing class boxes must be done by putting a strikethrough and changing to the designated color any new class boxes added must have a min outline in the designated color the text inside this box doesn’t need to be in the designated color any other complex modifications such as moving or removing existing arrows should not be done instead make a text box with the designated color outline and write these desired modifications there game statistics track gather and pass some interesting post game stats to pge game summary screen an additional client side screen when game ends a player in the game will be able to view certain data passed by “game end” statistics for instance when a player didn’t disconnect and the client received information that the game has ended take client to the game summary screen and display pgeoutputdata as passed into the view model by pge repository story and stat saving saving finished game statistics and stories to a repository with the display names of all the players who contributed story statistics should be tracked such as most used word most used letter each player’s word contributions authors etc design this with open closed principle in mind so that new statistic measures can be added easily blog to view stories which have been previously written by other players color ,0 192901,14632489283.0,IssuesEvent,2020-12-23 22:33:53,kncaputo/underrated,https://api.github.com/repos/kncaputo/underrated,opened,add tests for `MovieTrailers`,testing,"As a developer, When I see a new component `MovieTrailer` has been made, I want to see corresponding tests, So that I know the component works as expected. ",1.0,"add tests for `MovieTrailers` - As a developer, When I see a new component `MovieTrailer` has been made, I want to see corresponding tests, So that I know the component works as expected. 
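The Open/Closed requirement on the statistics design above lends itself to a small interface sketch; the names here are illustrative, not taken from the project:

```python
from abc import ABC, abstractmethod
from collections import Counter

class Statistic(ABC):
    """New statistic measures are added by subclassing, without editing callers."""
    @abstractmethod
    def compute(self, story_words: list) -> str: ...

class MostUsedWord(Statistic):
    def compute(self, story_words):
        word, _ = Counter(story_words).most_common(1)[0]
        return f"Most used word: {word}"

class MostUsedLetter(Statistic):
    def compute(self, story_words):
        letter, _ = Counter("".join(story_words).lower()).most_common(1)[0]
        return f"Most used letter: {letter}"

def game_summary(story_words, statistics):
    # The summary screen iterates over whatever measures are registered.
    return [stat.compute(story_words) for stat in statistics]

print(game_summary(["the", "dog", "ran", "up", "the", "tree"],
                   [MostUsedWord(), MostUsedLetter()]))
```
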
",0,add tests for movietrailers as a developer when i see a new component movietrailer has been made i want to see corresponding tests so that i know the component works as expected ,0 1192,9666035027.0,IssuesEvent,2019-05-21 09:51:03,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,closed,[APM-CI][Integration Tests][GO] Broken tests when running the go testapp,automation,"Automation fails when building the docker image to run the go testapp in the ITs with the below stacktrace ``` github.com/elastic/go-sysinfo (download) github.com/pkg/errors (download) Fetching https://howett.net/plist?go-get=1 Parsing meta tags from https://howett.net/plist?go-get=1 (status code 200) get ""howett.net/plist"": found meta tag get.metaImport{Prefix:""howett.net/plist"", VCS:""git"", RepoRoot:""https://gitlab.howett.net/go/plist.git""} at https://howett.net/plist?go-get=1 howett.net/plist (download) github.com/joeshaw/multierror (download) github.com/prometheus/procfs (download) Fetching https://go.elastic.co/fastjson?go-get=1 Parsing meta tags from https://go.elastic.co/fastjson?go-get=1 (status code 200) get ""go.elastic.co/fastjson"": found meta tag get.metaImport{Prefix:""go.elastic.co/fastjson"", VCS:""git"", RepoRoot:""https://github.com/elastic/go-fastjson""} at https://go.elastic.co/fastjson?go-get=1 go.elastic.co/fastjson (download) github.com/armon/go-radix (download) Fetching https://golang.org/x/sys/unix?go-get=1 Parsing meta tags from https://golang.org/x/sys/unix?go-get=1 (status code 200) get ""golang.org/x/sys/unix"": found meta tag get.metaImport{Prefix:""golang.org/x/sys"", VCS:""git"", RepoRoot:""https://go.googlesource.com/sys""} at https://golang.org/x/sys/unix?go-get=1 get ""golang.org/x/sys/unix"": verifying non-authoritative meta tag Fetching https://golang.org/x/sys?go-get=1 Parsing meta tags from https://golang.org/x/sys?go-get=1 (status code 200) golang.org/x/sys (download) github.com/elastic/go-sysinfo/providers/windows go.elastic.co/apm/internal/apmstrings go.elastic.co/apm/internal/wildcard go.elastic.co/apm/internal/iochan github.com/pkg/errors howett.net/plist github.com/armon/go-radix net github.com/joeshaw/multierror github.com/prometheus/procfs/internal/fs go.elastic.co/apm/internal/apmcontext go.elastic.co/apm/internal/ringbuffer github.com/elastic/go-sysinfo/types go.elastic.co/apm/internal/apmconfig go.elastic.co/fastjson github.com/elastic/go-sysinfo/internal/registry go.elastic.co/apm/internal/apmlog github.com/elastic/go-sysinfo/providers/darwin github.com/elastic/go-sysinfo/providers/shared vendor/golang_org/x/net/lex/httplex github.com/prometheus/procfs vendor/golang_org/x/net/proxy net/textproto crypto/x509 golang.org/x/sys/unix github.com/elastic/go-sysinfo/providers/linux crypto/tls # github.com/elastic/go-sysinfo/providers/linux ../github.com/elastic/go-sysinfo/providers/linux/host_linux.go:45:20: cannot convert filepath.Join(hostFS, procfs.DefaultMountPoint) (type string) to type procfs.FS ../github.com/elastic/go-sysinfo/providers/linux/host_linux.go:64:42: h.procFS.Path undefined (type procfs.FS has no field or method Path) ../github.com/elastic/go-sysinfo/providers/linux/process_linux.go:91:13: p.fs.Path undefined (type procfs.FS has no field or method Path) net/http/httptrace net/http go.elastic.co/apm/transport go.elastic.co/apm/model go.elastic.co/apm/internal/apmhostutil go.elastic.co/apm/internal/apmhttputil go.elastic.co/apm/stacktrace go.elastic.co/apm/internal/pkgerrorsutil ``` It worked previously in the 
CI, refers to [here](https://apm-ci.elastic.co/blue/organizations/jenkins/apm-integration-test-axis-pipeline/detail/apm-integration-test-axis-pipeline/3943/pipeline/56)",1.0,"[APM-CI][Integration Tests][GO] Broken tests when running the go testapp - Automation fails when building the docker image to run the go testapp in the ITs with the below stacktrace ``` github.com/elastic/go-sysinfo (download) github.com/pkg/errors (download) Fetching https://howett.net/plist?go-get=1 Parsing meta tags from https://howett.net/plist?go-get=1 (status code 200) get ""howett.net/plist"": found meta tag get.metaImport{Prefix:""howett.net/plist"", VCS:""git"", RepoRoot:""https://gitlab.howett.net/go/plist.git""} at https://howett.net/plist?go-get=1 howett.net/plist (download) github.com/joeshaw/multierror (download) github.com/prometheus/procfs (download) Fetching https://go.elastic.co/fastjson?go-get=1 Parsing meta tags from https://go.elastic.co/fastjson?go-get=1 (status code 200) get ""go.elastic.co/fastjson"": found meta tag get.metaImport{Prefix:""go.elastic.co/fastjson"", VCS:""git"", RepoRoot:""https://github.com/elastic/go-fastjson""} at https://go.elastic.co/fastjson?go-get=1 go.elastic.co/fastjson (download) github.com/armon/go-radix (download) Fetching https://golang.org/x/sys/unix?go-get=1 Parsing meta tags from https://golang.org/x/sys/unix?go-get=1 (status code 200) get ""golang.org/x/sys/unix"": found meta tag get.metaImport{Prefix:""golang.org/x/sys"", VCS:""git"", RepoRoot:""https://go.googlesource.com/sys""} at https://golang.org/x/sys/unix?go-get=1 get ""golang.org/x/sys/unix"": verifying non-authoritative meta tag Fetching https://golang.org/x/sys?go-get=1 Parsing meta tags from https://golang.org/x/sys?go-get=1 (status code 200) golang.org/x/sys (download) github.com/elastic/go-sysinfo/providers/windows go.elastic.co/apm/internal/apmstrings go.elastic.co/apm/internal/wildcard go.elastic.co/apm/internal/iochan github.com/pkg/errors howett.net/plist github.com/armon/go-radix net github.com/joeshaw/multierror github.com/prometheus/procfs/internal/fs go.elastic.co/apm/internal/apmcontext go.elastic.co/apm/internal/ringbuffer github.com/elastic/go-sysinfo/types go.elastic.co/apm/internal/apmconfig go.elastic.co/fastjson github.com/elastic/go-sysinfo/internal/registry go.elastic.co/apm/internal/apmlog github.com/elastic/go-sysinfo/providers/darwin github.com/elastic/go-sysinfo/providers/shared vendor/golang_org/x/net/lex/httplex github.com/prometheus/procfs vendor/golang_org/x/net/proxy net/textproto crypto/x509 golang.org/x/sys/unix github.com/elastic/go-sysinfo/providers/linux crypto/tls # github.com/elastic/go-sysinfo/providers/linux ../github.com/elastic/go-sysinfo/providers/linux/host_linux.go:45:20: cannot convert filepath.Join(hostFS, procfs.DefaultMountPoint) (type string) to type procfs.FS ../github.com/elastic/go-sysinfo/providers/linux/host_linux.go:64:42: h.procFS.Path undefined (type procfs.FS has no field or method Path) ../github.com/elastic/go-sysinfo/providers/linux/process_linux.go:91:13: p.fs.Path undefined (type procfs.FS has no field or method Path) net/http/httptrace net/http go.elastic.co/apm/transport go.elastic.co/apm/model go.elastic.co/apm/internal/apmhostutil go.elastic.co/apm/internal/apmhttputil go.elastic.co/apm/stacktrace go.elastic.co/apm/internal/pkgerrorsutil ``` It worked previously in the CI, refers to 
[here](https://apm-ci.elastic.co/blue/organizations/jenkins/apm-integration-test-axis-pipeline/detail/apm-integration-test-axis-pipeline/3943/pipeline/56)",1, broken tests when running the go testapp automation fails when building the docker image to run the go testapp in the its with the below stacktrace github com elastic go sysinfo download github com pkg errors download fetching parsing meta tags from status code get howett net plist found meta tag get metaimport prefix howett net plist vcs git reporoot at howett net plist download github com joeshaw multierror download github com prometheus procfs download fetching parsing meta tags from status code get go elastic co fastjson found meta tag get metaimport prefix go elastic co fastjson vcs git reporoot at go elastic co fastjson download github com armon go radix download fetching parsing meta tags from status code get golang org x sys unix found meta tag get metaimport prefix golang org x sys vcs git reporoot at get golang org x sys unix verifying non authoritative meta tag fetching parsing meta tags from status code golang org x sys download github com elastic go sysinfo providers windows go elastic co apm internal apmstrings go elastic co apm internal wildcard go elastic co apm internal iochan github com pkg errors howett net plist github com armon go radix net github com joeshaw multierror github com prometheus procfs internal fs go elastic co apm internal apmcontext go elastic co apm internal ringbuffer github com elastic go sysinfo types go elastic co apm internal apmconfig go elastic co fastjson github com elastic go sysinfo internal registry go elastic co apm internal apmlog github com elastic go sysinfo providers darwin github com elastic go sysinfo providers shared vendor golang org x net lex httplex github com prometheus procfs vendor golang org x net proxy net textproto crypto golang org x sys unix github com elastic go sysinfo providers linux crypto tls github com elastic go sysinfo providers linux github com elastic go sysinfo providers linux host linux go cannot convert filepath join hostfs procfs defaultmountpoint type string to type procfs fs github com elastic go sysinfo providers linux host linux go h procfs path undefined type procfs fs has no field or method path github com elastic go sysinfo providers linux process linux go p fs path undefined type procfs fs has no field or method path net http httptrace net http go elastic co apm transport go elastic co apm model go elastic co apm internal apmhostutil go elastic co apm internal apmhttputil go elastic co apm stacktrace go elastic co apm internal pkgerrorsutil it worked previously in the ci refers to ,1 45756,7199717605.0,IssuesEvent,2018-02-05 16:44:14,opensistemas-hub/osbrain,https://api.github.com/repos/opensistemas-hub/osbrain,closed,Document agent attributes,documentation,"`name`, `uuid`, ... Any common/default agent attribute. Maybe simply expand/complete https://osbrain.readthedocs.io/en/stable/api/agent.html. What about #205?",1.0,"Document agent attributes - `name`, `uuid`, ... Any common/default agent attribute. Maybe simply expand/complete https://osbrain.readthedocs.io/en/stable/api/agent.html. 
What about #205?",0,document agent attributes name uuid any common default agent attribute maybe simply expand complete what about ,0 23116,11852358712.0,IssuesEvent,2020-03-24 19:47:05,pulse2percept/pulse2percept,https://api.github.com/repos/pulse2percept/pulse2percept,closed,Reduce memory footprint with __slots__,enhancement performance,"[`__slots__`](https://docs.python.org/3/reference/datamodel.html#slots) can reduce memory usage by preventing the creation of `__dict__` and `__weakref` and speed up attribute lookup. A few things to consider: https://stackoverflow.com/a/28059785 TLDR on multiple inheritance: - top needs to inherit from `object` - slots are inherited, so if parent has `['foo', 'bar']`, then child should only add `['baz']`, not `['foo', 'bar', 'baz']` - if you did it right, child should not have `__dict__`",True,"Reduce memory footprint with __slots__ - [`__slots__`](https://docs.python.org/3/reference/datamodel.html#slots) can reduce memory usage by preventing the creation of `__dict__` and `__weakref` and speed up attribute lookup. A few things to consider: https://stackoverflow.com/a/28059785 TLDR on multiple inheritance: - top needs to inherit from `object` - slots are inherited, so if parent has `['foo', 'bar']`, then child should only add `['baz']`, not `['foo', 'bar', 'baz']` - if you did it right, child should not have `__dict__`",0,reduce memory footprint with slots can reduce memory usage by preventing the creation of dict and weakref and speed up attribute lookup a few things to consider tldr on multiple inheritance top needs to inherit from object slots are inherited so if parent has then child should only add not if you did it right child should not have dict ,0 108640,11597296644.0,IssuesEvent,2020-02-24 20:34:37,locationtech/geowave,https://api.github.com/repos/locationtech/geowave,closed,Fix Javadoc errors,documentation,"When building javadocs, there are a bunch of errors and warnings for broken links, or incorrect parameters. These should be fixed.",1.0,"Fix Javadoc errors - When building javadocs, there are a bunch of errors and warnings for broken links, or incorrect parameters. These should be fixed.",0,fix javadoc errors when building javadocs there are a bunch of errors and warnings for broken links or incorrect parameters these should be fixed ,0 158929,13751618832.0,IssuesEvent,2020-10-06 13:34:47,k8-proxy/k8-proxy-desktop,https://api.github.com/repos/k8-proxy/k8-proxy-desktop,closed,Creating Installation steps for desktop Linux installer,P2 documentation,Creating Installation steps for desktop Linux installer and this would be incorporated in READMe,1.0,Creating Installation steps for desktop Linux installer - Creating Installation steps for desktop Linux installer and this would be incorporated in READMe,0,creating installation steps for desktop linux installer creating installation steps for desktop linux installer and this would be incorporated in readme,0 163466,25819077766.0,IssuesEvent,2022-12-12 08:11:14,Altinn/altinn-studio,https://api.github.com/repos/Altinn/altinn-studio,closed,Only empty pages/Tomme sider i Atinn Studio,kind/bug area/ui-editor solution/studio/designer team/studio,"### Description of the bug After deleting a page in Altinn Studio and/or moving a (new?) page in Altinn Studio, all my pages appeared completely blank in Altinn Studio. If I open the app and go to ""Lage"" it shows a default blank page instead of my actual first page. 
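The pulse2percept record above gives the `__slots__` rules for multiple inheritance in prose; here is a minimal runnable sketch of those rules, assuming nothing beyond the standard library (the `Base`/`Child` names are illustrative, not from the project):

```python
# Parent declares its attribute names once; under Python 3 it inherits
# from object implicitly, satisfying the "top needs to inherit from
# object" rule.
class Base:
    __slots__ = ('foo', 'bar')

# Slots are inherited, so the child adds only its NEW names.
class Child(Base):
    __slots__ = ('baz',)

c = Child()
c.foo, c.bar, c.baz = 1, 2, 3     # slot attributes behave normally

# Done right, no per-instance __dict__ is created...
assert not hasattr(c, '__dict__')

# ...and undeclared attributes are rejected instead of silently
# growing a dict:
try:
    c.qux = 4
except AttributeError as err:
    print('rejected:', err)
```

If any class in the chain omits `__slots__`, instances regain a `__dict__` and the memory saving is lost; re-declaring inherited names merely wastes space on duplicate descriptors.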
![image](https://user-images.githubusercontent.com/42466346/203539567-0fed0334-98b6-45a0-8a0a-f5c01a53829d.png) If I click on one of my pages it's still completely empty and it is not possible to drag-and-drop anything onto the page. ![image](https://user-images.githubusercontent.com/42466346/203540704-a78d071d-ec76-4f15-8a7f-c4f71c627847.png) I would expect to see the same content I see when I look at the app in VSC (and it works in local testing). ### Steps To Reproduce I don't know how to reproduce this since I'm not exactly sure what caused it, but it seemed to me that it happened after I added/moved/renamed a page in Altinn Studio. ### Additional Information See discussion on Slack: https://altinn.slack.com/archives/C02EVE4RU82/p1668090470317929 This error occurred for me sometime on November 10. 2022 on my app https://altinn.studio/repos/xmrsa/ra0749-vare-01. When I looked at the last commit I had around then there was a warning about a BOM-sign. I could see the pages when I first pulled this change to AS, but it might still be related? ![image](https://user-images.githubusercontent.com/42466346/203541768-4a2bc201-18d0-45f1-93c4-9739f0c1cc50.png) ",1.0,"Only empty pages/Tomme sider i Atinn Studio - ### Description of the bug After deleting a page in Altinn Studio and/or moving a (new?) page in Altinn Studio, all my pages appeared completely blank in Altinn Studio. If I open the app and go to ""Lage"" it shows a default blank page instead of my actual first page. ![image](https://user-images.githubusercontent.com/42466346/203539567-0fed0334-98b6-45a0-8a0a-f5c01a53829d.png) If I click on one of my pages it's still completely empty and it is not possible to drag-and-drop anything onto the page. ![image](https://user-images.githubusercontent.com/42466346/203540704-a78d071d-ec76-4f15-8a7f-c4f71c627847.png) I would expect to see the same content I see when I look at the app in VSC (and it works in local testing). ### Steps To Reproduce I don't know how to reproduce this since I'm not exactly sure what caused it, but it seemed to me that it happened after I added/moved/renamed a page in Altinn Studio. ### Additional Information See discussion on Slack: https://altinn.slack.com/archives/C02EVE4RU82/p1668090470317929 This error occurred for me sometime on November 10. 2022 on my app https://altinn.studio/repos/xmrsa/ra0749-vare-01. When I looked at the last commit I had around then there was a warning about a BOM-sign. I could see the pages when I first pulled this change to AS, but it might still be related? 
![image](https://user-images.githubusercontent.com/42466346/203541768-4a2bc201-18d0-45f1-93c4-9739f0c1cc50.png) ",0,only empty pages tomme sider i atinn studio description of the bug after deleting a page in altinn studio and or moving a new page in altinn studio all my pages appeared completely blank in altinn studio if i open the app and go to lage it shows a default blank page instead of my actual first page if i click on one of my pages it s still completely empty and it is not possible to drag and drop anything onto the page i would expect to see the same content i see when i look at the app in vsc and it works in local testing steps to reproduce i don t know how to reproduce this since i m not exactly sure what caused it but it seemed to me that it happened after i added moved renamed a page in altinn studio additional information see discussion on slack this error occurred for me sometime on november on my app when i looked at the last commit i had around then there was a warning about a bom sign i could see the pages when i first pulled this change to as but it might still be related ,0 4928,18055565082.0,IssuesEvent,2021-09-20 07:46:00,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Page did not load properly in Safari,automation/svc triaged cxp docs-experience dsc/subsvc Pri2 escalated-content-team," Sidebar navigation did not appear. Header did not appear. Feedback button did not work. There were probably other pieces that did not work as well but those were the most noticeable one. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: bef853ec-6fcb-22dc-76cf-d12af7c87dee * Version Independent ID: c36fc153-ef89-842e-9b48-5986b1b49552 * Content: [Azure Automation State Configuration overview](https://docs.microsoft.com/en-us/azure/automation/automation-dsc-overview#) * Content Source: [articles/automation/automation-dsc-overview.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-dsc-overview.md) * Service: **automation** * Sub-service: **dsc** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**",1.0,"Page did not load properly in Safari - Sidebar navigation did not appear. Header did not appear. Feedback button did not work. There were probably other pieces that did not work as well but those were the most noticeable one. --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: bef853ec-6fcb-22dc-76cf-d12af7c87dee * Version Independent ID: c36fc153-ef89-842e-9b48-5986b1b49552 * Content: [Azure Automation State Configuration overview](https://docs.microsoft.com/en-us/azure/automation/automation-dsc-overview#) * Content Source: [articles/automation/automation-dsc-overview.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-dsc-overview.md) * Service: **automation** * Sub-service: **dsc** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**",1,page did not load properly in safari sidebar navigation did not appear header did not appear feedback button did not work there were probably other pieces that did not work as well but those were the most noticeable one document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service dsc github login mgoedtel microsoft alias magoedte ,1 6592,23465978125.0,IssuesEvent,2022-08-16 16:48:09,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,FAILED: Automated Tests(2),automation,"Stats: { ""suites"": 40, ""tests"": 302, ""passes"": 300, ""pending"": 0, ""failures"": 2, ""start"": ""2022-08-10T06:05:08.415Z"", ""end"": ""2022-08-10T06:21:06.955Z"", ""duration"": 646907, ""testsRegistered"": 302, ""passPercent"": 99.33774834437087, ""pendingPercent"": 0, ""other"": 0, ""hasOther"": false, ""skipped"": 0, ""hasSkipped"": false } Failed Tests: ""Get the namespace directory details (/namespaces/{ns}/directory) and verify the success code in the response"" ""Get the namespace directory details by its ID (/namespaces/{ns}/directory/{id}) and verify the success code in the response"" Run Link: https://github.com/bcgov/api-services-portal/actions/runs/2830348785",1.0,"FAILED: Automated Tests(2) - Stats: { ""suites"": 40, ""tests"": 302, ""passes"": 300, ""pending"": 0, ""failures"": 2, ""start"": ""2022-08-10T06:05:08.415Z"", ""end"": ""2022-08-10T06:21:06.955Z"", ""duration"": 646907, ""testsRegistered"": 302, ""passPercent"": 99.33774834437087, ""pendingPercent"": 0, ""other"": 0, ""hasOther"": false, ""skipped"": 0, ""hasSkipped"": false } Failed Tests: ""Get the namespace directory details (/namespaces/{ns}/directory) and verify the success code in the response"" ""Get the namespace directory details by its ID (/namespaces/{ns}/directory/{id}) and verify the success code in the response"" Run Link: https://github.com/bcgov/api-services-portal/actions/runs/2830348785",1,failed automated tests stats suites tests passes pending failures start end duration testsregistered passpercent pendingpercent other hasother false skipped hasskipped false failed tests get the namespace directory details namespaces ns directory and verify the success code in the response get the namespace directory details by its id namespaces ns directory id and verify the success code in the response run link ,1 4856,17794136285.0,IssuesEvent,2021-08-31 19:50:35,2i2c-org/pilot-hubs,https://api.github.com/repos/2i2c-org/pilot-hubs,opened,Automate hub deployments on AWS,type: enhancement prio: high :label: CI/CD :label: automation,"### Description Once we have a cluster running, we need to automate hub deployments on AWS. We also need to configure our already-running hubs to follow this process. 
### Value / benefit This will let us automatically add new AWS hubs, and will standardize our infrastructure deployments in general. As AWS is a very popular cloud provider, this is an important step to make us more confident in adding new hubs. ### Implementation details This might be done most easily by automating the deployment of our specific AWS hubs, and then documenting / generalizing how to do it for a generic hub. ### Tasks to complete - [ ] Build base infrastructure for generic hub deployment on AWS (potentially as part of specific hubs below) - [ ] Automatically deploy specific hubs - [ ] #632 - [ ] farallon hub - [ ] openscapes hub ### Updates _No response_",1.0,"Automate hub deployments on AWS - ### Description Once we have a cluster running, we need to automate hub deployments on AWS. We also need to configure our already-running hubs to follow this process. ### Value / benefit This will let us automatically add new AWS hubs, and will standardize our infrastructure deployments in general. As AWS is a very popular cloud provider, this is an important step to make us more confident in adding new hubs. ### Implementation details This might be done most easily by automating the deployment of our specific AWS hubs, and then documenting / generalizing how to do it for a generic hub. ### Tasks to complete - [ ] Build base infrastructure for generic hub deployment on AWS (potentially as part of specific hubs below) - [ ] Automatically deploy specific hubs - [ ] #632 - [ ] farallon hub - [ ] openscapes hub ### Updates _No response_",1,automate hub deployments on aws description once we have a cluster running we need to automate hub deployments on aws we also need to configure our already running hubs to follow this process value benefit this will let us automatically add new aws hubs and will standardize our infrastructure deployments in general as aws is a very popular cloud provider this is an important step to make us more confident in adding new hubs implementation details this might be done most easily by automating the deployment of our specific aws hubs and then documenting generalizing how to do it for a generic hub tasks to complete build base infrastructure for generic hub deployment on aws potentially as part of specific hubs below automatically deploy specific hubs farallon hub openscapes hub updates no response ,1 1131,9551284720.0,IssuesEvent,2019-05-02 14:08:34,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,closed,Make *GreenfieldRaptorRelease builds not use `org.mozilla.fenix` application ID,👩‍💻 engineering task 🤖 automation,"Post #1321, we can finally use the (greenfield) Raptor release builds. But they use the `org.mozilla.fenix` application ID _and_ are signed with a special ""dep"" key. That means you can't pave over your existing `org.mozilla.fenix` Nightly vehicle, making it very difficult to do automation testing _and_ dogfood Fenix at the same time. Let's make the Raptor/automation builds have a different application ID.",1.0,"Make *GreenfieldRaptorRelease builds not use `org.mozilla.fenix` application ID - Post #1321, we can finally use the (greenfield) Raptor release builds. But they use the `org.mozilla.fenix` application ID _and_ are signed with a special ""dep"" key. That means you can't pave over your existing `org.mozilla.fenix` Nightly vehicle, making it very difficult to do automation testing _and_ dogfood Fenix at the same time. 
Let's make the Raptor/automation builds have a different application ID.",1,make greenfieldraptorrelease builds not use org mozilla fenix application id post we can finally use the greenfield raptor release builds but they use the org mozilla fenix application id and are signed with a special dep key that means you can t pave over your existing org mozilla fenix nightly vehicle making it very difficult to do automation testing and dogfood fenix at the same time let s make the raptor automation builds have a different application id ,1 2489,12109389007.0,IssuesEvent,2020-04-21 08:41:09,nf-core/tools,https://api.github.com/repos/nf-core/tools,closed,nf-core sync: repeat attempt for failed pipelines,automation,"In the last sync, quite a lot of pipelines failed with a `404` error which essentially makes no sense. As I don't know what causes this or how to fix it, a blunt approach could be to have a second loop after we finish which tries the syncing again with all failed pipelines. Hopefully that may then work on some of them and reduce the failure rate. A more refined approach would be to detect 404 errors in the code, sleep for a few seconds and then try to open the PR again. A combination of both of these could be good. Phil",1.0,"nf-core sync: repeat attempt for failed pipelines - In the last sync, quite a lot of pipelines failed with a `404` error which essentially makes no sense. As I don't know what causes this or how to fix it, a blunt approach could be to have a second loop after we finish which tries the syncing again with all failed pipelines. Hopefully that may then work on some of them and reduce the failure rate. A more refined approach would be to detect 404 errors in the code, sleep for a few seconds and then try to open the PR again. A combination of both of these could be good. Phil",1,nf core sync repeat attempt for failed pipelines in the last sync quite a lot of pipelines failed with a error which essentially makes no sense as i don t know what causes this or how to fix it a blunt approach could be to have a second loop after we finish which tries the syncing again with all failed pipelines hopefully that may then work on some of them and reduce the failure rate a more refined approach would be to detect errors in the code sleep for a few seconds and then try to open the pr again a combination of both of these could be good phil,1 34559,9412174293.0,IssuesEvent,2019-04-10 02:49:52,habitat-sh/builder,https://api.github.com/repos/habitat-sh/builder,closed,How to make an existing origin private,A-builder C-feature V-bldr,"Does Builder support the ability to switch origins to private visibility after it has been created as public visibility? If an origin goes private, does that mean any user/client has to be part of that origin to gain at least read access to the packages?",1.0,"How to make an existing origin private - Does Builder support the ability to switch origins to private visibility after it has been created as public visibility? 
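The nf-core sync record above proposes two mitigations for the spurious 404s: retry the PR after a short sleep, and make a second pass over any pipelines that still failed. A minimal sketch of that combination, with hypothetical placeholders throughout — `open_sync_pr`, the `NotFound` signal, and the pipeline names are not the real nf-core/tools API:

```python
import time

class NotFound(Exception):
    """Stands in for an unexplained HTTP 404 from the GitHub API."""

def open_sync_pr(pipeline):
    # Hypothetical placeholder for the real PR-creation call; here it
    # always fails so the retry path below is exercised.
    raise NotFound(f"404 while opening sync PR for {pipeline}")

def open_pr_with_retries(pipeline, attempts=3, delay_s=5):
    # Refined approach: catch the 404, sleep a few seconds, try again.
    for attempt in range(1, attempts + 1):
        try:
            return open_sync_pr(pipeline)
        except NotFound:
            if attempt == attempts:
                raise                 # retries exhausted, surface it
            time.sleep(delay_s)

# Blunt approach: collect whatever still fails for one more pass later.
failed = []
for name in ("nf-core/rnaseq", "nf-core/sarek"):
    try:
        open_pr_with_retries(name, delay_s=1)
    except NotFound:
        failed.append(name)
print("retry these in a second loop:", failed)
```

Exponential backoff between attempts would be a natural refinement if the 404s turn out to be rate-limit related.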
If an origin goes private, does that mean any user/client has to be part of that origin to gain at least read access to the packages?",0,how to make an existing origin private does builder support the ability to switch origins to private visibility after it has been created as public visibility if an origin goes private does that mean any user client has to be part of that origin to gain at least read access to the packages ,0 267778,28509217362.0,IssuesEvent,2023-04-19 01:45:33,dpteam/RK3188_TABLET,https://api.github.com/repos/dpteam/RK3188_TABLET,closed,CVE-2015-5156 (Medium) detected in linuxv3.0 - autoclosed,Mend: dependency security vulnerability,"## CVE-2015-5156 - Medium Severity Vulnerability
Vulnerable Library - linuxv3.0

Linux kernel source tree

Library home page: https://github.com/verygreen/linux.git

Found in HEAD commit: 0c501f5a0fd72c7b2ac82904235363bd44fd8f9e

Found in base branch: master

Vulnerable Source Files (1)

/drivers/net/virtio_net.c

Vulnerability Details

The virtnet_probe function in drivers/net/virtio_net.c in the Linux kernel before 4.2 attempts to support a FRAGLIST feature without proper memory allocation, which allows guest OS users to cause a denial of service (buffer overflow and memory corruption) via a crafted sequence of fragmented packets.

Publish Date: 2015-10-19

URL: CVE-2015-5156

CVSS 3 Score Details (6.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Adjacent
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, see the CVSS v3 specification.

Suggested Fix

Type: Upgrade version

Origin: https://nvd.nist.gov/vuln/detail/CVE-2015-5156

Release Date: 2015-10-19

Fix Resolution: 4.2

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2015-5156 (Medium) detected in linuxv3.0 - autoclosed - ## CVE-2015-5156 - Medium Severity Vulnerability
Vulnerable Library - linuxv3.0

Linux kernel source tree

Library home page: https://github.com/verygreen/linux.git

Found in HEAD commit: 0c501f5a0fd72c7b2ac82904235363bd44fd8f9e

Found in base branch: master

Vulnerable Source Files (1)

/drivers/net/virtio_net.c

Vulnerability Details

The virtnet_probe function in drivers/net/virtio_net.c in the Linux kernel before 4.2 attempts to support a FRAGLIST feature without proper memory allocation, which allows guest OS users to cause a denial of service (buffer overflow and memory corruption) via a crafted sequence of fragmented packets.

Publish Date: 2015-10-19

URL: CVE-2015-5156

CVSS 3 Score Details (6.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Adjacent
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, see the CVSS v3 specification.

Suggested Fix

Type: Upgrade version

Origin: https://nvd.nist.gov/vuln/detail/CVE-2015-5156

Release Date: 2015-10-19

Fix Resolution: 4.2

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in autoclosed cve medium severity vulnerability vulnerable library linux kernel source tree library home page a href found in head commit a href found in base branch master vulnerable source files drivers net virtio net c vulnerability details the virtnet probe function in drivers net virtio net c in the linux kernel before attempts to support a fraglist feature without proper memory allocation which allows guest os users to cause a denial of service buffer overflow and memory corruption via a crafted sequence of fragmented packets publish date url a href cvss score details base score metrics exploitability metrics attack vector adjacent attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0 5016,18270112930.0,IssuesEvent,2021-10-04 13:02:06,CDCgov/prime-reportstream,https://api.github.com/repos/CDCgov/prime-reportstream,closed,Sender Receives Email when they get PHD Approval to Send,Epic sender-tools ob-automation-for-senders,"## Problem Statement As a Sender, after I have requested a review of my data by my PHD(s) and they have approved me to send data, I would like to receive a notification that it is ok for me to start sending them data so that I don't have contact someone to find out.",1.0,"Sender Receives Email when they get PHD Approval to Send - ## Problem Statement As a Sender, after I have requested a review of my data by my PHD(s) and they have approved me to send data, I would like to receive a notification that it is ok for me to start sending them data so that I don't have contact someone to find out.",1,sender receives email when they get phd approval to send problem statement as a sender after i have requested a review of my data by my phd s and they have approved me to send data i would like to receive a notification that it is ok for me to start sending them data so that i don t have contact someone to find out ,1 9781,30512388417.0,IssuesEvent,2023-07-18 22:09:14,rpopuc/gha-tests,https://api.github.com/repos/rpopuc/gha-tests,opened,Deploy,deploy-automation,"## Description Realiza deploy automatizado da aplicação. ## Environments environment_1 ## Branches branch_name",1.0,"Deploy - ## Description Realiza deploy automatizado da aplicação. ## Environments environment_1 ## Branches branch_name",1,deploy description realiza deploy automatizado da aplicação environments environment branches branch name,1 640,7668879794.0,IssuesEvent,2018-05-14 07:48:00,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,Wrong handling of key pressing in input,AREA: client SYSTEM: automations TYPE: bug,"### Are you requesting a feature or reporting a bug? bug ### What is the current behavior? Input value changed after click and press ""Up"" button ### What is the expected behavior? Input value should not be changed ### How would you reproduce the current behavior (if this is a bug)? 
```js
import { Selector, ClientFunction } from 'testcafe';

fixture `fixture`
    .page `http://dolzhikov-w8/172/RegressionTestsSite/ASPxEditors/ASPxDateEdit/T187651.aspx`;

test('test', async t => {
    const input = Selector(""#ASPxDateEdit1_I"");
    await t.click(input);
    var oldValue = await input.value;

    await t
        .pressKey(""up"")
        .expect(input.value).eql(oldValue);
});
```
### Specify your * testcafe version: 0.18.6-dev20171222 ",1.0,"Wrong handling of key pressing in input - ### Are you requesting a feature or reporting a bug? bug ### What is the current behavior? Input value changed after click and press ""Up"" button ### What is the expected behavior? Input value should not be changed ### How would you reproduce the current behavior (if this is a bug)?
```js
import { Selector, ClientFunction } from 'testcafe';

fixture `fixture`
    .page `http://dolzhikov-w8/172/RegressionTestsSite/ASPxEditors/ASPxDateEdit/T187651.aspx`;

test('test', async t => {
    const input = Selector(""#ASPxDateEdit1_I"");
    await t.click(input);
    var oldValue = await input.value;

    await t
        .pressKey(""up"")
        .expect(input.value).eql(oldValue);
});
```
### Specify your * testcafe version: 0.18.6-dev20171222 ",1,wrong handling of key pressing in input are you requesting a feature or reporting a bug bug what is the current behavior input value changed after click and press up button what is the expected behavior input value should not be changed how would you reproduce the current behavior if this is a bug js import selector clientfunction from testcafe fixture fixture page test test async t const input selector i await t click input var oldvalue await input value await t presskey up expect input value eql oldvalue specify your testcafe version ,1 9687,30246199305.0,IssuesEvent,2023-07-06 16:39:56,nephio-project/nephio,https://api.github.com/repos/nephio-project/nephio,closed,Epic : End to end Tests,area/package-management sig/automation,"This epic lists the end to end tests.
- [x] #235 - (passing) [000.sh](https://github.com/nephio-project/test-infra/blob/main/e2e/tests/000.sh) setup through mgmt, mgmt-staging repos Ready - [x] #236 - (passing) [001.sh](https://github.com/nephio-project/test-infra/blob/main/e2e/tests/001.sh) regional - (passing) [002.sh](https://github.com/nephio-project/test-infra/blob/main/e2e/tests/002.sh) edge - [x] #237 - (passing) [004.sh](https://github.com/nephio-project/test-infra/blob/main/e2e/tests/004.sh) provision operator on all workload clusters - [x] #238 - [x] (passing) [003.sh](https://github.com/nephio-project/test-infra/blob/main/e2e/tests/003.sh) free5gc-cp - [ ] (failing) [005.sh](https://github.com/nephio-project/test-infra/blob/main/e2e/tests/005.sh) AMF, SMF - [ ] (failing) [006.sh](https://github.com/nephio-project/test-infra/blob/main/e2e/tests/006.sh) UPF - [x] (stretch) #150 - (to be written) 007.sh assign to @denysaleksandrov - [x] #239 - (to be written) 008.sh assign to @rravindran123 - [x] #240 - (to be written) 009.sh assigned (but not confirmed by her yet!) to @n2vo - [x] #241 - I don't think we have an automated way to do this - [x] (stretch) ""other NF"" package(s) - make more configurable and use pkg to deploy",1,epic end to end tests this epic lists the end to end tests passing setup through mgmt mgmt staging repos ready passing regional passing edge passing provision operator on all workload clusters passing cp failing amf smf failing upf stretch to be written sh assign to denysaleksandrov to be written sh assign to to be written sh assigned but not confirmed by her yet to i don t think we have an automated way to do this stretch other nf package s make more configurable and use pkg to deploy,1 7807,25717866705.0,IssuesEvent,2022-12-07 11:37:46,ita-social-projects/TeachUA,https://api.github.com/repos/ita-social-projects/TeachUA,opened,[Advanced search] Sort functionality for centers does not work properly,bug Backend Priority: Medium Automation,"**Environment:** Windows 11, Google Chrome Version 108.0.5359.95 (Official Build) (64-bit). **Reproducible:** always. **Build found:** last commit [dca1e10](https://github.com/ita-social-projects/TeachUA/commit/dca1e10e149957cb6b3303eb9f303af49fbb6cd9) **Preconditions** 1. Go to the webpage: https://speak-ukrainian.org.ua/dev/ 2. Go to 'Гуртки' tab. 3. Click on 'Розширений пошук' button. **Steps to reproduce** 1. Click on 'Центр' radio button. 2. Click on 'за рейтингом' sort. 3. Pay attention to the centers on the first page. 4. Go to a database. 5. Execute the following query: SELECT c.name, c.rating FROM ( SELECT name, rating, id FROM centers ORDER BY rating ) AS c INNER JOIN locations as l ON c.id=l.center_id INNER JOIN cities as ct ON l.city_id=ct.id WHERE ct.name = 'Київ' GROUP BY c.name, rating ORDER BY rating; 6. Pay attention to the first few centers on the output. **Actual result** Results on UI and DB are totally different. UI: ![image](https://user-images.githubusercontent.com/82941067/206167905-670ed326-5d3b-4d4c-8c99-c12ee7689b01.png) DB: ![image](https://user-images.githubusercontent.com/82941067/206168190-70799837-afa4-4924-bec4-346a3393c85d.png) It seems like 'за алфавітом' sort on UI works as 'за рейтингом' because if we click 'за алфавітом' on UI and compare results on DB for 'за рейтингом' - they will be completely the same. **Expected result** Both UI and DB results should be the same. 
**User story and test case links** User story #274 [Test case](https://jira.softserve.academy/browse/TUA-449) **Labels to be added** ""Bug"", Priority (""pri: ""). ",1.0,"[Advanced search] Sort functionality for centers does not work properly - **Environment:** Windows 11, Google Chrome Version 108.0.5359.95 (Official Build) (64-bit). **Reproducible:** always. **Build found:** last commit [dca1e10](https://github.com/ita-social-projects/TeachUA/commit/dca1e10e149957cb6b3303eb9f303af49fbb6cd9) **Preconditions** 1. Go to the webpage: https://speak-ukrainian.org.ua/dev/ 2. Go to 'Гуртки' tab. 3. Click on 'Розширений пошук' button. **Steps to reproduce** 1. Click on 'Центр' radio button. 2. Click on 'за рейтингом' sort. 3. Pay attention to the centers on the first page. 4. Go to a database. 5. Execute the following query: SELECT c.name, c.rating FROM ( SELECT name, rating, id FROM centers ORDER BY rating ) AS c INNER JOIN locations as l ON c.id=l.center_id INNER JOIN cities as ct ON l.city_id=ct.id WHERE ct.name = 'Київ' GROUP BY c.name, rating ORDER BY rating; 6. Pay attention to the first few centers on the output. **Actual result** Results on UI and DB are totally different. UI: ![image](https://user-images.githubusercontent.com/82941067/206167905-670ed326-5d3b-4d4c-8c99-c12ee7689b01.png) DB: ![image](https://user-images.githubusercontent.com/82941067/206168190-70799837-afa4-4924-bec4-346a3393c85d.png) It seems like 'за алфавітом' sort on UI works as 'за рейтингом' because if we click 'за алфавітом' on UI and compare results on DB for 'за рейтингом' - they will be completely the same. **Expected result** Both UI and DB results should be the same. **User story and test case links** User story #274 [Test case](https://jira.softserve.academy/browse/TUA-449) **Labels to be added** ""Bug"", Priority (""pri: ""). ",1, sort functionality for centers does not work properly environment windows google chrome version official build bit reproducible always build found last commit preconditions go to the webpage go to гуртки tab click on розширений пошук button steps to reproduce click on центр radio button click on за рейтингом sort pay attention to the centers on the first page go to a database execute the following query select c name c rating from select name rating id from centers order by rating as c inner join locations as l on c id l center id inner join cities as ct on l city id ct id where ct name київ group by c name rating order by rating pay attention to the first few centers on the output actual result results on ui and db are totally different ui db it seems like за алфавітом sort on ui works as за рейтингом because if we click за алфавітом on ui and compare results on db for за рейтингом they will be completely the same expected result both ui and db results should be the same user story and test case links user story labels to be added bug priority pri ,1 754289,26380559173.0,IssuesEvent,2023-01-12 08:16:49,xwikisas/application-ldapuserimport,https://api.github.com/repos/xwikisas/application-ldapuserimport,opened,Groups synchronization can produce duplicate users when username is in lowercase,Priority: Major Type: Bug,"Have configured in ldap a group that contains a user with lowercase username, let's say `Group1` and `user1` Steps to reproduce: 1. Set `cn` as uid attribute 2. Map `Group1` to a local group, e.g. `XWiki.Group1` 3. Login with `user1` and observe that the profile was created 4. 
Go to groups administration and update the `Group1` (first click on `Update` to see the members, and then on `Confirm`) Expected result: The correct number of members is displayed and the synchronization is done correctly, if needed. Actual result: An error is displayed and after checking, a new `XWiki.User1_1` is created ![image](https://user-images.githubusercontent.com/22794181/212013689-33424053-eebf-45cb-aeb1-457788bf87da.png) ``` org.apache.velocity.exception.MethodInvocationException: Invocation of method 'updateGroup' in class com.xwiki.ldapuserimport.script.LDAPUserImportScriptService threw exception java.lang.NullPointerException at xwiki:LDAPUserImport.LDAPUserImportService[line 34, column 31] at org.apache.velocity.runtime.parser.node.ASTMethod.handleInvocationException(ASTMethod.java:306) at org.apache.velocity.runtime.parser.node.ASTMethod.execute(ASTMethod.java:233) at org.apache.velocity.runtime.parser.node.ASTReference.execute(ASTReference.java:369) at org.apache.velocity.runtime.parser.node.ASTReference.evaluate(ASTReference.java:674) at org.apache.velocity.runtime.parser.node.ASTExpression.evaluate(ASTExpression.java:63) at org.apache.velocity.runtime.parser.node.ASTIfStatement.render(ASTIfStatement.java:170) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.apache.velocity.runtime.parser.node.ASTElseIfStatement.render(ASTElseIfStatement.java:104) at org.apache.velocity.runtime.parser.node.ASTIfStatement.render(ASTIfStatement.java:191) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.xwiki.velocity.internal.directive.TryCatchDirective.render(TryCatchDirective.java:86) at org.apache.velocity.runtime.parser.node.ASTDirective.render(ASTDirective.java:301) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.apache.velocity.runtime.parser.node.ASTIfStatement.render(ASTIfStatement.java:172) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.apache.velocity.runtime.parser.node.ASTIfStatement.render(ASTIfStatement.java:172) at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:423) at org.apache.velocity.Template.merge(Template.java:358) at org.apache.velocity.Template.merge(Template.java:262) at org.xwiki.velocity.internal.DefaultVelocityEngine.evaluate(DefaultVelocityEngine.java:284) at com.xpn.xwiki.render.DefaultVelocityManager.evaluate(DefaultVelocityManager.java:321) at org.xwiki.rendering.internal.macro.velocity.VelocityMacro.evaluateString(VelocityMacro.java:131) at org.xwiki.rendering.internal.macro.velocity.VelocityMacro.evaluateString(VelocityMacro.java:52) at org.xwiki.rendering.macro.script.AbstractScriptMacro.evaluateBlock(AbstractScriptMacro.java:286) at org.xwiki.rendering.macro.script.AbstractScriptMacro.execute(AbstractScriptMacro.java:182) at org.xwiki.rendering.macro.script.AbstractScriptMacro.execute(AbstractScriptMacro.java:58) at org.xwiki.rendering.internal.transformation.macro.MacroTransformation.transform(MacroTransformation.java:297) at org.xwiki.rendering.internal.transformation.DefaultRenderingContext.transformInContext(DefaultRenderingContext.java:183) at org.xwiki.rendering.internal.transformation.DefaultTransformationManager.performTransformations(DefaultTransformationManager.java:103) at org.xwiki.display.internal.DocumentContentAsyncExecutor.executeInCurrentExecutionContext(DocumentContentAsyncExecutor.java:348) at
org.xwiki.display.internal.DocumentContentAsyncExecutor.execute(DocumentContentAsyncExecutor.java:221) at org.xwiki.display.internal.DocumentContentAsyncRenderer.execute(DocumentContentAsyncRenderer.java:107) at org.xwiki.rendering.async.internal.block.AbstractBlockAsyncRenderer.render(AbstractBlockAsyncRenderer.java:157) at org.xwiki.rendering.async.internal.block.AbstractBlockAsyncRenderer.render(AbstractBlockAsyncRenderer.java:54) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.syncRender(DefaultAsyncRendererExecutor.java:273) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.render(DefaultAsyncRendererExecutor.java:250) at org.xwiki.rendering.async.internal.block.DefaultBlockAsyncRendererExecutor.execute(DefaultBlockAsyncRendererExecutor.java:125) at org.xwiki.display.internal.DocumentContentDisplayer.display(DocumentContentDisplayer.java:67) at org.xwiki.display.internal.DocumentContentDisplayer.display(DocumentContentDisplayer.java:43) at org.xwiki.display.internal.DefaultDocumentDisplayer.display(DefaultDocumentDisplayer.java:96) at org.xwiki.display.internal.DefaultDocumentDisplayer.display(DefaultDocumentDisplayer.java:39) at org.xwiki.sheet.internal.SheetDocumentDisplayer.display(SheetDocumentDisplayer.java:123) at org.xwiki.sheet.internal.SheetDocumentDisplayer.display(SheetDocumentDisplayer.java:52) at org.xwiki.display.internal.ConfiguredDocumentDisplayer.display(ConfiguredDocumentDisplayer.java:68) at org.xwiki.display.internal.ConfiguredDocumentDisplayer.display(ConfiguredDocumentDisplayer.java:42) at com.xpn.xwiki.doc.XWikiDocument.display(XWikiDocument.java:1216) at com.xpn.xwiki.doc.XWikiDocument.getRenderedContent(XWikiDocument.java:1357) at com.xpn.xwiki.doc.XWikiDocument.displayDocument(XWikiDocument.java:1306) at com.xpn.xwiki.api.Document.displayDocument(Document.java:765) at jdk.internal.reflect.GeneratedMethodAccessor498.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.doInvoke(UberspectImpl.java:565) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.invoke(UberspectImpl.java:548) at org.apache.velocity.runtime.parser.node.ASTMethod.execute(ASTMethod.java:219) at org.apache.velocity.runtime.parser.node.ASTReference.execute(ASTReference.java:369) at org.apache.velocity.runtime.parser.node.ASTReference.value(ASTReference.java:701) at org.apache.velocity.runtime.parser.node.ASTExpression.value(ASTExpression.java:72) at org.apache.velocity.runtime.parser.node.ASTSetDirective.render(ASTSetDirective.java:240) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:423) at org.apache.velocity.runtime.parser.node.ASTIfStatement.render(ASTIfStatement.java:191) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.apache.velocity.runtime.parser.node.ASTIfStatement.render(ASTIfStatement.java:172) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.xwiki.velocity.internal.directive.TryCatchDirective.render(TryCatchDirective.java:86) at org.apache.velocity.runtime.parser.node.ASTDirective.render(ASTDirective.java:301) at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:423) at 
org.apache.velocity.Template.merge(Template.java:358) at org.apache.velocity.Template.merge(Template.java:262) at org.xwiki.velocity.internal.DefaultVelocityEngine.evaluate(DefaultVelocityEngine.java:284) at com.xpn.xwiki.render.DefaultVelocityManager.evaluate(DefaultVelocityManager.java:321) at com.xpn.xwiki.internal.template.VelocityTemplateEvaluator.evaluateContent(VelocityTemplateEvaluator.java:95) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.evaluateContent(TemplateAsyncRenderer.java:217) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.renderVelocity(TemplateAsyncRenderer.java:180) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.render(TemplateAsyncRenderer.java:137) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.render(TemplateAsyncRenderer.java:53) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.lambda$syncRender$0(DefaultAsyncRendererExecutor.java:267) at com.xpn.xwiki.internal.security.authorization.DefaultAuthorExecutor.call(DefaultAuthorExecutor.java:98) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.syncRender(DefaultAsyncRendererExecutor.java:267) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.render(DefaultAsyncRendererExecutor.java:250) at org.xwiki.rendering.async.internal.block.DefaultBlockAsyncRendererExecutor.render(DefaultBlockAsyncRendererExecutor.java:154) at com.xpn.xwiki.internal.template.InternalTemplateManager.render(InternalTemplateManager.java:772) at com.xpn.xwiki.internal.template.InternalTemplateManager.renderFromSkin(InternalTemplateManager.java:745) at com.xpn.xwiki.internal.template.InternalTemplateManager.renderFromSkin(InternalTemplateManager.java:725) at com.xpn.xwiki.internal.template.InternalTemplateManager.render(InternalTemplateManager.java:711) at com.xpn.xwiki.internal.template.DefaultTemplateManager.render(DefaultTemplateManager.java:78) at com.xpn.xwiki.XWiki.evaluateTemplate(XWiki.java:2516) at com.xpn.xwiki.XWiki.parseTemplate(XWiki.java:2494) at com.xpn.xwiki.api.XWiki.parseTemplate(XWiki.java:983) at jdk.internal.reflect.GeneratedMethodAccessor213.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.doInvoke(UberspectImpl.java:565) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.invoke(UberspectImpl.java:548) at org.apache.velocity.runtime.parser.node.ASTMethod.execute(ASTMethod.java:219) at org.apache.velocity.runtime.parser.node.ASTReference.execute(ASTReference.java:369) at org.apache.velocity.runtime.parser.node.ASTReference.render(ASTReference.java:490) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.apache.velocity.runtime.directive.VelocimacroProxy.render(VelocimacroProxy.java:215) at org.apache.velocity.runtime.directive.RuntimeMacro.render(RuntimeMacro.java:328) at org.apache.velocity.runtime.directive.RuntimeMacro.render(RuntimeMacro.java:258) at org.apache.velocity.runtime.parser.node.ASTDirective.render(ASTDirective.java:301) at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:423) at org.apache.velocity.Template.merge(Template.java:358) at org.apache.velocity.Template.merge(Template.java:262) at org.xwiki.velocity.internal.DefaultVelocityEngine.evaluate(DefaultVelocityEngine.java:284) at 
com.xpn.xwiki.render.DefaultVelocityManager.evaluate(DefaultVelocityManager.java:321) at com.xpn.xwiki.internal.template.VelocityTemplateEvaluator.evaluateContent(VelocityTemplateEvaluator.java:95) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.evaluateContent(TemplateAsyncRenderer.java:217) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.renderVelocity(TemplateAsyncRenderer.java:180) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.render(TemplateAsyncRenderer.java:137) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.render(TemplateAsyncRenderer.java:53) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.lambda$syncRender$0(DefaultAsyncRendererExecutor.java:267) at com.xpn.xwiki.internal.security.authorization.DefaultAuthorExecutor.call(DefaultAuthorExecutor.java:98) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.syncRender(DefaultAsyncRendererExecutor.java:267) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.render(DefaultAsyncRendererExecutor.java:250) at org.xwiki.rendering.async.internal.block.DefaultBlockAsyncRendererExecutor.render(DefaultBlockAsyncRendererExecutor.java:154) at com.xpn.xwiki.internal.template.InternalTemplateManager.render(InternalTemplateManager.java:772) at com.xpn.xwiki.internal.template.InternalTemplateManager.renderFromSkin(InternalTemplateManager.java:745) at com.xpn.xwiki.internal.template.InternalTemplateManager.renderFromSkin(InternalTemplateManager.java:725) at com.xpn.xwiki.internal.template.InternalTemplateManager.render(InternalTemplateManager.java:711) at com.xpn.xwiki.internal.template.DefaultTemplateManager.render(DefaultTemplateManager.java:78) at com.xpn.xwiki.XWiki.evaluateTemplate(XWiki.java:2516) at com.xpn.xwiki.XWiki.parseTemplate(XWiki.java:2494) at com.xpn.xwiki.api.XWiki.parseTemplate(XWiki.java:983) at jdk.internal.reflect.GeneratedMethodAccessor213.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.doInvoke(UberspectImpl.java:565) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.invoke(UberspectImpl.java:548) at org.apache.velocity.runtime.parser.node.ASTMethod.execute(ASTMethod.java:219) at org.apache.velocity.runtime.parser.node.ASTReference.execute(ASTReference.java:369) at org.apache.velocity.runtime.parser.node.ASTReference.render(ASTReference.java:490) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.apache.velocity.runtime.directive.VelocimacroProxy.render(VelocimacroProxy.java:215) at org.apache.velocity.runtime.directive.RuntimeMacro.render(RuntimeMacro.java:328) at org.apache.velocity.runtime.directive.RuntimeMacro.render(RuntimeMacro.java:258) at org.apache.velocity.runtime.parser.node.ASTDirective.render(ASTDirective.java:301) at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:423) at org.apache.velocity.Template.merge(Template.java:358) at org.apache.velocity.Template.merge(Template.java:262) at org.xwiki.velocity.internal.DefaultVelocityEngine.evaluate(DefaultVelocityEngine.java:284) at com.xpn.xwiki.render.DefaultVelocityManager.evaluate(DefaultVelocityManager.java:321) at com.xpn.xwiki.internal.template.VelocityTemplateEvaluator.evaluateContent(VelocityTemplateEvaluator.java:95) at 
com.xpn.xwiki.internal.template.TemplateAsyncRenderer.evaluateContent(TemplateAsyncRenderer.java:217) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.renderVelocity(TemplateAsyncRenderer.java:180) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.render(TemplateAsyncRenderer.java:137) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.render(TemplateAsyncRenderer.java:53) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.lambda$syncRender$0(DefaultAsyncRendererExecutor.java:267) at com.xpn.xwiki.internal.security.authorization.DefaultAuthorExecutor.call(DefaultAuthorExecutor.java:98) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.syncRender(DefaultAsyncRendererExecutor.java:267) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.render(DefaultAsyncRendererExecutor.java:250) at org.xwiki.rendering.async.internal.block.DefaultBlockAsyncRendererExecutor.render(DefaultBlockAsyncRendererExecutor.java:154) at com.xpn.xwiki.internal.template.InternalTemplateManager.render(InternalTemplateManager.java:772) at com.xpn.xwiki.internal.template.InternalTemplateManager.renderFromSkin(InternalTemplateManager.java:745) at com.xpn.xwiki.internal.template.InternalTemplateManager.renderFromSkin(InternalTemplateManager.java:725) at com.xpn.xwiki.internal.template.InternalTemplateManager.render(InternalTemplateManager.java:711) at com.xpn.xwiki.internal.template.DefaultTemplateManager.render(DefaultTemplateManager.java:78) at com.xpn.xwiki.XWiki.evaluateTemplate(XWiki.java:2516) at com.xpn.xwiki.XWiki.parseTemplate(XWiki.java:2494) at com.xpn.xwiki.api.XWiki.parseTemplate(XWiki.java:983) at jdk.internal.reflect.GeneratedMethodAccessor213.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.doInvoke(UberspectImpl.java:565) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.invoke(UberspectImpl.java:548) at org.apache.velocity.runtime.parser.node.ASTMethod.execute(ASTMethod.java:219) at org.apache.velocity.runtime.parser.node.ASTReference.execute(ASTReference.java:369) at org.apache.velocity.runtime.parser.node.ASTReference.render(ASTReference.java:490) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.apache.velocity.runtime.directive.VelocimacroProxy.render(VelocimacroProxy.java:215) at org.apache.velocity.runtime.directive.RuntimeMacro.render(RuntimeMacro.java:328) at org.apache.velocity.runtime.directive.RuntimeMacro.render(RuntimeMacro.java:258) at org.apache.velocity.runtime.parser.node.ASTDirective.render(ASTDirective.java:301) at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:423) at org.apache.velocity.Template.merge(Template.java:358) at org.apache.velocity.Template.merge(Template.java:262) at org.xwiki.velocity.internal.DefaultVelocityEngine.evaluate(DefaultVelocityEngine.java:284) at com.xpn.xwiki.render.DefaultVelocityManager.evaluate(DefaultVelocityManager.java:321) at com.xpn.xwiki.internal.template.VelocityTemplateEvaluator.evaluateContent(VelocityTemplateEvaluator.java:95) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.evaluateContent(TemplateAsyncRenderer.java:217) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.renderVelocity(TemplateAsyncRenderer.java:180) at 
com.xpn.xwiki.internal.template.TemplateAsyncRenderer.render(TemplateAsyncRenderer.java:137) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.render(TemplateAsyncRenderer.java:53) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.lambda$syncRender$0(DefaultAsyncRendererExecutor.java:267) at com.xpn.xwiki.internal.security.authorization.DefaultAuthorExecutor.call(DefaultAuthorExecutor.java:98) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.syncRender(DefaultAsyncRendererExecutor.java:267) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.render(DefaultAsyncRendererExecutor.java:250) at org.xwiki.rendering.async.internal.block.DefaultBlockAsyncRendererExecutor.render(DefaultBlockAsyncRendererExecutor.java:154) at com.xpn.xwiki.internal.template.InternalTemplateManager.render(InternalTemplateManager.java:772) at com.xpn.xwiki.internal.template.InternalTemplateManager.renderFromSkin(InternalTemplateManager.java:745) at com.xpn.xwiki.internal.template.InternalTemplateManager.renderFromSkin(InternalTemplateManager.java:725) at com.xpn.xwiki.internal.template.InternalTemplateManager.render(InternalTemplateManager.java:711) at com.xpn.xwiki.internal.template.DefaultTemplateManager.render(DefaultTemplateManager.java:78) at com.xpn.xwiki.XWiki.evaluateTemplate(XWiki.java:2516) at com.xpn.xwiki.web.Utils.parseTemplate(Utils.java:179) at com.xpn.xwiki.web.XWikiAction.execute(XWikiAction.java:572) at com.xpn.xwiki.web.XWikiAction.execute(XWikiAction.java:250) at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:425) at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:228) at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1913) at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:462) at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder$NotAsyncServlet.service(ServletHolder.java:1411) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:763) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1631) at com.xpn.xwiki.web.ActionFilter.doFilter(ActionFilter.java:122) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1618) at org.xwiki.wysiwyg.filter.ConversionFilter.doFilter(ConversionFilter.java:109) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1618) at org.xwiki.container.servlet.filters.internal.SetHTTPHeaderFilter.doFilter(SetHTTPHeaderFilter.java:63) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1618) at org.xwiki.container.servlet.filters.internal.SavedRequestRestorerFilter.doFilter(SavedRequestRestorerFilter.java:208) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1618) at org.xwiki.container.servlet.filters.internal.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:111) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1618) at org.xwiki.resource.servlet.RoutingFilter.doFilter(RoutingFilter.java:132) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:549) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:602) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1369) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:489) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1284) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.Server.handle(Server.java:501) at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383) at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:556) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806) at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: java.lang.NullPointerException at com.novell.ldap.asn1.ASN1OctetString.(ASN1OctetString.java:73) at com.novell.ldap.rfc2251.RfcLDAPString.(RfcLDAPString.java:32) at com.novell.ldap.rfc2251.RfcLDAPDN.(RfcLDAPDN.java:35) at com.novell.ldap.LDAPSearchRequest.(LDAPSearchRequest.java:182) at com.novell.ldap.LDAPConnection.search(LDAPConnection.java:3516) at com.novell.ldap.LDAPConnection.search(LDAPConnection.java:3412) at org.xwiki.contrib.ldap.PagedLDAPSearchResults.search(PagedLDAPSearchResults.java:111) at org.xwiki.contrib.ldap.PagedLDAPSearchResults.(PagedLDAPSearchResults.java:94) at org.xwiki.contrib.ldap.XWikiLDAPConnection.searchPaginated(XWikiLDAPConnection.java:404) at org.xwiki.contrib.ldap.XWikiLDAPConnection.searchLDAP(XWikiLDAPConnection.java:335) at org.xwiki.contrib.ldap.XWikiLDAPUtils.syncUser(XWikiLDAPUtils.java:1108) at com.xwiki.ldapuserimport.internal.DefaultLDAPUserImportManager.importUsers(DefaultLDAPUserImportManager.java:439) at 
com.xwiki.ldapuserimport.internal.DefaultLDAPUserImportManager.updateGroup(DefaultLDAPUserImportManager.java:665) at com.xwiki.ldapuserimport.script.LDAPUserImportScriptService.updateGroup(LDAPUserImportScriptService.java:137) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.doInvoke(UberspectImpl.java:565) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.invoke(UberspectImpl.java:548) at org.apache.velocity.runtime.parser.node.ASTMethod.execute(ASTMethod.java:219) ... 251 more ``` This does not reproduce if `sAMAccountName` is set as the uid attribute",1.0,"Groups synchronization can produce duplicate users when username is in lowercase - A group that contains a user with a lowercase username is configured in LDAP, say `Group1` and `user1`. Steps to reproduce: 1. Set `cn` as the uid attribute 2. Map `Group1` to a local group, e.g. `XWiki.Group1` 3. Log in with `user1` and observe that the profile was created 4. Go to the groups administration and update `Group1` (first click on `Update` to see the members, and then on `Confirm`) Expected result: The correct number of members is displayed and the synchronization is done correctly, if needed. 
Actual result: An error is displayed and, after checking, a new `XWiki.User1_1` user is created ![image](https://user-images.githubusercontent.com/22794181/212013689-33424053-eebf-45cb-aeb1-457788bf87da.png) ``` org.apache.velocity.exception.MethodInvocationException: Invocation of method 'updateGroup' in class com.xwiki.ldapuserimport.script.LDAPUserImportScriptService threw exception java.lang.NullPointerException at xwiki:LDAPUserImport.LDAPUserImportService[line 34, column 31] at org.apache.velocity.runtime.parser.node.ASTMethod.handleInvocationException(ASTMethod.java:306) at org.apache.velocity.runtime.parser.node.ASTMethod.execute(ASTMethod.java:233) at org.apache.velocity.runtime.parser.node.ASTReference.execute(ASTReference.java:369) at org.apache.velocity.runtime.parser.node.ASTReference.evaluate(ASTReference.java:674) at org.apache.velocity.runtime.parser.node.ASTExpression.evaluate(ASTExpression.java:63) at org.apache.velocity.runtime.parser.node.ASTIfStatement.render(ASTIfStatement.java:170) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.apache.velocity.runtime.parser.node.ASTElseIfStatement.render(ASTElseIfStatement.java:104) at org.apache.velocity.runtime.parser.node.ASTIfStatement.render(ASTIfStatement.java:191) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.xwiki.velocity.internal.directive.TryCatchDirective.render(TryCatchDirective.java:86) at org.apache.velocity.runtime.parser.node.ASTDirective.render(ASTDirective.java:301) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.apache.velocity.runtime.parser.node.ASTIfStatement.render(ASTIfStatement.java:172) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.apache.velocity.runtime.parser.node.ASTIfStatement.render(ASTIfStatement.java:172) at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:423) at org.apache.velocity.Template.merge(Template.java:358) at org.apache.velocity.Template.merge(Template.java:262) at org.xwiki.velocity.internal.DefaultVelocityEngine.evaluate(DefaultVelocityEngine.java:284) at com.xpn.xwiki.render.DefaultVelocityManager.evaluate(DefaultVelocityManager.java:321) at org.xwiki.rendering.internal.macro.velocity.VelocityMacro.evaluateString(VelocityMacro.java:131) at org.xwiki.rendering.internal.macro.velocity.VelocityMacro.evaluateString(VelocityMacro.java:52) at org.xwiki.rendering.macro.script.AbstractScriptMacro.evaluateBlock(AbstractScriptMacro.java:286) at org.xwiki.rendering.macro.script.AbstractScriptMacro.execute(AbstractScriptMacro.java:182) at org.xwiki.rendering.macro.script.AbstractScriptMacro.execute(AbstractScriptMacro.java:58) at org.xwiki.rendering.internal.transformation.macro.MacroTransformation.transform(MacroTransformation.java:297) at org.xwiki.rendering.internal.transformation.DefaultRenderingContext.transformInContext(DefaultRenderingContext.java:183) at org.xwiki.rendering.internal.transformation.DefaultTransformationManager.performTransformations(DefaultTransformationManager.java:103) at org.xwiki.display.internal.DocumentContentAsyncExecutor.executeInCurrentExecutionContext(DocumentContentAsyncExecutor.java:348) at org.xwiki.display.internal.DocumentContentAsyncExecutor.execute(DocumentContentAsyncExecutor.java:221) at org.xwiki.display.internal.DocumentContentAsyncRenderer.execute(DocumentContentAsyncRenderer.java:107) at 
org.xwiki.rendering.async.internal.block.AbstractBlockAsyncRenderer.render(AbstractBlockAsyncRenderer.java:157) at org.xwiki.rendering.async.internal.block.AbstractBlockAsyncRenderer.render(AbstractBlockAsyncRenderer.java:54) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.syncRender(DefaultAsyncRendererExecutor.java:273) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.render(DefaultAsyncRendererExecutor.java:250) at org.xwiki.rendering.async.internal.block.DefaultBlockAsyncRendererExecutor.execute(DefaultBlockAsyncRendererExecutor.java:125) at org.xwiki.display.internal.DocumentContentDisplayer.display(DocumentContentDisplayer.java:67) at org.xwiki.display.internal.DocumentContentDisplayer.display(DocumentContentDisplayer.java:43) at org.xwiki.display.internal.DefaultDocumentDisplayer.display(DefaultDocumentDisplayer.java:96) at org.xwiki.display.internal.DefaultDocumentDisplayer.display(DefaultDocumentDisplayer.java:39) at org.xwiki.sheet.internal.SheetDocumentDisplayer.display(SheetDocumentDisplayer.java:123) at org.xwiki.sheet.internal.SheetDocumentDisplayer.display(SheetDocumentDisplayer.java:52) at org.xwiki.display.internal.ConfiguredDocumentDisplayer.display(ConfiguredDocumentDisplayer.java:68) at org.xwiki.display.internal.ConfiguredDocumentDisplayer.display(ConfiguredDocumentDisplayer.java:42) at com.xpn.xwiki.doc.XWikiDocument.display(XWikiDocument.java:1216) at com.xpn.xwiki.doc.XWikiDocument.getRenderedContent(XWikiDocument.java:1357) at com.xpn.xwiki.doc.XWikiDocument.displayDocument(XWikiDocument.java:1306) at com.xpn.xwiki.api.Document.displayDocument(Document.java:765) at jdk.internal.reflect.GeneratedMethodAccessor498.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.doInvoke(UberspectImpl.java:565) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.invoke(UberspectImpl.java:548) at org.apache.velocity.runtime.parser.node.ASTMethod.execute(ASTMethod.java:219) at org.apache.velocity.runtime.parser.node.ASTReference.execute(ASTReference.java:369) at org.apache.velocity.runtime.parser.node.ASTReference.value(ASTReference.java:701) at org.apache.velocity.runtime.parser.node.ASTExpression.value(ASTExpression.java:72) at org.apache.velocity.runtime.parser.node.ASTSetDirective.render(ASTSetDirective.java:240) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:423) at org.apache.velocity.runtime.parser.node.ASTIfStatement.render(ASTIfStatement.java:191) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.apache.velocity.runtime.parser.node.ASTIfStatement.render(ASTIfStatement.java:172) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.xwiki.velocity.internal.directive.TryCatchDirective.render(TryCatchDirective.java:86) at org.apache.velocity.runtime.parser.node.ASTDirective.render(ASTDirective.java:301) at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:423) at org.apache.velocity.Template.merge(Template.java:358) at org.apache.velocity.Template.merge(Template.java:262) at org.xwiki.velocity.internal.DefaultVelocityEngine.evaluate(DefaultVelocityEngine.java:284) at 
com.xpn.xwiki.render.DefaultVelocityManager.evaluate(DefaultVelocityManager.java:321) at com.xpn.xwiki.internal.template.VelocityTemplateEvaluator.evaluateContent(VelocityTemplateEvaluator.java:95) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.evaluateContent(TemplateAsyncRenderer.java:217) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.renderVelocity(TemplateAsyncRenderer.java:180) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.render(TemplateAsyncRenderer.java:137) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.render(TemplateAsyncRenderer.java:53) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.lambda$syncRender$0(DefaultAsyncRendererExecutor.java:267) at com.xpn.xwiki.internal.security.authorization.DefaultAuthorExecutor.call(DefaultAuthorExecutor.java:98) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.syncRender(DefaultAsyncRendererExecutor.java:267) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.render(DefaultAsyncRendererExecutor.java:250) at org.xwiki.rendering.async.internal.block.DefaultBlockAsyncRendererExecutor.render(DefaultBlockAsyncRendererExecutor.java:154) at com.xpn.xwiki.internal.template.InternalTemplateManager.render(InternalTemplateManager.java:772) at com.xpn.xwiki.internal.template.InternalTemplateManager.renderFromSkin(InternalTemplateManager.java:745) at com.xpn.xwiki.internal.template.InternalTemplateManager.renderFromSkin(InternalTemplateManager.java:725) at com.xpn.xwiki.internal.template.InternalTemplateManager.render(InternalTemplateManager.java:711) at com.xpn.xwiki.internal.template.DefaultTemplateManager.render(DefaultTemplateManager.java:78) at com.xpn.xwiki.XWiki.evaluateTemplate(XWiki.java:2516) at com.xpn.xwiki.XWiki.parseTemplate(XWiki.java:2494) at com.xpn.xwiki.api.XWiki.parseTemplate(XWiki.java:983) at jdk.internal.reflect.GeneratedMethodAccessor213.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.doInvoke(UberspectImpl.java:565) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.invoke(UberspectImpl.java:548) at org.apache.velocity.runtime.parser.node.ASTMethod.execute(ASTMethod.java:219) at org.apache.velocity.runtime.parser.node.ASTReference.execute(ASTReference.java:369) at org.apache.velocity.runtime.parser.node.ASTReference.render(ASTReference.java:490) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.apache.velocity.runtime.directive.VelocimacroProxy.render(VelocimacroProxy.java:215) at org.apache.velocity.runtime.directive.RuntimeMacro.render(RuntimeMacro.java:328) at org.apache.velocity.runtime.directive.RuntimeMacro.render(RuntimeMacro.java:258) at org.apache.velocity.runtime.parser.node.ASTDirective.render(ASTDirective.java:301) at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:423) at org.apache.velocity.Template.merge(Template.java:358) at org.apache.velocity.Template.merge(Template.java:262) at org.xwiki.velocity.internal.DefaultVelocityEngine.evaluate(DefaultVelocityEngine.java:284) at com.xpn.xwiki.render.DefaultVelocityManager.evaluate(DefaultVelocityManager.java:321) at com.xpn.xwiki.internal.template.VelocityTemplateEvaluator.evaluateContent(VelocityTemplateEvaluator.java:95) at 
com.xpn.xwiki.internal.template.TemplateAsyncRenderer.evaluateContent(TemplateAsyncRenderer.java:217) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.renderVelocity(TemplateAsyncRenderer.java:180) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.render(TemplateAsyncRenderer.java:137) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.render(TemplateAsyncRenderer.java:53) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.lambda$syncRender$0(DefaultAsyncRendererExecutor.java:267) at com.xpn.xwiki.internal.security.authorization.DefaultAuthorExecutor.call(DefaultAuthorExecutor.java:98) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.syncRender(DefaultAsyncRendererExecutor.java:267) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.render(DefaultAsyncRendererExecutor.java:250) at org.xwiki.rendering.async.internal.block.DefaultBlockAsyncRendererExecutor.render(DefaultBlockAsyncRendererExecutor.java:154) at com.xpn.xwiki.internal.template.InternalTemplateManager.render(InternalTemplateManager.java:772) at com.xpn.xwiki.internal.template.InternalTemplateManager.renderFromSkin(InternalTemplateManager.java:745) at com.xpn.xwiki.internal.template.InternalTemplateManager.renderFromSkin(InternalTemplateManager.java:725) at com.xpn.xwiki.internal.template.InternalTemplateManager.render(InternalTemplateManager.java:711) at com.xpn.xwiki.internal.template.DefaultTemplateManager.render(DefaultTemplateManager.java:78) at com.xpn.xwiki.XWiki.evaluateTemplate(XWiki.java:2516) at com.xpn.xwiki.XWiki.parseTemplate(XWiki.java:2494) at com.xpn.xwiki.api.XWiki.parseTemplate(XWiki.java:983) at jdk.internal.reflect.GeneratedMethodAccessor213.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.doInvoke(UberspectImpl.java:565) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.invoke(UberspectImpl.java:548) at org.apache.velocity.runtime.parser.node.ASTMethod.execute(ASTMethod.java:219) at org.apache.velocity.runtime.parser.node.ASTReference.execute(ASTReference.java:369) at org.apache.velocity.runtime.parser.node.ASTReference.render(ASTReference.java:490) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.apache.velocity.runtime.directive.VelocimacroProxy.render(VelocimacroProxy.java:215) at org.apache.velocity.runtime.directive.RuntimeMacro.render(RuntimeMacro.java:328) at org.apache.velocity.runtime.directive.RuntimeMacro.render(RuntimeMacro.java:258) at org.apache.velocity.runtime.parser.node.ASTDirective.render(ASTDirective.java:301) at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:423) at org.apache.velocity.Template.merge(Template.java:358) at org.apache.velocity.Template.merge(Template.java:262) at org.xwiki.velocity.internal.DefaultVelocityEngine.evaluate(DefaultVelocityEngine.java:284) at com.xpn.xwiki.render.DefaultVelocityManager.evaluate(DefaultVelocityManager.java:321) at com.xpn.xwiki.internal.template.VelocityTemplateEvaluator.evaluateContent(VelocityTemplateEvaluator.java:95) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.evaluateContent(TemplateAsyncRenderer.java:217) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.renderVelocity(TemplateAsyncRenderer.java:180) at 
com.xpn.xwiki.internal.template.TemplateAsyncRenderer.render(TemplateAsyncRenderer.java:137) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.render(TemplateAsyncRenderer.java:53) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.lambda$syncRender$0(DefaultAsyncRendererExecutor.java:267) at com.xpn.xwiki.internal.security.authorization.DefaultAuthorExecutor.call(DefaultAuthorExecutor.java:98) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.syncRender(DefaultAsyncRendererExecutor.java:267) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.render(DefaultAsyncRendererExecutor.java:250) at org.xwiki.rendering.async.internal.block.DefaultBlockAsyncRendererExecutor.render(DefaultBlockAsyncRendererExecutor.java:154) at com.xpn.xwiki.internal.template.InternalTemplateManager.render(InternalTemplateManager.java:772) at com.xpn.xwiki.internal.template.InternalTemplateManager.renderFromSkin(InternalTemplateManager.java:745) at com.xpn.xwiki.internal.template.InternalTemplateManager.renderFromSkin(InternalTemplateManager.java:725) at com.xpn.xwiki.internal.template.InternalTemplateManager.render(InternalTemplateManager.java:711) at com.xpn.xwiki.internal.template.DefaultTemplateManager.render(DefaultTemplateManager.java:78) at com.xpn.xwiki.XWiki.evaluateTemplate(XWiki.java:2516) at com.xpn.xwiki.XWiki.parseTemplate(XWiki.java:2494) at com.xpn.xwiki.api.XWiki.parseTemplate(XWiki.java:983) at jdk.internal.reflect.GeneratedMethodAccessor213.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.doInvoke(UberspectImpl.java:565) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.invoke(UberspectImpl.java:548) at org.apache.velocity.runtime.parser.node.ASTMethod.execute(ASTMethod.java:219) at org.apache.velocity.runtime.parser.node.ASTReference.execute(ASTReference.java:369) at org.apache.velocity.runtime.parser.node.ASTReference.render(ASTReference.java:490) at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:144) at org.apache.velocity.runtime.directive.VelocimacroProxy.render(VelocimacroProxy.java:215) at org.apache.velocity.runtime.directive.RuntimeMacro.render(RuntimeMacro.java:328) at org.apache.velocity.runtime.directive.RuntimeMacro.render(RuntimeMacro.java:258) at org.apache.velocity.runtime.parser.node.ASTDirective.render(ASTDirective.java:301) at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:423) at org.apache.velocity.Template.merge(Template.java:358) at org.apache.velocity.Template.merge(Template.java:262) at org.xwiki.velocity.internal.DefaultVelocityEngine.evaluate(DefaultVelocityEngine.java:284) at com.xpn.xwiki.render.DefaultVelocityManager.evaluate(DefaultVelocityManager.java:321) at com.xpn.xwiki.internal.template.VelocityTemplateEvaluator.evaluateContent(VelocityTemplateEvaluator.java:95) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.evaluateContent(TemplateAsyncRenderer.java:217) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.renderVelocity(TemplateAsyncRenderer.java:180) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.render(TemplateAsyncRenderer.java:137) at com.xpn.xwiki.internal.template.TemplateAsyncRenderer.render(TemplateAsyncRenderer.java:53) at 
org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.lambda$syncRender$0(DefaultAsyncRendererExecutor.java:267) at com.xpn.xwiki.internal.security.authorization.DefaultAuthorExecutor.call(DefaultAuthorExecutor.java:98) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.syncRender(DefaultAsyncRendererExecutor.java:267) at org.xwiki.rendering.async.internal.DefaultAsyncRendererExecutor.render(DefaultAsyncRendererExecutor.java:250) at org.xwiki.rendering.async.internal.block.DefaultBlockAsyncRendererExecutor.render(DefaultBlockAsyncRendererExecutor.java:154) at com.xpn.xwiki.internal.template.InternalTemplateManager.render(InternalTemplateManager.java:772) at com.xpn.xwiki.internal.template.InternalTemplateManager.renderFromSkin(InternalTemplateManager.java:745) at com.xpn.xwiki.internal.template.InternalTemplateManager.renderFromSkin(InternalTemplateManager.java:725) at com.xpn.xwiki.internal.template.InternalTemplateManager.render(InternalTemplateManager.java:711) at com.xpn.xwiki.internal.template.DefaultTemplateManager.render(DefaultTemplateManager.java:78) at com.xpn.xwiki.XWiki.evaluateTemplate(XWiki.java:2516) at com.xpn.xwiki.web.Utils.parseTemplate(Utils.java:179) at com.xpn.xwiki.web.XWikiAction.execute(XWikiAction.java:572) at com.xpn.xwiki.web.XWikiAction.execute(XWikiAction.java:250) at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:425) at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:228) at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1913) at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:462) at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder$NotAsyncServlet.service(ServletHolder.java:1411) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:763) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1631) at com.xpn.xwiki.web.ActionFilter.doFilter(ActionFilter.java:122) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1618) at org.xwiki.wysiwyg.filter.ConversionFilter.doFilter(ConversionFilter.java:109) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1618) at org.xwiki.container.servlet.filters.internal.SetHTTPHeaderFilter.doFilter(SetHTTPHeaderFilter.java:63) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1618) at org.xwiki.container.servlet.filters.internal.SavedRequestRestorerFilter.doFilter(SavedRequestRestorerFilter.java:208) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1618) at org.xwiki.container.servlet.filters.internal.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:111) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1618) at org.xwiki.resource.servlet.RoutingFilter.doFilter(RoutingFilter.java:132) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:549) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:602) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1369) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:489) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1284) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.Server.handle(Server.java:501) at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383) at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:556) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806) at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: java.lang.NullPointerException at com.novell.ldap.asn1.ASN1OctetString.<init>(ASN1OctetString.java:73) at com.novell.ldap.rfc2251.RfcLDAPString.<init>(RfcLDAPString.java:32) at com.novell.ldap.rfc2251.RfcLDAPDN.<init>(RfcLDAPDN.java:35) at com.novell.ldap.LDAPSearchRequest.<init>(LDAPSearchRequest.java:182) at com.novell.ldap.LDAPConnection.search(LDAPConnection.java:3516) at com.novell.ldap.LDAPConnection.search(LDAPConnection.java:3412) at org.xwiki.contrib.ldap.PagedLDAPSearchResults.search(PagedLDAPSearchResults.java:111) at org.xwiki.contrib.ldap.PagedLDAPSearchResults.<init>(PagedLDAPSearchResults.java:94) at org.xwiki.contrib.ldap.XWikiLDAPConnection.searchPaginated(XWikiLDAPConnection.java:404) at org.xwiki.contrib.ldap.XWikiLDAPConnection.searchLDAP(XWikiLDAPConnection.java:335) at org.xwiki.contrib.ldap.XWikiLDAPUtils.syncUser(XWikiLDAPUtils.java:1108) at com.xwiki.ldapuserimport.internal.DefaultLDAPUserImportManager.importUsers(DefaultLDAPUserImportManager.java:439) at com.xwiki.ldapuserimport.internal.DefaultLDAPUserImportManager.updateGroup(DefaultLDAPUserImportManager.java:665) at 
com.xwiki.ldapuserimport.script.LDAPUserImportScriptService.updateGroup(LDAPUserImportScriptService.java:137) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.doInvoke(UberspectImpl.java:565) at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.invoke(UberspectImpl.java:548) at org.apache.velocity.runtime.parser.node.ASTMethod.execute(ASTMethod.java:219) ... 251 more ``` This does not reproduce if `sAMAccountName` is set as the uid attribute",0,,0 230392,25465639089.0,IssuesEvent,2022-11-25 03:47:23,phytomichael/KSA1,https://api.github.com/repos/phytomichael/KSA1,opened,CVE-2022-45868 (High) detected in h2-1.3.162.jar,security vulnerability,"## CVE-2022-45868 - High Severity Vulnerability
Vulnerable Library - h2-1.3.162.jar

H2 Database Engine

Library home page: http://www.h2database.com

Path to dependency file: /KSA1/ksa/ksa/ksa-debug/pom.xml

Path to vulnerable library: /2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar

Dependency Hierarchy: - :x: **h2-1.3.162.jar** (Vulnerable Library)

Vulnerability Details

The web-based admin console in H2 Database Engine through 2.1.214 can be started via the CLI with the argument -webAdminPassword, which allows the user to specify the password in cleartext for the web admin console. Consequently, a local user (or an attacker that has obtained local access through some means) would be able to discover the password by listing processes and their arguments. NOTE: the vendor states ""This is not a vulnerability of H2 Console ... Passwords should never be passed on the command line and every qualified DBA or system administrator is expected to know that.""
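To make the exposure concrete, here is a minimal sketch (illustrative only, not part of the advisory; the class name is hypothetical) of how any local user could recover such a password simply by reading process command lines with the standard `ProcessHandle` API available since Java 9:

```java
// Hypothetical demo: scan visible processes for the H2 CLI flag.
// A local user can typically read the command lines of processes they
// can see, which is exactly why a cleartext -webAdminPassword leaks.
public class FindH2Password {
    public static void main(String[] args) {
        ProcessHandle.allProcesses()                              // every process visible to this user
                .map(ph -> ph.info().commandLine().orElse(""))    // full command line, when readable
                .filter(cmd -> cmd.contains("-webAdminPassword")) // the flag described in the CVE
                .forEach(System.out::println);                    // password appears in cleartext
    }
}
```

The same information is available from `ps -ef` on Unix-like systems, which is why the vendor's note argues the password should never be placed on the command line at all.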

Publish Date: 2022-11-23

URL: CVE-2022-45868

CVSS 3 Score Details (8.4)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High
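The 8.4 figure can be reproduced from the CVSS v3.1 base-score formulas; a worked computation with the standard weights for the metrics above (AV:L = 0.55, AC:L = 0.77, PR:N = 0.85, UI:N = 0.85, C = I = A = 0.56) looks like this:

```latex
\mathrm{ISS} = 1 - (1 - 0.56)^3 \approx 0.9148
% scope is Unchanged, so:
\mathrm{Impact} = 6.42 \times \mathrm{ISS} \approx 5.87
\mathrm{Exploitability} = 8.22 \times 0.55 \times 0.77 \times 0.85 \times 0.85 \approx 2.52
\mathrm{BaseScore} = \mathrm{RoundUp}\bigl(\min(5.87 + 2.52,\ 10)\bigr) = 8.4
```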

For more information on CVSS3 Scores, click here.
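Following the vendor's advice, the practical mitigation is simply to never pass the password as a process argument. As a sketch (assuming the standard `org.h2.tools.Server` API, which accepts the same flags programmatically), the web console can be started without any secret on the command line:

```java
import java.sql.SQLException;
import org.h2.tools.Server;

// Sketch: start the H2 web console programmatically, with no password
// in the argument list, so nothing sensitive shows up in `ps` output.
public class StartH2Console {
    public static void main(String[] args) throws SQLException {
        Server web = Server.createWebServer("-webPort", "8082").start();
        System.out.println("H2 console running at " + web.getURL());
    }
}
```

Whether admin operations are then available in the console depends on the H2 version and its configuration; the point is only that no credential ever appears in the process listing.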

",True,"CVE-2022-45868 (High) detected in h2-1.3.162.jar - ## CVE-2022-45868 - High Severity Vulnerability
Vulnerable Library - h2-1.3.162.jar

H2 Database Engine

Library home page: http://www.h2database.com

Path to dependency file: /KSA1/ksa/ksa/ksa-debug/pom.xml

Path to vulnerable library: /2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar,/root/.m2/repository/com/h2database/h2/1.3.162/h2-1.3.162.jar

Dependency Hierarchy: - :x: **h2-1.3.162.jar** (Vulnerable Library)

Vulnerability Details

The web-based admin console in H2 Database Engine through 2.1.214 can be started via the CLI with the argument -webAdminPassword, which allows the user to specify the password in cleartext for the web admin console. Consequently, a local user (or an attacker that has obtained local access through some means) would be able to discover the password by listing processes and their arguments. NOTE: the vendor states ""This is not a vulnerability of H2 Console ... Passwords should never be passed on the command line and every qualified DBA or system administrator is expected to know that.""

Publish Date: 2022-11-23

URL: CVE-2022-45868

CVSS 3 Score Details (8.4)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

",0,cve high detected in jar cve high severity vulnerability vulnerable library jar database engine library home page a href path to dependency file ksa ksa ksa debug pom xml path to vulnerable library repository com jar root repository com jar root repository com jar root repository com jar root repository com jar root repository com jar root repository com jar root repository com jar root repository com jar root repository com jar root repository com jar root repository com jar root repository com jar root repository com jar root repository com jar dependency hierarchy x jar vulnerable library vulnerability details the web based admin console in database engine through can be started via the cli with the argument webadminpassword which allows the user to specify the password in cleartext for the web admin console consequently a local user or an attacker that has obtained local access through some means would be able to discover the password by listing processes and their arguments note the vendor states this is not a vulnerability of console passwords should never be passed on the command line and every qualified dba or system administrator is expected to know that publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href ,0 2051,11311267651.0,IssuesEvent,2020-01-20 01:08:00,home-assistant/home-assistant-polymer,https://api.github.com/repos/home-assistant/home-assistant-polymer,closed,Automations UI needs clarification on conditions,editor: automation in progress,"The automation GUI says: ""Conditions are an optional part of an automation rule and can be used to prevent an action from happening when triggered. "" This implies conditions will prevent an action happening if a state is evaluated as true. So if a user was to check for a state that should cancel the action when that state is true - the action would actually continue. This is incorrect as conditions function as: all conditions must be true to continue execution. This could probably do with clarification that: ""Conditions can prevent an automation from continuing, all conditions must be true to continue execution."" This is in the GUI here: https://i.imgur.com/QoRPjKc.jpg ",1.0,"Automations UI needs clarification on conditions - The automation GUI says: ""Conditions are an optional part of an automation rule and can be used to prevent an action from happening when triggered. "" This implies conditions will prevent an action happening if a state is evaluated as true. So if a user was to check for a state that should cancel the action when that state is true - the action would actually continue. This is incorrect as conditions function as: all conditions must be true to continue execution. 
This could probably do with clarification that: ""Conditions can prevent an automation from continuing, all conditions must be true to continue execution."" This is in the GUI here: https://i.imgur.com/QoRPjKc.jpg ",1,automations ui needs clarification on conditions the automation gui says conditions are an optional part of an automation rule and can be used to prevent an action from happening when triggered this implies conditions will prevent an action happening if a state is evaluated as true so if a user was to check for a state that should cancel the action when that state is true the action would actually continue this is incorrect as conditions function as all conditions must be true to continue execution this could probably do with clarification that conditions can prevent an automation from continuing all conditions must be true to continue execution this is in the gui here ,1 9553,29667221982.0,IssuesEvent,2023-06-11 00:17:11,nephio-project/nephio,https://api.github.com/repos/nephio-project/nephio,closed,E2E testing : Deploy free5gc operator on workload cluster using Nephio,area/package-management sig/automation,"Make sure all pods are up and running ",1.0,"E2E testing : Deploy free5gc operator on workload cluster using Nephio - Make sure all pods are up and running ",1, testing deploy operator on workload cluster using nephio make sure all pods are up and running ,1 7522,25027186176.0,IssuesEvent,2022-11-04 09:09:12,Budibase/budibase,https://api.github.com/repos/Budibase/budibase,closed,Webhook automation export app,bug automations sev3 - substantial env - production webhook,"**Hosting** - Self - Method: k8s - Budibase Version: 2.0.26 - App Version: 2.0.26 **Describe the bug** When exporting an app, the webhook automations do not update the appId, so the urls are not valid when you import it in other instance. **To Reproduce** 1. Create an app 2. Create an automation triggered by a webhook 3. Export the app 4. Import the app 5. The url are not valid **Expected behavior** I'd expect that the urls created for the new automation are valid ",1.0,"Webhook automation export app - **Hosting** - Self - Method: k8s - Budibase Version: 2.0.26 - App Version: 2.0.26 **Describe the bug** When exporting an app, the webhook automations do not update the appId, so the urls are not valid when you import it in other instance. **To Reproduce** 1. Create an app 2. Create an automation triggered by a webhook 3. Export the app 4. Import the app 5. The url are not valid **Expected behavior** I'd expect that the urls created for the new automation are valid ",1,webhook automation export app hosting self method budibase version app version describe the bug when exporting an app the webhook automations do not update the appid so the urls are not valid when you import it in other instance to reproduce create an app create an automation triggered by a webhook export the app import the app the url are not valid expected behavior i d expect that the urls created for the new automation are valid ,1 310234,23326959866.0,IssuesEvent,2022-08-08 22:27:20,Kwenta/margin-manager,https://api.github.com/repos/Kwenta/margin-manager,closed,Update KIP,documentation,"In the [KIP-18](https://kips-git-feat-cross-margin-and-advanced-orders-kwenta.vercel.app/kips/kip-18/#fee-structure), it is specified that the fee is calculated based on the total margin balanced. 
However, the implementation calculates the fee based on the total size change.",1.0,"Update KIP - In the [KIP-18](https://kips-git-feat-cross-margin-and-advanced-orders-kwenta.vercel.app/kips/kip-18/#fee-structure), it is specified that the fee is calculated based on the total margin balanced. However, the implementation calculates the fee based on the total size change.",0,update kip in the it is specified that the fee is calculated based on the total margin balanced however the implementation calculates the fee based on the total size change ,0 8562,27132594109.0,IssuesEvent,2023-02-16 10:51:02,kedacore/keda,https://api.github.com/repos/kedacore/keda,closed,Change workflows to auto-add issues to GitHub Project because of deprecation,help wanted automation,"Change workflows to auto-add issues to GitHub Project https://www.cloudwithchris.com/blog/github-projects-ga-automation-updates/",1.0,"Change workflows to auto-add issues to GitHub Project because of deprecation - Change workflows to auto-add issues to GitHub Project https://www.cloudwithchris.com/blog/github-projects-ga-automation-updates/",1,change workflows to auto add issues to github project because of deprecation change workflows to auto add issues to github project ,1 140590,12942715378.0,IssuesEvent,2020-07-18 03:23:48,eduviictor/event-manager,https://api.github.com/repos/eduviictor/event-manager,closed,Atualizar documento de contagem de ponto de função,documentation help wanted,"O modelo para o documento de contagem de ponto de função pode ser encontrado clicando [**aqui**](https://docs.google.com/document/d/1s4bMbrpQt9RF6tymXvI0HHfQO14hMyL08UxmX1eH82s/edit?usp=sharing). Duas contagens são necessárias: - Contagem indicativa do tamanho funcional do software; - Contagem detalhada do tamanho funcional dos user stories (cada membro da equipe faz **uma contagem detalhada** em **um user story diferente**). Segundo o plano de iteração, **o gerente deve fazer a contagem indicativa do tamanho funcional de Projeto**.",1.0,"Atualizar documento de contagem de ponto de função - O modelo para o documento de contagem de ponto de função pode ser encontrado clicando [**aqui**](https://docs.google.com/document/d/1s4bMbrpQt9RF6tymXvI0HHfQO14hMyL08UxmX1eH82s/edit?usp=sharing). Duas contagens são necessárias: - Contagem indicativa do tamanho funcional do software; - Contagem detalhada do tamanho funcional dos user stories (cada membro da equipe faz **uma contagem detalhada** em **um user story diferente**). Segundo o plano de iteração, **o gerente deve fazer a contagem indicativa do tamanho funcional de Projeto**.",0,atualizar documento de contagem de ponto de função o modelo para o documento de contagem de ponto de função pode ser encontrado clicando duas contagens são necessárias contagem indicativa do tamanho funcional do software contagem detalhada do tamanho funcional dos user stories cada membro da equipe faz uma contagem detalhada em um user story diferente segundo o plano de iteração o gerente deve fazer a contagem indicativa do tamanho funcional de projeto ,0 583941,17401722787.0,IssuesEvent,2021-08-02 20:44:49,spicygreenbook/greenbook-app,https://api.github.com/repos/spicygreenbook/greenbook-app,closed,Listing Reorganization,Priority: Medium Status: In Progress Type: Enhancement,"We have completed a mobile site audit. Pain Point 2 See image below to see how the listings should be reorganized. 
![image](https://user-images.githubusercontent.com/69876068/115070228-843a3180-9ea9-11eb-9ed0-3b954ee8de70.png) See entire audit [here](https://prismic-io.s3.amazonaws.com/spicygreenbook/61231769-e7dc-4d34-a18e-389e72912bbb_SGB+Mobile+Site+Audit.pptx.pdf) ",1.0,"Listing Reorganization - We have completed a mobile site audit. Pain Point 2 See image below to see how the listings should be reorganized. ![image](https://user-images.githubusercontent.com/69876068/115070228-843a3180-9ea9-11eb-9ed0-3b954ee8de70.png) See entire audit [here](https://prismic-io.s3.amazonaws.com/spicygreenbook/61231769-e7dc-4d34-a18e-389e72912bbb_SGB+Mobile+Site+Audit.pptx.pdf) ",0,listing reorganization we have completed a mobile site audit pain point see image below to see how the listings should be reorganized see entire audit ,0 968,8882007568.0,IssuesEvent,2019-01-14 11:53:57,nf-core/tools,https://api.github.com/repos/nf-core/tools,closed,Increase the code coverage,automation command line tools,"* [x] Raise the code coverage up to more than 90% again. #186 * [ ] Check for misleading error messages and make them more human-readable (i.e. #180 ) ",1.0,"Increase the code coverage - * [x] Raise the code coverage up to more than 90% again. #186 * [ ] Check for misleading error messages and make them more human-readable (i.e. #180 ) ",1,increase the code coverage raise the code coverage up to more than again check for misleading error messages and make them more human readable i e ,1 7608,25254344408.0,IssuesEvent,2022-11-15 16:50:50,wazuh/wazuh-kibana-app,https://api.github.com/repos/wazuh/wazuh-kibana-app,closed,Automated - Test Plan - Add increased coverage for Agent's filter functionallity,automation,"In this issue, we will track all the efforts to increase the coverage of the automation test cases for the agent's filter functionallity. Perhaps, we add other test cases to increase the coverage elsewhere in the App. In these test cases, they are normally executed manually by those that will be added to be automated. Also, all test cases added should be executed successfully on the Wzd and Xpack environments. ## Tasks - [x] https://github.com/wazuh/wazuh-kibana-app/issues/4793 - [x] https://github.com/wazuh/wazuh-kibana-app/issues/4811 - [x] https://github.com/wazuh/wazuh-kibana-app/issues/4819 - [x] https://github.com/wazuh/wazuh-kibana-app/issues/4835 ",1.0,"Automated - Test Plan - Add increased coverage for Agent's filter functionallity - In this issue, we will track all the efforts to increase the coverage of the automation test cases for the agent's filter functionallity. Perhaps, we add other test cases to increase the coverage elsewhere in the App. In these test cases, they are normally executed manually by those that will be added to be automated. Also, all test cases added should be executed successfully on the Wzd and Xpack environments. 
## Tasks - [x] https://github.com/wazuh/wazuh-kibana-app/issues/4793 - [x] https://github.com/wazuh/wazuh-kibana-app/issues/4811 - [x] https://github.com/wazuh/wazuh-kibana-app/issues/4819 - [x] https://github.com/wazuh/wazuh-kibana-app/issues/4835 ",1,automated test plan add increased coverage for agent s filter functionallity in this issue we will track all the efforts to increase the coverage of the automation test cases for the agent s filter functionallity perhaps we add other test cases to increase the coverage elsewhere in the app in these test cases they are normally executed manually by those that will be added to be automated also all test cases added should be executed successfully on the wzd and xpack environments tasks ,1 10244,32038854418.0,IssuesEvent,2023-09-22 17:31:12,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,closed,[DocDB][LST] ERROR: Invalid argument: Invalid table definition: Error creating index db_lst_320365.idx_tg1_2_c95_float8_c118_numeric_c1_date_c32_date_c136_text on the master: Not enough live tablet servers to create table with replication factor 3. Need a,kind/bug area/docdb priority/medium qa_automation,"Jira Link: [DB-2633](https://yugabyte.atlassian.net/browse/DB-2633) ### Description On a universe created with ` bin/yb-ctl --replication_factor 3 create --tserver_flags=ysql_enable_packed_row=true,ysql_packed_row_size_limit=1700 --master_flags=ysql_enable_packed_row=true,ysql_packed_row_size_limit=1700 ` Not sure if this is related to packed rows, couldn't reproduce it. Using LST against current master state (50cbd58f6bd317120fc2b7ebb4f0d83a4bceea27) fails with internal errors: ``` 2022-06-14 14:53:12,410 MainThread INFO 2022-06-14 14:53:12,410 MainThread INFO -------------------------------------------------------------------------------- 2022-06-14 14:53:12,410 MainThread INFO Running Long System Test 0.1 2022-06-14 14:53:12,410 MainThread INFO -------------------------------------------------------------------------------- 2022-06-14 14:53:12,410 MainThread INFO 2022-06-14 14:53:12,455 MainThread INFO Reproduce with: git checkout 28535cc6 && ./long_system_test.py --nodes=127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433 --threads=4 --runtime=0 --complexity=full --max-columns=500 --seed=320365 2022-06-14 14:53:12,609 MainThread INFO Database version: PostgreSQL 11.2-YB-2.15.1.0-b0 on arm-apple-darwin21.5.0, compiled by Apple clang version 13.1.6 (clang-1316.0.21.2.5), 64-bit 2022-06-14 14:53:12,626 MainThread INFO Creating tables for database db_lst_320365 2022-06-14 14:53:31,793 MainThread INFO Starting worker_0: RandomSelectAction, SetConfigAction 2022-06-14 14:53:31,794 MainThread INFO Starting worker_1: RandomSelectAction, SetConfigAction 2022-06-14 14:53:31,794 MainThread INFO Starting worker_2: RandomSelectAction, SetConfigAction 2022-06-14 14:53:31,795 MainThread INFO Starting worker_3: CreateIndexAction, DropIndexAction, SetConfigAction, AddColumnAction 2022-06-14 14:53:41,803 MainThread INFO Worker queries/s: [088.9][111.1][106.6][004.8] 2022-06-14 14:53:51,804 MainThread INFO Worker queries/s: [112.5][120.3][122.3][004.2] 2022-06-14 14:54:01,805 MainThread INFO Worker queries/s: [073.2][073.4][072.8][007.1] 2022-06-14 14:54:11,807 MainThread INFO Worker queries/s: [087.5][076.7][093.5][004.2] 2022-06-14 14:54:21,809 MainThread INFO Worker queries/s: [118.8][118.0][130.8][004.1] 2022-06-14 14:54:26,953 worker_3 ERROR Unexpected query failure: InternalError_ Query: CREATE INDEX NONCONCURRENTLY 
idx_tg1_2_c95_float8_c118_numeric_c1_date_c32_date_c136_text ON tg1_2 USING btree (c95_float8 DESC NULLS FIRST, c118_numeric ASC NULLS FIRST, c1_date DESC NULLS LAST, c32_date ASC NULLS LAST, c136_text DESC NULLS LAST) WHERE TRUE; values: None runtime: 2022-06-14 14:54:26.918 - 2022-06-14 14:54:26.953 supports explain: False supports rollback: False affected rows: None Action: CreateIndexAction Error class: InternalError_ Error code: XX000 Error message: ERROR: Invalid argument: Invalid table definition: Error creating index db_lst_320365.idx_tg1_2_c95_float8_c118_numeric_c1_date_c32_date_c136_text on the master: Not enough live tablet servers to create table with replication factor 3. Need at least 2 tablet servers whereas 0 are alive. Transaction isolation level: committed DB Node: host: 127.0.0.1, port: 5433 DB Backend PID: 3829 ``` Related master logs: ``` I0614 14:54:26.926455 1891217408 master_heartbeat_service.cc:110] Got heartbeat from unknown tablet server { permanent_uuid: ""0b4dc4bd295947589f47382c22fe128b"" instance_seqno: 1655211145719926 start_time_us: 1655211145719926 } as 127.0.0.2:50094; Asking this server to re-register. I0614 14:54:26.931890 1891217408 master_heartbeat_service.cc:110] Got heartbeat from unknown tablet server { permanent_uuid: ""99c95ea5efea4cff91c2f41efb5cbf40"" instance_seqno: 1655211145712831 start_time_us: 1655211145712831 } as 127.0.0.1:50108; Asking this server to re-register. I0614 14:54:26.946957 1891790848 catalog_manager.cc:3245] CreateTable from 127.0.0.1:50095: name: ""idx_tg1_2_c95_float8_c118_numeric_c1_date_c32_date_c136_text"" schema { columns { name: ""c95_float8"" type { main: DOUBLE } is_key: true is_nullable: false is_static: false is_counter: false sorting_type: 2 order: 1 pg_type_oid: 701 } columns { name: ""c118_numeric"" type { main: DECIMAL } is_key: true is_nullable: false is_static: false is_counter: false sorting_type: 1 order: 2 pg_type_oid: 1700 } columns { name: ""c1_date"" type { main: INT32 } is_key: true is_nullable: false is_static: false is_counter: false sorting_type: 4 order: 3 pg_type_oid: 1082 } columns { name: ""c32_date"" type { main: INT32 } is_key: true is_nullable: false is_static: false is_counter: false sorting_type: 3 order: 4 pg_type_oid: 1082 } columns { name: ""c136_text"" type { main: STRING } is_key: true is_nullable: false is_static: false is_counter: false sorting_type: 4 order: 5 pg_type_oid: 25 } columns { name: ""ybidxbasectid"" type { main: BINARY } is_key: true is_nullable: false is_static: false is_counter: false sorting_type: 1 order: -101 pg_type_oid: 0 } table_properties { contain_counters: false is_transactional: true consistency_level: STRONG use_mangled_column_name: false is_ysql_catalog_table: false retain_delete_markers: false partition_key_version: 0 } colocated_table_id { } pgschema_name: ""public"" } num_tablets: 0 partition_schema { range_schema { columns { name: ""c95_float8"" } columns { name: ""c118_numeric"" } columns { name: ""c1_date"" } columns { name: ""c32_date"" } columns { name: ""c136_text"" } columns { name: ""ybidxbasectid"" } } } table_type: PGSQL_TABLE_TYPE namespace { id: ""0000bccd000030008000000000000000"" name: ""db_lst_320365"" database_type: YQL_DATABASE_PGSQL } indexed_table_id: ""0000bccd00003000800000000000c24f"" is_local_index: false is_unique_index: false table_id: ""0000bccd00003000800000000000c5d0"" index_info { indexed_table_id: ""0000bccd00003000800000000000c24f"" } is_colocated_via_database: false skip_index_backfill: true tablegroup_id: 
""0000bccd00003000800000000000bcfe"" transaction { transaction_id: ""\213\353\340\325\013+C\340\215\253-\320\""\311i\275"" isolation: SNAPSHOT_ISOLATION status_tablet: ""eda10822320c452c99ed54ad0da38e0c"" priority: 14231579313254783101 start_hybrid_time: 6779745349315387392 locality: GLOBAL } is_backfill_deferred: false is_matview: false I0614 14:54:26.947255 1891790848 catalog_manager.cc:3452] Setting default tablets to 0 with 0 primary servers I0614 14:54:26.947271 1891790848 partition.cc:542] Creating partitions with num_tablets: 1 W0614 14:54:26.947286 1891790848 catalog_manager.cc:3950] Not enough live tablet servers to create table with replication factor 3. Need at least 2 tablet servers whereas 0 are alive.. Placement info: , replication factor flag: 3 W0614 14:54:26.947307 1891790848 master_service_base-internal.h:46] Unknown master error in status: Invalid argument (yb/master/catalog_manager.cc:3953): Not enough live tablet servers to create table with replication factor 3. Need at least 2 tablet servers whereas 0 are alive. I0614 14:54:26.968788 1899819008 ts_manager.cc:140] Registered new tablet server { permanent_uuid: ""99c95ea5efea4cff91c2f41efb5cbf40"" instance_seqno: 1655211145712831 start_time_us: 1655211145712831 } with Master, full list: [{99c95ea5efea4cff91c2f41efb5cbf40, 0x0000000157958888 -> { permanent_uuid: 99c95ea5efea4cff91c2f41efb5cbf40 registration: common { private_rpc_addresses { host: ""127.0.0.1"" port: 9100 } http_addresses { host: ""127.0.0.1"" port: 9000 } cloud_info { placement_cloud: ""cloud1"" placement_region: ""datacenter1"" placement_zone: ""rack1"" } placement_uuid: """" pg_port: 5433 } capabilities: 2189743739 capabilities: 1427296937 capabilities: 2980225056 placement_id: cloud1:datacenter1:rack1 }}] ``` [lst.zip](https://github.com/yugabyte/yugabyte-db/files/8899946/lst.zip) [yb-master.zip](https://github.com/yugabyte/yugabyte-db/files/8899959/yb-master.zip) ",1.0,"[DocDB][LST] ERROR: Invalid argument: Invalid table definition: Error creating index db_lst_320365.idx_tg1_2_c95_float8_c118_numeric_c1_date_c32_date_c136_text on the master: Not enough live tablet servers to create table with replication factor 3. Need a - Jira Link: [DB-2633](https://yugabyte.atlassian.net/browse/DB-2633) ### Description On a universe created with ` bin/yb-ctl --replication_factor 3 create --tserver_flags=ysql_enable_packed_row=true,ysql_packed_row_size_limit=1700 --master_flags=ysql_enable_packed_row=true,ysql_packed_row_size_limit=1700 ` Not sure if this is related to packed rows, couldn't reproduce it. 
Using LST against current master state (50cbd58f6bd317120fc2b7ebb4f0d83a4bceea27) fails with internal errors: ``` 2022-06-14 14:53:12,410 MainThread INFO 2022-06-14 14:53:12,410 MainThread INFO -------------------------------------------------------------------------------- 2022-06-14 14:53:12,410 MainThread INFO Running Long System Test 0.1 2022-06-14 14:53:12,410 MainThread INFO -------------------------------------------------------------------------------- 2022-06-14 14:53:12,410 MainThread INFO 2022-06-14 14:53:12,455 MainThread INFO Reproduce with: git checkout 28535cc6 && ./long_system_test.py --nodes=127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433 --threads=4 --runtime=0 --complexity=full --max-columns=500 --seed=320365 2022-06-14 14:53:12,609 MainThread INFO Database version: PostgreSQL 11.2-YB-2.15.1.0-b0 on arm-apple-darwin21.5.0, compiled by Apple clang version 13.1.6 (clang-1316.0.21.2.5), 64-bit 2022-06-14 14:53:12,626 MainThread INFO Creating tables for database db_lst_320365 2022-06-14 14:53:31,793 MainThread INFO Starting worker_0: RandomSelectAction, SetConfigAction 2022-06-14 14:53:31,794 MainThread INFO Starting worker_1: RandomSelectAction, SetConfigAction 2022-06-14 14:53:31,794 MainThread INFO Starting worker_2: RandomSelectAction, SetConfigAction 2022-06-14 14:53:31,795 MainThread INFO Starting worker_3: CreateIndexAction, DropIndexAction, SetConfigAction, AddColumnAction 2022-06-14 14:53:41,803 MainThread INFO Worker queries/s: [088.9][111.1][106.6][004.8] 2022-06-14 14:53:51,804 MainThread INFO Worker queries/s: [112.5][120.3][122.3][004.2] 2022-06-14 14:54:01,805 MainThread INFO Worker queries/s: [073.2][073.4][072.8][007.1] 2022-06-14 14:54:11,807 MainThread INFO Worker queries/s: [087.5][076.7][093.5][004.2] 2022-06-14 14:54:21,809 MainThread INFO Worker queries/s: [118.8][118.0][130.8][004.1] 2022-06-14 14:54:26,953 worker_3 ERROR Unexpected query failure: InternalError_ Query: CREATE INDEX NONCONCURRENTLY idx_tg1_2_c95_float8_c118_numeric_c1_date_c32_date_c136_text ON tg1_2 USING btree (c95_float8 DESC NULLS FIRST, c118_numeric ASC NULLS FIRST, c1_date DESC NULLS LAST, c32_date ASC NULLS LAST, c136_text DESC NULLS LAST) WHERE TRUE; values: None runtime: 2022-06-14 14:54:26.918 - 2022-06-14 14:54:26.953 supports explain: False supports rollback: False affected rows: None Action: CreateIndexAction Error class: InternalError_ Error code: XX000 Error message: ERROR: Invalid argument: Invalid table definition: Error creating index db_lst_320365.idx_tg1_2_c95_float8_c118_numeric_c1_date_c32_date_c136_text on the master: Not enough live tablet servers to create table with replication factor 3. Need at least 2 tablet servers whereas 0 are alive. Transaction isolation level: committed DB Node: host: 127.0.0.1, port: 5433 DB Backend PID: 3829 ``` Related master logs: ``` I0614 14:54:26.926455 1891217408 master_heartbeat_service.cc:110] Got heartbeat from unknown tablet server { permanent_uuid: ""0b4dc4bd295947589f47382c22fe128b"" instance_seqno: 1655211145719926 start_time_us: 1655211145719926 } as 127.0.0.2:50094; Asking this server to re-register. I0614 14:54:26.931890 1891217408 master_heartbeat_service.cc:110] Got heartbeat from unknown tablet server { permanent_uuid: ""99c95ea5efea4cff91c2f41efb5cbf40"" instance_seqno: 1655211145712831 start_time_us: 1655211145712831 } as 127.0.0.1:50108; Asking this server to re-register. 
I0614 14:54:26.946957 1891790848 catalog_manager.cc:3245] CreateTable from 127.0.0.1:50095: name: ""idx_tg1_2_c95_float8_c118_numeric_c1_date_c32_date_c136_text"" schema { columns { name: ""c95_float8"" type { main: DOUBLE } is_key: true is_nullable: false is_static: false is_counter: false sorting_type: 2 order: 1 pg_type_oid: 701 } columns { name: ""c118_numeric"" type { main: DECIMAL } is_key: true is_nullable: false is_static: false is_counter: false sorting_type: 1 order: 2 pg_type_oid: 1700 } columns { name: ""c1_date"" type { main: INT32 } is_key: true is_nullable: false is_static: false is_counter: false sorting_type: 4 order: 3 pg_type_oid: 1082 } columns { name: ""c32_date"" type { main: INT32 } is_key: true is_nullable: false is_static: false is_counter: false sorting_type: 3 order: 4 pg_type_oid: 1082 } columns { name: ""c136_text"" type { main: STRING } is_key: true is_nullable: false is_static: false is_counter: false sorting_type: 4 order: 5 pg_type_oid: 25 } columns { name: ""ybidxbasectid"" type { main: BINARY } is_key: true is_nullable: false is_static: false is_counter: false sorting_type: 1 order: -101 pg_type_oid: 0 } table_properties { contain_counters: false is_transactional: true consistency_level: STRONG use_mangled_column_name: false is_ysql_catalog_table: false retain_delete_markers: false partition_key_version: 0 } colocated_table_id { } pgschema_name: ""public"" } num_tablets: 0 partition_schema { range_schema { columns { name: ""c95_float8"" } columns { name: ""c118_numeric"" } columns { name: ""c1_date"" } columns { name: ""c32_date"" } columns { name: ""c136_text"" } columns { name: ""ybidxbasectid"" } } } table_type: PGSQL_TABLE_TYPE namespace { id: ""0000bccd000030008000000000000000"" name: ""db_lst_320365"" database_type: YQL_DATABASE_PGSQL } indexed_table_id: ""0000bccd00003000800000000000c24f"" is_local_index: false is_unique_index: false table_id: ""0000bccd00003000800000000000c5d0"" index_info { indexed_table_id: ""0000bccd00003000800000000000c24f"" } is_colocated_via_database: false skip_index_backfill: true tablegroup_id: ""0000bccd00003000800000000000bcfe"" transaction { transaction_id: ""\213\353\340\325\013+C\340\215\253-\320\""\311i\275"" isolation: SNAPSHOT_ISOLATION status_tablet: ""eda10822320c452c99ed54ad0da38e0c"" priority: 14231579313254783101 start_hybrid_time: 6779745349315387392 locality: GLOBAL } is_backfill_deferred: false is_matview: false I0614 14:54:26.947255 1891790848 catalog_manager.cc:3452] Setting default tablets to 0 with 0 primary servers I0614 14:54:26.947271 1891790848 partition.cc:542] Creating partitions with num_tablets: 1 W0614 14:54:26.947286 1891790848 catalog_manager.cc:3950] Not enough live tablet servers to create table with replication factor 3. Need at least 2 tablet servers whereas 0 are alive.. Placement info: , replication factor flag: 3 W0614 14:54:26.947307 1891790848 master_service_base-internal.h:46] Unknown master error in status: Invalid argument (yb/master/catalog_manager.cc:3953): Not enough live tablet servers to create table with replication factor 3. Need at least 2 tablet servers whereas 0 are alive. 
I0614 14:54:26.968788 1899819008 ts_manager.cc:140] Registered new tablet server { permanent_uuid: ""99c95ea5efea4cff91c2f41efb5cbf40"" instance_seqno: 1655211145712831 start_time_us: 1655211145712831 } with Master, full list: [{99c95ea5efea4cff91c2f41efb5cbf40, 0x0000000157958888 -> { permanent_uuid: 99c95ea5efea4cff91c2f41efb5cbf40 registration: common { private_rpc_addresses { host: ""127.0.0.1"" port: 9100 } http_addresses { host: ""127.0.0.1"" port: 9000 } cloud_info { placement_cloud: ""cloud1"" placement_region: ""datacenter1"" placement_zone: ""rack1"" } placement_uuid: """" pg_port: 5433 } capabilities: 2189743739 capabilities: 1427296937 capabilities: 2980225056 placement_id: cloud1:datacenter1:rack1 }}] ``` [lst.zip](https://github.com/yugabyte/yugabyte-db/files/8899946/lst.zip) [yb-master.zip](https://github.com/yugabyte/yugabyte-db/files/8899959/yb-master.zip) ",1, error invalid argument invalid table definition error creating index db lst idx numeric date date text on the master not enough live tablet servers to create table with replication factor need a jira link description on a universe created with bin yb ctl replication factor create tserver flags ysql enable packed row true ysql packed row size limit master flags ysql enable packed row true ysql packed row size limit not sure if this is related to packed rows couldn t reproduce it using lst against current master state fails with internal errors mainthread info mainthread info mainthread info running long system test mainthread info mainthread info mainthread info reproduce with git checkout long system test py nodes threads runtime complexity full max columns seed mainthread info database version postgresql yb on arm apple compiled by apple clang version clang bit mainthread info creating tables for database db lst mainthread info starting worker randomselectaction setconfigaction mainthread info starting worker randomselectaction setconfigaction mainthread info starting worker randomselectaction setconfigaction mainthread info starting worker createindexaction dropindexaction setconfigaction addcolumnaction mainthread info worker queries s mainthread info worker queries s mainthread info worker queries s mainthread info worker queries s mainthread info worker queries s worker error unexpected query failure internalerror query create index nonconcurrently idx numeric date date text on using btree desc nulls first numeric asc nulls first date desc nulls last date asc nulls last text desc nulls last where true values none runtime supports explain false supports rollback false affected rows none action createindexaction error class internalerror error code error message error invalid argument invalid table definition error creating index db lst idx numeric date date text on the master not enough live tablet servers to create table with replication factor need at least tablet servers whereas are alive transaction isolation level committed db node host port db backend pid related master logs master heartbeat service cc got heartbeat from unknown tablet server permanent uuid instance seqno start time us as asking this server to re register master heartbeat service cc got heartbeat from unknown tablet server permanent uuid instance seqno start time us as asking this server to re register catalog manager cc createtable from name idx numeric date date text schema columns name type main double is key true is nullable false is static false is counter false sorting type order pg type oid columns name numeric type main decimal is 
key true is nullable false is static false is counter false sorting type order pg type oid columns name date type main is key true is nullable false is static false is counter false sorting type order pg type oid columns name date type main is key true is nullable false is static false is counter false sorting type order pg type oid columns name text type main string is key true is nullable false is static false is counter false sorting type order pg type oid columns name ybidxbasectid type main binary is key true is nullable false is static false is counter false sorting type order pg type oid table properties contain counters false is transactional true consistency level strong use mangled column name false is ysql catalog table false retain delete markers false partition key version colocated table id pgschema name public num tablets partition schema range schema columns name columns name numeric columns name date columns name date columns name text columns name ybidxbasectid table type pgsql table type namespace id name db lst database type yql database pgsql indexed table id is local index false is unique index false table id index info indexed table id is colocated via database false skip index backfill true tablegroup id transaction transaction id c isolation snapshot isolation status tablet priority start hybrid time locality global is backfill deferred false is matview false catalog manager cc setting default tablets to with primary servers partition cc creating partitions with num tablets catalog manager cc not enough live tablet servers to create table with replication factor need at least tablet servers whereas are alive placement info replication factor flag master service base internal h unknown master error in status invalid argument yb master catalog manager cc not enough live tablet servers to create table with replication factor need at least tablet servers whereas are alive ts manager cc registered new tablet server permanent uuid instance seqno start time us with master full list ,1 4931,18059171069.0,IssuesEvent,2021-09-20 12:08:44,mozilla-mobile/focus-ios,https://api.github.com/repos/mozilla-mobile/focus-ios,closed,Update the smoke test for running on main,eng:automation,"I noticed that when we build PRs, we check if it comes from the `refresh` branch: ```bash if [[ $BITRISEIO_GIT_BRANCH_DEST == refresh ]] || [[ $BITRISE_GIT_BRANCH == refresh ]] then echo ""PR for refresh branch"" envman add --key TEST_PLAN_NAME --value SmokeTestRefresh else echo ""Regular build, running Smoke Test"" envman add --key TEST_PLAN_NAME --value SmokeTest fi ``` Since `refresh` is now moved to `main`, this conditional can probably go and we can rename `SmokeTestRefresh` to `SmokeTest` ?",1.0,"Update the smoke test for running on main - I noticed that when we build PRs, we check if it comes from the `refresh` branch: ```bash if [[ $BITRISEIO_GIT_BRANCH_DEST == refresh ]] || [[ $BITRISE_GIT_BRANCH == refresh ]] then echo ""PR for refresh branch"" envman add --key TEST_PLAN_NAME --value SmokeTestRefresh else echo ""Regular build, running Smoke Test"" envman add --key TEST_PLAN_NAME --value SmokeTest fi ``` Since `refresh` is now moved to `main`, this conditional can probably go and we can rename `SmokeTestRefresh` to `SmokeTest` ?",1,update the smoke test for running on main i noticed that when we build prs we check if it comes from the refresh branch bash if then echo pr for refresh branch envman add key test plan name value smoketestrefresh else echo regular build running smoke test envman 
add key test plan name value smoketest fi since refresh is now moved to main this conditional can probably go and we can rename smoketestrefresh to smoketest ,1 1952,11166788037.0,IssuesEvent,2019-12-27 14:38:16,elastic/apm-agent-dotnet,https://api.github.com/repos/elastic/apm-agent-dotnet,opened,(ci): Deploy to AppVeyor doesn't work,automation bug ci,"**Describe the bug** Deploy to AppVeyor doesn't work ``` [2019-12-20T18:41:52.398Z] info : InternalServerError https://ci.appveyor.com/nuget/elastic/api/v2/package/ 125ms [2019-12-20T18:41:52.398Z] error: Response status code does not indicate success: 500 (NuGet package cannot be read: Item has already been added. Key in dictionary: 'packageIcon.png' Key being added: 'packageIcon.png'). ``` **To Reproduce** Run a build from the master branch **Expected behavior** A clear and concise description of what you expected to happen. ",1.0,"(ci): Deploy to AppVeyor doesn't work - **Describe the bug** Deploy to AppVeyor doesn't work ``` [2019-12-20T18:41:52.398Z] info : InternalServerError https://ci.appveyor.com/nuget/elastic/api/v2/package/ 125ms [2019-12-20T18:41:52.398Z] error: Response status code does not indicate success: 500 (NuGet package cannot be read: Item has already been added. Key in dictionary: 'packageIcon.png' Key being added: 'packageIcon.png'). ``` **To Reproduce** Run a build from the master branch **Expected behavior** A clear and concise description of what you expected to happen. ",1, ci deploy to appveyor doesn t work describe the bug deploy to appveyor doesn t work info internalservererror error response status code does not indicate success nuget package cannot be read item has already been added key in dictionary packageicon png key being added packageicon png to reproduce run a build from the master branch expected behavior a clear and concise description of what you expected to happen ,1 5716,20829760003.0,IssuesEvent,2022-03-19 08:16:36,fdefelici/clove-unit,https://api.github.com/repos/fdefelici/clove-unit,closed,Add Release Automation,automation,"add support for travis-ci and release automation creation. Try compiling / executing with different compiler / os",1.0,"Add Release Automation - add support for travis-ci and release automation creation. Try compiling / executing with different compiler / os",1,add release automation add support for travis ci and release automation creation try compiling executing with different compiler os,1 1904,11054150399.0,IssuesEvent,2019-12-10 12:55:40,wazuh/wazuh-qa,https://api.github.com/repos/wazuh/wazuh-qa,closed,Syscheck automated tests: Check syscheck alert for files that exist when starting the agent,automation component/fim,"| Working branch | | --- | | [272-syscheck-test-file-exists-starting-agent](https://github.com/wazuh/wazuh-qa/tree/272-syscheck-test-file-exists-starting-agent) | ## Description Update `test_basic_usage` to add a simple test to check if `syscheck` generates alerts for files that exists when starting the agent. We need to use a fixture that creates a file inside the monitored folder before starting the test and then apply some modifications over that file. Events should be raised as usually when modified and deleted. ## Subtasks - [ ] Modify and delete a file created before the start of wazuh service on Unix and ensure events are raised as expected. This must work with Scheduled, Whodata and Real-Time monitoring. - [ ] Modify and delete a file created before the start of wazuh service on Windows and ensure events are raised as expected. 
This must work with Scheduled, Whodata and Real-Time monitoring. ",1.0,"Syscheck automated tests: Check syscheck alert for files that exist when starting the agent - | Working branch | | --- | | [272-syscheck-test-file-exists-starting-agent](https://github.com/wazuh/wazuh-qa/tree/272-syscheck-test-file-exists-starting-agent) | ## Description Update `test_basic_usage` to add a simple test to check if `syscheck` generates alerts for files that exists when starting the agent. We need to use a fixture that creates a file inside the monitored folder before starting the test and then apply some modifications over that file. Events should be raised as usually when modified and deleted. ## Subtasks - [ ] Modify and delete a file created before the start of wazuh service on Unix and ensure events are raised as expected. This must work with Scheduled, Whodata and Real-Time monitoring. - [ ] Modify and delete a file created before the start of wazuh service on Windows and ensure events are raised as expected. This must work with Scheduled, Whodata and Real-Time monitoring. ",1,syscheck automated tests check syscheck alert for files that exist when starting the agent working branch description update test basic usage to add a simple test to check if syscheck generates alerts for files that exists when starting the agent we need to use a fixture that creates a file inside the monitored folder before starting the test and then apply some modifications over that file events should be raised as usually when modified and deleted subtasks modify and delete a file created before the start of wazuh service on unix and ensure events are raised as expected this must work with scheduled whodata and real time monitoring modify and delete a file created before the start of wazuh service on windows and ensure events are raised as expected this must work with scheduled whodata and real time monitoring ,1 309818,23306673006.0,IssuesEvent,2022-08-08 02:18:17,nebari-dev/nebari-docs,https://api.github.com/repos/nebari-dev/nebari-docs,closed,[DOC] - Add a search bar for Nebari docs,type: enhancement 💅🏼 area: documentation 📖 area: user experience 👩🏻‍💻,"## Description The Nebari docs currently doesn't feature a search bar for easy navigation across the docs. Add a search bar by going through various solutions available for Docusaurus to implement this feature.",1.0,"[DOC] - Add a search bar for Nebari docs - ## Description The Nebari docs currently doesn't feature a search bar for easy navigation across the docs. Add a search bar by going through various solutions available for Docusaurus to implement this feature.",0, add a search bar for nebari docs description the nebari docs currently doesn t feature a search bar for easy navigation across the docs add a search bar by going through various solutions available for docusaurus to implement this feature ,0 5006,18265053696.0,IssuesEvent,2021-10-04 07:25:32,Shaulbm/moovNowMVP,https://api.github.com/repos/Shaulbm/moovNowMVP,opened,Create users automatically from Google Sheet link,External Automation,"Allow the admin (or HR) to create a google sheet and use it as an input to an admin action in the API. Existing users / mails are updated (to allow hierarchy change). 
In the sheet we want to get data on the user and his relationships (who is directly managing him and who is he influenced from - for scrum masters, project managers etc.)""",1.0,"Create users automatically from Google Sheet link - Allow the admin (or HR) to create a google sheet and use it as an input to an admin action in the API. Existing users / mails are updated (to allow hierarchy change). In the sheet we want to get data on the user and his relationships (who is directly managing him and who is he influenced from - for scrum masters, project managers etc.)""",1,create users automatically from google sheet link allow the admin or hr to create a google sheet and use it as an input to an admin action in the api existing users mails are updated to allow hierarchy change in the sheet we want to get data on the user and his relationships who is directly managing him and who is he influenced from for scrum masters project managers etc ,1 7829,25763386083.0,IssuesEvent,2022-12-08 22:45:48,jgyates/genmon,https://api.github.com/repos/jgyates/genmon,closed,Outage log MQTT published packet is too long for home assistant sensor,automation - monitoring apps AddOn,"I'm trying to extract data from the outage_log that is published via MQTT. The published string represents the last 8 outages (maybe there would be more but my log only shows 8 so far). The problem I have is that the published packet exceeds the home assistant sensor character limit of 255 characters so I can't parse to just one event instead of 8 and then extract the pertinent data. Is there a way around this in genmon? Could the MQTT payload be JSON formatted maybe?",1.0,"Outage log MQTT published packet is too long for home assistant sensor - I'm trying to extract data from the outage_log that is published via MQTT. The published string represents the last 8 outages (maybe there would be more but my log only shows 8 so far). The problem I have is that the published packet exceeds the home assistant sensor character limit of 255 characters so I can't parse to just one event instead of 8 and then extract the pertinent data. Is there a way around this in genmon? Could the MQTT payload be JSON formatted maybe?",1,outage log mqtt published packet is too long for home assistant sensor i m trying to extract data from the outage log that is published via mqtt the published string represents the last outages maybe there would be more but my log only shows so far the problem i have is that the published packet exceeds the home assistant sensor character limit of characters so i can t parse to just one event instead of and then extract the pertinent data is there a way around this in genmon could the mqtt payload be json formatted maybe ,1 7214,24456997580.0,IssuesEvent,2022-10-07 07:44:41,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,opened,[YSQL][LST] ERROR: Already present: Duplicate request 14 from client ... 
(min running 14),area/ysql status/awaiting-triage qa_automation,"### Description ``` $ cd ~/code/yugabyte-db $ git checkout 14a71540ec1634c926c4865c002e945088d119d2 $ ./yb_build.sh release $ bin/yb-ctl --replication_factor 3 create --tserver_flags=enable_deadlock_detection=true,ysql_max_connections=20,ysql_enable_packed_row=true,yb_enable_read_committed_isolation=true,ysql_num_shards_per_tserver=2,enable_stream_compression=true,stream_compression_algo=2,yb_num_shards_per_tserver=2 --master_flags=yb_enable_read_committed_isolation=true,enable_stream_compression=true,stream_compression_algo=2,enable_automatic_tablet_splitting=true,tablet_split_low_phase_shard_count_per_node=1,tablet_split_high_phase_shard_count_per_node=5,ysql_enable_packed_row=true,enable_deadlock_detection=true $ cd ~/code/yb-long-system-test $ git checkout f76f4c65 $ ./long_system_test.py --nodes=127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433 --threads=10 --runtime=60 --max-columns=20 --complexity=full --seed=076005 2022-09-29 13:11:49,322 MainThread INFO Reproduce with: git checkout f76f4c65 && ./long_system_test.py --nodes=127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433 --threads=10 --runtime=60 --max-columns=20 --complexity=full --seed=076005 2022-09-29 13:11:50,051 MainThread INFO Database version: PostgreSQL 11.2-YB-2.15.4.0-b0 on x86_64-pc-linux-gnu, compiled by clang version 14.0.6 (https://github.com/yugabyte/llvm-project.git 4047555d02fa41c8dbfc8a8f529680276af9742e), 64-bit 2022-09-29 13:11:50,054 MainThread INFO Creating tables for database db_lst_076005 2022-09-29 13:12:25,226 MainThread INFO Starting worker_0: CreateIndexAction, DropIndexAction, SetConfigAction, AddColumnAction 2022-09-29 13:12:25,229 MainThread INFO Starting worker_1: RandomSelectAction, SetConfigAction 2022-09-29 13:12:25,240 MainThread INFO Starting worker_2: SingleInsertAction, SingleUpdateAction, SingleDeleteAction, BulkInsertAction, BulkUpdateAction, SetConfigAction 2022-09-29 13:12:25,260 MainThread INFO Starting worker_3: RandomSelectAction, SetConfigAction 2022-09-29 13:12:25,274 MainThread INFO Starting worker_4: RandomSelectAction, SetConfigAction 2022-09-29 13:12:25,279 MainThread INFO Starting worker_5: SingleInsertAction, SingleUpdateAction, SingleDeleteAction, BulkInsertAction, BulkUpdateAction, SetConfigAction 2022-09-29 13:12:25,285 MainThread INFO Starting worker_6: SingleInsertAction, SingleUpdateAction, SingleDeleteAction, BulkInsertAction, BulkUpdateAction, SetConfigAction 2022-09-29 13:12:25,286 MainThread INFO Starting worker_7: RandomSelectAction, SetConfigAction 2022-09-29 13:12:25,334 MainThread INFO Starting worker_8: SingleInsertAction, SingleUpdateAction, SingleDeleteAction, BulkInsertAction, BulkUpdateAction, SetConfigAction 2022-09-29 13:12:25,475 MainThread INFO Starting worker_9: RandomSelectAction, SetConfigAction 2022-09-29 13:12:35,499 MainThread INFO Worker queries/s: [000.0][000.9][001.1][000.8][000.7][002.5][000.7][000.3][001.1][000.4] 2022-09-29 13:12:45,518 MainThread INFO Worker queries/s: [000.0][000.3][000.1][000.6][000.2][000.4][000.4][000.4][000.0][000.1] 2022-09-29 13:12:55,544 MainThread INFO Worker queries/s: [000.0][000.0][000.0][000.2][000.0][000.0][000.0][000.0][000.0][000.0] 2022-09-29 13:12:59,903 worker_2 ERROR Unexpected query failure: InternalError_ Query: INSERT INTO tg2_0 (c0_date, c1_float4, c2_int4range, c3_smallint, c4_int8range, c5_float8, c6_json, c7_int, c8_int4range) VALUES ('1987-08-18', 58.10929428007245, '(80,85)'::INT4RANGE, 75, '(-89,37)'::INT8RANGE, 10.052095581830315, '{""a"": 
5, ""b"": [""0"", ""1"", ""2"", ""3"", ""4"", ""5"", ""6"", ""7""], ""c"": false}'::json, -30, '[-75,84]'::INT4RANGE); values: None runtime: 2022-09-29 13:12:36.090 - 2022-09-29 13:12:59.893 supports explain: True supports rollback: True affected rows: None Action: SingleInsertAction Error class: InternalError_ Error code: XX000 Error message: ERROR: Already present: Duplicate request 14 from client 1666c86e-d225-4fa2-8755-bac12785aa11 (min running 14) @ 0x99bb28 errmsg @ 0x9c6bf5 HandleYBStatusAtErrorLevel @ 0x6bb30b standard_ExecutorFinish @ 0x7f9cec43bbb1 pgss_ExecutorFinish @ 0x7f9cec4327aa ybpgm_ExecutorFinish @ 0x857f67 ProcessQuery @ 0x857455 PortalRunMulti @ 0x856cd6 PortalRun @ 0x854456 yb_exec_simple_query_impl @ 0x8549f6 yb_exec_query_wrapper_one_attempt @ 0x85152b PostgresMain @ 0x7c04cc BackendRun @ 0x7bf913 ServerLoop @ 0x7bbdbb PostmasterMain @ 0x71b13d PostgresServerProcessMain @ 0x71b5e2 main @ 0x7f9cf0d07825 __libc_start_main @ 0x4c90d9 _start @ 0x99bb28 errmsg @ 0x852deb YBPrepareCacheRefreshIfNeeded @ 0x850e7d PostgresMain @ 0x7c04cc BackendRun @ 0x7bf913 ServerLoop @ 0x7bbdbb PostmasterMain @ 0x71b13d PostgresServerProcessMain @ 0x71b5e2 main @ 0x7f9cf0d07825 __libc_start_main @ 0x4c90d9 _start CONTEXT: Catalog Version Mismatch: A DDL occurred while processing this query. Try again. Transaction isolation level: read uncommitted DB Node: host: 127.0.0.3, port: 5433 DB Backend PID: 3172382 ``` LST logs: [lst.zip](https://github.com/yugabyte/yugabyte-db/files/9731701/lst.zip) Another query error I only noticed now while checking the failed runs, even though it happened a week ago. I noticed there was a bug from 3 years ago about this failure, not sure if this should be considered a duplicate: https://github.com/yugabyte/yugabyte-db/issues/1208",1.0,"[YSQL][LST] ERROR: Already present: Duplicate request 14 from client ... 
(min running 14) - ### Description ``` $ cd ~/code/yugabyte-db $ git checkout 14a71540ec1634c926c4865c002e945088d119d2 $ ./yb_build.sh release $ bin/yb-ctl --replication_factor 3 create --tserver_flags=enable_deadlock_detection=true,ysql_max_connections=20,ysql_enable_packed_row=true,yb_enable_read_committed_isolation=true,ysql_num_shards_per_tserver=2,enable_stream_compression=true,stream_compression_algo=2,yb_num_shards_per_tserver=2 --master_flags=yb_enable_read_committed_isolation=true,enable_stream_compression=true,stream_compression_algo=2,enable_automatic_tablet_splitting=true,tablet_split_low_phase_shard_count_per_node=1,tablet_split_high_phase_shard_count_per_node=5,ysql_enable_packed_row=true,enable_deadlock_detection=true $ cd ~/code/yb-long-system-test $ git checkout f76f4c65 $ ./long_system_test.py --nodes=127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433 --threads=10 --runtime=60 --max-columns=20 --complexity=full --seed=076005 2022-09-29 13:11:49,322 MainThread INFO Reproduce with: git checkout f76f4c65 && ./long_system_test.py --nodes=127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433 --threads=10 --runtime=60 --max-columns=20 --complexity=full --seed=076005 2022-09-29 13:11:50,051 MainThread INFO Database version: PostgreSQL 11.2-YB-2.15.4.0-b0 on x86_64-pc-linux-gnu, compiled by clang version 14.0.6 (https://github.com/yugabyte/llvm-project.git 4047555d02fa41c8dbfc8a8f529680276af9742e), 64-bit 2022-09-29 13:11:50,054 MainThread INFO Creating tables for database db_lst_076005 2022-09-29 13:12:25,226 MainThread INFO Starting worker_0: CreateIndexAction, DropIndexAction, SetConfigAction, AddColumnAction 2022-09-29 13:12:25,229 MainThread INFO Starting worker_1: RandomSelectAction, SetConfigAction 2022-09-29 13:12:25,240 MainThread INFO Starting worker_2: SingleInsertAction, SingleUpdateAction, SingleDeleteAction, BulkInsertAction, BulkUpdateAction, SetConfigAction 2022-09-29 13:12:25,260 MainThread INFO Starting worker_3: RandomSelectAction, SetConfigAction 2022-09-29 13:12:25,274 MainThread INFO Starting worker_4: RandomSelectAction, SetConfigAction 2022-09-29 13:12:25,279 MainThread INFO Starting worker_5: SingleInsertAction, SingleUpdateAction, SingleDeleteAction, BulkInsertAction, BulkUpdateAction, SetConfigAction 2022-09-29 13:12:25,285 MainThread INFO Starting worker_6: SingleInsertAction, SingleUpdateAction, SingleDeleteAction, BulkInsertAction, BulkUpdateAction, SetConfigAction 2022-09-29 13:12:25,286 MainThread INFO Starting worker_7: RandomSelectAction, SetConfigAction 2022-09-29 13:12:25,334 MainThread INFO Starting worker_8: SingleInsertAction, SingleUpdateAction, SingleDeleteAction, BulkInsertAction, BulkUpdateAction, SetConfigAction 2022-09-29 13:12:25,475 MainThread INFO Starting worker_9: RandomSelectAction, SetConfigAction 2022-09-29 13:12:35,499 MainThread INFO Worker queries/s: [000.0][000.9][001.1][000.8][000.7][002.5][000.7][000.3][001.1][000.4] 2022-09-29 13:12:45,518 MainThread INFO Worker queries/s: [000.0][000.3][000.1][000.6][000.2][000.4][000.4][000.4][000.0][000.1] 2022-09-29 13:12:55,544 MainThread INFO Worker queries/s: [000.0][000.0][000.0][000.2][000.0][000.0][000.0][000.0][000.0][000.0] 2022-09-29 13:12:59,903 worker_2 ERROR Unexpected query failure: InternalError_ Query: INSERT INTO tg2_0 (c0_date, c1_float4, c2_int4range, c3_smallint, c4_int8range, c5_float8, c6_json, c7_int, c8_int4range) VALUES ('1987-08-18', 58.10929428007245, '(80,85)'::INT4RANGE, 75, '(-89,37)'::INT8RANGE, 10.052095581830315, '{""a"": 5, ""b"": [""0"", ""1"", ""2"", ""3"", ""4"", 
""5"", ""6"", ""7""], ""c"": false}'::json, -30, '[-75,84]'::INT4RANGE); values: None runtime: 2022-09-29 13:12:36.090 - 2022-09-29 13:12:59.893 supports explain: True supports rollback: True affected rows: None Action: SingleInsertAction Error class: InternalError_ Error code: XX000 Error message: ERROR: Already present: Duplicate request 14 from client 1666c86e-d225-4fa2-8755-bac12785aa11 (min running 14) @ 0x99bb28 errmsg @ 0x9c6bf5 HandleYBStatusAtErrorLevel @ 0x6bb30b standard_ExecutorFinish @ 0x7f9cec43bbb1 pgss_ExecutorFinish @ 0x7f9cec4327aa ybpgm_ExecutorFinish @ 0x857f67 ProcessQuery @ 0x857455 PortalRunMulti @ 0x856cd6 PortalRun @ 0x854456 yb_exec_simple_query_impl @ 0x8549f6 yb_exec_query_wrapper_one_attempt @ 0x85152b PostgresMain @ 0x7c04cc BackendRun @ 0x7bf913 ServerLoop @ 0x7bbdbb PostmasterMain @ 0x71b13d PostgresServerProcessMain @ 0x71b5e2 main @ 0x7f9cf0d07825 __libc_start_main @ 0x4c90d9 _start @ 0x99bb28 errmsg @ 0x852deb YBPrepareCacheRefreshIfNeeded @ 0x850e7d PostgresMain @ 0x7c04cc BackendRun @ 0x7bf913 ServerLoop @ 0x7bbdbb PostmasterMain @ 0x71b13d PostgresServerProcessMain @ 0x71b5e2 main @ 0x7f9cf0d07825 __libc_start_main @ 0x4c90d9 _start CONTEXT: Catalog Version Mismatch: A DDL occurred while processing this query. Try again. Transaction isolation level: read uncommitted DB Node: host: 127.0.0.3, port: 5433 DB Backend PID: 3172382 ``` LST logs: [lst.zip](https://github.com/yugabyte/yugabyte-db/files/9731701/lst.zip) Another query error I only noticed now while checking the failed runs, even though it happened a week ago. I noticed there was a bug from 3 years ago about this failure, not sure if this should be considered a duplicate: https://github.com/yugabyte/yugabyte-db/issues/1208",1, error already present duplicate request from client min running description cd code yugabyte db git checkout yb build sh release bin yb ctl replication factor create tserver flags enable deadlock detection true ysql max connections ysql enable packed row true yb enable read committed isolation true ysql num shards per tserver enable stream compression true stream compression algo yb num shards per tserver master flags yb enable read committed isolation true enable stream compression true stream compression algo enable automatic tablet splitting true tablet split low phase shard count per node tablet split high phase shard count per node ysql enable packed row true enable deadlock detection true cd code yb long system test git checkout long system test py nodes threads runtime max columns complexity full seed mainthread info reproduce with git checkout long system test py nodes threads runtime max columns complexity full seed mainthread info database version postgresql yb on pc linux gnu compiled by clang version bit mainthread info creating tables for database db lst mainthread info starting worker createindexaction dropindexaction setconfigaction addcolumnaction mainthread info starting worker randomselectaction setconfigaction mainthread info starting worker singleinsertaction singleupdateaction singledeleteaction bulkinsertaction bulkupdateaction setconfigaction mainthread info starting worker randomselectaction setconfigaction mainthread info starting worker randomselectaction setconfigaction mainthread info starting worker singleinsertaction singleupdateaction singledeleteaction bulkinsertaction bulkupdateaction setconfigaction mainthread info starting worker singleinsertaction singleupdateaction singledeleteaction bulkinsertaction bulkupdateaction setconfigaction 
mainthread info starting worker randomselectaction setconfigaction mainthread info starting worker singleinsertaction singleupdateaction singledeleteaction bulkinsertaction bulkupdateaction setconfigaction mainthread info starting worker randomselectaction setconfigaction mainthread info worker queries s mainthread info worker queries s mainthread info worker queries s worker error unexpected query failure internalerror query insert into date smallint json int values a b c false json values none runtime supports explain true supports rollback true affected rows none action singleinsertaction error class internalerror error code error message error already present duplicate request from client min running errmsg handleybstatusaterrorlevel standard executorfinish pgss executorfinish ybpgm executorfinish processquery portalrunmulti portalrun yb exec simple query impl yb exec query wrapper one attempt postgresmain backendrun serverloop postmastermain postgresserverprocessmain main libc start main start errmsg ybpreparecacherefreshifneeded postgresmain backendrun serverloop postmastermain postgresserverprocessmain main libc start main start context catalog version mismatch a ddl occurred while processing this query try again transaction isolation level read uncommitted db node host port db backend pid lst logs another query error i only noticed now while checking the failed runs even though it happened a week ago i noticed there was a bug from years ago about this failure not sure if this should be considered a duplicate ,1 614,7520649587.0,IssuesEvent,2018-04-12 14:59:14,Peripli/service-manager,https://api.github.com/repos/Peripli/service-manager,closed,Make use of https://coveralls.io for code coverage,automation,"We need to measure code coverage of our tests. There is a [free service](https://coveralls.io) that can be used for this. Info how Go project can be scanned can be found [here](https://coveralls.zendesk.com/hc/en-us/articles/201342809-Go). Example of already integrated project with Go can be found [here](https://github.com/SYNQfm/gosample). ",1.0,"Make use of https://coveralls.io for code coverage - We need to measure code coverage of our tests. There is a [free service](https://coveralls.io) that can be used for this. Info how Go project can be scanned can be found [here](https://coveralls.zendesk.com/hc/en-us/articles/201342809-Go). Example of already integrated project with Go can be found [here](https://github.com/SYNQfm/gosample). ",1,make use of for code coverage we need to measure code coverage of our tests there is a that can be used for this info how go project can be scanned can be found example of already integrated project with go can be found ,1 93658,8440828264.0,IssuesEvent,2018-10-18 08:31:12,kartoza/healthyrivers,https://api.github.com/repos/kartoza/healthyrivers,closed,Fix inconsistencies in reporting,testing,"# Problem Please show number of records as well as number of sites when searching for a particular river. ![screenshot 2018-10-12 at 16 49 56](https://user-images.githubusercontent.com/2510900/46876600-095ab080-ce3f-11e8-9d36-ec979a108bac.png) See screenshot below. Also inconsistency in counts between search and site display.",1.0,"Fix inconsistencies in reporting - # Problem Please show number of records as well as number of sites when searching for a particular river. ![screenshot 2018-10-12 at 16 49 56](https://user-images.githubusercontent.com/2510900/46876600-095ab080-ce3f-11e8-9d36-ec979a108bac.png) See screenshot below. 
Also inconsistency in counts between search and site display.",0,fix inconsistencies in reporting problem please show number of records as well as number of sites when searching for a particular river see screenshot below also inconsistency in counts between search and site display ,0 10043,31278100682.0,IssuesEvent,2023-08-22 07:50:16,pingcap/tiflow,https://api.github.com/repos/pingcap/tiflow,closed,data inconsistency seen when running storage sink and restarting PD/TiCDC/TiKV,type/bug severity/critical found/automation area/ticdc affects-6.5 affects-7.1,"### What did you do? 1. Create storage sink changefeed 2. Run workload 3. Inject failure for all PD 4. Inject failure for all TiCDC 5. Inject failure for all TiKV 6. Run storage consumer to consume storage sink content to downstream 7. Compare data consistency between upstream and downstream ### What did you expect to see? Data should be consistent ### What did you see instead? Data inconsisteny seen ### Versions of the cluster CDC version: ``` [root@upstream-ticdc-0 /]# /cdc version Release Version: v7.4.0-alpha Git Commit Hash: dcfcb43a99bacd7639156421993a3a95284c5f45 Git Branch: heads/refs/tags/v7.4.0-alpha UTC Build Time: 2023-08-16 11:36:28 Go Version: go version go1.21.0 linux/amd64 Failpoint Build: false ```",1.0,"data inconsistency seen when running storage sink and restarting PD/TiCDC/TiKV - ### What did you do? 1. Create storage sink changefeed 2. Run workload 3. Inject failure for all PD 4. Inject failure for all TiCDC 5. Inject failure for all TiKV 6. Run storage consumer to consume storage sink content to downstream 7. Compare data consistency between upstream and downstream ### What did you expect to see? Data should be consistent ### What did you see instead? Data inconsisteny seen ### Versions of the cluster CDC version: ``` [root@upstream-ticdc-0 /]# /cdc version Release Version: v7.4.0-alpha Git Commit Hash: dcfcb43a99bacd7639156421993a3a95284c5f45 Git Branch: heads/refs/tags/v7.4.0-alpha UTC Build Time: 2023-08-16 11:36:28 Go Version: go version go1.21.0 linux/amd64 Failpoint Build: false ```",1,data inconsistency seen when running storage sink and restarting pd ticdc tikv what did you do create storage sink changefeed run workload inject failure for all pd inject failure for all ticdc inject failure for all tikv run storage consumer to consume storage sink content to downstream compare data consistency between upstream and downstream what did you expect to see data should be consistent what did you see instead data inconsisteny seen versions of the cluster cdc version cdc version release version alpha git commit hash git branch heads refs tags alpha utc build time go version go version linux failpoint build false ,1 230329,25464191743.0,IssuesEvent,2022-11-25 01:04:36,MikeGratsas/payments,https://api.github.com/repos/MikeGratsas/payments,opened,CVE-2022-45868 (High) detected in h2-1.4.200.jar,security vulnerability,"## CVE-2022-45868 - High Severity Vulnerability
Vulnerable Library - h2-1.4.200.jar

H2 Database Engine

Library home page: https://h2database.com

Path to dependency file: /pom.xml

Path to vulnerable library: /2/repository/com/h2database/h2/1.4.200/h2-1.4.200.jar,/target/payments-0.0.1-SNAPSHOT/WEB-INF/lib/h2-1.4.200.jar

Dependency Hierarchy: - :x: **h2-1.4.200.jar** (Vulnerable Library)

Found in base branch: master

Vulnerability Details

The web-based admin console in H2 Database Engine through 2.1.214 can be started via the CLI with the argument -webAdminPassword, which allows the user to specify the password in cleartext for the web admin console. Consequently, a local user (or an attacker that has obtained local access through some means) would be able to discover the password by listing processes and their arguments. NOTE: the vendor states ""This is not a vulnerability of H2 Console ... Passwords should never be passed on the command line and every qualified DBA or system administrator is expected to know that.""
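
As an editor's illustration (not part of the advisory): on Linux the leak is trivial to demonstrate, because every process's argument vector is world-readable via /proc/<pid>/cmdline, so any unprivileged local user can harvest a password passed as -webAdminPassword. A minimal sketch in Go (the language used elsewhere in this collection), assuming a Linux host:

```go
// Illustrative only: scan /proc for processes started with a cleartext
// -webAdminPassword argument. No elevated privileges are required.
package main

import (
	""bytes""
	""fmt""
	""os""
	""path/filepath""
)

func main() {
	files, _ := filepath.Glob(""/proc/[0-9]*/cmdline"")
	for _, f := range files {
		raw, err := os.ReadFile(f)
		if err != nil {
			continue // process exited or is unreadable; skip it
		}
		argv := bytes.Split(raw, []byte{0}) // argv entries are NUL-separated
		for i := 0; i+1 < len(argv); i++ {
			if string(argv[i]) == ""-webAdminPassword"" {
				fmt.Printf(""%s exposes password %q\n"", f, argv[i+1])
			}
		}
	}
}
```

This is exactly why the vendor note above treats command-line passwords as an operational mistake rather than a product flaw: the exposure exists for any program invoked this way.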

Publish Date: 2022-11-23

URL: CVE-2022-45868

CVSS 3 Score Details (8.4)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.
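
For reference, the metrics above encode the CVSS 3.x vector AV:L/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H: exploitability ≈ 8.22 × 0.55 × 0.77 × 0.85 × 0.85 ≈ 2.51, impact ≈ 6.42 × (1 - 0.44^3) ≈ 5.87, and rounding 2.51 + 5.87 ≈ 8.39 up to one decimal gives the 8.4 base score in the heading.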

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-45868 (High) detected in h2-1.4.200.jar - ## CVE-2022-45868 - High Severity Vulnerability
Vulnerable Library - h2-1.4.200.jar

H2 Database Engine

Library home page: https://h2database.com

Path to dependency file: /pom.xml

Path to vulnerable library: /2/repository/com/h2database/h2/1.4.200/h2-1.4.200.jar,/target/payments-0.0.1-SNAPSHOT/WEB-INF/lib/h2-1.4.200.jar

Dependency Hierarchy: - :x: **h2-1.4.200.jar** (Vulnerable Library)

Found in base branch: master

Vulnerability Details

The web-based admin console in H2 Database Engine through 2.1.214 can be started via the CLI with the argument -webAdminPassword, which allows the user to specify the password in cleartext for the web admin console. Consequently, a local user (or an attacker that has obtained local access through some means) would be able to discover the password by listing processes and their arguments. NOTE: the vendor states ""This is not a vulnerability of H2 Console ... Passwords should never be passed on the command line and every qualified DBA or system administrator is expected to know that.""

Publish Date: 2022-11-23

URL: CVE-2022-45868

CVSS 3 Score Details (8.4)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in jar cve high severity vulnerability vulnerable library jar database engine library home page a href path to dependency file pom xml path to vulnerable library repository com jar target payments snapshot web inf lib jar dependency hierarchy x jar vulnerable library found in base branch master vulnerability details the web based admin console in database engine through can be started via the cli with the argument webadminpassword which allows the user to specify the password in cleartext for the web admin console consequently a local user or an attacker that has obtained local access through some means would be able to discover the password by listing processes and their arguments note the vendor states this is not a vulnerability of console passwords should never be passed on the command line and every qualified dba or system administrator is expected to know that publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href step up your open source security game with mend ,0 3310,13439947209.0,IssuesEvent,2020-09-07 23:00:16,webanno/webanno,https://api.github.com/repos/webanno/webanno,closed,Importing partial Conll-u format for automation,Module: Automation Support request 🐛Bug,"**Describe the bug** I have a conll-u file with, per line, the token, the lemma, and the pos tag. I'm unable to upload this document for automation. When uploading it prints : ""Error while uploading document DOMcorpuscreole.conllu: NumberFormatException: For input string: ""_"""" **To Reproduce** Steps to reproduce the behavior: 1. Go to projects 2. Create an automation project 3. Click on Documents 4. Upload Conll-u document 5. See error **Expected behavior** I would like to be able to import a partial conll-u file for automation, since I did POS tagging on one software and plan on doing dependencies on Webanno. **Screenshots** ![image](https://user-images.githubusercontent.com/49000319/74443654-8b972280-4e41-11ea-9617-c0d213f57e06.png) The sections per line are tab-separated (it may not look like it on Atom). **Please complete the following information:** - Version and build ID: [WebAnno -- 3.6.4 (2019-12-15 11:32:54, build 97e9ce7d289b6b34715e146c9e6a2782b535e43a) ] - OS: Windows - Browser: chrome **Additional context** Add any other context about the problem here. ",1.0,"Importing partial Conll-u format for automation - **Describe the bug** I have a conll-u file with, per line, the token, the lemma, and the pos tag. I'm unable to upload this document for automation. When uploading it prints : ""Error while uploading document DOMcorpuscreole.conllu: NumberFormatException: For input string: ""_"""" **To Reproduce** Steps to reproduce the behavior: 1. Go to projects 2. Create an automation project 3. Click on Documents 4. Upload Conll-u document 5. See error **Expected behavior** I would like to be able to import a partial conll-u file for automation, since I did POS tagging on one software and plan on doing dependencies on Webanno. 
**Screenshots** ![image](https://user-images.githubusercontent.com/49000319/74443654-8b972280-4e41-11ea-9617-c0d213f57e06.png) The sections per line are tab-separated (it may not look like it on Atom). **Please complete the following information:** - Version and build ID: [WebAnno -- 3.6.4 (2019-12-15 11:32:54, build 97e9ce7d289b6b34715e146c9e6a2782b535e43a) ] - OS: Windows - Browser: chrome **Additional context** Add any other context about the problem here. ",1,importing partial conll u format for automation describe the bug i have a conll u file with per line the token the lemma and the pos tag i m unable to upload this document for automation when uploading it prints error while uploading document domcorpuscreole conllu numberformatexception for input string to reproduce steps to reproduce the behavior go to projects create an automation project click on documents upload conll u document see error expected behavior i would like to be able to import a partial conll u file for automation since i did pos tagging on one software and plan on doing dependencies on webanno screenshots the sections per line are tab separated it may not look like it on atom please complete the following information version and build id os windows browser chrome additional context add any other context about the problem here ,1 197448,15680577902.0,IssuesEvent,2021-03-25 03:13:17,ZackHolmberg/COMP4350-Project,https://api.github.com/repos/ZackHolmberg/COMP4350-Project,opened,Update documentation for Sprint 3,2 documentation,"Update the class diagram, architecture diagram and sequence diagrams to reflect changes since the last sprint. ",1.0,"Update documentation for Sprint 3 - Update the class diagram, architecture diagram and sequence diagrams to reflect changes since the last sprint. ",0,update documentation for sprint update the class diagram architecture diagram and sequence diagrams to reflect changes since the last sprint ,0 6219,22576144657.0,IssuesEvent,2022-06-28 07:32:03,red-hat-storage/ocs-ci,https://api.github.com/repos/red-hat-storage/ocs-ci,closed,Change the locator of storagesystem name on UI wherever applicable and storageclass selection for PVC creation for external mode deployment.,bug High Priority ui_automation,"Issue is seen with BUILD ID: 4.9.3-2 RUN ID: 1645097102 storageclass name for external mode is - `ocs-external-storagecluster-ceph-rbd` and `ocs-external-storagecluster-ceph-rbd` storagesystem name for external mode is - `ocs-external-storagecluster-storagesystem`",1.0,"Change the locator of storagesystem name on UI wherever applicable and storageclass selection for PVC creation for external mode deployment. 
- Issue is seen with BUILD ID: 4.9.3-2 RUN ID: 1645097102 storageclass name for external mode is - `ocs-external-storagecluster-ceph-rbd` and `ocs-external-storagecluster-ceph-rbd` storagesystem name for external mode is - `ocs-external-storagecluster-storagesystem`",1,change the locator of storagesystem name on ui wherever applicable and storageclass selection for pvc creation for external mode deployment issue is seen with build id run id storageclass name for external mode is ocs external storagecluster ceph rbd and ocs external storagecluster ceph rbd storagesystem name for external mode is ocs external storagecluster storagesystem ,1 8062,26145882735.0,IssuesEvent,2022-12-30 04:40:35,vesoft-inc/nebula,https://api.github.com/repos/vesoft-inc/nebula,closed,Incorrect result on edge list join query,type/bug severity/major find/automation affects/master,"**Please check the FAQ documentation before raising an issue** **Describe the bug (__required__)** Look at the query below: ```txt (root@nebula) [gdlancer]> match (v1)-[e*1..2]->(v2) where id(v1) in [6] match (v2)-[e*1..2]->(v1) return count(*) +----------+ | count(*) | +----------+ | 0 | +----------+ Got 1 rows (time spent 2.736ms/16.574125ms) Thu, 29 Dec 2022 14:03:49 CST ``` We can see that Nebula return no result, if we change the return clause into `return size(e)`, it shows that there are 6 rows in the result: ```txt (root@nebula) [gdlancer]> match (v1)-[e*1..2]->(v2) where id(v1) in [6] match (v2)-[e*1..2]->(v1) return size(e) +---------+ | size(e) | +---------+ | 2 | | 2 | | 2 | | 2 | | 2 | | 2 | +---------+ Got 6 rows (time spent 2.583ms/18.1315ms) Thu, 29 Dec 2022 14:05:48 CST ``` in contrast the same query on the same dataset in Neo4j return 6 rows: ```txt $ match (v1)-[e*1..2]->(v2) where v1.id in [6] match (v2)-[e*1..2]->(v1) return count(*) ╒══════════╕ │""count(*)""│ ╞══════════╡ │6 │ └──────────┘ ``` **Your Environments (__required__)** * OS: `uname -a` * Compiler: `g++ --version` or `clang++ --version` * CPU: `lscpu` * Commit id (e.g. `a3ffc7d8`) 967a8c9e0 (community edition) **How To Reproduce(__required__)** Steps to reproduce the behavior: 1. Step 1 2. Step 2 3. Step 3 **Expected behavior** **Additional context** ",1.0,"Incorrect result on edge list join query - **Please check the FAQ documentation before raising an issue** **Describe the bug (__required__)** Look at the query below: ```txt (root@nebula) [gdlancer]> match (v1)-[e*1..2]->(v2) where id(v1) in [6] match (v2)-[e*1..2]->(v1) return count(*) +----------+ | count(*) | +----------+ | 0 | +----------+ Got 1 rows (time spent 2.736ms/16.574125ms) Thu, 29 Dec 2022 14:03:49 CST ``` We can see that Nebula return no result, if we change the return clause into `return size(e)`, it shows that there are 6 rows in the result: ```txt (root@nebula) [gdlancer]> match (v1)-[e*1..2]->(v2) where id(v1) in [6] match (v2)-[e*1..2]->(v1) return size(e) +---------+ | size(e) | +---------+ | 2 | | 2 | | 2 | | 2 | | 2 | | 2 | +---------+ Got 6 rows (time spent 2.583ms/18.1315ms) Thu, 29 Dec 2022 14:05:48 CST ``` in contrast the same query on the same dataset in Neo4j return 6 rows: ```txt $ match (v1)-[e*1..2]->(v2) where v1.id in [6] match (v2)-[e*1..2]->(v1) return count(*) ╒══════════╕ │""count(*)""│ ╞══════════╡ │6 │ └──────────┘ ``` **Your Environments (__required__)** * OS: `uname -a` * Compiler: `g++ --version` or `clang++ --version` * CPU: `lscpu` * Commit id (e.g. `a3ffc7d8`) 967a8c9e0 (community edition) **How To Reproduce(__required__)** Steps to reproduce the behavior: 1. 
Step 1 2. Step 2 3. Step 3 **Expected behavior** **Additional context** ",1,incorrect result on edge list join query please check the faq documentation before raising an issue describe the bug required look at the query below txt root nebula match where id in match return count count got rows time spent thu dec cst we can see that nebula return no result if we change the return clause into return size e it shows that there are rows in the result txt root nebula match where id in match return size e size e got rows time spent thu dec cst in contrast the same query on the same dataset in return rows txt match where id in match return count ╒══════════╕ │ count │ ╞══════════╡ │ │ └──────────┘ your environments required os uname a compiler g version or clang version cpu lscpu commit id e g community edition how to reproduce required steps to reproduce the behavior step step step expected behavior additional context ,1 4922,18046721075.0,IssuesEvent,2021-09-19 02:26:05,iGEM-Engineering/iGEM-distribution,https://api.github.com/repos/iGEM-Engineering/iGEM-distribution,opened,Only re-build when there are changes,automation,"Right now, all of the elements of the distribution get rebuilt every time that anything changes. That's not very efficient, especially as some of the packages get bigger and when it involves slow things like GenBank to SBOL conversion. So we should check the marks on when a file has last been changed, and only run an update if its sources have been changed since then.",1.0,"Only re-build when there are changes - Right now, all of the elements of the distribution get rebuilt every time that anything changes. That's not very efficient, especially as some of the packages get bigger and when it involves slow things like GenBank to SBOL conversion. So we should check the marks on when a file has last been changed, and only run an update if its sources have been changed since then.",1,only re build when there are changes right now all of the elements of the distribution get rebuilt every time that anything changes that s not very efficient especially as some of the packages get bigger and when it involves slow things like genbank to sbol conversion so we should check the marks on when a file has last been changed and only run an update if its sources have been changed since then ,1 595,7423002179.0,IssuesEvent,2018-03-23 02:37:18,vmware/harbor,https://api.github.com/repos/vmware/harbor,closed,Repo filter does not work well,kind/automation-found kind/bug,"When use filter search in repository, need to type a full name to filter, otherwise there will no result ",1.0,"Repo filter does not work well - When use filter search in repository, need to type a full name to filter, otherwise there will no result ",1,repo filter does not work well when use filter search in repository need to type a full name to filter otherwise there will no result ,1 634,7644451302.0,IssuesEvent,2018-05-08 15:30:29,briandfoy/rakudo-star-chocolatey,https://api.github.com/repos/briandfoy/rakudo-star-chocolatey,closed,Deploy from AppVeyor,automation enhancement,Push a tag and make AppVeyor push to choco. This way doesn't need my Windows virtual machines.,1.0,Deploy from AppVeyor - Push a tag and make AppVeyor push to choco. 
This way doesn't need my Windows virtual machines.,1,deploy from appveyor push a tag and make appveyor push to choco this way doesn t need my windows virtual machines ,1 637270,20624426193.0,IssuesEvent,2022-03-07 20:50:42,PediatricOpenTargets/ticket-tracker,https://api.github.com/repos/PediatricOpenTargets/ticket-tracker,closed,Updated analysis: subset each-cohort independent primary tumor RNA-seq samples in `tumor-gtex-plots` module,enhancement low priority," #### What analysis module should be updated and why? The `tumor-gtex-plots` module should be updated to subset each-cohort independent primary tumor RNA-seq samples before generating plots/results. The motivation is to have only one primary tumor RNA-seq sample for each patient of each cohort in the plots/results. The list of each-cohort independent primary tumor RNA-seq samples (`analyses/independent-samples/results/independent-specimens.rnaseq.primary.eachcohort.tsv`) is generated in the [`independent-samples` module](https://github.com/PediatricOpenTargets/OpenPedCan-analysis/tree/dev/analyses/independent-samples), essentially by selecting only one primary tumor RNA-seq sample for each patient in each cohort. #### What changes need to be made? Please provide enough detail for another participant to make the update. Subset RNA-seq samples in `gene-expression-rsem-tpm-collapsed.rds` before generating plots/results. - For tumor RNA-seq samples, subset samples in `analyses/independent-samples/results/independent-specimens.rnaseq.primary.eachcohort.tsv`. - For GTEx samples, use all samples like before without any subsetting. #### What input data should be used? Which data were used in the version being updated? Input data for `tumor-gtex-plots`. `analyses/independent-samples/results/independent-specimens.rnaseq.primary.eachcohort.tsv`. #### When do you expect the revised analysis will be completed? 1-3 days #### Who will complete the updated analysis? @komalsrathi cc @jharenza ",1.0,"Updated analysis: subset each-cohort independent primary tumor RNA-seq samples in `tumor-gtex-plots` module - #### What analysis module should be updated and why? The `tumor-gtex-plots` module should be updated to subset each-cohort independent primary tumor RNA-seq samples before generating plots/results. The motivation is to have only one primary tumor RNA-seq sample for each patient of each cohort in the plots/results. The list of each-cohort independent primary tumor RNA-seq samples (`analyses/independent-samples/results/independent-specimens.rnaseq.primary.eachcohort.tsv`) is generated in the [`independent-samples` module](https://github.com/PediatricOpenTargets/OpenPedCan-analysis/tree/dev/analyses/independent-samples), essentially by selecting only one primary tumor RNA-seq sample for each patient in each cohort. #### What changes need to be made? Please provide enough detail for another participant to make the update. Subset RNA-seq samples in `gene-expression-rsem-tpm-collapsed.rds` before generating plots/results. - For tumor RNA-seq samples, subset samples in `analyses/independent-samples/results/independent-specimens.rnaseq.primary.eachcohort.tsv`. - For GTEx samples, use all samples like before without any subsetting. #### What input data should be used? Which data were used in the version being updated? Input data for `tumor-gtex-plots`. `analyses/independent-samples/results/independent-specimens.rnaseq.primary.eachcohort.tsv`. #### When do you expect the revised analysis will be completed? 
1-3 days #### Who will complete the updated analysis? @komalsrathi cc @jharenza ",0,updated analysis subset each cohort independent primary tumor rna seq samples in tumor gtex plots module what analysis module should be updated and why the tumor gtex plots module should be updated to subset each cohort independent primary tumor rna seq samples before generating plots results the motivation is to have only one primary tumor rna seq sample for each patient of each cohort in the plots results the list of each cohort independent primary tumor rna seq samples analyses independent samples results independent specimens rnaseq primary eachcohort tsv is generated in the essentially by selecting only one primary tumor rna seq sample for each patient in each cohort what changes need to be made please provide enough detail for another participant to make the update subset rna seq samples in gene expression rsem tpm collapsed rds before generating plots results for tumor rna seq samples subset samples in analyses independent samples results independent specimens rnaseq primary eachcohort tsv for gtex samples use all samples like before without any subsetting what input data should be used which data were used in the version being updated input data for tumor gtex plots analyses independent samples results independent specimens rnaseq primary eachcohort tsv when do you expect the revised analysis will be completed days who will complete the updated analysis komalsrathi cc jharenza ,0 53,3091047968.0,IssuesEvent,2015-08-26 10:43:26,MISP/MISP,https://api.github.com/repos/MISP/MISP,opened,Automatically apply admin tools according to current hotfix level,automation functionality usability,"This might be considered a variation of #122 . The idea is as follows: - Store the ""effective"" hotfix level also in DB (or other file) - Have a generic upgrade script to run after git pulling the latest code - Check which hotfix level of code was checked out from VERSION.json, if this higher, then apply all necessary admin scripts. - Update the ""effective"" hotfix level in DB For example after hotfix 115 (from version 2.3 - https://github.com/MISP/MISP/commit/8ab674e1df57c24ad5fc39b04c5f554e8290c2f7), apply the added administrative tool *removeDuplicateEvents*.",1.0,"Automatically apply admin tools according to current hotfix level - This might be considered a variation of #122 . The idea is as follows: - Store the ""effective"" hotfix level also in DB (or other file) - Have a generic upgrade script to run after git pulling the latest code - Check which hotfix level of code was checked out from VERSION.json, if this higher, then apply all necessary admin scripts. 
- Update the ""effective"" hotfix level in DB For example after hotfix 115 (from version 2.3 - https://github.com/MISP/MISP/commit/8ab674e1df57c24ad5fc39b04c5f554e8290c2f7), apply the added administrative tool *removeDuplicateEvents*.",1,automatically apply admin tools according to current hotfix level this might be considered a variation of the idea is as follows store the effective hotfix level also in db or other file have a generic upgrade script to run after git pulling the latest code check which hotfix level of code was checked out from version json if this higher then apply all necessary admin scripts update the effective hotfix level in db for example after hotfix from version apply the added administrative tool removeduplicateevents ,1 25944,2684054444.0,IssuesEvent,2015-03-28 16:19:49,ConEmu/old-issues,https://api.github.com/repos/ConEmu/old-issues,closed,Иногда при смене режима реальная консоль создается с неправильной высотой,1 star bug imported Priority-Medium,"_From [thecybershadow](https://code.google.com/u/thecybershadow/) on March 15, 2012 16:16:35_ OS version: Windows Server 2008 x64 ConEmu version: 120315e x86 Far version: 3.0 (build 2546) x86 Похоже на Issue 82 , но: 1) это происходит не при открытии, а при смене экранного режима 2) Alt+F9 в реальной консоли не помогает При смене разрешения не воспроизводится, только при смене ориентации. *Steps to reproduction* 1. http://noeld.com/programs.asp#display 2. https://gist.github.com/2047520 3. Press Enter until reproduced _Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=509_",1.0,"Иногда при смене режима реальная консоль создается с неправильной высотой - _From [thecybershadow](https://code.google.com/u/thecybershadow/) on March 15, 2012 16:16:35_ OS version: Windows Server 2008 x64 ConEmu version: 120315e x86 Far version: 3.0 (build 2546) x86 Похоже на Issue 82 , но: 1) это происходит не при открытии, а при смене экранного режима 2) Alt+F9 в реальной консоли не помогает При смене разрешения не воспроизводится, только при смене ориентации. *Steps to reproduction* 1. http://noeld.com/programs.asp#display 2. https://gist.github.com/2047520 3. 
Press Enter until reproduced _Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=509_",0,sometimes when the mode is changed the real console is created with the wrong height from on march os version windows server conemu version far version build looks like issue but it happens not on opening but when the screen mode is changed alt in the real console does not help when changing the resolution it does not reproduce only when changing the orientation steps to reproduction press enter until reproduced original issue ,0
Vulnerable Library - guava-25.1-android.jar

Guava is a suite of core and expanded libraries that include utility classes, google's collections, io classes, and much much more.

Library home page: https://github.com/google/guava

Path to vulnerable library: assertj-core/target/local-repo/com/google/guava/guava/25.1-android/guava-25.1-android.jar

Dependency Hierarchy: - :x: **guava-25.1-android.jar** (Vulnerable Library)

Found in base branch: main

Vulnerability Details

A temp directory creation vulnerability exists in all versions of Guava, allowing an attacker with access to the machine to potentially access data in a temporary directory created by the Guava API com.google.common.io.Files.createTempDir(). By default, on unix-like systems, the created directory is world-readable (readable by an attacker with access to the system). The method in question has been marked @Deprecated in versions 30.0 and later and should not be used. For Android developers, we recommend choosing a temporary directory API provided by Android, such as context.getCacheDir(). For other Java developers, we recommend migrating to the Java 7 API java.nio.file.Files.createTempDirectory() which explicitly configures permissions of 700, or configuring the Java runtime's java.io.tmpdir system property to point to a location whose permissions are appropriately configured.
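
The recommended migration above is Java-specific, but the pattern is portable to any runtime. As a hedged analogue in Go (the language used elsewhere in this collection): prefer the standard-library helper, which creates the directory with owner-only (0700) permissions and an unpredictable name, over a hand-made world-readable directory. The 'app-scratch' prefix below is invented for illustration:

```go
// Hedged analogue of the recommended fix: os.MkdirTemp creates the
// directory with mode 0700, so other local users cannot list or read it.
package main

import (
	""fmt""
	""log""
	""os""
)

func main() {
	dir, err := os.MkdirTemp("""", ""app-scratch-*"") // """" means use os.TempDir()
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir) // clean up the private directory when done
	fmt.Println(""private temp dir:"", dir)
}
```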

Publish Date: 2020-12-10

URL: CVE-2020-8908

CVSS 3 Score Details (3.3)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: Low
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: Low
  - Integrity Impact: None
  - Availability Impact: None

For more information on CVSS3 Scores, click here.
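
For reference, the metrics above encode the CVSS 3.x vector AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:N: exploitability ≈ 8.22 × 0.55 × 0.77 × 0.62 × 0.85 ≈ 1.83, impact ≈ 6.42 × 0.22 ≈ 1.41, and rounding 1.83 + 1.41 ≈ 3.25 up to one decimal gives the 3.3 base score in the heading.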

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8908

Release Date: 2020-12-10

Fix Resolution: v30.0

*** - [ ] Check this box to open an automated fix PR ",True,"CVE-2020-8908 (Low) detected in guava-25.1-android.jar - ## CVE-2020-8908 - Low Severity Vulnerability
Vulnerable Library - guava-25.1-android.jar

Guava is a suite of core and expanded libraries that include utility classes, google's collections, io classes, and much much more.

Library home page: https://github.com/google/guava

Path to vulnerable library: assertj-core/target/local-repo/com/google/guava/guava/25.1-android/guava-25.1-android.jar

Dependency Hierarchy: - :x: **guava-25.1-android.jar** (Vulnerable Library)

Found in base branch: main

Vulnerability Details

A temp directory creation vulnerability exists in all versions of Guava, allowing an attacker with access to the machine to potentially access data in a temporary directory created by the Guava API com.google.common.io.Files.createTempDir(). By default, on unix-like systems, the created directory is world-readable (readable by an attacker with access to the system). The method in question has been marked @Deprecated in versions 30.0 and later and should not be used. For Android developers, we recommend choosing a temporary directory API provided by Android, such as context.getCacheDir(). For other Java developers, we recommend migrating to the Java 7 API java.nio.file.Files.createTempDirectory() which explicitly configures permissions of 700, or configuring the Java runtime's java.io.tmpdir system property to point to a location whose permissions are appropriately configured.

Publish Date: 2020-12-10

URL: CVE-2020-8908

CVSS 3 Score Details (3.3)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: Low
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: Low
  - Integrity Impact: None
  - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8908

Release Date: 2020-12-10

Fix Resolution: v30.0

*** - [ ] Check this box to open an automated fix PR ",0,cve low detected in guava android jar cve low severity vulnerability vulnerable library guava android jar guava is a suite of core and expanded libraries that include utility classes google s collections io classes and much much more library home page a href path to vulnerable library assertj core target local repo com google guava guava android guava android jar dependency hierarchy x guava android jar vulnerable library found in base branch main vulnerability details a temp directory creation vulnerability exists in all versions of guava allowing an attacker with access to the machine to potentially access data in a temporary directory created by the guava api com google common io files createtempdir by default on unix like systems the created directory is world readable readable by an attacker with access to the system the method in question has been marked deprecated in versions and later and should not be used for android developers we recommend choosing a temporary directory api provided by android such as context getcachedir for other java developers we recommend migrating to the java api java nio file files createtempdirectory which explicitly configures permissions of or configuring the java runtime s java io tmpdir system property to point to a location whose permissions are appropriately configured publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com google guava guava android isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails a temp directory creation vulnerability exists in all versions of guava allowing an attacker with access to the machine to potentially access data in a temporary directory created by the guava api com google common io files createtempdir by default on unix like systems the created directory is world readable readable by an attacker with access to the system the method in question has been marked deprecated in versions and later and should not be used for android developers we recommend choosing a temporary directory api provided by android such as context getcachedir for other java developers we recommend migrating to the java api java nio file files createtempdirectory which explicitly configures permissions of or configuring the java runtime java io tmpdir system property to point to a location whose permissions are appropriately configured vulnerabilityurl ,0 7026,24143539591.0,IssuesEvent,2022-09-21 16:39:04,vegaprotocol/frontend-monorepo,https://api.github.com/repos/vegaprotocol/frontend-monorepo,opened,Switch all apps that interact with tokens to sepolia,blocked Block Explorer Trading BlockExplorer-Automation Token Frontend,"As per #1417, except the work is easier for `trading`, because contract addresses should come from assets/ network parameters. Hopefully. 
# Tasks - [ ] Get contract addresses (see #1417) - [ ] Switch network ids from Ropsten to sepolia - [ ] Find any text references and do the same",1.0,"Switch all apps that interact with tokens to sepolia - As per #1417, except the work is easier for `trading`, because contract addresses should come from assets/ network parameters. Hopefully. # Tasks - [ ] Get contract addresses (see #1417) - [ ] Switch network ids from Ropsten to sepolia - [ ] Find any text references and do the same",1,switch all apps that interact with tokens to sepolia as per except the work is easier for trading because contract addresses should come from assets network parameters hopefully tasks get contract addresses see switch network ids from ropsten to sepolia find any text references and do the same,1 154672,13563689433.0,IssuesEvent,2020-09-18 08:55:44,golang/go,https://api.github.com/repos/golang/go,closed,"""func (*Resolver) LookupIP"" seems not exists, but it is in the documentation",Documentation,"### What version of Go are you using (`go version`)?
$ go version
go version go1.14.6 darwin/amd64
### Does this issue reproduce with the latest release?
package main
import ""context""
import ""net""
func GoogleDNSDialer(ctx context.Context, network, address string) (net.Conn, error) {
   d := net.Dialer{}
   return d.DialContext(ctx, ""udp"", ""8.8.8.8:53"")
}
var local_resolver net.Resolver = net.Resolver{
   Dial: GoogleDNSDialer,
}
func LocalLookupIP(name string) ([]net.IP, error) {
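   // Note for readers: (*net.Resolver).LookupIP was only added in Go 1.15, so this line fails to compile on go1.14, as reported below.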
   return local_resolver.LookupIP(context.Background(), ""ip"", name)
}
go build
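Editor's note: since the call in LocalLookupIP only exists from Go 1.15 on, here is a minimal go1.14-compatible sketch (an illustration added to this copy, not part of the original report) that goes through LookupIPAddr, available on *net.Resolver since Go 1.8:
func LocalLookupIPCompat(name string) ([]net.IP, error) {
   addrs, err := local_resolver.LookupIPAddr(context.Background(), name)
   if err != nil {
      return nil, err
   }
   ips := make([]net.IP, 0, len(addrs))
   for _, a := range addrs {
      ips = append(ips, a.IP)
   }
   return ips, nil
}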
### What operating system and processor architecture are you using (`go env`)?
go env Output
$ go env

GO111MODULE=""""
GOARCH=""amd64""
GOBIN=""""
GOCACHE=""/Users/thierryfournier/Library/Caches/go-build""
GOENV=""/Users/thierryfournier/Library/Application Support/go/env""
GOEXE=""""
GOFLAGS=""""
GOHOSTARCH=""amd64""
GOHOSTOS=""darwin""
GOINSECURE=""""
GONOPROXY=""""
GONOSUMDB=""""
GOOS=""darwin""
GOPATH=""/Users/thierryfournier/go""
GOPRIVATE=""""
GOPROXY=""https://proxy.golang.org,direct""
GOROOT=""/usr/local/go""
GOSUMDB=""sum.golang.org""
GOTMPDIR=""""
GOTOOLDIR=""/usr/local/go/pkg/tool/darwin_amd64""
GCCGO=""gccgo""
AR=""ar""
CC=""clang""
CXX=""clang++""
CGO_ENABLED=""1""
GOMOD=""""
CGO_CFLAGS=""-g -O2""
CGO_CPPFLAGS=""""
CGO_CXXFLAGS=""-g -O2""
CGO_FFLAGS=""-g -O2""
CGO_LDFLAGS=""-g -O2""
PKG_CONFIG=""pkg-config""
GOGCCFLAGS=""-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/_d/zs7l5ldx1r94fn_qh1zvn1rw0000gn/T/go-build235743352=/tmp/go-build -gno-record-gcc-switches -fno-common""


### What did you do? Compilation error is: ./a.go:12:25: local_resolver.LookupIP undefined (type net.Resolver has no field or method LookupIP) But the documentation says that the function ""func (*Resolver) LookupIP"" exists: https://godoc.org/net#Resolver.LookupIP I checked the file ""/usr/local/go/src/net/lookup.go"" and the function doesn't exist ### What did you expect to see? My build works (or fix the documentation). ### What did you see instead? ",1.0,""func (*Resolver) LookupIP"" seems not exists, but it is in the documentation - ### What version of Go are you using (`go version`)?
$ go version
go version go1.14.6 darwin/amd64
### Does this issue reproduce with the latest release?
package main
import ""context""
import ""net""
func GoogleDNSDialer(ctx context.Context, network, address string) (net.Conn, error) {
   d := net.Dialer{}
   return d.DialContext(ctx, ""udp"", ""8.8.8.8:53"")
}
var local_resolver net.Resolver = net.Resolver{
   Dial: GoogleDNSDialer,
}
func LocalLookupIP(name string) ([]net.IP, error) {
   return local_resolver.LookupIP(context.Background(), ""ip"", name)
}
go build
### What operating system and processor architecture are you using (`go env`)?
go env Output
$ go env

GO111MODULE=""""
GOARCH=""amd64""
GOBIN=""""
GOCACHE=""/Users/thierryfournier/Library/Caches/go-build""
GOENV=""/Users/thierryfournier/Library/Application Support/go/env""
GOEXE=""""
GOFLAGS=""""
GOHOSTARCH=""amd64""
GOHOSTOS=""darwin""
GOINSECURE=""""
GONOPROXY=""""
GONOSUMDB=""""
GOOS=""darwin""
GOPATH=""/Users/thierryfournier/go""
GOPRIVATE=""""
GOPROXY=""https://proxy.golang.org,direct""
GOROOT=""/usr/local/go""
GOSUMDB=""sum.golang.org""
GOTMPDIR=""""
GOTOOLDIR=""/usr/local/go/pkg/tool/darwin_amd64""
GCCGO=""gccgo""
AR=""ar""
CC=""clang""
CXX=""clang++""
CGO_ENABLED=""1""
GOMOD=""""
CGO_CFLAGS=""-g -O2""
CGO_CPPFLAGS=""""
CGO_CXXFLAGS=""-g -O2""
CGO_FFLAGS=""-g -O2""
CGO_LDFLAGS=""-g -O2""
PKG_CONFIG=""pkg-config""
GOGCCFLAGS=""-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/_d/zs7l5ldx1r94fn_qh1zvn1rw0000gn/T/go-build235743352=/tmp/go-build -gno-record-gcc-switches -fno-common""


### What did you do? Compilation error is: ./a.go:12:25: local_resolver.LookupIP undefined (type net.Resolver has no field or method LookupIP) But the documentation says that the function ""func (*Resolver) LookupIP"" exists: https://godoc.org/net#Resolver.LookupIP I check the file ""/usr/local/go/src/net/lookup.go"" and the function doesn't exists ### What did you expect to see? My build works (or fix the documentation). ### What did you see instead? ",0, func resolver lookupip seems not exists but it is in the documentation what version of go are you using go version go version go version darwin does this issue reproduce with the latest release package main import context import net func googlednsdialer ctx context context network address string net conn error d net dialer return d dialcontext ctx udp var local resolver net resolver net resolver dial googlednsdialer func locallookupip name string net ip error return local resolver lookupip context background ip name go build what operating system and processor architecture are you using go env go env output go env goarch gobin gocache users thierryfournier library caches go build goenv users thierryfournier library application support go env goexe goflags gohostarch gohostos darwin goinsecure gonoproxy gonosumdb goos darwin gopath users thierryfournier go goprivate goproxy goroot usr local go gosumdb sum golang org gotmpdir gotooldir usr local go pkg tool darwin gccgo gccgo ar ar cc clang cxx clang cgo enabled gomod cgo cflags g cgo cppflags cgo cxxflags g cgo fflags g cgo ldflags g pkg config pkg config gogccflags fpic pthread fno caret diagnostics qunused arguments fmessage length fdebug prefix map var folders d t go tmp go build gno record gcc switches fno common what did you do compilation error is a go local resolver lookupip undefined type net resolver has no field or method lookupip but the documentation says that the function func resolver lookupip exists i check the file usr local go src net lookup go and the function doesn t exists what did you expect to see my build works or fix the documentation what did you see instead ,0 4554,16844926277.0,IssuesEvent,2021-06-19 09:12:42,spring-cloud/spring-cloud-dataflow,https://api.github.com/repos/spring-cloud/spring-cloud-dataflow,closed,Changes in images,automation/rlnotes-header,"In a `2.8.0` release we started to use paketo to package images and accidentally moved back to using jdk8 while `2.7.x` line used jdk11. We're now creating images for both jdk8 and jdk11 with postfixed tags while defaulting normal image to jdk11. This allows us to easily add additional images in future. Essentially we now have: ``` springcloud/spring-cloud-dataflow-server:2.8.1 springcloud/spring-cloud-dataflow-server:2.8.1-jdk8 springcloud/spring-cloud-dataflow-server:2.8.1-jdk11 springcloud/spring-cloud-dataflow-composed-task-runner:2.8.1 springcloud/spring-cloud-dataflow-composed-task-runner:2.8.1-jdk11 springcloud/spring-cloud-dataflow-composed-task-runner:2.8.1-jdk8 springcloud/spring-cloud-skipper-server:2.7.1 springcloud/spring-cloud-skipper-server:2.7.1-jdk8 springcloud/spring-cloud-skipper-server:2.7.1-jdk11 ``` ",1.0,"Changes in images - In a `2.8.0` release we started to use paketo to package images and accidentally moved back to using jdk8 while `2.7.x` line used jdk11. We're now creating images for both jdk8 and jdk11 with postfixed tags while defaulting normal image to jdk11. This allows us to easily add additional images in future. 
Essentially we now have: ``` springcloud/spring-cloud-dataflow-server:2.8.1 springcloud/spring-cloud-dataflow-server:2.8.1-jdk8 springcloud/spring-cloud-dataflow-server:2.8.1-jdk11 springcloud/spring-cloud-dataflow-composed-task-runner:2.8.1 springcloud/spring-cloud-dataflow-composed-task-runner:2.8.1-jdk11 springcloud/spring-cloud-dataflow-composed-task-runner:2.8.1-jdk8 springcloud/spring-cloud-skipper-server:2.7.1 springcloud/spring-cloud-skipper-server:2.7.1-jdk8 springcloud/spring-cloud-skipper-server:2.7.1-jdk11 ``` ",1,changes in images in a release we started to use paketo to package images and accidentally moved back to using while x line used we re now creating images for both and with postfixed tags while defaulting normal image to this allows us to easily add additional images in future essentially we now have springcloud spring cloud dataflow server springcloud spring cloud dataflow server springcloud spring cloud dataflow server springcloud spring cloud dataflow composed task runner springcloud spring cloud dataflow composed task runner springcloud spring cloud dataflow composed task runner springcloud spring cloud skipper server springcloud spring cloud skipper server springcloud spring cloud skipper server ,1 158,4232056828.0,IssuesEvent,2016-07-04 19:48:36,eclipse/smarthome,https://api.github.com/repos/eclipse/smarthome,closed,Rules are not working as expected,Automation bug," Sorry about that very unspecific title of this issue. I will change it as soon as we / I have a better one. # Short description: * rules are not executed anymore after restart * after disable and enable the rule is executed * the now executed rule executed the action if the condition does not met # Long description: I have created to rules. Rule 1: * if item 'zwave_device_852ab7b3_node27_switch_binary' state is set to ON (using trigger and condition) * set 'sonos_zoneplayer_RINCON_000E588375A001400_control' to PLAY Rule 2: * if item 'zwave_device_852ab7b3_node27_switch_binary' state is set to OFF (using trigger and condition) * set 'sonos_zoneplayer_RINCON_000E588375A001400_control' to PAUSE After the rules have been created, all has been working. I restarted the system (at least once). Now I realized that the rules are not working anymore. 
I filtered the log a little bit, because it contains a lot of messages ```text 2016-04-28 08:25:30,465 | INFO | SH-safeCall-1505 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'zwave_device_852ab7b3_node27_switch_binary' received command ON 2016-04-28 08:25:30,481 | INFO | SH-safeCall-1507 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:25:30,587 | INFO | SH-safeCall-1509 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:25:30,943 | INFO | SH-safeCall-1507 | ItemStateChangedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary changed from OFF to ON 2016-04-28 08:25:37,907 | INFO | SH-safeCall-1507 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'zwave_device_852ab7b3_node27_switch_binary' received command OFF 2016-04-28 08:25:37,920 | INFO | SH-safeCall-1506 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:25:37,976 | INFO | SH-safeCall-1510 | ItemStateChangedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary changed from ON to OFF 2016-04-28 08:25:38,054 | INFO | SH-safeCall-1510 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF ``` There are no rules executed. After that I disabled and enabled the first rule. ```text 2016-04-28 08:26:10,758 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: DISABLED 2016-04-28 08:26:11,085 | INFO | SH-safeCall-1506 | RuleUpdatedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Rule 'rule_1' has been updated. 2016-04-28 08:26:11,089 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: DISABLED 2016-04-28 08:26:11,095 | INFO | SH-safeCall-1506 | RuleUpdatedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Rule 'rule_1' has been updated. 2016-04-28 08:26:11,099 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: DISABLED 2016-04-28 08:26:12,285 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: NOT_INITIALIZED 2016-04-28 08:26:12,413 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:12,611 | INFO | SH-safeCall-1506 | RuleUpdatedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Rule 'rule_1' has been updated. 2016-04-28 08:26:12,686 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: NOT_INITIALIZED 2016-04-28 08:26:12,730 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:12,743 | INFO | SH-safeCall-1506 | RuleUpdatedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Rule 'rule_1' has been updated. 
2016-04-28 08:26:12,772 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: NOT_INITIALIZED 2016-04-28 08:26:12,799 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE ``` Now (OFF => ON), the first rule is triggered, and the action is executed. ```text 2016-04-28 08:26:21,636 | INFO | SH-safeCall-1510 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node6_sensor_binary updated to ON 2016-04-28 08:26:25,563 | INFO | SH-safeCall-1510 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'zwave_device_852ab7b3_node27_switch_binary' received command ON 2016-04-28 08:26:25,607 | INFO | SH-safeCall-1508 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:25,616 | INFO | SH-safeCall-1510 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:25,677 | INFO | SH-safeCall-1510 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:25,677 | INFO | SH-safeCall-1506 | ItemStateChangedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary changed from OFF to ON 2016-04-28 08:26:25,690 | INFO | SH-safeCall-1509 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:25,698 | INFO | SH-safeCall-1510 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:25,700 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:25,700 | INFO | SH-safeCall-1509 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:25,739 | INFO | SH-safeCall-1505 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:25,743 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:25,945 | INFO | SH-safeCall-1508 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY ``` If I switch the trigger to OFF, I would assume, that rule 1 is triggered, but the condition will break the execution. But why is there a ""received command PLAY""? 
```text 2016-04-28 08:26:29,879 | INFO | SH-safeCall-1509 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'zwave_device_852ab7b3_node27_switch_binary' received command OFF 2016-04-28 08:26:29,889 | INFO | SH-safeCall-1507 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:29,928 | INFO | SH-safeCall-1509 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:26:29,937 | INFO | SH-safeCall-1507 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:29,943 | INFO | SH-safeCall-1510 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:29,950 | INFO | SH-safeCall-1510 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:30,052 | INFO | SH-safeCall-1509 | ItemStateChangedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary changed from ON to OFF 2016-04-28 08:26:30,214 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:30,214 | INFO | SH-safeCall-1511 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:26:30,217 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:30,265 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:26:30,267 | INFO | SH-safeCall-1511 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:30,270 | INFO | SH-safeCall-1511 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:30,350 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:30,353 | INFO | SH-safeCall-1508 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:30,357 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:26:30,438 | INFO | SH-safeCall-1508 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:30,442 | INFO | SH-safeCall-1508 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:30,445 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:26:30,452 | INFO | SH-safeCall-1510 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 
08:26:30,454 | INFO | SH-safeCall-1510 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:30,459 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:30,493 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:26:30,494 | INFO | SH-safeCall-1510 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:30,496 | INFO | SH-safeCall-1510 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:30,539 | INFO | SH-safeCall-1509 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:26:30,539 | INFO | SH-safeCall-1508 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:30,546 | INFO | SH-safeCall-1508 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:30,605 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:30,607 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:30,610 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF ``` OFF => ON seems to be okay. 
```text 2016-04-28 08:26:37,214 | INFO | SH-safeCall-1508 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'zwave_device_852ab7b3_node27_switch_binary' received command ON 2016-04-28 08:26:37,225 | INFO | SH-safeCall-1508 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:37,227 | INFO | SH-safeCall-1511 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:37,265 | INFO | SH-safeCall-1511 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:37,272 | INFO | SH-safeCall-1511 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:37,280 | INFO | SH-safeCall-1510 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:37,299 | INFO | SH-safeCall-1505 | ItemStateChangedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary changed from OFF to ON 2016-04-28 08:26:37,354 | INFO | SH-safeCall-1506 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:37,357 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:37,391 | INFO | SH-safeCall-1506 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:37,397 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:37,400 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:37,459 | INFO | SH-safeCall-1508 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:37,459 | INFO | SH-safeCall-1506 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:37,495 | INFO | SH-safeCall-1506 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:37,499 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:37,501 | INFO | SH-safeCall-1508 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:37,620 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:37,623 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:37,646 | INFO | SH-safeCall-1506 
| ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:37,649 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:37,653 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:37,733 | INFO | SH-safeCall-1509 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:37,744 | INFO | SH-safeCall-1506 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:37,763 | INFO | SH-safeCall-1511 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:37,766 | INFO | SH-safeCall-1511 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:37,771 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:37,823 | INFO | SH-safeCall-1511 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:37,824 | INFO | SH-safeCall-1508 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_serial_zstick_852ab7b3_serial_sof updated to 8118 2016-04-28 08:26:37,824 | INFO | SH-safeCall-1506 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:37,854 | INFO | SH-safeCall-1508 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:37,858 | INFO | SH-safeCall-1511 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:37,858 | INFO | SH-safeCall-1508 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:37,897 | INFO | SH-safeCall-1511 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:37,898 | INFO | SH-safeCall-1508 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:37,926 | INFO | SH-safeCall-1511 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:37,930 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:37,931 | INFO | SH-safeCall-1511 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:37,946 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - 
org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:37,946 | INFO | SH-safeCall-1508 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:37,981 | INFO | SH-safeCall-1508 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:37,984 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:37,985 | INFO | SH-safeCall-1508 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY ``` Here again, a ON => OFF, also executed the action (but the condition should break the rule). ```text 2016-04-28 08:26:45,690 | INFO | SH-safeCall-1508 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'zwave_device_852ab7b3_node27_switch_binary' received command OFF 2016-04-28 08:26:45,699 | INFO | SH-safeCall-1508 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:45,702 | INFO | SH-safeCall-1506 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:26:45,750 | INFO | SH-safeCall-1509 | ItemStateChangedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary changed from ON to OFF 2016-04-28 08:26:45,757 | INFO | SH-safeCall-1506 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:45,765 | INFO | SH-safeCall-1509 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:45,767 | INFO | SH-safeCall-1511 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:46,147 | INFO | SH-safeCall-1510 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:26:46,148 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:46,154 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE ``` Rule 1 ```text { ""enabled"": true, ""status"": { ""status"": ""IDLE"", ""statusDetail"": ""NONE"" }, ""triggers"": [ { ""id"": ""trigger_1"", ""label"": ""Item State Trigger"", ""description"": ""This triggers a rule if an items state changed"", ""configuration"": { ""itemName"": ""zwave_device_852ab7b3_node27_switch_binary"" }, ""type"": ""ItemStateChangeTrigger"" } ], ""conditions"": [ { ""inputs"": {}, ""id"": ""condition_2"", ""label"": ""Item state condition"", ""description"": ""compares the items current state with the given"", ""configuration"": { ""itemName"": ""zwave_device_852ab7b3_node27_switch_binary"", ""state"": ""ON"", ""operator"": ""="" }, ""type"": ""ItemStateCondition"" } ], ""actions"": [ { ""inputs"": {}, ""id"": ""action_3"", ""label"": ""Post 
command to an item"", ""description"": ""posts commands on items"", ""configuration"": { ""itemName"": ""sonos_zoneplayer_RINCON_000E588375A001400_control"", ""command"": ""PLAY"" }, ""type"": ""ItemPostCommandAction"" } ], ""configuration"": {}, ""configDescriptions"": [], ""uid"": ""rule_1"", ""name"": ""Bistro-Licht-Sonos-ON"", ""tags"": [] } ``` Rule 2 ```text { ""enabled"": true, ""status"": { ""status"": ""NOT_INITIALIZED"", ""statusDetail"": ""HANDLER_INITIALIZING_ERROR"", ""description"": ""Missing handler 'ItemStateChangeTrigger' for module 'trigger_1'\n"" }, ""triggers"": [ { ""id"": ""trigger_1"", ""label"": ""Item State Trigger"", ""description"": ""This triggers a rule if an items state changed"", ""configuration"": { ""itemName"": ""zwave_device_852ab7b3_node27_switch_binary"" }, ""type"": ""ItemStateChangeTrigger"" } ], ""conditions"": [ { ""inputs"": {}, ""id"": ""condition_2"", ""label"": ""Item state condition"", ""description"": ""compares the items current state with the given"", ""configuration"": { ""itemName"": ""zwave_device_852ab7b3_node27_switch_binary"", ""state"": ""OFF"", ""operator"": ""="" }, ""type"": ""ItemStateCondition"" } ], ""actions"": [ { ""inputs"": {}, ""id"": ""action_3"", ""label"": ""Post command to an item"", ""description"": ""posts commands on items"", ""configuration"": { ""itemName"": ""sonos_zoneplayer_RINCON_000E588375A001400_control"", ""command"": ""PAUSE"" }, ""type"": ""ItemPostCommandAction"" } ], ""configuration"": {}, ""configDescriptions"": [], ""uid"": ""rule_2"", ""name"": ""Bistro-Licht-Sonos-OFF"", ""tags"": [] } ```",1.0,"Rules are not working as expected - Sorry about that very unspecific title of this issue. I will change it as soon as we / I have a better one. # Short description: * rules are not executed anymore after restart * after disable and enable the rule is executed * the now executed rule executed the action if the condition does not met # Long description: I have created to rules. Rule 1: * if item 'zwave_device_852ab7b3_node27_switch_binary' state is set to ON (using trigger and condition) * set 'sonos_zoneplayer_RINCON_000E588375A001400_control' to PLAY Rule 2: * if item 'zwave_device_852ab7b3_node27_switch_binary' state is set to OFF (using trigger and condition) * set 'sonos_zoneplayer_RINCON_000E588375A001400_control' to PAUSE After the rules have been created, all has been working. I restarted the system (at least once). Now I realized that the rules are not working anymore. 
I filtered the log a little bit, because it contains a lot of messages ```text 2016-04-28 08:25:30,465 | INFO | SH-safeCall-1505 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'zwave_device_852ab7b3_node27_switch_binary' received command ON 2016-04-28 08:25:30,481 | INFO | SH-safeCall-1507 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:25:30,587 | INFO | SH-safeCall-1509 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:25:30,943 | INFO | SH-safeCall-1507 | ItemStateChangedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary changed from OFF to ON 2016-04-28 08:25:37,907 | INFO | SH-safeCall-1507 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'zwave_device_852ab7b3_node27_switch_binary' received command OFF 2016-04-28 08:25:37,920 | INFO | SH-safeCall-1506 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:25:37,976 | INFO | SH-safeCall-1510 | ItemStateChangedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary changed from ON to OFF 2016-04-28 08:25:38,054 | INFO | SH-safeCall-1510 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF ``` There are no rules executed. After that I disabled and enabled the first rule. ```text 2016-04-28 08:26:10,758 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: DISABLED 2016-04-28 08:26:11,085 | INFO | SH-safeCall-1506 | RuleUpdatedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Rule 'rule_1' has been updated. 2016-04-28 08:26:11,089 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: DISABLED 2016-04-28 08:26:11,095 | INFO | SH-safeCall-1506 | RuleUpdatedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Rule 'rule_1' has been updated. 2016-04-28 08:26:11,099 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: DISABLED 2016-04-28 08:26:12,285 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: NOT_INITIALIZED 2016-04-28 08:26:12,413 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:12,611 | INFO | SH-safeCall-1506 | RuleUpdatedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Rule 'rule_1' has been updated. 2016-04-28 08:26:12,686 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: NOT_INITIALIZED 2016-04-28 08:26:12,730 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:12,743 | INFO | SH-safeCall-1506 | RuleUpdatedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Rule 'rule_1' has been updated. 
2016-04-28 08:26:12,772 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: NOT_INITIALIZED 2016-04-28 08:26:12,799 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE ``` Now (OFF => ON), the first rule is triggered, and the action is executed. ```text 2016-04-28 08:26:21,636 | INFO | SH-safeCall-1510 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node6_sensor_binary updated to ON 2016-04-28 08:26:25,563 | INFO | SH-safeCall-1510 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'zwave_device_852ab7b3_node27_switch_binary' received command ON 2016-04-28 08:26:25,607 | INFO | SH-safeCall-1508 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:25,616 | INFO | SH-safeCall-1510 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:25,677 | INFO | SH-safeCall-1510 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:25,677 | INFO | SH-safeCall-1506 | ItemStateChangedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary changed from OFF to ON 2016-04-28 08:26:25,690 | INFO | SH-safeCall-1509 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:25,698 | INFO | SH-safeCall-1510 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:25,700 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:25,700 | INFO | SH-safeCall-1509 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:25,739 | INFO | SH-safeCall-1505 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:25,743 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:25,945 | INFO | SH-safeCall-1508 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY ``` If I switch the trigger to OFF, I would assume, that rule 1 is triggered, but the condition will break the execution. But why is there a ""received command PLAY""? 
```text 2016-04-28 08:26:29,879 | INFO | SH-safeCall-1509 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'zwave_device_852ab7b3_node27_switch_binary' received command OFF 2016-04-28 08:26:29,889 | INFO | SH-safeCall-1507 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:29,928 | INFO | SH-safeCall-1509 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:26:29,937 | INFO | SH-safeCall-1507 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:29,943 | INFO | SH-safeCall-1510 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:29,950 | INFO | SH-safeCall-1510 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:30,052 | INFO | SH-safeCall-1509 | ItemStateChangedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary changed from ON to OFF 2016-04-28 08:26:30,214 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:30,214 | INFO | SH-safeCall-1511 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:26:30,217 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:30,265 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:26:30,267 | INFO | SH-safeCall-1511 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:30,270 | INFO | SH-safeCall-1511 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:30,350 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:30,353 | INFO | SH-safeCall-1508 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:30,357 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:26:30,438 | INFO | SH-safeCall-1508 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:30,442 | INFO | SH-safeCall-1508 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:30,445 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:26:30,452 | INFO | SH-safeCall-1510 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 
08:26:30,454 | INFO | SH-safeCall-1510 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:30,459 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:30,493 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:26:30,494 | INFO | SH-safeCall-1510 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:30,496 | INFO | SH-safeCall-1510 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:30,539 | INFO | SH-safeCall-1509 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:26:30,539 | INFO | SH-safeCall-1508 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:30,546 | INFO | SH-safeCall-1508 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:30,605 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:30,607 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:30,610 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF ``` OFF => ON seems to be okay. 
```text 2016-04-28 08:26:37,214 | INFO | SH-safeCall-1508 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'zwave_device_852ab7b3_node27_switch_binary' received command ON 2016-04-28 08:26:37,225 | INFO | SH-safeCall-1508 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:37,227 | INFO | SH-safeCall-1511 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:37,265 | INFO | SH-safeCall-1511 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:37,272 | INFO | SH-safeCall-1511 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:37,280 | INFO | SH-safeCall-1510 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:37,299 | INFO | SH-safeCall-1505 | ItemStateChangedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary changed from OFF to ON 2016-04-28 08:26:37,354 | INFO | SH-safeCall-1506 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:37,357 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:37,391 | INFO | SH-safeCall-1506 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:37,397 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:37,400 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:37,459 | INFO | SH-safeCall-1508 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:37,459 | INFO | SH-safeCall-1506 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:37,495 | INFO | SH-safeCall-1506 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:37,499 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:37,501 | INFO | SH-safeCall-1508 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:37,620 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:37,623 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:37,646 | INFO | SH-safeCall-1506 
| ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:37,649 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:37,653 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:37,733 | INFO | SH-safeCall-1509 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:37,744 | INFO | SH-safeCall-1506 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:37,763 | INFO | SH-safeCall-1511 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:37,766 | INFO | SH-safeCall-1511 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:37,771 | INFO | SH-safeCall-1505 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:37,823 | INFO | SH-safeCall-1511 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:37,824 | INFO | SH-safeCall-1508 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_serial_zstick_852ab7b3_serial_sof updated to 8118 2016-04-28 08:26:37,824 | INFO | SH-safeCall-1506 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:37,854 | INFO | SH-safeCall-1508 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:37,858 | INFO | SH-safeCall-1511 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:37,858 | INFO | SH-safeCall-1508 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:37,897 | INFO | SH-safeCall-1511 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:37,898 | INFO | SH-safeCall-1508 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:37,926 | INFO | SH-safeCall-1511 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:37,930 | INFO | SH-safeCall-1506 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:37,931 | INFO | SH-safeCall-1511 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:37,946 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - 
org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:37,946 | INFO | SH-safeCall-1508 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to ON 2016-04-28 08:26:37,981 | INFO | SH-safeCall-1508 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:37,984 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:37,985 | INFO | SH-safeCall-1508 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY ``` Here again, a ON => OFF, also executed the action (but the condition should break the rule). ```text 2016-04-28 08:26:45,690 | INFO | SH-safeCall-1508 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'zwave_device_852ab7b3_node27_switch_binary' received command OFF 2016-04-28 08:26:45,699 | INFO | SH-safeCall-1508 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:45,702 | INFO | SH-safeCall-1506 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:26:45,750 | INFO | SH-safeCall-1509 | ItemStateChangedEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary changed from ON to OFF 2016-04-28 08:26:45,757 | INFO | SH-safeCall-1506 | ItemCommandEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | Item 'sonos_zoneplayer_RINCON_000E588375A001400_control' received command PLAY 2016-04-28 08:26:45,765 | INFO | SH-safeCall-1509 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE 2016-04-28 08:26:45,767 | INFO | SH-safeCall-1511 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | sonos_zoneplayer_RINCON_000E588375A001400_control updated to PLAY 2016-04-28 08:26:46,147 | INFO | SH-safeCall-1510 | ItemStateEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | zwave_device_852ab7b3_node27_switch_binary updated to OFF 2016-04-28 08:26:46,148 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: RUNNING 2016-04-28 08:26:46,154 | INFO | SH-safeCall-1505 | RuleStatusInfoEvent | 181 - org.eclipse.smarthome.io.monitor - 0.8.0.201604271228 | rule_1 updated: IDLE ``` Rule 1 ```text { ""enabled"": true, ""status"": { ""status"": ""IDLE"", ""statusDetail"": ""NONE"" }, ""triggers"": [ { ""id"": ""trigger_1"", ""label"": ""Item State Trigger"", ""description"": ""This triggers a rule if an items state changed"", ""configuration"": { ""itemName"": ""zwave_device_852ab7b3_node27_switch_binary"" }, ""type"": ""ItemStateChangeTrigger"" } ], ""conditions"": [ { ""inputs"": {}, ""id"": ""condition_2"", ""label"": ""Item state condition"", ""description"": ""compares the items current state with the given"", ""configuration"": { ""itemName"": ""zwave_device_852ab7b3_node27_switch_binary"", ""state"": ""ON"", ""operator"": ""="" }, ""type"": ""ItemStateCondition"" } ], ""actions"": [ { ""inputs"": {}, ""id"": ""action_3"", ""label"": ""Post 
command to an item"", ""description"": ""posts commands on items"", ""configuration"": { ""itemName"": ""sonos_zoneplayer_RINCON_000E588375A001400_control"", ""command"": ""PLAY"" }, ""type"": ""ItemPostCommandAction"" } ], ""configuration"": {}, ""configDescriptions"": [], ""uid"": ""rule_1"", ""name"": ""Bistro-Licht-Sonos-ON"", ""tags"": [] } ``` Rule 2 ```text { ""enabled"": true, ""status"": { ""status"": ""NOT_INITIALIZED"", ""statusDetail"": ""HANDLER_INITIALIZING_ERROR"", ""description"": ""Missing handler 'ItemStateChangeTrigger' for module 'trigger_1'\n"" }, ""triggers"": [ { ""id"": ""trigger_1"", ""label"": ""Item State Trigger"", ""description"": ""This triggers a rule if an items state changed"", ""configuration"": { ""itemName"": ""zwave_device_852ab7b3_node27_switch_binary"" }, ""type"": ""ItemStateChangeTrigger"" } ], ""conditions"": [ { ""inputs"": {}, ""id"": ""condition_2"", ""label"": ""Item state condition"", ""description"": ""compares the items current state with the given"", ""configuration"": { ""itemName"": ""zwave_device_852ab7b3_node27_switch_binary"", ""state"": ""OFF"", ""operator"": ""="" }, ""type"": ""ItemStateCondition"" } ], ""actions"": [ { ""inputs"": {}, ""id"": ""action_3"", ""label"": ""Post command to an item"", ""description"": ""posts commands on items"", ""configuration"": { ""itemName"": ""sonos_zoneplayer_RINCON_000E588375A001400_control"", ""command"": ""PAUSE"" }, ""type"": ""ItemPostCommandAction"" } ], ""configuration"": {}, ""configDescriptions"": [], ""uid"": ""rule_2"", ""name"": ""Bistro-Licht-Sonos-OFF"", ""tags"": [] } ```",1,rules are not working as expected sorry about that very unspecific title of this issue i will change it as soon as we i have a better one short description rules are not executed anymore after restart after disable and enable the rule is executed the now executed rule executed the action if the condition does not met long description i have created to rules rule if item zwave device switch binary state is set to on using trigger and condition set sonos zoneplayer rincon control to play rule if item zwave device switch binary state is set to off using trigger and condition set sonos zoneplayer rincon control to pause after the rules have been created all has been working i restarted the system at least once now i realized that the rules are not working anymore i filtered the log a little bit because it contains a lot of messages text info sh safecall itemcommandevent org eclipse smarthome io monitor item zwave device switch binary received command on info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to on info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to on info sh safecall itemstatechangedevent org eclipse smarthome io monitor zwave device switch binary changed from off to on info sh safecall itemcommandevent org eclipse smarthome io monitor item zwave device switch binary received command off info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to off info sh safecall itemstatechangedevent org eclipse smarthome io monitor zwave device switch binary changed from on to off info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to off there are no rules executed after that i disabled and enabled the first rule text info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated disabled info sh safecall 
ruleupdatedevent org eclipse smarthome io monitor rule rule has been updated info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated disabled info sh safecall ruleupdatedevent org eclipse smarthome io monitor rule rule has been updated info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated disabled info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated not initialized info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall ruleupdatedevent org eclipse smarthome io monitor rule rule has been updated info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated not initialized info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall ruleupdatedevent org eclipse smarthome io monitor rule rule has been updated info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated not initialized info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle now off on the first rule is triggered and the action is executed text info sh safecall itemstateevent org eclipse smarthome io monitor zwave device sensor binary updated to on info sh safecall itemcommandevent org eclipse smarthome io monitor item zwave device switch binary received command on info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to on info sh safecall itemcommandevent org eclipse smarthome io monitor item sonos zoneplayer rincon control received command play info sh safecall itemstatechangedevent org eclipse smarthome io monitor zwave device switch binary changed from off to on info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to on info sh safecall itemstateevent org eclipse smarthome io monitor sonos zoneplayer rincon control updated to play info sh safecall itemcommandevent org eclipse smarthome io monitor item sonos zoneplayer rincon control received command play info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall itemstateevent org eclipse smarthome io monitor sonos zoneplayer rincon control updated to play if i switch the trigger to off i would assume that rule is triggered but the condition will break the execution but why is there a received command play text info sh safecall itemcommandevent org eclipse smarthome io monitor item zwave device switch binary received command off info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to off info sh safecall itemcommandevent org eclipse smarthome io monitor item sonos zoneplayer rincon control received command play info sh safecall itemstateevent org eclipse smarthome io monitor sonos zoneplayer rincon control updated to play info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall itemstatechangedevent org eclipse smarthome io monitor zwave device switch binary changed from on to off info sh safecall rulestatusinfoevent org eclipse smarthome 
io monitor rule updated running info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to off info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to off info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to off info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to off info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to off info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to off info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to off info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to off off on seems to be okay text info sh safecall itemcommandevent org eclipse smarthome io monitor item zwave device switch binary received command on info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to on info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh safecall itemcommandevent org eclipse smarthome io monitor item sonos zoneplayer rincon control received command play info sh safecall itemstateevent org eclipse smarthome io monitor sonos zoneplayer rincon control updated to play info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall itemstatechangedevent org eclipse smarthome io monitor zwave device switch binary changed from off to on info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to on info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh safecall itemcommandevent org eclipse smarthome io monitor item sonos zoneplayer rincon control received command play info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall itemstateevent org eclipse smarthome io monitor sonos zoneplayer rincon control updated to play info sh 
safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to on info sh safecall itemcommandevent org eclipse smarthome io monitor item sonos zoneplayer rincon control received command play info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall itemstateevent org eclipse smarthome io monitor sonos zoneplayer rincon control updated to play info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to on info sh safecall itemcommandevent org eclipse smarthome io monitor item sonos zoneplayer rincon control received command play info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall itemstateevent org eclipse smarthome io monitor sonos zoneplayer rincon control updated to play info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to on info sh safecall itemcommandevent org eclipse smarthome io monitor item sonos zoneplayer rincon control received command play info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall itemstateevent org eclipse smarthome io monitor sonos zoneplayer rincon control updated to play info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh safecall itemstateevent org eclipse smarthome io monitor zwave serial zstick serial sof updated to info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to on info sh safecall itemcommandevent org eclipse smarthome io monitor item sonos zoneplayer rincon control received command play info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall itemstateevent org eclipse smarthome io monitor sonos zoneplayer rincon control updated to play info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to on info sh safecall itemcommandevent org eclipse smarthome io monitor item sonos zoneplayer rincon control received command play info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall itemstateevent org eclipse smarthome io monitor sonos zoneplayer rincon control updated to play info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to on info sh safecall itemcommandevent org eclipse smarthome io monitor item sonos zoneplayer rincon control received command play info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall itemstateevent org eclipse smarthome io monitor sonos zoneplayer rincon control updated to play here again a on off also executed the action but the condition should break the rule text info sh safecall itemcommandevent org eclipse smarthome io monitor item zwave device switch binary received command off info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh 
safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to off info sh safecall itemstatechangedevent org eclipse smarthome io monitor zwave device switch binary changed from on to off info sh safecall itemcommandevent org eclipse smarthome io monitor item sonos zoneplayer rincon control received command play info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle info sh safecall itemstateevent org eclipse smarthome io monitor sonos zoneplayer rincon control updated to play info sh safecall itemstateevent org eclipse smarthome io monitor zwave device switch binary updated to off info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated running info sh safecall rulestatusinfoevent org eclipse smarthome io monitor rule updated idle rule text enabled true status status idle statusdetail none triggers id trigger label item state trigger description this triggers a rule if an items state changed configuration itemname zwave device switch binary type itemstatechangetrigger conditions inputs id condition label item state condition description compares the items current state with the given configuration itemname zwave device switch binary state on operator type itemstatecondition actions inputs id action label post command to an item description posts commands on items configuration itemname sonos zoneplayer rincon control command play type itempostcommandaction configuration configdescriptions uid rule name bistro licht sonos on tags rule text enabled true status status not initialized statusdetail handler initializing error description missing handler itemstatechangetrigger for module trigger n triggers id trigger label item state trigger description this triggers a rule if an items state changed configuration itemname zwave device switch binary type itemstatechangetrigger conditions inputs id condition label item state condition description compares the items current state with the given configuration itemname zwave device switch binary state off operator type itemstatecondition actions inputs id action label post command to an item description posts commands on items configuration itemname sonos zoneplayer rincon control command pause type itempostcommandaction configuration configdescriptions uid rule name bistro licht sonos off tags ,1 169344,20841647252.0,IssuesEvent,2022-03-21 01:13:00,akshat702/cart-ionic,https://api.github.com/repos/akshat702/cart-ionic,opened,CVE-2022-24771 (High) detected in node-forge-0.7.5.tgz,security vulnerability,"## CVE-2022-24771 - High Severity Vulnerability
Vulnerable Library - node-forge-0.7.5.tgz

JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.

Library home page: https://registry.npmjs.org/node-forge/-/node-forge-0.7.5.tgz

Path to dependency file: /cart/e2e/package.json

Path to vulnerable library: /cart/e2e/node_modules/node-forge/package.json,/cart/e2e/node_modules/node-forge/package.json

Dependency Hierarchy:
- build-angular-0.12.4.tgz (Root Library)
  - webpack-dev-server-3.1.14.tgz
    - selfsigned-1.10.4.tgz
      - :x: **node-forge-0.7.5.tgz** (Vulnerable Library)

Vulnerability Details

Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, the RSA PKCS#1 v1.5 signature verification code is lenient in checking the digest algorithm structure. This can allow a crafted structure that steals padding bytes and uses the unchecked portion of the PKCS#1 encoded message to forge a signature when a low public exponent is in use. The issue has been addressed in `node-forge` version 1.3.0. There are currently no known workarounds.
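
The strict check that mitigates this class of bug re-encodes the expected DigestInfo and compares the whole padded block byte-for-byte. A minimal sketch using Node's built-in crypto (not node-forge's internal code):

```ts
import { generateKeyPairSync, createSign, createVerify } from "node:crypto";

// Generate a throwaway RSA key pair for the demonstration.
const { publicKey, privateKey } = generateKeyPairSync("rsa", {
  modulusLength: 2048,
});

const message = Buffer.from("payload to sign");
const signature = createSign("sha256").update(message).sign(privateKey);

// A strict RSASSA-PKCS1-v1_5 verifier rejects anything that is not exactly
// 0x00 0x01 0xFF..0xFF 0x00 || DigestInfo(sha256, hash); a lenient parser of
// the digest algorithm structure (node-forge < 1.3.0) left room for forgery
// when a low public exponent is in use.
const ok = createVerify("sha256").update(message).verify(publicKey, signature);
console.log("signature valid:", ok); // true only for the genuine signature
```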

Publish Date: 2022-03-18

URL: CVE-2022-24771

CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
- Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: High
  - Availability Impact: None
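
The listed metrics correspond to the CVSS v3.1 vector AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N, and the 7.5 base score can be reproduced from the published v3.1 weights. A minimal sketch:

```ts
// CVSS v3.1 weights: AV:N=0.85, AC:L=0.77, PR:N=0.85, UI:N=0.85;
// C:N=0, I:H=0.56, A:N=0; scope unchanged.
const AV = 0.85, AC = 0.77, PR = 0.85, UI = 0.85;
const C = 0.0, I = 0.56, A = 0.0;

const iss = 1 - (1 - C) * (1 - I) * (1 - A);     // 0.56
const impact = 6.42 * iss;                       // 3.5952 (scope unchanged)
const exploitability = 8.22 * AV * AC * PR * UI; // ~3.8870

// CVSS "roundup": smallest one-decimal value >= x (epsilon guards floats).
const roundup = (x: number) => Math.ceil(x * 10 - 1e-9) / 10;

const base = impact <= 0 ? 0 : roundup(Math.min(impact + exploitability, 10));
console.log(base); // 7.5
```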

For more information on CVSS3 scores, see the CVSS v3.1 specification (https://www.first.org/cvss/).

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24771

Release Date: 2022-03-18

Fix Resolution: node-forge - 1.3.0
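
Since node-forge arrives here transitively (build-angular -> webpack-dev-server -> selfsigned), upgrading the root build-angular dependency may be the cleanest route. Where that is not practical, a package-manager pin such as npm's `"overrides": { "node-forge": "^1.3.0" }` (npm 8.3+) or a Yarn `resolutions` entry can force the fixed version; afterwards, confirm that only one node-forge version resolves in the lockfile.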

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-24771 (High) detected in node-forge-0.7.5.tgz - ## CVE-2022-24771 - High Severity Vulnerability
Vulnerable Library - node-forge-0.7.5.tgz

JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.

Library home page: https://registry.npmjs.org/node-forge/-/node-forge-0.7.5.tgz

Path to dependency file: /cart/e2e/package.json

Path to vulnerable library: /cart/e2e/node_modules/node-forge/package.json,/cart/e2e/node_modules/node-forge/package.json

Dependency Hierarchy:
- build-angular-0.12.4.tgz (Root Library)
  - webpack-dev-server-3.1.14.tgz
    - selfsigned-1.10.4.tgz
      - :x: **node-forge-0.7.5.tgz** (Vulnerable Library)

Vulnerability Details

Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, the RSA PKCS#1 v1.5 signature verification code is lenient in checking the digest algorithm structure. This can allow a crafted structure that steals padding bytes and uses the unchecked portion of the PKCS#1 encoded message to forge a signature when a low public exponent is in use. The issue has been addressed in `node-forge` version 1.3.0. There are currently no known workarounds.

Publish Date: 2022-03-18

URL: CVE-2022-24771

CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
- Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: High
  - Availability Impact: None

For more information on CVSS3 scores, see the CVSS v3.1 specification (https://www.first.org/cvss/).

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24771

Release Date: 2022-03-18

Fix Resolution: node-forge - 1.3.0

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in node forge tgz cve high severity vulnerability vulnerable library node forge tgz javascript implementations of network transports cryptography ciphers pki message digests and various utilities library home page a href path to dependency file cart package json path to vulnerable library cart node modules node forge package json cart node modules node forge package json dependency hierarchy build angular tgz root library webpack dev server tgz selfsigned tgz x node forge tgz vulnerable library vulnerability details forge also called node forge is a native implementation of transport layer security in javascript prior to version rsa pkcs signature verification code is lenient in checking the digest algorithm structure this can allow a crafted structure that steals padding bytes and uses unchecked portion of the pkcs encoded message to forge a signature when a low public exponent is being used the issue has been addressed in node forge version there are currently no known workarounds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node forge step up your open source security game with whitesource ,0 8667,27172061384.0,IssuesEvent,2023-02-17 20:25:14,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,"[7.2] File picker gets stuck at loading, but 7.0 get through",area:Picker Needs: Attention :wave: automation:Closed,"The setup for OD file picker is If I use the 7.2 version, it gets stuck at https://192.168.20.118:8181/abc/xyz/xyz0102e.jsf?oauth=%7B%22clientId%22%3A%22.... The version 7.0 works fine, but since I am going to upgrade to 7.2 for being able to pick file from sharepoint, I would like to ask for help. Thanks ",1.0,"[7.2] File picker gets stuck at loading, but 7.0 get through - The setup for OD file picker is If I use the 7.2 version, it gets stuck at https://192.168.20.118:8181/abc/xyz/xyz0102e.jsf?oauth=%7B%22clientId%22%3A%22.... The version 7.0 works fine, but since I am going to upgrade to 7.2 for being able to pick file from sharepoint, I would like to ask for help. Thanks ",1, file picker gets stuck at loading but get through the setup for od file picker is function launchonedrivepicker var odoptions clientid xxxxxxxx action share multiselect true advanced success function files frminput i urlreference null files cancel function error function e alert e onedrive open odoptions if i use the version it gets stuck at the version works fine but since i am going to upgrade to for being able to pick file from sharepoint i would like to ask for help thanks ,1 1048,9258377396.0,IssuesEvent,2019-03-17 15:13:50,spacemeshos/go-spacemesh,https://api.github.com/repos/spacemeshos/go-spacemesh,opened,Initialise new oracle_server in every test,CI automation,"# Overview / Motivation The oracle server should be deployed in every test from scratch. # The Task use setup to deploy oracle_server use teardown to destroy oracle_server # Implementation Notes TODO: Add links to relevant resources, specs, related issues, etc... 
# Contribution Guidelines Important: Issue assignment to developers will be by the order of their application and proficiency level according to the tasks' complexity. We will not assign tasks to developers who haven't introduced themselves on our Gitter [dev channel](https://gitter.im/spacemesh-os/Lobby) 1. Introduce yourself on go-spacemesh [dev chat channel](https://gitter.im/spacemesh-os/Lobby) - ask our team any question you may have about this task 2. Fork branch `develop` to your own repo and work in your repo 3. You must document all methods, enums and types with [godoc comments](https://blog.golang.org/godoc-documenting-go-code) 4. You must write Go unit tests for all types and methods when submitting a component, and integration tests if you submit a feature 5. When ready for code review, submit a PR from your repo back to branch `develop` 6. Attach relevant issue to PR ",1.0,"Initialise new oracle_server in every test - # Overview / Motivation The oracle server should be deployed in every test from scratch. # The Task use setup to deploy oracle_server use teardown to destroy oracle_server # Implementation Notes TODO: Add links to relevant resources, specs, related issues, etc... # Contribution Guidelines Important: Issue assignment to developers will be by the order of their application and proficiency level according to the tasks' complexity. We will not assign tasks to developers who haven't introduced themselves on our Gitter [dev channel](https://gitter.im/spacemesh-os/Lobby) 1. Introduce yourself on go-spacemesh [dev chat channel](https://gitter.im/spacemesh-os/Lobby) - ask our team any question you may have about this task 2. Fork branch `develop` to your own repo and work in your repo 3. You must document all methods, enums and types with [godoc comments](https://blog.golang.org/godoc-documenting-go-code) 4. You must write Go unit tests for all types and methods when submitting a component, and integration tests if you submit a feature 5. When ready for code review, submit a PR from your repo back to branch `develop` 6.
Attach relevant issue to PR ",1,initialise new oracle server in every test overview motivation the oracle server should be deployed in every test from scratch the task use setup to deploy oracle server use teardown to destroy oracle server implementation notes todo add links to relevant resources specs related issues etc contribution guidelines important issue assignment to developers will be by the order of their application and proficiency level according to the tasks complexity we will not assign tasks to developers who have nt introduced themselves on our gitter introduce yourself on go spacemesh ask our team any question you may have about this task fork branch develop to your own repo and work in your repo you must document all methods enums and types with you must write go unit tests for all types and methods when submitting a component and integration tests if you submit a feature when ready for code review submit a pr from your repo back to branch develop attach relevant issue to pr ,1 69887,9344829844.0,IssuesEvent,2019-03-30 01:21:40,fga-eps-mds/2019.1-Aix,https://api.github.com/repos/fga-eps-mds/2019.1-Aix,closed,Elaboração do EAP,Documentation EPS,"Elaborar documento de Estrutura Analítica do Projeto **Tarefas** - [x] Criar estrutura de documento no drive - [x] Levantar dados do documento com o grupo de EPS - [ ] Revisar documento - [ ] Passar documento para Github Pages **Critérios de aceitação** - [ ] O documento deve estar devidamente revisado e adicionado ao Github pages do grupo - [ ] Documentar na issue o link do arquivo onde se pode encontrar o documento. ",1.0,"Elaboração do EAP - Elaborar documento de Estrutura Analítica do Projeto **Tarefas** - [x] Criar estrutura de documento no drive - [x] Levantar dados do documento com o grupo de EPS - [ ] Revisar documento - [ ] Passar documento para Github Pages **Critérios de aceitação** - [ ] O documento deve estar devidamente revisado e adicionado ao Github pages do grupo - [ ] Documentar na issue o link do arquivo onde se pode encontrar o documento. ",0,elaboração do eap elaborar documento de estrutura analítica do projeto tarefas criar estrutura de documento no drive levantar dados do documento com o grupo de eps revisar documento passar documento para github pages critérios de aceitação o documento deve estar devidamente revisado e adicionado ao github pages do grupo documentar na issue o link do arquivo onde se pode encontrar o documento ,0 4450,16577825137.0,IssuesEvent,2021-05-31 07:46:51,jscoobyced/docker-nodejs-mariadb,https://api.github.com/repos/jscoobyced/docker-nodejs-mariadb,opened,Make build pass,automation,For now the build can't pass if there is no valid docker credentials in the GH Actions Secret. The build should check for the values first then skip those steps.,1.0,Make build pass - For now the build can't pass if there is no valid docker credentials in the GH Actions Secret. 
The build should check for the values first then skip those steps.,1,make build pass for now the build can t pass if there is no valid docker credentials in the gh actions secret the build should check for the values first then skip those steps ,1 3516,13904565040.0,IssuesEvent,2020-10-20 08:46:05,mozilla-mobile/focus-ios,https://api.github.com/repos/mozilla-mobile/focus-ios,opened,[XCUITests] testFindInPageURLBarElement fails on both Focus and Klar,UIAutomation,"Filing issue to investigate if the failure is due to real issue, intermittent or update needed with this test See Bitrise logs: https://addons-testing.bitrise.io/builds/a578196a3d9f9b4b/testreport/ca0a7428-c206-4279-afe7-f243c02150d5/testsuite/1/testcases?status=failed",1.0,"[XCUITests] testFindInPageURLBarElement fails on both Focus and Klar - Filing issue to investigate if the failure is due to real issue, intermittent or update needed with this test See Bitrise logs: https://addons-testing.bitrise.io/builds/a578196a3d9f9b4b/testreport/ca0a7428-c206-4279-afe7-f243c02150d5/testsuite/1/testcases?status=failed",1, testfindinpageurlbarelement fails on both focus and klar filing issue to investigate if the failure is due to real issue intermittent or update needed with this test see bitrise logs ,1 6621,23595797352.0,IssuesEvent,2022-08-23 19:06:44,rancher-sandbox/rancher-desktop,https://api.github.com/repos/rancher-sandbox/rancher-desktop,opened,[CI E2E] expand E2E CI on Mac M1,kind/enhancement area/e2e kind/automation area/ci,"### Problem Description Currently our E2E CI is running only on Linux. We need to expand it to run on Mac M1. ### Proposed Solution Trigger: Nightly or manual Job: same purpose of Cirrus CI - e2e tests Self-hosted GitHub runner: 1 provo Mac M1 runner ### Additional Information _No response_",1.0,"[CI E2E] expand E2E CI on Mac M1 - ### Problem Description Currently our E2E CI is running only on Linux. We need to expand it to run on Mac M1. ### Proposed Solution Trigger: Nightly or manual Job: same purpose of Cirrus CI - e2e tests Self-hosted GitHub runner: 1 provo Mac M1 runner ### Additional Information _No response_",1, expand ci on mac problem description currently our ci is running only on linux we need to expand it to run on mac proposed solution trigger nightly or manual job same purpose of cirrus ci tests self hosted github runner provo mac runner additional information no response ,1 565974,16773496998.0,IssuesEvent,2021-06-14 17:40:35,GoogleCloudPlatform/python-docs-samples,https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples,closed,cloud-sql.sql-server.client-side-encryption.snippets.query_and_decrypt_data_test: test_query_and_decrypt_data failed,api: cloudsql flakybot: flaky flakybot: issue priority: p1 samples type: bug,"This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: 1ab7826e29b175d6419ec22ed6c876255b7f7a19 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/8a7b8e9f-401a-4be5-b4a2-320666cb8264), [Sponge](http://sponge2/8a7b8e9f-401a-4be5-b4a2-320666cb8264) status: failed
Test output
Traceback (most recent call last):
  File ""/workspace/cloud-sql/sql-server/client-side-encryption/snippets/query_and_decrypt_data_test.py"", line 33, in setup_pool
    db_user = os.environ[""SQLSERVER_USER""]
  File ""/usr/local/lib/python3.7/os.py"", line 681, in __getitem__
    raise KeyError(key) from None
KeyError: 'SQLSERVER_USER'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File ""/workspace/cloud-sql/sql-server/client-side-encryption/snippets/query_and_decrypt_data_test.py"", line 39, in setup_pool
    ""The following env variables must be set to run these tests:""
Exception: The following env variables must be set to run these tests:SQLSERVER_USER, SQLSERVER_PASSWORD, SQLSERVER_DATABASE, SQLSERVER_HOST
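
The failure above is purely environmental: `setup_pool` reads the connection settings from `os.environ` and raises before any test runs. A minimal sketch of the guard pattern (assuming pytest; the fixture name and skip behavior are illustrative, not the repository's actual code):

```python
# Illustrative only: check required env vars up front and skip cleanly when
# they are absent, rather than failing setup with a KeyError as in the
# traceback above.
import os

import pytest

REQUIRED_VARS = (
    "SQLSERVER_USER",
    "SQLSERVER_PASSWORD",
    "SQLSERVER_DATABASE",
    "SQLSERVER_HOST",
)

@pytest.fixture(scope="module")
def sqlserver_env():
    missing = [name for name in REQUIRED_VARS if name not in os.environ]
    if missing:
        # Skipping keeps the suite green on machines without credentials.
        pytest.skip("missing env variables: " + ", ".join(missing))
    return {name: os.environ[name] for name in REQUIRED_VARS}
```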
",1.0,"cloud-sql.sql-server.client-side-encryption.snippets.query_and_decrypt_data_test: test_query_and_decrypt_data failed - This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: 1ab7826e29b175d6419ec22ed6c876255b7f7a19 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/8a7b8e9f-401a-4be5-b4a2-320666cb8264), [Sponge](http://sponge2/8a7b8e9f-401a-4be5-b4a2-320666cb8264) status: failed
Test output
Traceback (most recent call last):
  File ""/workspace/cloud-sql/sql-server/client-side-encryption/snippets/query_and_decrypt_data_test.py"", line 33, in setup_pool
    db_user = os.environ[""SQLSERVER_USER""]
  File ""/usr/local/lib/python3.7/os.py"", line 681, in __getitem__
    raise KeyError(key) from None
KeyError: 'SQLSERVER_USER'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File ""/workspace/cloud-sql/sql-server/client-side-encryption/snippets/query_and_decrypt_data_test.py"", line 39, in setup_pool
    ""The following env variables must be set to run these tests:""
Exception: The following env variables must be set to run these tests:SQLSERVER_USER, SQLSERVER_PASSWORD, SQLSERVER_DATABASE, SQLSERVER_HOST
",0,cloud sql sql server client side encryption snippets query and decrypt data test test query and decrypt data failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output traceback most recent call last file workspace cloud sql sql server client side encryption snippets query and decrypt data test py line in setup pool db user os environ file usr local lib os py line in getitem raise keyerror key from none keyerror sqlserver user during handling of the above exception another exception occurred traceback most recent call last file workspace cloud sql sql server client side encryption snippets query and decrypt data test py line in setup pool the following env variables must be set to run these tests exception the following env variables must be set to run these tests sqlserver user sqlserver password sqlserver database sqlserver host ,0 98386,16373812778.0,IssuesEvent,2021-05-15 17:39:53,hugh-whitesource/NodeGoat-1,https://api.github.com/repos/hugh-whitesource/NodeGoat-1,opened,"CVE-2018-1000620 (High) detected in cryptiles-0.2.2.tgz, cryptiles-2.0.5.tgz",security vulnerability,"## CVE-2018-1000620 - High Severity Vulnerability
Vulnerable Libraries - cryptiles-0.2.2.tgz, cryptiles-2.0.5.tgz

cryptiles-0.2.2.tgz

General purpose crypto utilities

Library home page: https://registry.npmjs.org/cryptiles/-/cryptiles-0.2.2.tgz

Path to dependency file: NodeGoat-1/package.json

Path to vulnerable library: NodeGoat-1/node_modules/zaproxy/node_modules/cryptiles/package.json

Dependency Hierarchy: - zaproxy-0.2.0.tgz (Root Library) - request-2.36.0.tgz - hawk-1.0.0.tgz - :x: **cryptiles-0.2.2.tgz** (Vulnerable Library)

cryptiles-2.0.5.tgz

General purpose crypto utilities

Library home page: https://registry.npmjs.org/cryptiles/-/cryptiles-2.0.5.tgz

Path to dependency file: NodeGoat-1/package.json

Path to vulnerable library: NodeGoat-1/node_modules/cryptiles/package.json,NodeGoat-1/node_modules/npm/node_modules/request/node_modules/hawk/node_modules/cryptiles/package.json

Dependency Hierarchy: - grunt-retire-0.3.12.tgz (Root Library) - request-2.67.0.tgz - hawk-3.1.3.tgz - :x: **cryptiles-2.0.5.tgz** (Vulnerable Library)

Found in HEAD commit: 1acb8446b41e455d2f087e892c9a9ce80609f601

Found in base branch: master

Vulnerability Details

Eran Hammer cryptiles version 4.1.1 and earlier contains a CWE-331 (Insufficient Entropy) vulnerability in the randomDigits() method, the result being that an attacker is more likely to be able to brute force a value that was supposed to be random. The attack appears to be exploitable depending upon the calling application. This vulnerability appears to have been fixed in 4.1.2.
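
To see why low entropy in digit generation matters, here is a deliberately simplified Python illustration (not the cryptiles code, which is JavaScript): digits drawn from a non-cryptographic PRNG become predictable once its internal state is recovered, whereas a CSPRNG-backed source does not.

```python
# Simplified illustration of CWE-331 (not the cryptiles implementation).
import random   # Mersenne Twister: fast, but its state is recoverable from output
import secrets  # OS CSPRNG: suitable for security-sensitive values

def weak_digits(n: int) -> str:
    rng = random.Random()
    return "".join(str(rng.randrange(10)) for _ in range(n))

def strong_digits(n: int) -> str:
    return "".join(str(secrets.randbelow(10)) for _ in range(n))
```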

Publish Date: 2018-07-09

URL: CVE-2018-1000620

CVSS 3 Score Details (9.8)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High
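
For reference, these base metrics correspond to the CVSS v3 vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H, which is what yields the 9.8 (Critical) base score above.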

For more information on CVSS3 Scores, see the CVSS v3 documentation at https://www.first.org/cvss/.

Suggested Fix

Type: Upgrade version

Origin: http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-1000620

Release Date: 2018-07-09

Fix Resolution: v4.1.2
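
Since cryptiles is a transitive dependency in both hierarchies above, picking up v4.1.2 in practice usually means upgrading the root libraries (zaproxy and grunt-retire) or the request/hawk versions they pin, rather than cryptiles directly.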

",True,"CVE-2018-1000620 (High) detected in cryptiles-0.2.2.tgz, cryptiles-2.0.5.tgz - ## CVE-2018-1000620 - High Severity Vulnerability
Vulnerable Libraries - cryptiles-0.2.2.tgz, cryptiles-2.0.5.tgz

cryptiles-0.2.2.tgz

General purpose crypto utilities

Library home page: https://registry.npmjs.org/cryptiles/-/cryptiles-0.2.2.tgz

Path to dependency file: NodeGoat-1/package.json

Path to vulnerable library: NodeGoat-1/node_modules/zaproxy/node_modules/cryptiles/package.json

Dependency Hierarchy: - zaproxy-0.2.0.tgz (Root Library) - request-2.36.0.tgz - hawk-1.0.0.tgz - :x: **cryptiles-0.2.2.tgz** (Vulnerable Library)

cryptiles-2.0.5.tgz

General purpose crypto utilities

Library home page: https://registry.npmjs.org/cryptiles/-/cryptiles-2.0.5.tgz

Path to dependency file: NodeGoat-1/package.json

Path to vulnerable library: NodeGoat-1/node_modules/cryptiles/package.json,NodeGoat-1/node_modules/npm/node_modules/request/node_modules/hawk/node_modules/cryptiles/package.json

Dependency Hierarchy: - grunt-retire-0.3.12.tgz (Root Library) - request-2.67.0.tgz - hawk-3.1.3.tgz - :x: **cryptiles-2.0.5.tgz** (Vulnerable Library)

Found in HEAD commit: 1acb8446b41e455d2f087e892c9a9ce80609f601

Found in base branch: master

Vulnerability Details

Eran Hammer cryptiles version 4.1.1 and earlier contains a CWE-331 (Insufficient Entropy) vulnerability in the randomDigits() method, the result being that an attacker is more likely to be able to brute force a value that was supposed to be random. The attack appears to be exploitable depending upon the calling application. This vulnerability appears to have been fixed in 4.1.2.

Publish Date: 2018-07-09

URL: CVE-2018-1000620

CVSS 3 Score Details (9.8)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, see the CVSS v3 documentation at https://www.first.org/cvss/.

Suggested Fix

Type: Upgrade version

Origin: http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-1000620

Release Date: 2018-07-09

Fix Resolution: v4.1.2

",0,cve high detected in cryptiles tgz cryptiles tgz cve high severity vulnerability vulnerable libraries cryptiles tgz cryptiles tgz cryptiles tgz general purpose crypto utilities library home page a href path to dependency file nodegoat package json path to vulnerable library nodegoat node modules zaproxy node modules cryptiles package json dependency hierarchy zaproxy tgz root library request tgz hawk tgz x cryptiles tgz vulnerable library cryptiles tgz general purpose crypto utilities library home page a href path to dependency file nodegoat package json path to vulnerable library nodegoat node modules cryptiles package json nodegoat node modules npm node modules request node modules hawk node modules cryptiles package json dependency hierarchy grunt retire tgz root library request tgz hawk tgz x cryptiles tgz vulnerable library found in head commit a href found in base branch master vulnerability details eran hammer cryptiles version earlier contains a cwe insufficient entropy vulnerability in randomdigits method that can result in an attacker is more likely to be able to brute force something that was supposed to be random this attack appear to be exploitable via depends upon the calling application this vulnerability appears to have been fixed in publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree zaproxy request hawk cryptiles isminimumfixversionavailable true minimumfixversion packagetype javascript node js packagename cryptiles packageversion packagefilepaths istransitivedependency true dependencytree grunt retire request hawk cryptiles isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails eran hammer cryptiles version earlier contains a cwe insufficient entropy vulnerability in randomdigits method that can result in an attacker is more likely to be able to brute force something that was supposed to be random this attack appear to be exploitable via depends upon the calling application this vulnerability appears to have been fixed in vulnerabilityurl ,0 4508,16743958825.0,IssuesEvent,2021-06-11 13:24:46,openml/automlbenchmark,https://api.github.com/repos/openml/automlbenchmark,opened,Cache Framework Dependencies in Github Workflows,automation enhancement,"The Github Workflow which runs the validation tests on the frameworks takes a long time, in part because of installation time (in particular for R packages). I think with careful caching, we can avoid the installation step and mount a cache instead whenever the framework installation has not changed. Let me know if this is a bad idea or I missed something. I think we need to cache the following folders: - FRAMEWORK/venv (Python) - FRAMEWORK/packages (R) The `FRAMEWORK/.installed` file probably has to be generated. 
We can load the installation from cache only if all of the following files are not changed (all paths relative to `automlbenchmark/frameworks`): - shared/setup.sh - shared/requirements.txt - FRAMEWORK/setup.sh - FRAMEWORK/requirements.txt ",1.0,"Cache Framework Dependencies in Github Workflows - The Github Workflow which runs the validation tests on the frameworks takes a long time, in part because of installation time (in particular for R packages). I think with careful caching, we can avoid the installation step and mount a cache instead whenever the framework installation has not changed. Let me know if this is a bad idea or I missed something. I think we need to cache the following folders: - FRAMEWORK/venv (Python) - FRAMEWORK/packages (R) The `FRAMEWORK/.installed` file probably has to be generated. We can load the installation from cache only if all of the following files are not changed (all paths relative to `automlbenchmark/frameworks`): - shared/setup.sh - shared/requirements.txt - FRAMEWORK/setup.sh - FRAMEWORK/requirements.txt ",1,cache framework dependencies in github workflows the github workflow which runs the validation tests on the frameworks takes a long time in part because of installation time in particular for r packages i think with careful caching we can avoid the installation step and mount a cache instead whenever the framework installation has not changed let me know if this is a bad idea or i missed something i think we need to cache the following folders framework venv python framework packages r the framework installed file probably has to be generated we can load the installation from cache only if all of the following files are not changed all paths relative to automlbenchmark frameworks shared setup sh shared requirements txt framework setup sh framework requirements txt ,1 260932,22680715098.0,IssuesEvent,2022-07-04 09:37:45,Azure/azure-sdk-for-python,https://api.github.com/repos/Azure/azure-sdk-for-python,opened,Key Vault keys/secrets/certificates Samples names improvement,KeyVault Client test-manual-pass,"**Issue Description:** When implementing automated tests for key vault samples, the samples keys/secrets/certificates names in **sync** and **async samples** are the same, and the error is reported as: ![image](https://user-images.githubusercontent.com/57166602/177125630-204126a9-7fb1-46db-b88a-0e6bac7e2c74.png) **Suggestion:** Add **async** to the names of keys/secrets/certificates in the `async samples` to distinguish the names, which is convenient for automatic testing of samples. For examples: - `helloWorldSecretName` in **sync samples**. - `helloWorldSecretNameAsync` in **async samples**. @lmazuel , @hector-norza , @heaths , @schaabs for notification.",1.0,"Key Vault keys/secrets/certificates Samples names improvement - **Issue Description:** When implementing automated tests for key vault samples, the samples keys/secrets/certificates names in **sync** and **async samples** are the same, and the error is reported as: ![image](https://user-images.githubusercontent.com/57166602/177125630-204126a9-7fb1-46db-b88a-0e6bac7e2c74.png) **Suggestion:** Add **async** to the names of keys/secrets/certificates in the `async samples` to distinguish the names, which is convenient for automatic testing of samples. For examples: - `helloWorldSecretName` in **sync samples**. - `helloWorldSecretNameAsync` in **async samples**. 
@lmazuel , @hector-norza , @heaths , @schaabs for notification.",0,key vault keys secrets certificates samples names improvement issue description when implementing automated tests for key vault samples the samples keys secrets certificates names in sync and async samples are the same and the error is reported as suggestion add async to the names of keys secrets certificates in the async samples to distinguish the names which is convenient for automatic testing of samples for examples helloworldsecretname in sync samples helloworldsecretnameasync in async samples lmazuel hector norza heaths schaabs for notification ,0 290271,25046258190.0,IssuesEvent,2022-11-05 09:32:59,finos/waltz,https://api.github.com/repos/finos/waltz,closed,Report Grid: Endpoint to trigger on-demand recalc of filter groups,fixed (test & close) QoL,"### Description This should be something we can embed on the page in the named note (see #6228) ### Resourcing We intend to contribute this feature",1.0,"Report Grid: Endpoint to trigger on-demand recalc of filter groups - ### Description This should be something we can embed on the page in the named note (see #6228) ### Resourcing We intend to contribute this feature",0,report grid endpoint to trigger on demand recalc of filter groups description this should be something we can embed on the page in the named note see resourcing we intend to contribute this feature,0 439,6515488510.0,IssuesEvent,2017-08-26 16:22:45,nirbos/TAWL,https://api.github.com/repos/nirbos/TAWL,closed,Git: Fremdbibliotheken und Plugins als Submodule inkludieren,Automation / Dev-Environment JavaScript,"Mittels Git gibt es die Möglichkeit Fremdbibliotheken und Plugins als externen Code (und externes Repository) ins eigene zu inkludieren. Das stellt vor allem eine leichtere handhabbare Möglichkeit von Updates dar. [Git Dokumenation](https://git-scm.com/book/en/v2/Git-Tools-Submodules)",1.0,"Git: Fremdbibliotheken und Plugins als Submodule inkludieren - Mittels Git gibt es die Möglichkeit Fremdbibliotheken und Plugins als externen Code (und externes Repository) ins eigene zu inkludieren. Das stellt vor allem eine leichtere handhabbare Möglichkeit von Updates dar. 
[Git Dokumenation](https://git-scm.com/book/en/v2/Git-Tools-Submodules)",1,git fremdbibliotheken und plugins als submodule inkludieren mittels git gibt es die möglichkeit fremdbibliotheken und plugins als externen code und externes repository ins eigene zu inkludieren das stellt vor allem eine leichtere handhabbare möglichkeit von updates dar ,1 2233,11615898250.0,IssuesEvent,2020-02-26 14:53:59,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,opened,[Bug] Fix intermittent UI test failure - createBookmarkFolderTest,eng:automation 🐞 bug,"``` androidx.test.espresso.NoMatchingViewException: No views in hierarchy found matching: with id: org.mozilla.fenix.debug:id/title View Hierarchy: +>DecorView{id=-1, visibility=VISIBLE, width=1440, height=2560, has-focus=true, has-focusable=true, has-window-focus=true, is-clickable=false, is-enabled=true, is-focused=false, is-focusable=false, is-layout-requested=false, is-selected=false, layout-params=WM.LayoutParams{(0,0)(fillxfill) sim=#10 ty=1 fl=#81010100 pfl=0x20000 wanim=0x7f130300 vsysui=0x2000 needsMenuKey=2}, tag=null, root-is-layout-requested=false, has-input-connection=false, x=0.0, y=0.0, child-count=2} | +->LinearLayout{id=-1, visibility=VISIBLE, width=1440, height=2392, has-focus=tr ``` https://console.firebase.google.com/u/0/project/moz-fenix/testlab/histories/bh.66b7091e15d53d45/matrices/8255769918756530997/executions/bs.c8afcc8803588cfd/testcases/1",1.0,"[Bug] Fix intermittent UI test failure - createBookmarkFolderTest - ``` androidx.test.espresso.NoMatchingViewException: No views in hierarchy found matching: with id: org.mozilla.fenix.debug:id/title View Hierarchy: +>DecorView{id=-1, visibility=VISIBLE, width=1440, height=2560, has-focus=true, has-focusable=true, has-window-focus=true, is-clickable=false, is-enabled=true, is-focused=false, is-focusable=false, is-layout-requested=false, is-selected=false, layout-params=WM.LayoutParams{(0,0)(fillxfill) sim=#10 ty=1 fl=#81010100 pfl=0x20000 wanim=0x7f130300 vsysui=0x2000 needsMenuKey=2}, tag=null, root-is-layout-requested=false, has-input-connection=false, x=0.0, y=0.0, child-count=2} | +->LinearLayout{id=-1, visibility=VISIBLE, width=1440, height=2392, has-focus=tr ``` https://console.firebase.google.com/u/0/project/moz-fenix/testlab/histories/bh.66b7091e15d53d45/matrices/8255769918756530997/executions/bs.c8afcc8803588cfd/testcases/1",1, fix intermittent ui test failure createbookmarkfoldertest androidx test espresso nomatchingviewexception no views in hierarchy found matching with id org mozilla fenix debug id title view hierarchy decorview id visibility visible width height has focus true has focusable true has window focus true is clickable false is enabled true is focused false is focusable false is layout requested false is selected false layout params wm layoutparams fillxfill sim ty fl pfl wanim vsysui needsmenukey tag null root is layout requested false has input connection false x y child count linearlayout id visibility visible width height has focus tr ,1 45749,5957763962.0,IssuesEvent,2017-05-29 04:31:07,oppia/oppia,https://api.github.com/repos/oppia/oppia,closed,"In the collection editor, create a search interface for adding explorations by ID.",loc: full-stack owner: @seanlip TODO: design (UX) type: feature (important),"(This is a sub-issue of #1464.) When adding an exploration to a collection, users have the opportunity to create a new exploration (by entering its title) or by entering the id of an existing exploration. 
The latter approach is clumsy, and while it should be kept, we should add a search field that allows the user to search through (possibly just the titles of) all public explorations, as well as all private explorations they have access to, and select an exploration from that list. For reference, here is the current ""Add Exploration"" UI: ![add-exploration](https://cloud.githubusercontent.com/assets/10575562/15880022/50aa4d7e-2cde-11e6-8cc8-d90e90bd988a.png) ",1.0,"In the collection editor, create a search interface for adding explorations by ID. - (This is a sub-issue of #1464.) When adding an exploration to a collection, users have the opportunity to create a new exploration (by entering its title) or by entering the id of an existing exploration. The latter approach is clumsy, and while it should be kept, we should add a search field that allows the user to search through (possibly just the titles of) all public explorations, as well as all private explorations they have access to, and select an exploration from that list. For reference, here is the current ""Add Exploration"" UI: ![add-exploration](https://cloud.githubusercontent.com/assets/10575562/15880022/50aa4d7e-2cde-11e6-8cc8-d90e90bd988a.png) ",0,in the collection editor create a search interface for adding explorations by id this is a sub issue of when adding an exploration to a collection users have the opportunity to create a new exploration by entering its title or by entering the id of an existing exploration the latter approach is clumsy and while it should be kept we should add a search field that allows the user to search through possibly just the titles of all public explorations as well as all private explorations they have access to and select an exploration from that list for reference here is the current add exploration ui ,0 146807,11757673285.0,IssuesEvent,2020-03-13 14:06:46,jenkinsci/ecutest-plugin,https://api.github.com/repos/jenkinsci/ecutest-plugin,closed,Add option to inject common build variables as ATX constants,feature test-guide," **Describe the feature request** To simplify the definition of common Jenkins build variables as ATX constants in the _TCF Constant Settings_ of each TEST-GUIDE configuration a new option for the `ATXPublisher` is requested. The proposed variables below are then injected automatically as mapped ATX constants into the current TEST-GUIDE settings. When uploading the ATX reports these variables are created as backlinks from TEST-GUIDE to the Jenkins build. **Additional context** Proposed mapping of Jenkins build variables and ATX constants: - BUILD_NUMBER -> TT_JENKINS_BUILD_NUMBER - BUILD_URL -> TT_JENKINS_BUILD_URL - JOB_NAME -> TT_JENKINS_JOB_NAME",1.0,"Add option to inject common build variables as ATX constants - **Describe the feature request** To simplify the definition of common Jenkins build variables as ATX constants in the _TCF Constant Settings_ of each TEST-GUIDE configuration a new option for the `ATXPublisher` is requested. The proposed variables below are then injected automatically as mapped ATX constants into the current TEST-GUIDE settings. When uploading the ATX reports these variables are created as backlinks from TEST-GUIDE to the Jenkins build. 
**Additional context** Proposed mapping of Jenkins build variables and ATX constants: - BUILD_NUMBER -> TT_JENKINS_BUILD_NUMBER - BUILD_URL -> TT_JENKINS_BUILD_URL - JOB_NAME -> TT_JENKINS_JOB_NAME",0,add option to inject common build variables as atx constants for reporting issues containing nda relevant information please use our describe the feature request to simplify the definition of common jenkins build variables as atx constants in the tcf constant settings of each test guide configuration a new option for the atxpublisher is requested the proposed variables below are then injected automatically as mapped atx constants into the current test guide settings when uploading the atx reports these variables are created as backlinks from test guide to the jenkins build additional context proposed mapping of jenkins build variables and atx constants build number tt jenkins build number build url tt jenkins build url job name tt jenkins job name,0 292092,8953150864.0,IssuesEvent,2019-01-25 18:35:30,Codeinwp/gutenberg-blocks,https://api.github.com/repos/Codeinwp/gutenberg-blocks,closed,Test 1.1.1 Version,help wanted priority,"@stefan-cotitosu Here's a list of all the changes: - [x] Added Typography Option to Block Toolbar in Advanced Heading and Button Group - [x] Fixed Padding Resizer in Section - [x] Template Library should work fine in all devices, including category picker and search - [x] Fixed alignment on span tag, so use Advanced Heading with span tag and see if the alignment works or not. - [x] Added Line Height option to Button Group - [x] Added option to collapse button on devices - [x] Fixed icons in button group. Add Pricing Block and try to add an icon to the button. - [x] Increased maximum font size limit in Heading - [x] Fixed Font Weight value from `regular` to `normal` - [x] Plugin should work just fine with Gutenberg plugin installed in 5.0, without Gutenberg installed in 5.0, and with Gutenberg plugin installed in 4.9. - [x] Fixed unescaped char in Post Grid Title, so now blog posts should work fine if they have an apostrophe in the title. - [x] Fixed issue with duplicating blocks that you pointed out. - [x] Added Left/Right margin to Section and Columns - [x] Heading alignment should have responsive options now - [x] Fixed vertical alignment option in Section, which didn't work before. Given how Gutenberg works, make sure all the pre-made templates that are part of Otter also work in 1.1.1 version. Let me know if something breaks or if you have any questions, and thank you for testing!",1.0,"Test 1.1.1 Version - @stefan-cotitosu Here's a list of all the changes: - [x] Added Typography Option to Block Toolbar in Advanced Heading and Button Group - [x] Fixed Padding Resizer in Section - [x] Template Library should work fine in all devices, including category picker and search - [x] Fixed alignment on span tag, so use Advanced Heading with span tag and see if the alignment works or not. - [x] Added Line Height option to Button Group - [x] Added option to collapse button on devices - [x] Fixed icons in button group. Add Pricing Block and try to add an icon to the button. - [x] Increased maximum font size limit in Heading - [x] Fixed Font Weight value from `regular` to `normal` - [x] Plugin should work just fine with Gutenberg plugin installed in 5.0, without Gutenberg installed in 5.0, and with Gutenberg plugin installed in 4.9. - [x] Fixed unescaped char in Post Grid Title, so now blog posts should work fine if they have an apostrophe in the title. 
- [x] Fixed issue with duplicating blocks that you pointed out. - [x] Added Left/Right margin to Section and Columns - [x] Heading alignment should have responsive options now - [x] Fixed vertical alignment option in Section, which didn't work before. Given how Gutenberg works, make sure all the pre-made templates that are part of Otter also work in 1.1.1 version. Let me know if something breaks or if you have any questions, and thank you for testing!",0,test version stefan cotitosu here s a list of all the changes added typography option to block toolbar in advanced heading and button group fixed padding resizer in section template library should work fine in all devices including category picker and search fixed alignment on span tag so use advanced heading with span tag and see if the alignment works or not added line height option to button group added option to collapse button on devices fixed icons in button group add pricing block and try to add an icon to the button increased maximum font size limit in heading fixed font weight value from regular to normal plugin should work just fine with gutenberg plugin installed in without gutenberg installed in and with gutenberg plugin installed in fixed unescaped char in post grid title so now blog posts should work fine if they have an apostrophe in the title fixed issue with duplicating blocks that you pointed out added left right margin to section and columns heading alignment should have responsive options now fixed vertical alignment option in section which didn t work before given how gutenberg works make sure all the pre made templates that are part of otter also work in version let me know if something breaks or if you have any questions and thank you for testing ,0 4050,15266922105.0,IssuesEvent,2021-02-22 09:26:21,keptn/keptn,https://api.github.com/repos/keptn/keptn,closed,Move Helm Charts from Google Cloud to a Git Repo,automation future,"As discussed in https://github.com/keptn/keptn/discussions/2871 we should consider moving our helm charts away from google cloud. One example would be to put them into a Git Repo, same as with https://github.com/Dynatrace/helm-charts/tree/master/repos/stable . # Definition of Done - [ ] Create a repository in github.com/keptn, e.g., keptn/helm-charts - [ ] Copy content from https://console.cloud.google.com/storage/browser/keptn-installer;tab=objects?forceOnBucketsSortingFiltering=false&project=sai-research&prefix=&forceOnObjectsSortingFiltering=false over to the new repo - [ ] CLI: Change `keptn install` command to fetch content from the new repo - [ ] Automation (for release branches): Create an automation task (e.g., in integration_test.yaml) to upload the helm-chart to the new repository - [ ] Update documentation regarding Advanced Keptn Installation, e.g., https://keptn.sh/docs/0.8.x/operate/advanced_install_options/",1.0,"Move Helm Charts from Google Cloud to a Git Repo - As discussed in https://github.com/keptn/keptn/discussions/2871 we should consider moving our helm charts away from google cloud. One example would be to put them into a Git Repo, same as with https://github.com/Dynatrace/helm-charts/tree/master/repos/stable . 
# Definition of Done - [ ] Create a repository in github.com/keptn, e.g., keptn/helm-charts - [ ] Copy content from https://console.cloud.google.com/storage/browser/keptn-installer;tab=objects?forceOnBucketsSortingFiltering=false&project=sai-research&prefix=&forceOnObjectsSortingFiltering=false over to the new repo - [ ] CLI: Change `keptn install` command to fetch content from the new repo - [ ] Automation (for release branches): Create an automation task (e.g., in integration_test.yaml) to upload the helm-chart to the new repository - [ ] Update documentation regarding Advanced Keptn Installation, e.g., https://keptn.sh/docs/0.8.x/operate/advanced_install_options/",1,move helm charts from google cloud to a git repo as discussed in we should consider moving our helm charts away from google cloud one example would be to put them into a git repo same as with definition of done create a repository in github com keptn e g keptn helm charts copy content from over to the new repo cli change keptn install command to fetch content from the new repo automation for release branches create an automation task e g in integration test yaml to upload the helm chart to the new repository update documentation regarding advanced keptn installation e g ,1 68904,7113593665.0,IssuesEvent,2018-01-17 21:02:52,vmware/vic,https://api.github.com/repos/vmware/vic,closed,6-04-Create-Basic - name collision with previous test with leaked VOL,component/test priority/high,"https://ci.vcna.io/vmware/vic/15671 ``` VCH-15671-8641-VOL long VCH-15671-5084-VOL test-volumes VCH-15671-4897-VOL VCH-15671-4916-VOL VCH-15671-3559-VOL VCH-15671-1666-VOL VCH-15671-4169-VOL VCH-15671-1037-VOL' contains 'VCH-15671-1037' ```",1.0,"6-04-Create-Basic - name collision with previous test with leaked VOL - https://ci.vcna.io/vmware/vic/15671 ``` VCH-15671-8641-VOL long VCH-15671-5084-VOL test-volumes VCH-15671-4897-VOL VCH-15671-4916-VOL VCH-15671-3559-VOL VCH-15671-1666-VOL VCH-15671-4169-VOL VCH-15671-1037-VOL' contains 'VCH-15671-1037' ```",0, create basic name collision with previous test with leaked vol vch vol long vch vol test volumes vch vol vch vol vch vol vch vol vch vol vch vol contains vch ,0 8122,26214237592.0,IssuesEvent,2023-01-04 09:37:17,apimatic/requests-client-adapter,https://api.github.com/repos/apimatic/requests-client-adapter,closed,Update PYPI package deployment script,automation,The task is to update the PYPI package deployment script in order to use an environment and also automate the tag and changelog creation.,1.0,Update PYPI package deployment script - The task is to update the PYPI package deployment script in order to use an environment and also automate the tag and changelog creation.,1,update pypi package deployment script the task is to update the pypi package deployment script in order to use an environment and also automate the tag and changelog creation ,1 9663,30206962039.0,IssuesEvent,2023-07-05 10:04:32,tikv/pd,https://api.github.com/repos/tikv/pd,closed,Evict leader scheduler can not show after pd leader recovery from failure,type/bug severity/major found/automation affects-6.0 affects-6.3 affects-6.4 affects-6.6 affects-7.0 affects-7.1,"## Bug Report ### What did you do? 1、Add evict-leader-scheduler to two tikv; 2、Inject pd leader instance down chaos; 3、After more than 5min,show scheduler config, found no evict-leader,try remove it at this time, return 404 also; 4、After several hours,show scheduler again, found it exist. ### What did you expect to see? 
In step 3, there should be evict-leader show here. ### What did you see instead? In step 3, no evict leader scheduler. ### What version of PD are you using (`pd-server -V`)? / # /pd-server -V Release Version: v5.5.0-alpha-72-gcc256b5e Edition: Community Git Commit Hash: cc256b5ed3b64315f8667388e33517b8ff52760a Git Branch: master UTC Build Time: 2022-03-01 08:26:57 Test logs: [2022/03/04 13:20:41.459 +08:00] [INFO] [pdutil.go:105] [""/pd-ctl scheduler remove evict-leader-scheduler:[404] \""[PD:scheduler:ErrSchedulerNotFound]scheduler not found\""""] 2022-03-04T13:20:41.729+0800 INFO k8s/client.go:107 it should be noted that a long-running command will not be interrupted even the use case has ended. For more information, please refer to https://github.com/pingcap/test-infra/discussions/129 [2022/03/04 13:20:42.163 +08:00] [INFO] [pdutil.go:105] [""/pd-ctl scheduler add evict-leader-scheduler 4:Success!""] [2022/03/04 13:20:42.223 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-1=4007] [2022/03/04 13:20:52.283 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-1=4007.5] [2022/03/04 13:21:02.338 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-1=4008] [2022/03/04 13:21:12.406 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-1=4008] [2022/03/04 13:21:22.479 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-1=2004] [2022/03/04 13:21:32.553 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-1=0] 2022-03-04T13:21:32.553+0800 INFO k8s/client.go:107 it should be noted that a long-running command will not be interrupted even the use case has ended. For more information, please refer to https://github.com/pingcap/test-infra/discussions/129 [2022/03/04 13:21:33.108 +08:00] [INFO] [pdutil.go:105] [""/pd-ctl scheduler add evict-leader-scheduler 5:Success!""] [2022/03/04 13:21:33.162 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-3=5352] [2022/03/04 13:21:43.236 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-3=5352] [2022/03/04 13:21:53.315 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-3=5351.5] [2022/03/04 13:22:03.380 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-3=5349.5] [2022/03/04 13:22:13.436 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-3=5349.5] [2022/03/04 13:22:23.498 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-3=2674] [2022/03/04 13:22:33.564 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-3=0] [2022/03/04 13:22:33.564 +08:00] [INFO] [chaos.go:358] [""fault will last for""] [duration=2m0s] [2022/03/04 13:22:34.056 +08:00] [INFO] [chaos.go:86] [""Run chaos""] [name=""pd leader""] [selectors=""[testbed-oltp-hm-7wksp/tc-pd-0]""] [experiment=""{\""Duration\"":\""\"",\""Scheduler\"":null}""] [2022/03/04 13:24:34.128 +08:00] [INFO] [chaos.go:151] [""Clean chaos""] [name=""pd leader""] [chaosId=""ns=testbed-oltp-hm-7wksp,kind=failure,name=pod-failure-qcsfgfnq,spec=&k8s.ChaosIdentifier{Namespace:\""testbed-oltp-hm-7wksp\"", Name:\""pod-failure-qcsfgfnq\"", Spec:FailureExperimentSpec{Duration: \""\"", Scheduler: }}""] 2022-03-04T13:26:34.329+0800 INFO k8s/client.go:107 it should be noted that a long-running command will not be interrupted even the use case has ended. 
For more information, please refer to https://github.com/pingcap/test-infra/discussions/129 [2022/03/04 13:26:34.750 +08:00] [INFO] [pdutil.go:105] [""/pd-ctl scheduler config evict-leader-scheduler:[404] scheduler not found""] 2022-03-04T13:26:34.751+0800 INFO k8s/client.go:107 it should be noted that a long-running command will not be interrupted even the use case has ended. For more information, please refer to https://github.com/pingcap/test-infra/discussions/129 [2022/03/04 13:26:35.188 +08:00] [INFO] [pdutil.go:105] [""/pd-ctl scheduler remove evict-leader-scheduler:[404] \""[PD:scheduler:ErrSchedulerNotFound]scheduler not found\""""] ![image](https://user-images.githubusercontent.com/9443637/156759125-86f27f6a-1b0d-4949-9c51-516247229a5d.png) ",1.0,"Evict leader scheduler can not show after pd leader recovery from failure - ## Bug Report ### What did you do? 1、Add evict-leader-scheduler to two tikv; 2、Inject pd leader instance down chaos; 3、After more than 5min,show scheduler config, found no evict-leader,try remove it at this time, return 404 also; 4、After several hours,show scheduler again, found it exist. ### What did you expect to see? In step 3, there should be evict-leader show here. ### What did you see instead? In step 3, no evict leader scheduler. ### What version of PD are you using (`pd-server -V`)? / # /pd-server -V Release Version: v5.5.0-alpha-72-gcc256b5e Edition: Community Git Commit Hash: cc256b5ed3b64315f8667388e33517b8ff52760a Git Branch: master UTC Build Time: 2022-03-01 08:26:57 Test logs: [2022/03/04 13:20:41.459 +08:00] [INFO] [pdutil.go:105] [""/pd-ctl scheduler remove evict-leader-scheduler:[404] \""[PD:scheduler:ErrSchedulerNotFound]scheduler not found\""""] 2022-03-04T13:20:41.729+0800 INFO k8s/client.go:107 it should be noted that a long-running command will not be interrupted even the use case has ended. For more information, please refer to https://github.com/pingcap/test-infra/discussions/129 [2022/03/04 13:20:42.163 +08:00] [INFO] [pdutil.go:105] [""/pd-ctl scheduler add evict-leader-scheduler 4:Success!""] [2022/03/04 13:20:42.223 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-1=4007] [2022/03/04 13:20:52.283 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-1=4007.5] [2022/03/04 13:21:02.338 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-1=4008] [2022/03/04 13:21:12.406 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-1=4008] [2022/03/04 13:21:22.479 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-1=2004] [2022/03/04 13:21:32.553 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-1=0] 2022-03-04T13:21:32.553+0800 INFO k8s/client.go:107 it should be noted that a long-running command will not be interrupted even the use case has ended. 
For more information, please refer to https://github.com/pingcap/test-infra/discussions/129 [2022/03/04 13:21:33.108 +08:00] [INFO] [pdutil.go:105] [""/pd-ctl scheduler add evict-leader-scheduler 5:Success!""] [2022/03/04 13:21:33.162 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-3=5352] [2022/03/04 13:21:43.236 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-3=5352] [2022/03/04 13:21:53.315 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-3=5351.5] [2022/03/04 13:22:03.380 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-3=5349.5] [2022/03/04 13:22:13.436 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-3=5349.5] [2022/03/04 13:22:23.498 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-3=2674] [2022/03/04 13:22:33.564 +08:00] [INFO] [check.go:471] [""current leader:""] [tc-tikv-3=0] [2022/03/04 13:22:33.564 +08:00] [INFO] [chaos.go:358] [""fault will last for""] [duration=2m0s] [2022/03/04 13:22:34.056 +08:00] [INFO] [chaos.go:86] [""Run chaos""] [name=""pd leader""] [selectors=""[testbed-oltp-hm-7wksp/tc-pd-0]""] [experiment=""{\""Duration\"":\""\"",\""Scheduler\"":null}""] [2022/03/04 13:24:34.128 +08:00] [INFO] [chaos.go:151] [""Clean chaos""] [name=""pd leader""] [chaosId=""ns=testbed-oltp-hm-7wksp,kind=failure,name=pod-failure-qcsfgfnq,spec=&k8s.ChaosIdentifier{Namespace:\""testbed-oltp-hm-7wksp\"", Name:\""pod-failure-qcsfgfnq\"", Spec:FailureExperimentSpec{Duration: \""\"", Scheduler: }}""] 2022-03-04T13:26:34.329+0800 INFO k8s/client.go:107 it should be noted that a long-running command will not be interrupted even the use case has ended. For more information, please refer to https://github.com/pingcap/test-infra/discussions/129 [2022/03/04 13:26:34.750 +08:00] [INFO] [pdutil.go:105] [""/pd-ctl scheduler config evict-leader-scheduler:[404] scheduler not found""] 2022-03-04T13:26:34.751+0800 INFO k8s/client.go:107 it should be noted that a long-running command will not be interrupted even the use case has ended. 
For more information, please refer to https://github.com/pingcap/test-infra/discussions/129 [2022/03/04 13:26:35.188 +08:00] [INFO] [pdutil.go:105] [""/pd-ctl scheduler remove evict-leader-scheduler:[404] \""[PD:scheduler:ErrSchedulerNotFound]scheduler not found\""""] ![image](https://user-images.githubusercontent.com/9443637/156759125-86f27f6a-1b0d-4949-9c51-516247229a5d.png) ",1,evict leader scheduler can not show after pd leader recovery from failure bug report what did you do 、add evict leader scheduler to two tikv 、inject pd leader instance down chaos 、after more than ,show scheduler config found no evict leader,try remove it at this time return also; 、after several hours,show scheduler again found it exist what did you expect to see in step there should be evict leader show here what did you see instead in step no evict leader scheduler what version of pd are you using pd server v pd server v release version alpha edition community git commit hash git branch master utc build time test logs scheduler not found info client go it should be noted that a long running command will not be interrupted even the use case has ended for more information please refer to info client go it should be noted that a long running command will not be interrupted even the use case has ended for more information please refer to info client go it should be noted that a long running command will not be interrupted even the use case has ended for more information please refer to scheduler not found info client go it should be noted that a long running command will not be interrupted even the use case has ended for more information please refer to scheduler not found ,1 318693,9696303743.0,IssuesEvent,2019-05-25 06:07:45,sahana/SAMBRO,https://api.github.com/repos/sahana/SAMBRO,closed,Test sites for Clients,High Priority Maldives Myanmar Philippines,"Create a test site on each of the client's servers. Then email them instructions saying that they need to use the test site for testing and then then transfer to the production site. Maybe something like this: - test.dmgwarnings.gov.mm - test.sambro.meteophilipinas.gov.ph - test.dhandhaana.ndmc.gov.mv ",1.0,"Test sites for Clients - Create a test site on each of the client's servers. Then email them instructions saying that they need to use the test site for testing and then then transfer to the production site. Maybe something like this: - test.dmgwarnings.gov.mm - test.sambro.meteophilipinas.gov.ph - test.dhandhaana.ndmc.gov.mv ",0,test sites for clients create a test site on each of the client s servers then email them instructions saying that they need to use the test site for testing and then then transfer to the production site maybe something like this test dmgwarnings gov mm test sambro meteophilipinas gov ph test dhandhaana ndmc gov mv ,0 429594,30084584543.0,IssuesEvent,2023-06-29 07:39:13,mp911de/logstash-gelf,https://api.github.com/repos/mp911de/logstash-gelf,closed,Q: Distinction between what's out in the wild,type: documentation,"Hey Mark, maybe it's worth distinguishing out the purpose of this software, especially differentiate against: * encoders only * handlers only What do you think? I see colleagues being a bit confused about what it does and when to use what. E.g. consider the STDOUT/fluentd case where no sending is needed at all. Thanks and regards :wave: A. 
",1.0,"Q: Distinction between what's out in the wild - Hey Mark, maybe it's worth distinguishing out the purpose of this software, especially differentiate against: * encoders only * handlers only What do you think? I see colleagues being a bit confused about what it does and when to use what. E.g. consider the STDOUT/fluentd case where no sending is needed at all. Thanks and regards :wave: A. ",0,q distinction between what s out in the wild hey mark maybe it s worth distinguishing out the purpose of this software especially differentiate against encoders only handlers only what do you think i see colleagues being a bit confused about what it does and when to use what e g consider the stdout fluentd case where no sending is needed at all thanks and regards wave a ,0 15247,11426421532.0,IssuesEvent,2020-02-03 21:54:34,NitzanHod/deep-night,https://api.github.com/repos/NitzanHod/deep-night,closed,Saving best results,infrastructure,"Save a best.pth file in addition to last X results. Best should be chosen upon the validation set.",1.0,"Saving best results - Save a best.pth file in addition to last X results. Best should be chosen upon the validation set.",0,saving best results save a best pth file in addition to last x results best should be chosen upon the validation set ,0 15626,10194595039.0,IssuesEvent,2019-08-12 16:01:37,voyages-sncf-technologies/hesperides-gui,https://api.github.com/repos/voyages-sncf-technologies/hesperides-gui,closed,Diff at module level,enhancement usability user request,"When running diff against a module which have a property key referenced at the global level. You will be able to preview your changes as expected, and you will be able to save whitout any error. But given the behaviour of these kind of properties it will take automatically the value of the global properties instead of the one you just saved, leaving you with the feeling of a bug in Hesperides. If you wish to save this value, go into the global properties diff. To prevent loosing time and mind in this matter, it would be interesting to give the user a feedback saying that these are global and thus can't be save at a module level",True,"Diff at module level - When running diff against a module which have a property key referenced at the global level. You will be able to preview your changes as expected, and you will be able to save whitout any error. But given the behaviour of these kind of properties it will take automatically the value of the global properties instead of the one you just saved, leaving you with the feeling of a bug in Hesperides. If you wish to save this value, go into the global properties diff. 
To prevent loosing time and mind in this matter, it would be interesting to give the user a feedback saying that these are global and thus can't be save at a module level",0,diff at module level when running diff against a module which have a property key referenced at the global level you will be able to preview your changes as expected and you will be able to save whitout any error but given the behaviour of these kind of properties it will take automatically the value of the global properties instead of the one you just saved leaving you with the feeling of a bug in hesperides if you wish to save this value go into the global properties diff to prevent loosing time and mind in this matter it would be interesting to give the user a feedback saying that these are global and thus can t be save at a module level,0 2274,11688900871.0,IssuesEvent,2020-03-05 15:14:42,submariner-io/submariner-operator,https://api.github.com/repos/submariner-io/submariner-operator,closed,"subctl flag to accept ""Enabling service discovery on OpenShift""",automation enhancement subctl,"Please add a flag to auto-accept ""Enabling service discovery on OpenShift will disable OpenShift updates, do you want to continue?"" when running subctl deploy with --service-discovery. ",1.0,"subctl flag to accept ""Enabling service discovery on OpenShift"" - Please add a flag to auto-accept ""Enabling service discovery on OpenShift will disable OpenShift updates, do you want to continue?"" when running subctl deploy with --service-discovery. ",1,subctl flag to accept enabling service discovery on openshift please add a flag to auto accept enabling service discovery on openshift will disable openshift updates do you want to continue when running subctl deploy with service discovery ,1 4728,17359807597.0,IssuesEvent,2021-07-29 18:53:46,JacobLinCool/BA,https://api.github.com/repos/JacobLinCool/BA,closed,Automation (2021/30/7 1:47:01 AM),automation,"**Updated.** (2021/30/7 1:55:35 AM) ## 登入: 完成 ``` [2021/30/7 1:47:03 AM] 開始執行帳號登入程序 [2021/30/7 1:47:08 AM] 正在檢測登入狀態 [2021/30/7 1:47:11 AM] 登入狀態: 未登入 [2021/30/7 1:47:14 AM] 嘗試登入中 [2021/30/7 1:47:25 AM] 已嘗試登入,重新檢測登入狀態 [2021/30/7 1:47:25 AM] 正在檢測登入狀態 [2021/30/7 1:47:28 AM] 登入狀態: 已登入 [2021/30/7 1:47:28 AM] 帳號登入程序已完成 ``` ## 簽到: 完成 ``` [2021/30/7 1:47:28 AM] [簽到] 開始執行 [2021/30/7 1:47:32 AM] [簽到] 已連續簽到天數: 36 [2021/30/7 1:47:32 AM] [簽到] 今日已簽到 ✔ [2021/30/7 1:47:32 AM] [簽到] 正在檢測雙倍簽到獎勵狀態 [2021/30/7 1:47:36 AM] [簽到] 已獲得雙倍簽到獎勵 ✔ [2021/30/7 1:47:37 AM] [簽到] 執行完畢 ✨ ``` ## 答題: 完成 ``` [2021/30/7 1:47:37 AM] [動畫瘋答題] 開始執行 [2021/30/7 1:47:37 AM] [動畫瘋答題] 正在檢測答題狀態 [2021/30/7 1:47:39 AM] [動畫瘋答題] 今日已經答過題目了 ✔ [2021/30/7 1:47:40 AM] [動畫瘋答題] 執行完畢 ✨ ``` ## 抽獎: 執行中 ``` [2021/30/7 1:47:41 AM] [抽抽樂] 開始執行 [2021/30/7 1:47:41 AM] [抽抽樂] 正在尋找抽抽樂 [2021/30/7 1:47:43 AM] [抽抽樂] 找到 9 個抽抽樂 [2021/30/7 1:47:43 AM] [抽抽樂] 1: 又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 [2021/30/7 1:47:43 AM] [抽抽樂] 2: 你的明眸護眼法寶 - Awesome LED觸控式可調雙光源螢幕掛燈 [2021/30/7 1:47:43 AM] [抽抽樂] 3: 熱銷百萬!2021最新款 「TruEgos Super 2NC雙工抗噪真無線」-限時抽抽樂! [2021/30/7 1:47:43 AM] [抽抽樂] 4: XPG競爆你的電競生活,好禮大方送,附送 MANA 電競口香糖 8/11 上市前搶先嚐! [2021/30/7 1:47:43 AM] [抽抽樂] 5: GoKids玩樂小子|深入絕地:暗黑世界傳說-跨越16年的經典RPG遊戲,史詩繁中再版! [2021/30/7 1:47:43 AM] [抽抽樂] 6: EPOS |Sennheiser 最強電競耳機─王者回歸,GSP 602抽起來 [2021/30/7 1:47:43 AM] [抽抽樂] 7: 【亞瑟3C生活】手遊好夥伴,ENERGEA - L型可移動雙彎頭編織抗菌充電線-抽抽樂 [2021/30/7 1:47:43 AM] [抽抽樂] 8: 創力-株式会社つくり-怪物彈珠系列周邊-限時抽抽樂! 
[2021/30/7 1:47:43 AM] [抽抽樂] 9: NVMe Gen4 固態飆速-PNY CS3040 固態硬碟強悍滿載 [2021/30/7 1:47:43 AM] [抽抽樂] 正在嘗試執行第 1 個抽抽樂: 又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 [2021/30/7 1:47:45 AM] [抽抽樂] 正在執行第 1 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:47:51 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:48:05 AM] [抽抽樂] 正在確認結算頁面 [2021/30/7 1:48:09 AM] [抽抽樂] 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 ✔ [2021/30/7 1:48:10 AM] [抽抽樂] 正在執行第 2 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:48:17 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:48:31 AM] [抽抽樂] 正在確認結算頁面 [2021/30/7 1:48:34 AM] [抽抽樂] 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 ✔ [2021/30/7 1:48:36 AM] [抽抽樂] 正在執行第 3 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:48:42 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:48:56 AM] [抽抽樂] 正在確認結算頁面 [2021/30/7 1:49:00 AM] [抽抽樂] 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 ✔ [2021/30/7 1:49:01 AM] [抽抽樂] 正在執行第 4 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:49:07 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:49:21 AM] [抽抽樂] 正在確認結算頁面 [2021/30/7 1:49:25 AM] [抽抽樂] 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 ✔ [2021/30/7 1:49:27 AM] [抽抽樂] 正在執行第 5 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:49:33 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:50:16 AM] [抽抽樂] 未進入結算頁面,重試中 ✘ [2021/30/7 1:50:19 AM] [抽抽樂] 正在執行第 6 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:50:25 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:51:08 AM] [抽抽樂] 正在確認結算頁面 [2021/30/7 1:51:11 AM] [抽抽樂] 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 ✔ [2021/30/7 1:51:13 AM] [抽抽樂] 正在執行第 7 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:51:19 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:52:02 AM] [抽抽樂] 未進入結算頁面,重試中 ✘ [2021/30/7 1:52:05 AM] [抽抽樂] 正在執行第 8 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:52:11 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:52:54 AM] [抽抽樂] 未進入結算頁面,重試中 ✘ [2021/30/7 1:52:56 AM] [抽抽樂] 正在執行第 9 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:53:02 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:53:16 AM] [抽抽樂] 正在確認結算頁面 [2021/30/7 1:53:20 AM] [抽抽樂] 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 ✔ [2021/30/7 1:53:22 AM] [抽抽樂] 正在執行第 10 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:53:29 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:54:12 AM] [抽抽樂] 正在確認結算頁面 [2021/30/7 1:54:15 AM] [抽抽樂] 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 ✔ [2021/30/7 1:54:17 AM] [抽抽樂] 正在執行第 11 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:54:23 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:54:37 AM] [抽抽樂] 正在確認結算頁面 [2021/30/7 1:54:41 AM] [抽抽樂] 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 ✔ [2021/30/7 1:54:43 AM] [抽抽樂] 正在執行第 12 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:54:49 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:55:32 AM] [抽抽樂] 正在確認結算頁面 [2021/30/7 1:55:35 AM] [抽抽樂] 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 ✔ ``` ",1.0,"Automation (2021/30/7 1:47:01 AM) - **Updated.** (2021/30/7 1:55:35 AM) ## 登入: 完成 ``` [2021/30/7 1:47:03 AM] 開始執行帳號登入程序 [2021/30/7 1:47:08 AM] 正在檢測登入狀態 [2021/30/7 1:47:11 AM] 登入狀態: 未登入 [2021/30/7 1:47:14 AM] 嘗試登入中 [2021/30/7 1:47:25 AM] 已嘗試登入,重新檢測登入狀態 [2021/30/7 1:47:25 AM] 正在檢測登入狀態 [2021/30/7 1:47:28 AM] 登入狀態: 已登入 [2021/30/7 1:47:28 AM] 帳號登入程序已完成 ``` ## 簽到: 完成 ``` [2021/30/7 1:47:28 AM] [簽到] 開始執行 [2021/30/7 1:47:32 AM] [簽到] 已連續簽到天數: 36 [2021/30/7 1:47:32 AM] [簽到] 今日已簽到 ✔ [2021/30/7 1:47:32 AM] [簽到] 正在檢測雙倍簽到獎勵狀態 [2021/30/7 1:47:36 AM] [簽到] 已獲得雙倍簽到獎勵 ✔ [2021/30/7 1:47:37 AM] [簽到] 執行完畢 ✨ ``` ## 答題: 完成 ``` [2021/30/7 1:47:37 AM] [動畫瘋答題] 開始執行 [2021/30/7 1:47:37 AM] [動畫瘋答題] 正在檢測答題狀態 [2021/30/7 1:47:39 AM] [動畫瘋答題] 今日已經答過題目了 ✔ [2021/30/7 1:47:40 AM] [動畫瘋答題] 執行完畢 ✨ ``` ## 抽獎: 執行中 ``` [2021/30/7 1:47:41 AM] [抽抽樂] 開始執行 [2021/30/7 1:47:41 AM] [抽抽樂] 正在尋找抽抽樂 [2021/30/7 1:47:43 AM] [抽抽樂] 找到 9 個抽抽樂 [2021/30/7 1:47:43 AM] [抽抽樂] 1: 又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 [2021/30/7 1:47:43 AM] [抽抽樂] 2: 你的明眸護眼法寶 - Awesome LED觸控式可調雙光源螢幕掛燈 [2021/30/7 1:47:43 AM] [抽抽樂] 3: 熱銷百萬!2021最新款 「TruEgos Super 2NC雙工抗噪真無線」-限時抽抽樂! [2021/30/7 1:47:43 AM] [抽抽樂] 4: XPG競爆你的電競生活,好禮大方送,附送 MANA 電競口香糖 8/11 上市前搶先嚐! 
[2021/30/7 1:47:43 AM] [抽抽樂] 5: GoKids玩樂小子|深入絕地:暗黑世界傳說-跨越16年的經典RPG遊戲,史詩繁中再版! [2021/30/7 1:47:43 AM] [抽抽樂] 6: EPOS |Sennheiser 最強電競耳機─王者回歸,GSP 602抽起來 [2021/30/7 1:47:43 AM] [抽抽樂] 7: 【亞瑟3C生活】手遊好夥伴,ENERGEA - L型可移動雙彎頭編織抗菌充電線-抽抽樂 [2021/30/7 1:47:43 AM] [抽抽樂] 8: 創力-株式会社つくり-怪物彈珠系列周邊-限時抽抽樂! [2021/30/7 1:47:43 AM] [抽抽樂] 9: NVMe Gen4 固態飆速-PNY CS3040 固態硬碟強悍滿載 [2021/30/7 1:47:43 AM] [抽抽樂] 正在嘗試執行第 1 個抽抽樂: 又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 [2021/30/7 1:47:45 AM] [抽抽樂] 正在執行第 1 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:47:51 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:48:05 AM] [抽抽樂] 正在確認結算頁面 [2021/30/7 1:48:09 AM] [抽抽樂] 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 ✔ [2021/30/7 1:48:10 AM] [抽抽樂] 正在執行第 2 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:48:17 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:48:31 AM] [抽抽樂] 正在確認結算頁面 [2021/30/7 1:48:34 AM] [抽抽樂] 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 ✔ [2021/30/7 1:48:36 AM] [抽抽樂] 正在執行第 3 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:48:42 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:48:56 AM] [抽抽樂] 正在確認結算頁面 [2021/30/7 1:49:00 AM] [抽抽樂] 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 ✔ [2021/30/7 1:49:01 AM] [抽抽樂] 正在執行第 4 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:49:07 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:49:21 AM] [抽抽樂] 正在確認結算頁面 [2021/30/7 1:49:25 AM] [抽抽樂] 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 ✔ [2021/30/7 1:49:27 AM] [抽抽樂] 正在執行第 5 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:49:33 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:50:16 AM] [抽抽樂] 未進入結算頁面,重試中 ✘ [2021/30/7 1:50:19 AM] [抽抽樂] 正在執行第 6 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:50:25 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:51:08 AM] [抽抽樂] 正在確認結算頁面 [2021/30/7 1:51:11 AM] [抽抽樂] 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 ✔ [2021/30/7 1:51:13 AM] [抽抽樂] 正在執行第 7 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:51:19 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:52:02 AM] [抽抽樂] 未進入結算頁面,重試中 ✘ [2021/30/7 1:52:05 AM] [抽抽樂] 正在執行第 8 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:52:11 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:52:54 AM] [抽抽樂] 未進入結算頁面,重試中 ✘ [2021/30/7 1:52:56 AM] [抽抽樂] 正在執行第 9 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:53:02 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:53:16 AM] [抽抽樂] 正在確認結算頁面 [2021/30/7 1:53:20 AM] [抽抽樂] 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 ✔ [2021/30/7 1:53:22 AM] [抽抽樂] 正在執行第 10 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:53:29 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:54:12 AM] [抽抽樂] 正在確認結算頁面 [2021/30/7 1:54:15 AM] [抽抽樂] 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 ✔ [2021/30/7 1:54:17 AM] [抽抽樂] 正在執行第 11 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:54:23 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:54:37 AM] [抽抽樂] 正在確認結算頁面 [2021/30/7 1:54:41 AM] [抽抽樂] 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 ✔ [2021/30/7 1:54:43 AM] [抽抽樂] 正在執行第 12 次抽獎,可能需要多達 1 分鐘 [2021/30/7 1:54:49 AM] [抽抽樂] 正在觀看廣告 [2021/30/7 1:55:32 AM] [抽抽樂] 正在確認結算頁面 [2021/30/7 1:55:35 AM] [抽抽樂] 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 ✔ ``` ",1,automation am updated am 登入 完成 開始執行帳號登入程序 正在檢測登入狀態 登入狀態 未登入 嘗試登入中 已嘗試登入,重新檢測登入狀態 正在檢測登入狀態 登入狀態 已登入 帳號登入程序已完成 簽到 完成 開始執行 已連續簽到天數 今日已簽到 ✔ 正在檢測雙倍簽到獎勵狀態 已獲得雙倍簽到獎勵 ✔ 執行完畢 ✨ 答題 完成 開始執行 正在檢測答題狀態 今日已經答過題目了 ✔ 執行完畢 ✨ 抽獎 執行中 開始執行 正在尋找抽抽樂 找到 個抽抽樂 又到了白色鍵盤的季節!irocks rgb 機械鍵盤抽獎 你的明眸護眼法寶 awesome led觸控式可調雙光源螢幕掛燈 熱銷百萬! 「truegos super 」 限時抽抽樂! xpg競爆你的電競生活,好禮大方送,附送 mana 電競口香糖 上市前搶先嚐! gokids玩樂小子|深入絕地:暗黑世界傳說 ,史詩繁中再版! epos |sennheiser 最強電競耳機─王者回歸,gsp 【 】手遊好夥伴,energea l型可移動雙彎頭編織抗菌充電線 抽抽樂 創力 株式会社つくり 怪物彈珠系列周邊 限時抽抽樂! 
nvme 固態飆速 pny 固態硬碟強悍滿載 正在嘗試執行第 個抽抽樂: 又到了白色鍵盤的季節!irocks rgb 機械鍵盤抽獎 正在執行第 次抽獎,可能需要多達 分鐘 正在觀看廣告 正在確認結算頁面 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks rgb 機械鍵盤抽獎 ✔ 正在執行第 次抽獎,可能需要多達 分鐘 正在觀看廣告 正在確認結算頁面 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks rgb 機械鍵盤抽獎 ✔ 正在執行第 次抽獎,可能需要多達 分鐘 正在觀看廣告 正在確認結算頁面 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks rgb 機械鍵盤抽獎 ✔ 正在執行第 次抽獎,可能需要多達 分鐘 正在觀看廣告 正在確認結算頁面 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks rgb 機械鍵盤抽獎 ✔ 正在執行第 次抽獎,可能需要多達 分鐘 正在觀看廣告 未進入結算頁面,重試中 ✘ 正在執行第 次抽獎,可能需要多達 分鐘 正在觀看廣告 正在確認結算頁面 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks rgb 機械鍵盤抽獎 ✔ 正在執行第 次抽獎,可能需要多達 分鐘 正在觀看廣告 未進入結算頁面,重試中 ✘ 正在執行第 次抽獎,可能需要多達 分鐘 正在觀看廣告 未進入結算頁面,重試中 ✘ 正在執行第 次抽獎,可能需要多達 分鐘 正在觀看廣告 正在確認結算頁面 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks rgb 機械鍵盤抽獎 ✔ 正在執行第 次抽獎,可能需要多達 分鐘 正在觀看廣告 正在確認結算頁面 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks rgb 機械鍵盤抽獎 ✔ 正在執行第 次抽獎,可能需要多達 分鐘 正在觀看廣告 正在確認結算頁面 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks rgb 機械鍵盤抽獎 ✔ 正在執行第 次抽獎,可能需要多達 分鐘 正在觀看廣告 正在確認結算頁面 已完成一次抽抽樂:又到了白色鍵盤的季節!irocks rgb 機械鍵盤抽獎 ✔ ,1 33872,27962662503.0,IssuesEvent,2023-03-24 16:48:01,UBCSailbot/network_systems,https://api.github.com/repos/UBCSailbot/network_systems,closed,Publish ROS package actions,infrastructure,"### Purpose Each action that is used across our ROS packages should be published so that updates can be pushed to all ROS package repositories automatically. ### Changes - Create repository for lint action - Create repository for test action - Use published actions in this repository ### Final To do - [ ] Merge sailbot_workspace PR without switching branches - [ ] Manually delete branch and re-check ""Automatically delete head branches"" - [ ] Point network_systems PR to sailbot_workspace main branch, updated required jobs, then merge - [ ] Create sailbot_workspace PR to point to main branch ### Resources - https://docs.github.com/en/actions/creating-actions ",1.0,"Publish ROS package actions - ### Purpose Each action that is used across our ROS packages should be published so that updates can be pushed to all ROS package repositories automatically. ### Changes - Create repository for lint action - Create repository for test action - Use published actions in this repository ### Final To do - [ ] Merge sailbot_workspace PR without switching branches - [ ] Manually delete branch and re-check ""Automatically delete head branches"" - [ ] Point network_systems PR to sailbot_workspace main branch, updated required jobs, then merge - [ ] Create sailbot_workspace PR to point to main branch ### Resources - https://docs.github.com/en/actions/creating-actions ",0,publish ros package actions purpose each action that is used across our ros packages should be published so that updates can be pushed to all ros package repositories automatically changes create repository for lint action create repository for test action use published actions in this repository final to do merge sailbot workspace pr without switching branches manually delete branch and re check automatically delete head branches point network systems pr to sailbot workspace main branch updated required jobs then merge create sailbot workspace pr to point to main branch resources ,0 3666,14268710446.0,IssuesEvent,2020-11-20 23:06:03,aws/aws-sdk-go-v2,https://api.github.com/repos/aws/aws-sdk-go-v2,closed,Refactor SDK's presign URL feature,automation-exempt breaking-change refactor,"The v2 SDK should provide an easy way to create a presigned URL for an operation similar to the concept in the v1 SDK. There were a couple issues with the v1 SDK's implementation that the v2 SDK's should avoid. 1.) Presign was available for all API operations regardless if it was actually supported. 2.) 
Presign was supported for all HTTP methods, even though it was nearly impossible to use non-GET and non-streaming operations. 3.) Presign exposed an `Expires` behavior for all operations, when this was only valid for Amazon S3 operations. To address these issues, the v2 SDK's presigner design should be generated independently of the API client, and the presigner should be generated for a select subset of supported operations that are known to work and where it makes sense to have a presigner. The work on autofill presign in #799 provided the groundwork for generated presign clients for an API. This change also broke out the `Expires` concept into a conditionally generated option available on the presign clients for APIs that support that behavior. The PR #888 works to refactor the presign client generator implemented in #799 and export it for APIs like Amazon S3. The presign client should be based on the API client, providing flexible options to initialize it, with optional parameters as needed. ```go
// PresignOptions represents the presign client options
type PresignOptions struct {
	// ClientOptions are a list of functional options to mutate client options
	// used by the presign client
	ClientOptions []func(*Options)

	// Presigner is the presigner used by the presign url client
	Presigner v4.HTTPPresigner
}

// PresignClient represents the presign url client
type PresignClient struct { /* ... */ }

// NewPresignClient generates a presign client using provided Client options and
// presign options
func NewPresignClient(options Options, optFns ...func(*PresignOptions)) *PresignClient { /* ... */ }

// NewPresignClientWrapper generates a presign client using provided API Client and
// presign options
func NewPresignClientWrapper(c *Client, optFns ...func(*PresignOptions)) *PresignClient { /* ... */ }

// NewPresignClientFromConfig generates a presign client using provided AWS config
// and presign options
func NewPresignClientFromConfig(cfg aws.Config, optFns ...func(*PresignOptions)) *PresignClient { /* ... */ }

// PresignCopyDBClusterSnapshot will create an AWS sigv4 request presigned URL
// for the CopyDBClusterSnapshot operation.
func (c *PresignClient) PresignCopyDBClusterSnapshot(ctx context.Context, params *CopyDBClusterSnapshotInput, optFns ...func(*PresignOptions)) (req *v4.PresignedHTTPRequest, err error) { /* ... */ }
``` ",1.0,"Refactor SDK's presign URL feature - The v2 SDK should provide an easy way to create a presigned URL for an operation similar to the concept in the v1 SDK. There were a couple issues with the v1 SDK's implementation that the v2 SDK's should avoid. 1.) Presign was available for all API operations regardless if it was actually supported. 2.) Presign was supported for all HTTP methods, even though it was nearly impossible to use non-GET and non-streaming operations. 3.) Presign exposed an `Expires` behavior for all operations, when this was only valid for Amazon S3 operations. To address these issues the v2 SDK's presigner design should be generated independently of the API client. And the presigner should be generated with a select subset of supported operations that are known to work, and where it makes sense to have a presigner for. The work with autofill presign, #799 provided the ground work for generated presign clients for an API. This change also broke out the `Expires` concept into a conditionally generated option available on the presign clients for APIs that support that behavior. 
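For readers skimming the refactor above: a presigned URL bakes a signature and an expiry into the URL itself, so whoever holds it can execute that one request without credentials. A minimal, self-contained Python sketch of the concept only (the HMAC scheme, parameter names, and secret are illustrative assumptions, not the SDK's actual SigV4 signing):

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

def presign(url: str, secret: bytes, expires_in: int = 900) -> str:
    """Return `url` with an expiry and an HMAC-SHA256 signature appended.

    Toy stand-in for SigV4: real presigning signs a canonical request and
    scopes the signing key by date, region, and service.
    """
    expires = int(time.time()) + expires_in
    signature = hmac.new(secret, f"{url}|{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{url}?{urlencode({'expires': expires, 'signature': signature})}"

# Anyone holding the printed URL can replay the request until `expires` passes.
print(presign("https://example.com/CopyDBClusterSnapshot", b"shared-secret"))
```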
The PR #888 works to refactor the presign client generator implemented in #799 and export it for for APIs like Amazon S3. The presign client should be based off of the API client, providing flexable options to initialize it, with optional parameters as needed. ```go // PresignOptions represents the presign client options type PresignOptions struct { // ClientOptions are list of functional options to mutate client options used by // presign client ClientOptions []func(*Options) // Presigner is the presigner used by the presign url client Presigner v4.HTTPPresigner } // PresignClient represents the presign url client type PresignClient struct { /*... */} // NewPresignClient generates a presign client using provided Client options and // presign options func NewPresignClient(options Options, optFns ...func(*PresignOptions)) *PresignClient { /*...*/} // NewPresignClientWrapper generates a presign client using provided API Client and // presign options func NewPresignClientWrapper(c *Client, optFns ...func(*PresignOptions)) *PresignClient {/*...*/} // NewPresignClientFromConfig generates a presign client using provided AWS config // and presign options func NewPresignClientFromConfig(cfg aws.Config, optFns ...func(*PresignOptions)) *PresignClient {/*...*/} // PresignCopyDBClusterSnapshot will create a AWS sigv4 request presigned URL for the CopyDBClusterSnapshot operation. func (c *PresignClient) PresignCopyDBClusterSnapshot(ctx context.Context, params *CopyDBClusterSnapshotInput, optFns ...func(*PresignOptions)) (req *v4.PresignedHTTPRequest, err error) { ```",1,refactor sdk s presign url feature the sdk should provide an easy way to create a presigned url for an operation similar to the concept in the sdk there were a couple issues with the sdk s implementation that the sdk s should avoid presign was available for all api operations regardless if it was actually supported presign was supported for all http methods even though it was nearly impossible to use non get and non streaming operations presign exposed an expires behavior for all operations when this was only valid for amazon operations to address these issues the sdk s presigner design should be generated independently of the api client and the presigner should be generated with a select subset of supported operations that are known to work and where it makes sense to have a presigner for the work with autofill presign provided the ground work for generated presign clients for an api this change also broke out the expires concept into a conditionally generated option available on the presign clients for apis that support that behavior the pr works to refactor the presign client generator implemented in and export it for for apis like amazon the presign client should be based off of the api client providing flexable options to initialize it with optional parameters as needed go presignoptions represents the presign client options type presignoptions struct clientoptions are list of functional options to mutate client options used by presign client clientoptions func options presigner is the presigner used by the presign url client presigner httppresigner presignclient represents the presign url client type presignclient struct newpresignclient generates a presign client using provided client options and presign options func newpresignclient options options optfns func presignoptions presignclient newpresignclientwrapper generates a presign client using provided api client and presign options func newpresignclientwrapper c client optfns 
func presignoptions presignclient newpresignclientfromconfig generates a presign client using provided aws config and presign options func newpresignclientfromconfig cfg aws config optfns func presignoptions presignclient presigncopydbclustersnapshot will create a aws request presigned url for the copydbclustersnapshot operation func c presignclient presigncopydbclustersnapshot ctx context context params copydbclustersnapshotinput optfns func presignoptions req presignedhttprequest err error ,1 649251,21260902930.0,IssuesEvent,2022-04-13 04:03:30,arnog/mathlive,https://api.github.com/repos/arnog/mathlive,closed,Documentation difficult to navigate,high priority,"I am trying to upgrade MathLive from 0.59 to 0.70. For now I just want the same features as earlier but with any bugfixes that have come since. So I'm looking for an option to turn off all the sounds. Browsing here: https://cortexjs.io/docs/mathlive/ it says ""See MathfieldOptions for more details about these options."" but then I don't see how I can get to that part of the documentation. Using the search feature to look for ""MathfieldOptions"" I don't find anything either.",1.0,"Documentation difficult to navigate - I am trying to upgrade MathLive from 0.59 to 0.70. For now I just want the same features as earlier but with any bugfixes that have come since. So I'm looking for an option to turn off all the sounds. Browsing here: https://cortexjs.io/docs/mathlive/ it says ""See MathfieldOptions for more details about these options."" but then I don't see how I can get to that part of the documentation. Using the search feature to look for ""MathfieldOptions"" I don't find anything either.",0,documentation difficult to navigate i am trying to upgrade mathlive from to for now i just want the same features as earlier but with any bugfixes that have come since so i m looking for an option to turn off all the sounds browsing here it says see mathfieldoptions for more details about these options but then i don t see how i can get to that part of the documentation using the search feature to look for mathfieldoptions i don t find anything either ,0 364,5726231968.0,IssuesEvent,2017-04-20 18:27:49,rancher/rancher,https://api.github.com/repos/rancher/rancher,opened,Missing container labels in container metadata.,kind/bug setup/automation,"Rancher server version - v1.5.6-rc1 HA setup with 3 node cluster. Set up has 3 hosts: Following automation runs relating to metadata checks reports failure: test_metadata_byname_2015_07_25 test_metadata_byname_2015_12_19 test_metadata_byname_2016_07_29 test_metadata_self_2015_07_25 test_metadata_self_2015_12_19 test_metadata_self_2016_07_29 test_metadata_scaleup test_metadata_scaledown When comparing metadata from ``` containers/``` with ```services//containers/``` , following container labels are not present in containers/ but present in ```services//containers/``` io.rancher.container.mac_address o.rancher.container.ip io.rancher.cni.network o.rancher.cni.wait",1.0,"Missing container labels in container metadata. - Rancher server version - v1.5.6-rc1 HA setup with 3 node cluster. 
Set up has 3 hosts: Following automation runs relating to metadata checks reports failure: test_metadata_byname_2015_07_25 test_metadata_byname_2015_12_19 test_metadata_byname_2016_07_29 test_metadata_self_2015_07_25 test_metadata_self_2015_12_19 test_metadata_self_2016_07_29 test_metadata_scaleup test_metadata_scaledown When comparing metadata from ``` containers/``` with ```services//containers/``` , following container labels are not present in containers/ but present in ```services//containers/``` io.rancher.container.mac_address o.rancher.container.ip io.rancher.cni.network o.rancher.cni.wait",1,missing container labels in container metadata rancher server version ha setup with node cluster set up has hosts following automation runs relating to metadata checks reports failure test metadata byname test metadata byname test metadata byname test metadata self test metadata self test metadata self test metadata scaleup test metadata scaledown when comparing metadata from containers with services containers following container labels are not present in containers but present in services containers io rancher container mac address o rancher container ip io rancher cni network o rancher cni wait,1 4006,15158884011.0,IssuesEvent,2021-02-12 02:27:33,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,closed,"[Bug] Pixel 2 API 28 x86 Emulator - Unable to turn on ""Set default browser"" switch ",P5 S3 eng:automation wontfix 🐞 bug,"## Steps to reproduce 1. Create a Pixel 2, or Pixel 3 API 28 x86 Emulator in AS. 2. Install the fenix debug app 3. Set as default browser by going to Settings>Set as default browser, tap the switch and select Firefox Preview as default. 4. Return to the Set default browser screen by tapping the device back button 5. Check the switch is on. ### Expected behavior Switch is on. ### Actual behavior Set default browser switch is not enabled. Doesn't reproduce on the physical Pixel 2 device, or on API 29 (Q). ### Device information * Android device: Pixel 2 API 28 x86 Emulator. * Fenix version: geckoNightlyDebug version ",1.0,"[Bug] Pixel 2 API 28 x86 Emulator - Unable to turn on ""Set default browser"" switch - ## Steps to reproduce 1. Create a Pixel 2, or Pixel 3 API 28 x86 Emulator in AS. 2. Install the fenix debug app 3. Set as default browser by going to Settings>Set as default browser, tap the switch and select Firefox Preview as default. 4. Return to the Set default browser screen by tapping the device back button 5. Check the switch is on. ### Expected behavior Switch is on. ### Actual behavior Set default browser switch is not enabled. Doesn't reproduce on the physical Pixel 2 device, or on API 29 (Q). ### Device information * Android device: Pixel 2 API 28 x86 Emulator. 
* Fenix version: geckoNightlyDebug version ",1, pixel api emulator unable to turn on set default browser switch steps to reproduce create a pixel or pixel api emulator in as install the fenix debug app set as default browser by going to settings set as default browser tap the switch and select firefox preview as default return to the set default browser screen by tapping the device back button check the switch is on expected behavior switch is on actual behavior set default browser switch is not enabled doesn t reproduce on the physical pixel device or on api q device information android device pixel api emulator fenix version geckonightlydebug version ,1 8025,26125208074.0,IssuesEvent,2022-12-28 17:30:49,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,Cypress Test for Shared IDP,automation,"1. Apply Shared IDP while creating Authorization Profile 1.1 Authenticates api owner 1.2 Activates the namespace 1.3 Create an authorization profile 2. Update IDP issuer for shared IDP profile 2.1 authenticates Janis (api owner) to get the user session token 2.2 Prepare the Request Specification for the API to update issuer URL 2.3 Put the resource and verify the success code in the response 2.4 Verify the updated IDP issuer URL ",1.0,"Cypress Test for Shared IDP - 1. Apply Shared IDP while creating Authorization Profile 1.1 Authenticates api owner 1.2 Activates the namespace 1.3 Create an authorization profile 2. Update IDP issuer for shared IDP profile 2.1 authenticates Janis (api owner) to get the user session token 2.2 Prepare the Request Specification for the API to update issuer URL 2.3 Put the resource and verify the success code in the response 2.4 Verify the updated IDP issuer URL ",1,cypress test for shared idp apply shared idp while creating authorization profile authenticates api owner activates the namespace create an authorization profile update idp issuer for shared idp profile authenticates janis api owner to get the user session token prepare the request specification for the api to update issuer url put the resource and verify the success code in the response verify the updated idp issuer url ,1 8547,27115153357.0,IssuesEvent,2023-02-15 17:59:11,pc2ccs/pc2v9,https://api.github.com/repos/pc2ccs/pc2v9,opened,Automate copying of non-xsl files into target scoreboard directories ,automation CI - Continuous Improvement Small LOE,"**Is your feature request related to a problem?** Yes. When the scoreboard generates the html there are needed .css and .png files. Those files must be manually copied. **Feature Description**: For all non .xsl files in the data/xml copy those files into the html and public_html directories, same directories as the directories where the html files are created. **Have you considered other ways to accomplish the same thing?** Manually done. **Do you have any specific suggestions for how your feature would be ***implemented*** in PC^2?** Each time the html is generated, copy the non xsl files. Consider not copying the files if the files are identical. For example 02/04/2023 07:42 PM 2,070 standings.css if the file in xsl has a different time or size, then update the standings.css 02/04/2023 07:55 PM 2,555 standings.css **Additional context**: ",1.0,"Automate copying of non-xsl files into target scoreboard directories - **Is your feature request related to a problem?** Yes. When the scoreboard generates the html there are needed .css and .png files. Those files must be manually copied. 
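The copy-and-skip logic the pc2v9 issue above asks for fits in a few lines. A hedged Python sketch (the directory names come from the issue; treating "identical" as same size and modification time follows the issue's directory-listing example and is an assumption):

```python
import os
import shutil

def sync_non_xsl(src="data/xml", targets=("html", "public_html")):
    """Copy every non-.xsl file from src into each target directory,
    skipping files whose size and modification time already match."""
    for name in os.listdir(src):
        src_path = os.path.join(src, name)
        if name.endswith(".xsl") or not os.path.isfile(src_path):
            continue
        for target in targets:
            os.makedirs(target, exist_ok=True)
            dst_path = os.path.join(target, name)
            if os.path.exists(dst_path):
                s, d = os.stat(src_path), os.stat(dst_path)
                if d.st_size == s.st_size and int(d.st_mtime) == int(s.st_mtime):
                    continue  # identical by size + timestamp: leave it alone
            shutil.copy2(src_path, dst_path)  # copy2 preserves the timestamp
```

`copy2` matters here: a plain copy would reset the destination mtime and defeat the skip check on the next run.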
**Feature Description**: For all non .xsl files in the data/xml copy those files into the html and public_html directories, same directories as the directories where the html files are created. **Have you considered other ways to accomplish the same thing?** Manually done. **Do you have any specific suggestions for how your feature would be ***implemented*** in PC^2?** Each time the html is generated, copy the non xsl files. Consider not copying the files if the files are identical. For example 02/04/2023 07:42 PM 2,070 standings.css if the file in xsl has a different time or size, then update the standings.css 02/04/2023 07:55 PM 2,555 standings.css **Additional context**: ",1,automate copying of non xsl files into target scoreboard directories is your feature request related to a problem yes when the scoreboard generates the html there are needed css and png files those files must be manually copied feature description for all non xsl files in the data xml copy those files into the html and public html directories same directories as the directories where the html files are created have you considered other ways to accomplish the same thing manually done do you have any specific suggestions for how your feature would be implemented in pc each time the html is generated copy the non xsl files consider not copying the files if the files are identical for example pm standings css if the file in xsl has a different time or size then update the standings css pm standings css additional context ,1 8838,27172314776.0,IssuesEvent,2023-02-17 20:40:00,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Copy/Move Item api is not throwing exception when i provide invalid parentReference.id with in drive,type:bug area:Copy/Move status:backlogged automation:Closed,"#### Category - [ ] Question - [ ] Documentation issue - [X ] Bug Both Move and copy APIs are successful when we provide an invalid ParentReference.id within the drive. And it copies the file to the same folder as the source item. Expected Behaviour: It should throw an invalid parentReference.id error. This functionality is working fine when we provide an invalid parentReference.path. Copy Item : POST https://graph.microsoft.com/v1.0/drives/b!dckkz8p7EkKltFwLt5o0QJ6VxtVc1UlOlv0XvR1bhLAo3oh17BLfTIZmGKhBkrry/root:/home/new-item.jpg:/microsoft.graph.copy {'name':'test5.jpg','parentReference':{'driveId':'b!dckkz8p7EkKltFwLt5o0QJ6VxtVc1UlOlv0XvR1bhLAo3oh17BLfTIZmGKhBkrry','id':'fgdftgarsgdreg'}} Move Item : PATCH https://graph.microsoft.com/v1.0/drives/b!dckkz8p7EkKltFwLt5o0QJ6VxtVc1UlOlv0XvR1bhLAo3oh17BLfTIZmGKhBkrry/root:/home/new-item.jpg: {'name':'test6.jpg','parentReference':{'id':'fgdftgarsgdreg'}} ",1.0,"Copy/Move Item api is not throwing exception when i provide invalid parentReference.id with in drive - #### Category - [ ] Question - [ ] Documentation issue - [X ] Bug Both Move and copy apis are successful when we provide invalid ParentReference.id with in drive. And it is copying the file to same folder as source item. Expected Behaviour : It should throw invalid parentReferece.id error. This functionality is working fine when we provide invalid parentReference.path. 
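One way to pin down the Graph behaviour reported above is a tiny script that fires the copy request with the bogus parent id and asserts on the status code. A hedged Python sketch using the requests library (the token is a placeholder; the drive id and invalid `fgdftgarsgdreg` id come from the report; the 4xx expectation encodes what the reporter wants, not what the API currently does):

```python
import requests

TOKEN = "<access-token>"  # placeholder: acquire via your usual auth flow
DRIVE = "b!dckkz8p7EkKltFwLt5o0QJ6VxtVc1UlOlv0XvR1bhLAo3oh17BLfTIZmGKhBkrry"
URL = (f"https://graph.microsoft.com/v1.0/drives/{DRIVE}"
       "/root:/home/new-item.jpg:/microsoft.graph.copy")

resp = requests.post(
    URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"name": "test5.jpg",
          "parentReference": {"driveId": DRIVE, "id": "fgdftgarsgdreg"}},  # invalid id
)
# Expected per the report: a 4xx error naming parentReference.id.
# Observed instead: the request is accepted and the copy lands next to the source.
assert resp.status_code >= 400, f"copy accepted an invalid parent: {resp.status_code}"
```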
- [ ] Create internal Terraform repository - [ ] Create external Terraform repository - [ ] Update existing Ansible repository to match the goals outlined in this issue - [ ] Merge existing Terraform and Ansible into the repositories as described above - [ ] Document how to run our Terraform and Ansible and explain what it is used for",1.0,"Organize Existing Automation - Organize Existing Automation ============== Our current Ansible and Terraform should be reorganized more coherently. A single central Ansible repository would be created to store host_vars, group_vars, the inventory file(s), and the overall playbook or playbooks. Then, each role would be split out into its own separate repository and pushed to Ansible Galaxy. The central Ansible repository would then reference the roles pushed in Ansible Galaxy. We could also use other roles, such as those created by geerlingguy, that are deemed of sufficient quality to use. _Please note that this repository would be separate from our openstack-ansible repository_. Our Terraform should be split into two repositories: an internal and external repository. The external repository would contain all Terraform for our AWS environment, and could also include anything used for any other public clouds we choose to use. The internal repository would contain all Terraform for our internal OpenStack environment. The goal should be that these repositories could be used to completely re-deploy our entire club infrastructure with the proper variables set. The only required dependencies should be AWS and OpenStack accounts with the proper permissions. Yes, that does mean that we would need to have OpenStack set up. Tasks ----- All of the following tasks must be complete before this issue can be closed. Be sure to reference this issue in the relevant issues/PRs in other repositories. 
- [ ] Create internal Terraform repository - [ ] Create external Terraform repository - [ ] Update existing Ansible repository to match the goals outlined in this issue - [ ] Merge existing Terraform and Ansible into the repositories as described above - [ ] Document how to run our Terraform and Ansible and explain what it is used for",1,organize existing automation organize existing automation our current ansible and terraform should be reorganized more coherently a single central ansible repository would be created to store host vars group vars the inventory file s and the overall playbook or playbooks then each role would be split out into its own separate repository and pushed to ansible galaxy the central ansible repository would then reference the roles pushed in ansible galaxy we could also use other roles such as those created by geerlingguy that are deemed of sufficient quality to use please note that this repository would be separate from our openstack ansible repository our terraform should be split into two repositories an internal and external repository the external repository would contain all terraform for our aws environment and could also include anything used for any other public clouds we choose to use the internal repository would contain all terraform for our internal openstack environment the goal should be that these repositories could be used to completely re deploy our entire club infrastructure with the proper variables set the only required dependencies should be aws and openstack accounts with the proper permissions yes that does mean that we would need to have openstack set up tasks all of the following tasks must be complete before this issue can be closed be sure to reference this issue in the relevant issues prs in other repositories create internal terraform repository create external terraform repository update existing ansible repository to match the goals outlined in this issue merge existing terraform and ansible into the repositories as described above document how to run our terraform and ansible and explain what it is used for,1 251839,21525467247.0,IssuesEvent,2022-04-28 17:58:22,hashgraph/guardian,https://api.github.com/repos/hashgraph/guardian,opened,Develop cypress test case PUT policy using policy ID,Automation Testing,"As a QA engineer, I need to develop a test case using the method `PUT` that publishes the policy with the specified (internal) policy ID onto IPFS, and sends a message featuring its IPFS CID into the corresponding Hedera topic. Only users with the Root Authority role are allowed to make the request. This ticket should be done after #774(3) is done. We need to develop TC for the `POST` method to import policy via message.",1.0,"Develop cypress test case PUT policy using policy ID - As a QA engineer, I need to develop a test case using the method `PUT` that publishes the policy with the specified (internal) policy ID onto IPFS, and sends a message featuring its IPFS CID into the corresponding Hedera topic. Only users with the Root Authority role are allowed to make the request. This ticket should be done after #774(3) is done. 
We need to develop TC for the `POST` method to import policy via message.",0,develop cypress test case put policy using policy id as a qa engineer i need to develop a test case using the method put that publishes the policy with the specified internal policy id onto ipfs and sends a message featuring its ipfs cid into the corresponding hedera topic only users with the root authority role are allowed to make the request this ticket should be done after is done we need to develop tc for the post method to import policy via message ,0 659798,21941719704.0,IssuesEvent,2022-05-23 18:51:30,tdene/synth_opt_adders,https://api.github.com/repos/tdene/synth_opt_adders,closed,Fix LT and TL transforms,bug medium priority,"checkLT and checkTL were broken by commit [134742d](https://github.com/tdene/synth_opt_adders/commit/134742d823719a5b074799ed635d15ed7b5ec133). The underlying transforms still work, and they can be composed via LF + FT or TF + FL. But since the validity-checking functions do not work, LT and TL can never be directly called.",1.0,"Fix LT and TL transforms - checkLT and checkTL were broken by commit [134742d](https://github.com/tdene/synth_opt_adders/commit/134742d823719a5b074799ed635d15ed7b5ec133). The underlying transforms still work, and they can be composed via LF + FT or TF + FL. But since the validity-checking functions do not work, LT and TL can never be directly called.",0,fix lt and tl transforms checklt and checktl were broken by commit the underlying transforms still work and they can be composed via lf ft or tf fl but since the validity checking functions do not work lt and tl can never be directly called ,0 714258,24555879639.0,IssuesEvent,2022-10-12 15:49:50,googleapis/gax-java,https://api.github.com/repos/googleapis/gax-java,closed,Jitter should be unconditional,type: cleanup priority: p2,"In RetrySettings ``` public abstract boolean isJittered() Jitter determines if the delay time should be randomized. In most cases, if jitter is set to true the actual delay time is calculated in the following way: actualDelay = rand_between(0, min(maxRetryDelay, delay)) The default value is true. ``` This method is way overspecified. We should deprecate it and always jitter. ",1.0,"Jitter should be unconditional - In RetrySettings ``` public abstract boolean isJittered() Jitter determines if the delay time should be randomized. In most cases, if jitter is set to true the actual delay time is calculated in the following way: actualDelay = rand_between(0, min(maxRetryDelay, delay)) The default value is true. ``` This method is way overspecified. We should deprecate it and always jitter. ",0,jitter should be unconditional in retrysettings public abstract boolean isjittered jitter determines if the delay time should be randomized in most cases if jitter is set to true the actual delay time is calculated in the following way actualdelay rand between min maxretrydelay delay the default value is true this method is way overspecified we should deprecate it and always jitter ,0 91604,18669010335.0,IssuesEvent,2021-10-30 10:41:58,Onelinerhub/onelinerhub,https://api.github.com/repos/Onelinerhub/onelinerhub,closed,Write shortest possible code: python how to take screenshot (python),help wanted good first issue code python,"Please write shortest code example for this question: **python how to take screenshot** in python ### How to do it: 1. Go to [python codes](https://github.com/Onelinerhub/onelinerhub/tree/main/python) 2. 
Create a new file (named in underscore case, should contain key words from title) with `md` extension (markdown file). 3. Propose a new file with the following content (please use all three blocks if possible - title, code itself and explanations list): ~~~ # python how to take screenshot ```python code part1 part2 part3 ... ``` - part1 - explain code part 1 - part2 - explain code part 2 - ... ~~~ More [advanced template](https://github.com/Onelinerhub/onelinerhub/blob/main/template.md) for examples and linked solutions. More [docs here](https://github.com/Onelinerhub/onelinerhub#onelinerhub).",1.0,"Write shortest possible code: python how to take screenshot (python) - Please write shortest code example for this question: **python how to take screenshot** in python ### How to do it: 1. Go to [python codes](https://github.com/Onelinerhub/onelinerhub/tree/main/python) 2. Create a new file (named in underscore case, should contain key words from title) with `md` extension (markdown file). 3. Propose a new file with the following content (please use all three blocks if possible - title, code itself and explanations list): ~~~ # python how to take screenshot ```python code part1 part2 part3 ... ``` - part1 - explain code part 1 - part2 - explain code part 2 - ... ~~~ More [advanced template](https://github.com/Onelinerhub/onelinerhub/blob/main/template.md) for examples and linked solutions. More [docs here](https://github.com/Onelinerhub/onelinerhub#onelinerhub).",0,write shortest possible code python how to take screenshot python please write shortest code example for this question python how to take screenshot in python how to do it go to create a new file named in underscore case should contain key words from title with md extension markdown file propose a new file with the following content please use all three blocks if possible title code itself and explanations list python how to take screenshot python code explain code part explain code part more for examples and linked solutions more ,0 98869,11099276005.0,IssuesEvent,2019-12-16 16:42:00,bitfocus/companion-module-requests,https://api.github.com/repos/bitfocus/companion-module-requests,closed,Request companion module for jinx! LED matrix control,Missing documentation Windows only,"jinx! LED software is also an open source project. With this software you can create your own LED matrix no matter how asymmetric they are and run some content over it. I am working on some art projects with LEDs and it would be great if one of you could make the stream deck work with it. 
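A candidate answer to the screenshot request above, kept in the repo's shortest-possible spirit; a sketch assuming Pillow is installed (`pip install Pillow`; ImageGrab covers Windows and macOS, and X11 Linux on recent Pillow releases):

```python
from PIL import ImageGrab

# Capture the full screen and write it to disk.
ImageGrab.grab().save("screenshot.png")
```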
Thanks in advance, Sebastian ",0,request companion module for jinx led matrix control jinx led software also is an open source projects with this software you can create your own led matrix no matter how assymmetric they are and run some content over it i am working on some art projects with led´s and it would be great if one of you could make the stream deck work with it thanks in advance sebastian ,0 301666,22768904122.0,IssuesEvent,2022-07-08 08:06:46,godotengine/godot,https://api.github.com/repos/godotengine/godot,closed,Setting a non-existing audio bus to a stream does not generate a warning,enhancement documentation topic:audio," **Godot version:** af094253a617392a636ebbb7cf7f56f96077f469 **OS/device including version:** Arch Linux **Issue description:** Expected to get a warning when setting a non-existing audio bus to a stream, but that was not the case. **Steps to reproduce:** ```gdscript func _ready(): var sfx_audio_stream_player : AudioStreamPlayer = AudioStreamPlayer.new() sfx_audio_stream_player.set_bus(""NonExistingAudioBus"") print(sfx_audio_stream_player.get_bus()) # falls back to Master audio bus ```",1.0,"Setting a non-existing audio bus to a stream does not generate a warning - **Godot version:** af094253a617392a636ebbb7cf7f56f96077f469 **OS/device including version:** Arch Linux **Issue description:** Expected to get a warning when setting a non-existing audio bus to a stream, but that was not the case. **Steps to reproduce:** ```gdscript func _ready(): var sfx_audio_stream_player : AudioStreamPlayer = AudioStreamPlayer.new() sfx_audio_stream_player.set_bus(""NonExistingAudioBus"") print(sfx_audio_stream_player.get_bus()) # falls back to Master audio bus ```",0,setting a non existing audio bus to a stream does not generate a warning please search existing issues for potential duplicates before filing yours godot version os device including version arch linux issue description expected to get a warning when setting a non existing audio bus to a stream but that was not the case steps to reproduce gdscript func ready var sfx audio stream player audiostreamplayer audiostreamplayer new sfx audio stream player set bus nonexistingaudiobus print sfx audio stream player get bus falls back to master audio bus ,0 3703,14371624460.0,IssuesEvent,2020-12-01 12:51:30,gchq/Gaffer,https://api.github.com/repos/gchq/Gaffer,closed,Automatically update Koryphe version on new koryphe release,automation,"Add an action which updates the koryphe version upon a new Koryphe release. It can use the ./cd/updateKorypheVersion.sh script if necessary. This will save developer time.",1.0,"Automatically update Koryphe version on new koryphe release - Add an action which updates the koryphe version upon a new Koryphe release. It can use the ./cd/updateKorypheVersion.sh script if necessary. 
This will save developer time.",1,automatically update koryphe version on new koryphe release add an action which updates the koryphe version upon a new koryphe release it can use the cd updatekorypheversion sh script if necessary this will save developer time ,1 406411,27563436991.0,IssuesEvent,2023-03-08 00:41:04,SainsburyWellcomeCentre/aeon_acquisition,https://api.github.com/repos/SainsburyWellcomeCentre/aeon_acquisition,closed,document harp registers,documentation,"a description of what register numbers correspond to, and the associated datatype (as is done partially already in the low-level python api)",1.0,"document harp registers - a description of what register numbers correspond to, and the associated datatype (as is done partially already in the low-level python api)",0,document harp registers a description of what register numbers correspond to and the associated datatype as is done partially already in the low level python api ,0 8838,27172314776.0,IssuesEvent,2023-02-17 20:40:00,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Copy/Move Item api is not throwing exception when i provide invalid parentReference.id with in drive,type:bug area:Copy/Move status:backlogged automation:Closed,"#### Category - [ ] Question - [ ] Documentation issue - [X ] Bug Both Move and copy apis are successful when we provide invalid ParentReference.id with in drive. And it is copying the file to same folder as source item. Expected Behaviour : It should throw invalid parentReferece.id error. This functionality is working fine when we provide invalid parentReference.path. Copy Item : POST https://graph.microsoft.com/v1.0/drives/b!dckkz8p7EkKltFwLt5o0QJ6VxtVc1UlOlv0XvR1bhLAo3oh17BLfTIZmGKhBkrry/root:/home/new-item.jpg:/microsoft.graph.copy {'name':'test5.jpg','parentReference':{'driveId':'b!dckkz8p7EkKltFwLt5o0QJ6VxtVc1UlOlv0XvR1bhLAo3oh17BLfTIZmGKhBkrry','id':'fgdftgarsgdreg'}} Move Item : PATCH https://graph.microsoft.com/v1.0/drives/b!dckkz8p7EkKltFwLt5o0QJ6VxtVc1UlOlv0XvR1bhLAo3oh17BLfTIZmGKhBkrry/root:/home/new-item.jpg: {'name':'test6.jpg','parentReference':{'id':'fgdftgarsgdreg'}} ",1.0,"Copy/Move Item api is not throwing exception when i provide invalid parentReference.id with in drive - #### Category - [ ] Question - [ ] Documentation issue - [X ] Bug Both Move and copy apis are successful when we provide invalid ParentReference.id with in drive. And it is copying the file to same folder as source item. Expected Behaviour : It should throw invalid parentReferece.id error. This functionality is working fine when we provide invalid parentReference.path. 
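The Gaffer update above is a one-step substitution once the new Koryphe version is known. A hedged Python sketch of the idea behind a script like updateKorypheVersion.sh (the `koryphe.version` Maven property name is an assumption for illustration):

```python
import re
import sys
from pathlib import Path

def bump_koryphe(new_version: str, pom_path: str = "pom.xml") -> None:
    """Rewrite the assumed <koryphe.version> property in a Maven pom."""
    pom = Path(pom_path)
    updated = re.sub(
        r"<koryphe\.version>[^<]*</koryphe\.version>",
        f"<koryphe.version>{new_version}</koryphe.version>",
        pom.read_text(),
    )
    pom.write_text(updated)

if __name__ == "__main__":
    bump_koryphe(sys.argv[1])  # e.g. python bump_koryphe.py 2.1.0
```

A release-triggered GitHub Actions workflow would run a script like this and open a pull request, which is the developer-time saving the issue is after.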
Copy Item : POST https://graph.microsoft.com/v1.0/drives/b!dckkz8p7EkKltFwLt5o0QJ6VxtVc1UlOlv0XvR1bhLAo3oh17BLfTIZmGKhBkrry/root:/home/new-item.jpg:/microsoft.graph.copy {'name':'test5.jpg','parentReference':{'driveId':'b!dckkz8p7EkKltFwLt5o0QJ6VxtVc1UlOlv0XvR1bhLAo3oh17BLfTIZmGKhBkrry','id':'fgdftgarsgdreg'}} Move Item : PATCH https://graph.microsoft.com/v1.0/drives/b!dckkz8p7EkKltFwLt5o0QJ6VxtVc1UlOlv0XvR1bhLAo3oh17BLfTIZmGKhBkrry/root:/home/new-item.jpg: {'name':'test6.jpg','parentReference':{'id':'fgdftgarsgdreg'}} ",1,copy move item api is not throwing exception when i provide invalid parentreference id with in drive category question documentation issue bug both move and copy apis are successful when we provide invalid parentreference id with in drive and it is copying the file to same folder as source item expected behaviour it should throw invalid parentreferece id error this functionality is working fine when we provide invalid parentreference path copy item post name jpg parentreference driveid b id fgdftgarsgdreg move item patch name jpg parentreference id fgdftgarsgdreg ,1 721968,24845240070.0,IssuesEvent,2022-10-26 15:24:14,Lightning-AI/lightning,https://api.github.com/repos/Lightning-AI/lightning,closed,Trainer.test() hangs when run from python interactive shell with multiple GPUs,bug priority: 1 strategy: ddp trainer: connector,"## 🐛 Bug `Trainer.test()` hangs when run from an interactive shell when the Trainer uses `strategy=""ddp""` and `gpus > 1`. ### To Reproduce Run the python script below from the command line in a >1 GPU environment (`python -i` leaves the python interactive console open after the script completes): ```sh python -i ``` After the script completes, rerun the Trainer.test() function call at the terminal: ```python trainer.test(model, datamodule=dm) ``` The `trainer.test()` call within the script runs successfully, but the terminal hangs when `trainer.test()` is called from the interactive shell. Here is the console log for reference: ```sh (ptl) python -i script.py GPU available: True, used: True TPU available: False, using: 0 TPU cores IPU available: False, using: 0 IPUs HPU available: False, using: 0 HPUs `Trainer(limit_train_batches=1)` was configured so 1 batch per epoch will be used. `Trainer(limit_val_batches=1)` was configured so 1 batch will be used. Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/2 Initializing distributed: GLOBAL_RANK: 1, MEMBER: 2/2 ---------------------------------------------------------------------------------------------------- distributed_backend=nccl All distributed processes registered. Starting with 2 processes ---------------------------------------------------------------------------------------------------- LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3] LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1,2,3] /anaconda/envs/ptl/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:240: PossibleUserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 24 which is the number of cpus on this machine) in the `DataLoader` init to improve performance. rank_zero_warn( /anaconda/envs/ptl/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py:1933: PossibleUserWarning: The number of training batches (1) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch. 
rank_zero_warn( /anaconda/envs/ptl/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:240: PossibleUserWarning: The dataloader, val_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 24 which is the number of cpus on this machine) in the `DataLoader` init to improve performance. rank_zero_warn( Epoch 0: 0%| | 0/2 [00:00>> trainer.test(model, datamodule=dm) ``` ### Expected behavior Either the test loop should run successfully or an error should be thrown (if we shouldn't be running `test()` with GPUs > 1). ### Python Script ```python import torch, os, logging, sys from torch.utils.data import DataLoader, Dataset from pytorch_lightning import LightningModule, Trainer, LightningDataModule #logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) class RandomDataset(Dataset): def __init__(self, size, length): self.len = length self.data = torch.randn(length, size) def __getitem__(self, index): return self.data[index] def __len__(self): return self.len class BoringModel(LightningModule): def __init__(self): super().__init__() self.layer = torch.nn.Linear(32, 2) def forward(self, x): return self.layer(x) def training_step(self, batch, batch_idx): loss = self(batch).sum() self.log(""train_loss"", loss) return {""loss"": loss} def validation_step(self, batch, batch_idx): loss = self(batch).sum() self.log(""valid_loss"", loss) def test_step(self, batch, batch_idx): loss = self(batch).sum() self.log(""test_loss"", loss) def configure_optimizers(self): return torch.optim.SGD(self.layer.parameters(), lr=0.1) class DataModule(LightningDataModule): def setup(self, stage=None) -> None: self._dataloader = DataLoader(RandomDataset(32, 64), batch_size=2) def train_dataloader(self): return self._dataloader def test_dataloader(self): return self._dataloader def val_dataloader(self): return self._dataloader if __name__ == ""__main__"": model = BoringModel() dm = DataModule() trainer = Trainer( gpus=2, limit_train_batches=1, limit_val_batches=1, num_sanity_val_steps=0, max_epochs=1, enable_model_summary=False, strategy=""ddp"" ) trainer.fit(model, datamodule=dm) trainer.test(model, datamodule=dm) ``` ### Environment * CUDA: - GPU: - Tesla K80 - Tesla K80 - Tesla K80 - Tesla K80 - available: True - version: 11.3 * Packages: - numpy: 1.22.3 - pyTorch_debug: False - pyTorch_version: 1.11.0 - pytorch-lightning: 1.6.4 - tqdm: 4.64.0 * System: - OS: Linux - architecture: - 64bit - ELF - processor: x86_64 - python: 3.8.13 - version: #80~18.04.1-Ubuntu SMP Wed Apr 13 02:07:09 UTC 2022 ### Additional context This was run in Azure Machine Learning Studio. cc @tchaton @rohitgr7 @justusschock @kaushikb11 @awaelchli @akihironitta @ninginthecloud",1.0,"Trainer.test() hangs when run from python interactive shell with multiple GPUs - ## 🐛 Bug `Trainer.test()` hangs when run from an interactive shell when the Trainer uses `strategy=""ddp""` and `gpus > 1`. ### To Reproduce Run the python script below from the command line in a >1 GPU environment (`python -i` leaves the python interactive console open after the script completes): ```sh python -i ``` After the script completes, rerun the Trainer.test() function call at the terminal: ```python trainer.test(model, datamodule=dm) ``` The `trainer.test()` call within the script runs successfully, but the terminal hangs when `trainer.test()` is called from the interactive shell. 
Here is the console log for reference: ```sh (ptl) python -i script.py GPU available: True, used: True TPU available: False, using: 0 TPU cores IPU available: False, using: 0 IPUs HPU available: False, using: 0 HPUs `Trainer(limit_train_batches=1)` was configured so 1 batch per epoch will be used. `Trainer(limit_val_batches=1)` was configured so 1 batch will be used. Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/2 Initializing distributed: GLOBAL_RANK: 1, MEMBER: 2/2 ---------------------------------------------------------------------------------------------------- distributed_backend=nccl All distributed processes registered. Starting with 2 processes ---------------------------------------------------------------------------------------------------- LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3] LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1,2,3] /anaconda/envs/ptl/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:240: PossibleUserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 24 which is the number of cpus on this machine) in the `DataLoader` init to improve performance. rank_zero_warn( /anaconda/envs/ptl/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py:1933: PossibleUserWarning: The number of training batches (1) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch. rank_zero_warn( /anaconda/envs/ptl/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:240: PossibleUserWarning: The dataloader, val_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 24 which is the number of cpus on this machine) in the `DataLoader` init to improve performance. rank_zero_warn( Epoch 0: 0%| | 0/2 [00:00>> trainer.test(model, datamodule=dm) ``` ### Expected behavior Either the test loop should run successfully or an error should be thrown (if we shouldn't be running `test()` with GPUs > 1). 
### Python Script ```python import torch, os, logging, sys from torch.utils.data import DataLoader, Dataset from pytorch_lightning import LightningModule, Trainer, LightningDataModule #logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) class RandomDataset(Dataset): def __init__(self, size, length): self.len = length self.data = torch.randn(length, size) def __getitem__(self, index): return self.data[index] def __len__(self): return self.len class BoringModel(LightningModule): def __init__(self): super().__init__() self.layer = torch.nn.Linear(32, 2) def forward(self, x): return self.layer(x) def training_step(self, batch, batch_idx): loss = self(batch).sum() self.log(""train_loss"", loss) return {""loss"": loss} def validation_step(self, batch, batch_idx): loss = self(batch).sum() self.log(""valid_loss"", loss) def test_step(self, batch, batch_idx): loss = self(batch).sum() self.log(""test_loss"", loss) def configure_optimizers(self): return torch.optim.SGD(self.layer.parameters(), lr=0.1) class DataModule(LightningDataModule): def setup(self, stage=None) -> None: self._dataloader = DataLoader(RandomDataset(32, 64), batch_size=2) def train_dataloader(self): return self._dataloader def test_dataloader(self): return self._dataloader def val_dataloader(self): return self._dataloader if __name__ == ""__main__"": model = BoringModel() dm = DataModule() trainer = Trainer( gpus=2, limit_train_batches=1, limit_val_batches=1, num_sanity_val_steps=0, max_epochs=1, enable_model_summary=False, strategy=""ddp"" ) trainer.fit(model, datamodule=dm) trainer.test(model, datamodule=dm) ``` ### Environment * CUDA: - GPU: - Tesla K80 - Tesla K80 - Tesla K80 - Tesla K80 - available: True - version: 11.3 * Packages: - numpy: 1.22.3 - pyTorch_debug: False - pyTorch_version: 1.11.0 - pytorch-lightning: 1.6.4 - tqdm: 4.64.0 * System: - OS: Linux - architecture: - 64bit - ELF - processor: x86_64 - python: 3.8.13 - version: #80~18.04.1-Ubuntu SMP Wed Apr 13 02:07:09 UTC 2022 ### Additional context This was run in Azure Machine Learning Studio. 
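Given the report above, the failure mode could at least be surfaced eagerly: detect an interactive interpreter before launching multi-process DDP. A hedged Python sketch of such a guard (the `sys.ps1` heuristic is a common way to detect a REPL; this is not Lightning's own code):

```python
import sys

def assert_ddp_launchable(strategy: str, devices: int) -> None:
    """Fail fast instead of hanging when DDP is requested from a REPL.

    DDP workers re-execute the launching script; an interactive session has
    no script to re-execute, so the worker processes never come up cleanly.
    """
    in_repl = hasattr(sys, "ps1") or bool(getattr(sys.flags, "interactive", 0))
    if in_repl and strategy == "ddp" and devices > 1:
        raise RuntimeError(
            "strategy='ddp' with devices > 1 cannot run from an interactive "
            "shell; use a spawn-based strategy or run the code as a script."
        )
```

Calling `assert_ddp_launchable("ddp", 2)` at the top of a `trainer.test()`-style entry point would turn the silent hang into an actionable error.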
cc @tchaton @rohitgr7 @justusschock @kaushikb11 @awaelchli @akihironitta @ninginthecloud",0,trainer test hangs when run from python interactive shell with multiple gpus 🐛 bug trainer test hangs when run from an interactive shell when the trainer uses strategy ddp and gpus to reproduce run the python script below from the command line in a gpu environment python i leaves the python interactive console open after the script completes sh python i after the script completes rerun the trainer test function call at the terminal python trainer test model datamodule dm the trainer test call within the script runs successfully but the terminal hangs when trainer test is called from the interactive shell here is the console log for reference sh ptl python i script py gpu available true used true tpu available false using tpu cores ipu available false using ipus hpu available false using hpus trainer limit train batches was configured so batch per epoch will be used trainer limit val batches was configured so batch will be used initializing distributed global rank member initializing distributed global rank member distributed backend nccl all distributed processes registered starting with processes local rank cuda visible devices local rank cuda visible devices anaconda envs ptl lib site packages pytorch lightning trainer connectors data connector py possibleuserwarning the dataloader train dataloader does not have many workers which may be a bottleneck consider increasing the value of the num workers argument try which is the number of cpus on this machine in the dataloader init to improve performance rank zero warn anaconda envs ptl lib site packages pytorch lightning trainer trainer py possibleuserwarning the number of training batches is smaller than the logging interval trainer log every n steps set a lower value for log every n steps if you want to see logs for the training epoch rank zero warn anaconda envs ptl lib site packages pytorch lightning trainer connectors data connector py possibleuserwarning the dataloader val dataloader does not have many workers which may be a bottleneck consider increasing the value of the num workers argument try which is the number of cpus on this machine in the dataloader init to improve performance rank zero warn epoch warning find unused parameters true was specified in ddp constructor but did not find any unused parameters in the forward pass this flag results in an extra traversal of the autograd graph every iteration which can adversely affect performance if your model indeed never has any unused parameters in the forward pass consider turning this flag off note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters function operator epoch █████████████████████████████████████████████████████████████████████████▌ warning find unused parameters true was specified in ddp constructor but did not find any unused parameters in the forward pass this flag results in an extra traversal of the autograd graph every iteration which can adversely affect performance if your model indeed never has any unused parameters in the forward pass consider turning this flag off note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters function operator epoch ██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ local rank cuda visible devices local rank cuda visible 
devices anaconda envs ptl lib site packages pytorch lightning trainer connectors data connector py possibleuserwarning using distributedsampler with the dataloaders during trainer test it is recommended to use trainer devices to ensure each sample batch gets evaluated exactly once otherwise multi device settings use distributedsampler that replicates some samples to make sure all devices have same batch size in case of uneven inputs rank zero warn anaconda envs ptl lib site packages pytorch lightning trainer connectors data connector py possibleuserwarning the dataloader test dataloader does not have many workers which may be a bottleneck consider increasing the value of the num workers argument try which is the number of cpus on this machine in the dataloader init to improve performance rank zero warn testing dataloader ███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── test metric dataloader ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── test loss ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── trainer test model datamodule dm expected behavior either the test loop should run successfully or an error should be thrown if we shouldn t be running test with gpus python script python import torch os logging sys from torch utils data import dataloader dataset from pytorch lightning import lightningmodule trainer lightningdatamodule logging basicconfig stream sys stdout level logging debug class randomdataset dataset def init self size length self len length self data torch randn length size def getitem self index return self data def len self return self len class boringmodel lightningmodule def init self super init self layer torch nn linear def forward self x return self layer x def training step self batch batch idx loss self batch sum self log train loss loss return loss loss def validation step self batch batch idx loss self batch sum self log valid loss loss def test step self batch batch idx loss self batch sum self log test loss loss def configure optimizers self return torch optim sgd self layer parameters lr class datamodule lightningdatamodule def setup self stage none none self dataloader dataloader randomdataset batch size def train dataloader self return self dataloader def test dataloader self return self dataloader def val dataloader self return self dataloader if name main model boringmodel dm datamodule trainer trainer gpus limit train batches limit val batches num sanity val steps max epochs enable model summary false strategy ddp trainer fit model datamodule dm trainer test model datamodule dm environment cuda gpu tesla tesla tesla tesla available true version packages numpy pytorch debug false pytorch version pytorch lightning tqdm system os linux architecture elf processor python version ubuntu smp wed apr utc additional context this was run in azure machine learning studio cc tchaton justusschock awaelchli akihironitta ninginthecloud,0 72251,8714252889.0,IssuesEvent,2018-12-07 
07:05:12,bigbomio/bigbom-marketplace,https://api.github.com/repos/bigbomio/bigbom-marketplace,opened,Build in-app wallet feature,design-proposal enhancement,"In order to have better UX for users, we came up with a solution to build an in-app wallet management system. This app will have these features: 1. Able to create and restore a wallet from a seed phrase. Password-protected encrypted key vault. 2. Able to manage ETH and token balances. At first it will support ETH, BBO, DAI, USDC and TUSD. 3. Able to sign & broadcast transactions to a different network. This will provide the capability of offloading some transactions to a different network than Ethereum Mainnet, while keeping all the payment-related transactions on the Ethereum Mainnet. 4. Integrates into the app as a web component, so users can sign & confirm transactions like in normal applications. The idea came from seeing the use of eth-lightwallet, and we feel that we can use part of the Metamask source code to build our own wallet management system.",1.0,"Build in-app wallet feature - In order to have better UX for users, we came up with a solution to build an in-app wallet management system. This app will have these features: 1. Able to create and restore a wallet from a seed phrase. Password-protected encrypted key vault. 2. Able to manage ETH and token balances. At first it will support ETH, BBO, DAI, USDC and TUSD. 3. Able to sign & broadcast transactions to a different network. This will provide the capability of offloading some transactions to a different network than Ethereum Mainnet, while keeping all the payment-related transactions on the Ethereum Mainnet. 4. Integrates into the app as a web component, so users can sign & confirm transactions like in normal applications. The idea came from seeing the use of eth-lightwallet, and we feel that we can use part of the Metamask source code to build our own wallet management system.",0,build in app wallet feature in order to have better ux for users we came up with a solution to build an in app wallet management system this app will have these features able to create and restore a wallet from a seed phrase password protected encrypted key vault able to manage eth and token balances at first it will support eth bbo dai usdc and tusd able to sign broadcast transactions to a different network this will provide the capability of offloading some transactions to a different network than ethereum mainnet while keeping all the payment related transactions on the ethereum mainnet integrates into the app as a web component so users can sign confirm transactions like in normal applications the idea came from seeing the use of eth lightwallet and we feel that we can use part of the metamask source code to build our own wallet management system ,0 3504,13878874467.0,IssuesEvent,2020-10-17 11:50:12,Tithibots/tithif,https://api.github.com/repos/Tithibots/tithif,opened,"Create session.py for handling facebook sessions, similar to other project tithiwa. ",Selenium Automation good first issue hacktoberfest python,"In the case of Facebook, we need to get cookies and store them in the session file. Then the main logic is already built; just look at this [tithiwa/session.py](https://github.com/Tithibots/tithiwa/blob/main/tithiwa/session.py) You need to do: 1. Change the file extension from `.wa` to `.fb` 2. Get cookies and set cookies; look at [this](https://www.selenium.dev/documentation/en/support_packages/working_with_cookies/) ",1.0,"Create session.py for handling facebook sessions, similar to other project tithiwa. 
- In the case of Facebook, we need to get cookies and store them in the session file. Then the main logic is already built; just look at this [tithiwa/session.py](https://github.com/Tithibots/tithiwa/blob/main/tithiwa/session.py) You need to do: 1. Change the file extension from `.wa` to `.fb` 2. Get cookies and set cookies; look at [this](https://www.selenium.dev/documentation/en/support_packages/working_with_cookies/) ",1,create session py for handling facebook sessions similar to other project tithiwa in the case of facebook we need to get cookies and store them in the session file then the main logic is already built just look at this you need to do change the file extension from wa to fb get cookies and set cookies look at ,1 482091,13896706890.0,IssuesEvent,2020-10-19 17:35:12,AY2021S1-CS2103T-T13-2/tp,https://api.github.com/repos/AY2021S1-CS2103T-T13-2/tp,closed,"As a user who has used the app a few times now, I want to be able to search for my old flash cards within the question",priority.High type.Story,`find` command being able to handle specified `q/keyword`.,1.0,"As a user who has used the app a few times now, I want to be able to search for my old flash cards within the question - `find` command being able to handle specified `q/keyword`.",0,as a user who has used the app a few times now i want to be able to search for my old flash cards within the question find command being able to handle specified q keyword ,0 33,2906183789.0,IssuesEvent,2015-06-19 08:18:04,MISP/MISP,https://api.github.com/repos/MISP/MISP,opened,Sync filtering based on tags,automation c-list enhancement,"As discussed for various instances (especially incident response groups), Sync filtering based on tag would be an easy option to limit the distribution at the sync point.
Potential users: @cudeso (misp.be) @adulau (CIRCL and other)",1,sync filtering based on tags as discussed for various instances especially incident response groups sync filtering based on tag would be an easy option to limit the distribution at the sync point potential users cudeso misp be adulau circl and other ,1 2060,11351382001.0,IssuesEvent,2020-01-24 11:05:09,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,closed,Bump Elastic Stack to 7.6,automation ci,"A new release is coming so we have to update the default Elastic Stack on branch 7.x * Update `ELASTIC_STACK_VERSION` param on Jenkinsfiles (.ci/*.groovy and .ci/Jenkinsfile files) * Update `.ci/scripts/7.0-upgrade.sh` Elastic stack used for the update * Update `SUPPORTED_VERSIONS` on [compose.py](https://github.com/elastic/apm-integration-testing/blob/7.x/scripts/compose.py#L2115-L2131) * Update [APM server versions](https://github.com/elastic/apm-integration-testing/blob/7.x/tests/versions/apm_server.yml)",1.0,"Bump Elastic Stack to 7.6 - A new release is coming so we have to update the default Elastic Stack on branch 7.x * Update `ELASTIC_STACK_VERSION` param on Jenkinsfiles (.ci/*.groovy and .ci/Jenkinsfile files) * Update `.ci/scripts/7.0-upgrade.sh` Elastic stack used for the update * Update `SUPPORTED_VERSIONS` on [compose.py](https://github.com/elastic/apm-integration-testing/blob/7.x/scripts/compose.py#L2115-L2131) * Update [APM server versions](https://github.com/elastic/apm-integration-testing/blob/7.x/tests/versions/apm_server.yml)",1,bump elastic stack to a new release is coming so we have to update the default elastic stack on branch x update elastic stack version param on jenkinsfiles ci groovy and ci jenkinsfile files update ci scripts upgrade sh elastic stack used for the update update supported versions on update ,1 2173,11502157262.0,IssuesEvent,2020-02-12 18:33:32,surge-synthesizer/surge,https://api.github.com/repos/surge-synthesizer/surge,closed,C1-8 controls only report values 0-9 to host now (really: windows w_char sprintfs),Host automation UI related Windows,"It was 0.00-100.00% previously. Reaper 64-bit VST3.",1.0,"C1-8 controls only report values 0-9 to host now (really: windows w_char sprintfs) - It was 0.00-100.00% previously. 
Reaper 64-bit VST3.",1, controls only report values to host now really windows w char sprintfs it was previously reaper bit ,1 76817,14685200031.0,IssuesEvent,2021-01-01 07:37:50,log2timeline/plaso,https://api.github.com/repos/log2timeline/plaso,closed,DeprecationWarning: assertDictContainsSubset is deprecated,code health testing,"``` /usr/lib64/python3.9/unittest/case.py:1134: DeprecationWarning: assertDictContainsSubset is deprecated warnings.warn('assertDictContainsSubset is deprecated', ```",1.0,"DeprecationWarning: assertDictContainsSubset is deprecated - ``` /usr/lib64/python3.9/unittest/case.py:1134: DeprecationWarning: assertDictContainsSubset is deprecated warnings.warn('assertDictContainsSubset is deprecated', ```",0,deprecationwarning assertdictcontainssubset is deprecated usr unittest case py deprecationwarning assertdictcontainssubset is deprecated warnings warn assertdictcontainssubset is deprecated ,0 288533,24912114303.0,IssuesEvent,2022-10-30 00:53:01,veiko/jest-a11y,https://api.github.com/repos/veiko/jest-a11y,opened,test: assertAriaExpanded ,good first issue 🦺 test,"### Describe the feature you'd like: Tests for `assertAriaExpanded` ### Suggested implementation: - Create `src/utils/__tests__/assertAriaExpanded.spec.tsx` - Follow implementation at `src/utils/__tests__/assertFocusTrap.spec.tsx`",1.0,"test: assertAriaExpanded - ### Describe the feature you'd like: Tests for `assertAriaExpanded` ### Suggested implementation: - Create `src/utils/__tests__/assertAriaExpanded.spec.tsx` - Follow implementation at `src/utils/__tests__/assertFocusTrap.spec.tsx`",0,test assertariaexpanded describe the feature you d like tests for assertariaexpanded suggested implementation create src utils tests assertariaexpanded spec tsx follow implementation at src utils tests assertfocustrap spec tsx ,0 245108,20746216539.0,IssuesEvent,2022-03-14 23:27:47,apache/trafficserver,https://api.github.com/repos/apache/trafficserver,closed,Failure in schedule_on_thread test,AuTest,"Saw this failure in PR #7736. I think this is unrelated to the PR, so I am noting it here before I move on. This is the output of the traffic_server process ``` [Apr 23 07:40:02.093] traffic_server DIAG: (TSContSchedule_test.init) initializing plugin for testing TSContScheduleOnThread [Apr 23 07:40:02.144] [ET_TASK 1] DIAG: (TSContSchedule_test.schedule) [TSContSchedule_test] scheduling continuation [Apr 23 07:40:02.145] [ET_NET 10] DIAG: (TSContSchedule_test.handler) TSContScheduleOnThread handler 1 thread [0x7f2637b4a010] [Apr 23 07:40:02.145] [ET_NET 10] DIAG: (TSContSchedule_test.handler) [TSContSchedule_test] scheduling continuation [Apr 23 07:40:02.145] [ET_NET 10] DIAG: (TSContSchedule_test.handler) TSContScheduleOnThread handler 2 thread [0x7f2637b4a010] [Apr 23 07:40:02.145] [ET_NET 10] DIAG: (TSContSchedule_test.handler) TSContScheduleOnThread handler 2 thread [0x7f2637b4a010] [Apr 23 07:40:02.145] [ET_NET 10] DIAG: (TSContSchedule_test.check) pass [should be the same thread] Traffic Server 10.0.0 Apr 23 2021 07:24:41 docker-gd-2 traffic_server: using root directory '/var/tmp/ausb-7736.15400/schedule_on_thread/ts ``` The output is missing ``` (TSContSchedule_test.check) pass [should no be the same thread] ``` The test case command is just a print. The real test is launched from the plugin init. Perhaps the test is shutting down before the final bits of the test complete?",1.0,"Failure in schedule_on_thread test - Saw this failure in PR #7736. 
I think this is unrelated to the PR, so I am noting it here before I move on. This is the output of the traffic_server process ``` [Apr 23 07:40:02.093] traffic_server DIAG: (TSContSchedule_test.init) initializing plugin for testing TSContScheduleOnThread [Apr 23 07:40:02.144] [ET_TASK 1] DIAG: (TSContSchedule_test.schedule) [TSContSchedule_test] scheduling continuation [Apr 23 07:40:02.145] [ET_NET 10] DIAG: (TSContSchedule_test.handler) TSContScheduleOnThread handler 1 thread [0x7f2637b4a010] [Apr 23 07:40:02.145] [ET_NET 10] DIAG: (TSContSchedule_test.handler) [TSContSchedule_test] scheduling continuation [Apr 23 07:40:02.145] [ET_NET 10] DIAG: (TSContSchedule_test.handler) TSContScheduleOnThread handler 2 thread [0x7f2637b4a010] [Apr 23 07:40:02.145] [ET_NET 10] DIAG: (TSContSchedule_test.handler) TSContScheduleOnThread handler 2 thread [0x7f2637b4a010] [Apr 23 07:40:02.145] [ET_NET 10] DIAG: (TSContSchedule_test.check) pass [should be the same thread] Traffic Server 10.0.0 Apr 23 2021 07:24:41 docker-gd-2 traffic_server: using root directory '/var/tmp/ausb-7736.15400/schedule_on_thread/ts ``` The output is missing ``` (TSContSchedule_test.check) pass [should no be the same thread] ``` The test case command is just a print. The real test is launched from the plugin init. Perhaps the test is shutting down before the final bits of the test complete?",0,failure in schedule on thread test saw this failure in pr i think this is unrelated to the pr so i am noting it here before i move on this is the output of the traffic server process traffic server diag tscontschedule test init initializing plugin for testing tscontscheduleonthread diag tscontschedule test schedule scheduling continuation diag tscontschedule test handler tscontscheduleonthread handler thread diag tscontschedule test handler scheduling continuation diag tscontschedule test handler tscontscheduleonthread handler thread diag tscontschedule test handler tscontscheduleonthread handler thread diag tscontschedule test check pass traffic server apr docker gd traffic server using root directory var tmp ausb schedule on thread ts the output is missing tscontschedule test check pass the test case command is just a print the real test is launched from the plugin init perhaps the test is shutting down before the final bits of the test complete ,0 10067,31549486526.0,IssuesEvent,2023-09-02 00:02:27,rancher/qa-tasks,https://api.github.com/repos/rancher/qa-tasks,opened,Update vsphere internal documentation,area/jenkins-job area/env-automation,"we need to update internal docs with how to: * access the new setup * update jenkins with new VPN for vsphere runs ",1.0,"Update vsphere internal documentation - we need to update internal docs with how to: * access the new setup * update jenkins with new VPN for vsphere runs ",1,update vsphere internal documentation we need to update internal docs with how to access the new setup update jenkins with new vpn for vsphere runs ,1 163085,20260405058.0,IssuesEvent,2022-02-15 06:36:51,YJSoft/nextworld,https://api.github.com/repos/YJSoft/nextworld,opened,jest-26.1.0.tgz: 2 vulnerabilities (highest severity is: 5.6),security vulnerability,"
Vulnerable Library - jest-26.1.0.tgz

Path to dependency file: /package.json

Path to vulnerable library: /node_modules/node-notifier/package.json

Found in HEAD commit: e6bb75a02bdc1736d9bdac96043a42156c72da5b

## Vulnerabilities

| CVE | Severity | CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2020-7789](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7789) | Medium | 5.6 | node-notifier-7.0.0.tgz | Transitive | 26.2.0 | ❌ |
| [CVE-2021-32640](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32640) | Medium | 5.3 | ws-7.3.0.tgz | Transitive | 26.2.0 | ❌ |

## Details
**CVE-2020-7789**

### Vulnerable Library - node-notifier-7.0.0.tgz

A Node.js module for sending notifications on native Mac, Windows (post and pre 8) and Linux (or Growl as fallback)

Library home page: https://registry.npmjs.org/node-notifier/-/node-notifier-7.0.0.tgz

Path to dependency file: /package.json

Path to vulnerable library: /node_modules/node-notifier/package.json

Dependency Hierarchy:
- jest-26.1.0.tgz (Root Library)
  - core-26.1.0.tgz
    - reporters-26.1.0.tgz
      - :x: **node-notifier-7.0.0.tgz** (Vulnerable Library)

Found in HEAD commit: e6bb75a02bdc1736d9bdac96043a42156c72da5b

Found in base branch: master

### Vulnerability Details

This affects the package node-notifier before 9.0.0. It allows an attacker to run arbitrary commands on Linux machines due to the options params not being sanitised when being passed an array.
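The failure mode is generic command injection: an option value reaches a shell unescaped. A minimal Python illustration of the pattern follows; it is not node-notifier's actual code, and `notify-send` is only a stand-in command:

```python
import subprocess

# Attacker-controlled "option" value carrying a shell payload.
title = 'hello"; echo pwned; echo "'

# Vulnerable pattern: interpolating the option into a shell command string;
# the injected `echo pwned` executes.
subprocess.run(f'notify-send "{title}" "body"', shell=True)

# Safer pattern: pass arguments as a list so no shell parsing occurs.
subprocess.run(["notify-send", title, "body"])
```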

Publish Date: 2020-12-11

URL: CVE-2020-7789

### CVSS 3 Score Details (5.6)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: High
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: Low
  - Integrity Impact: Low
  - Availability Impact: Low
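As a cross-check, the 5.6 follows from the CVSS v3.1 base-score formula; a small Python sketch using the published v3.1 metric weights (AV:N = 0.85, AC:H = 0.44, PR:N = 0.85, UI:N = 0.85, C/I/A Low = 0.22):

```python
import math

def cvss31_base(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score for an Unchanged-scope vector."""
    iss = 1 - (1 - c) * (1 - i) * (1 - a)
    impact = 6.42 * iss
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    # Spec's Roundup: round up to one decimal place (approximated here).
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:L
print(cvss31_base(av=0.85, ac=0.44, pr=0.85, ui=0.85, c=0.22, i=0.22, a=0.22))  # 5.6
```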

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7789

Release Date: 2020-12-11

Fix Resolution (node-notifier): 8.0.1

Direct dependency fix Resolution (jest): 26.2.0

Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
**CVE-2021-32640**

### Vulnerable Library - ws-7.3.0.tgz

Simple to use, blazing fast and thoroughly tested websocket client and server for Node.js

Library home page: https://registry.npmjs.org/ws/-/ws-7.3.0.tgz

Path to dependency file: /package.json

Path to vulnerable library: /node_modules/ws/package.json

Dependency Hierarchy:
- jest-26.1.0.tgz (Root Library)
  - jest-cli-26.1.0.tgz
    - jest-config-26.1.0.tgz
      - jest-environment-jsdom-26.1.0.tgz
        - jsdom-16.2.2.tgz
          - :x: **ws-7.3.0.tgz** (Vulnerable Library)

Found in HEAD commit: e6bb75a02bdc1736d9bdac96043a42156c72da5b

Found in base branch: master

### Vulnerability Details

ws is an open source WebSocket client and server library for Node.js. A specially crafted value of the `Sec-Websocket-Protocol` header can be used to significantly slow down a ws server. The vulnerability has been fixed in ws@7.4.6 (https://github.com/websockets/ws/commit/00c425ec77993773d823f018f64a5c44e17023ff). In vulnerable versions of ws, the issue can be mitigated by reducing the maximum allowed length of the request headers using the [`--max-http-header-size=size`](https://nodejs.org/api/cli.html#cli_max_http_header_size_size) and/or the [`maxHeaderSize`](https://nodejs.org/api/http.html#http_http_createserver_options_requestlistener) options.
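The defensive idea behind both mitigations (bounding header size before doing any per-token work) is generic. A hypothetical Python sketch of the same check, not the actual ws fix:

```python
MAX_HEADER_SIZE = 8 * 1024  # analogous in spirit to Node's --max-http-header-size

def parse_subprotocols(header_value: str) -> list[str]:
    """Reject an oversized Sec-WebSocket-Protocol value before splitting it."""
    if len(header_value) > MAX_HEADER_SIZE:
        raise ValueError("Sec-WebSocket-Protocol header too large")
    return [token.strip() for token in header_value.split(",") if token.strip()]
```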

Publish Date: 2021-05-25

URL: CVE-2021-32640

### CVSS 3 Score Details (5.3)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: Low
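Plugging this vector into the `cvss31_base` sketch shown above for CVE-2020-7789 (AC:L = 0.77, C and I None = 0.0, A Low = 0.22) reproduces the 5.3:

```python
print(cvss31_base(av=0.85, ac=0.77, pr=0.85, ui=0.85, c=0.0, i=0.0, a=0.22))  # 5.3
```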

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Origin: https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693

Release Date: 2021-05-25

Fix Resolution (ws): 7.4.6

Direct dependency fix Resolution (jest): 26.2.0

Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
",True,"jest-26.1.0.tgz: 2 vulnerabilities (highest severity is: 5.6) -
",0,jest tgz vulnerabilities highest severity is vulnerable library jest tgz path to dependency file package json path to vulnerable library node modules node notifier package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available medium node notifier tgz transitive ❌ medium ws tgz transitive ❌ details cve vulnerable library node notifier tgz a node js module for sending notifications on native mac windows post and pre and linux or growl as fallback library home page a href path to dependency file package json path to vulnerable library node modules node notifier package json dependency hierarchy jest tgz root library core tgz reporters tgz x node notifier tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package node notifier before it allows an attacker to run arbitrary commands on linux machines due to the options params not being sanitised when being passed an array publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node notifier direct dependency fix resolution jest step up your open source security game with whitesource cve vulnerable library ws tgz simple to use blazing fast and thoroughly tested websocket client and server for node js library home page a href path to dependency file package json path to vulnerable library node modules ws package json dependency hierarchy jest tgz root library jest cli tgz jest config tgz jest environment jsdom tgz jsdom tgz x ws tgz vulnerable library found in head commit a href found in base branch master vulnerability details ws is an open source websocket client and server library for node js a specially crafted value of the sec websocket protocol header can be used to significantly slow down a ws server the vulnerability has been fixed in ws in vulnerable versions of ws the issue can be mitigated by reducing the maximum allowed length of the request headers using the and or the options publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ws direct dependency fix resolution jest step up your open source security game with whitesource istransitivedependency false dependencytree jest isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails this affects the package node notifier before it allows an attacker to run arbitrary commands on linux machines due to the options params not being sanitised when being passed an array vulnerabilityurl istransitivedependency false dependencytree jest isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails ws is an open source websocket client and server library for node js a specially crafted value of the sec websocket protocol header can be used to 
significantly slow down a ws server the vulnerability has been fixed in ws in vulnerable versions of ws the issue can be mitigated by reducing the maximum allowed length of the request headers using the and or the options vulnerabilityurl ,0 5265,18953621913.0,IssuesEvent,2021-11-18 17:34:57,codeanit/til,https://api.github.com/repos/codeanit/til,opened,Automation and orchestration,basics devops automation diff,"Automation suggests that a sysadmin has invented a system to cause a computer to do something that would normally have to be done manually. In automation, the sysadmin has already made most of the decisions on what needs to be done, and all the computer must do is execute a ""recipe"" of tasks. Orchestration suggests that a sysadmin has set up a system to do something on its own based on a set of rules, parameters, and observations. In orchestration, the sysadmin knows the desired end result but leaves it up to the computer to decide what to do. The difference between automation and orchestration is primarily in intent and tooling. Technically, automation can be considered a subset of orchestration. While orchestration suggests many moving parts, automation usually refers to a singular task or a small number of strongly related tasks. Orchestration works at a higher level and is expected to make decisions based on changing conditions and requirements. For instance, automation usually involves scripting, often in Bash or Python or similar, and it often suggests scheduling something to happen at either a precise time or upon a specific event. However, orchestration often begins with an application that's purpose-built for a set of tasks that may happen irregularly, on demand, or as a result of any number of trigger events, and the exact results may even depend on a variety of conditions. - [x] https://opensource.com/article/20/11/orchestration-vs-automation",1.0,"Automation and orchestration - Automation suggests that a sysadmin has invented a system to cause a computer to do something that would normally have to be done manually. In automation, the sysadmin has already made most of the decisions on what needs to be done, and all the computer must do is execute a ""recipe"" of tasks. Orchestration suggests that a sysadmin has set up a system to do something on its own based on a set of rules, parameters, and observations. In orchestration, the sysadmin knows the desired end result but leaves it up to the computer to decide what to do. The difference between automation and orchestration is primarily in intent and tooling. Technically, automation can be considered a subset of orchestration. While orchestration suggests many moving parts, automation usually refers to a singular task or a small number of strongly related tasks. Orchestration works at a higher level and is expected to make decisions based on changing conditions and requirements. For instance, automation usually involves scripting, often in Bash or Python or similar, and it often suggests scheduling something to happen at either a precise time or upon a specific event. However, orchestration often begins with an application that's purpose-built for a set of tasks that may happen irregularly, on demand, or as a result of any number of trigger events, and the exact results may even depend on a variety of conditions. 
- [x] https://opensource.com/article/20/11/orchestration-vs-automation",1,automation and orchestration automation suggests that a sysadmin has invented a system to cause a computer to do something that would normally have to be done manually in automation the sysadmin has already made most of the decisions on what needs to be done and all the computer must do is execute a recipe of tasks orchestration suggests that a sysadmin has set up a system to do something on its own based on a set of rules parameters and observations in orchestration the sysadmin knows the desired end result but leaves it up to the computer to decide what to do the difference between automation and orchestration is primarily in intent and tooling technically automation can be considered a subset of orchestration while orchestration suggests many moving parts automation usually refers to a singular task or a small number of strongly related tasks orchestration works at a higher level and is expected to make decisions based on changing conditions and requirements for instance automation usually involves scripting often in bash or python or similar and it often suggests scheduling something to happen at either a precise time or upon a specific event however orchestration often begins with an application that s purpose built for a set of tasks that may happen irregularly on demand or as a result of any number of trigger events and the exact results may even depend on a variety of conditions ,1 5572,20122122095.0,IssuesEvent,2022-02-08 04:16:26,org-acme/test2,https://api.github.com/repos/org-acme/test2,closed,Test,automation security checks,"The following protections were added to the main branch of the repository: - A - B - C @jlmayorga",1.0,"Test - The following protections were added to the main branch of the repository: - A - B - C @jlmayorga",1,test the following protections were added to the main branch of the repository a b c jlmayorga,1 6986,24091823359.0,IssuesEvent,2022-09-19 15:19:03,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,opened,[B/R] restore failed under nemesis,area/docdb status/awaiting-triage qa_automation,"### Description env: - 5 nodes - c5.xlarge - 2.15.3.0-b133 Case is run cycle endlessly till some backup/restore task will fail: 1. create 10 tables with 2 gb data each (different amount of columns each. All fields is varchar) 2. start backup creation 3. during backup we start some random nemesis like: restart VM, start/stop VM, restart tserver/master processes with yb-cluster.ctl.sh, detach random volume 4. check that backup is ended successfully 5. restore backup on same universe in different namespace 6. run same nemesis during restore 7. check that restore is happen 8. partially check that data is the same as it was before backup This is going by cycles. And we run it and wait till something failed Here is a reports (write me if you want to find out a logs, because i can't load it here (too big)): `03ec66fc-f068-431f-8424-d53a978ccf6e` - failed after 11 hours - failed on restore `081e4203-83d5-414b-9975-2aa8a0279944` - after 5 hours - failed on restore `358dd167-5dda-493a-8927-1a0f5d9c42ed` - 3 hours and 2 cycles - failed on restore",1.0,"[B/R] restore failed under nemesis - ### Description env: - 5 nodes - c5.xlarge - 2.15.3.0-b133 Case is run cycle endlessly till some backup/restore task will fail: 1. create 10 tables with 2 gb data each (different amount of columns each. All fields is varchar) 2. start backup creation 3. 
during backup we start some random nemesis like: restart VM, start/stop VM, restart tserver/master processes with yb-cluster.ctl.sh, detach random volume 4. check that backup is ended successfully 5. restore backup on same universe in different namespace 6. run same nemesis during restore 7. check that restore is happen 8. partially check that data is the same as it was before backup This is going by cycles. And we run it and wait till something failed Here is a reports (write me if you want to find out a logs, because i can't load it here (too big)): `03ec66fc-f068-431f-8424-d53a978ccf6e` - failed after 11 hours - failed on restore `081e4203-83d5-414b-9975-2aa8a0279944` - after 5 hours - failed on restore `358dd167-5dda-493a-8927-1a0f5d9c42ed` - 3 hours and 2 cycles - failed on restore",1, restore failed under nemesis description env nodes xlarge case is run cycle endlessly till some backup restore task will fail create tables with gb data each different amount of columns each all fields is varchar start backup creation during backup we start some random nemesis like restart vm start stop vm restart tserver master processes with yb cluster ctl sh detach random volume check that backup is ended successfully restore backup on same universe in different namespace run same nemesis during restore check that restore is happen partially check that data is the same as it was before backup this is going by cycles and we run it and wait till something failed here is a reports write me if you want to find out a logs because i can t load it here too big failed after hours failed on restore after hours failed on restore hours and cycles failed on restore,1 6114,22204121574.0,IssuesEvent,2022-06-07 13:38:49,gchq/gaffer-docker,https://api.github.com/repos/gchq/gaffer-docker,closed,Improve process for updating copyright headers,automation,"At the moment there are no automatic checks to ensure that copyright dates are updated when a file is modified. Instead there's a reliance on checking this as part of code review. This approach has resulted in copyright dates becoming wrong. There are some existing scripts which need to be run manually. However, these do not update headers correctly in all cases and are not suitable for automatic running by GitHub Actions CI. The existing scripts should be replaced so that copyright dates can be checked as part of GitHub Actions CI. This removes any need for manual checks during code review.",1.0,"Improve process for updating copyright headers - At the moment there are no automatic checks to ensure that copyright dates are updated when a file is modified. Instead there's a reliance on checking this as part of code review. This approach has resulted in copyright dates becoming wrong. There are some existing scripts which need to be run manually. However, these do not update headers correctly in all cases and are not suitable for automatic running by GitHub Actions CI. The existing scripts should be replaced so that copyright dates can be checked as part of GitHub Actions CI. 
This removes any need for manual checks during code review.",1,improve process for updating copyright headers at the moment there are no automatic checks to ensure that copyright dates are updated when a file is modified instead there s a reliance on checking this as part of code review this approach has resulted in copyright dates becoming wrong there are some existing scripts which need to be run manually however these do not update headers correctly in all cases and are not suitable for automatic running by github actions ci the existing scripts should be replaced so that copyright dates can be checked as part of github actions ci this removes any need for manual checks during code review ,1 683120,23368716343.0,IssuesEvent,2022-08-10 17:43:03,PG649-3D-RPG/Creature-Generation,https://api.github.com/repos/PG649-3D-RPG/Creature-Generation,opened,Package blocking project building,bug priority,"Please don't add random dependencies without communicating it clearly in group meeting. Maybe add check on Main branch, whether package still compiles. Library/PackageCache/com.pg649.creaturegenerator@f8fdd242ea/Runtime/Metaball/MeshGenerator.cs(4,7): error CS0246: The type or namespace name 'log4net' could not be found (are you missing a using directive or an assembly reference?) ",1.0,"Package blocking project building - Please don't add random dependencies without communicating it clearly in group meeting. Maybe add check on Main branch, whether package still compiles. Library/PackageCache/com.pg649.creaturegenerator@f8fdd242ea/Runtime/Metaball/MeshGenerator.cs(4,7): error CS0246: The type or namespace name 'log4net' could not be found (are you missing a using directive or an assembly reference?) ",0,package blocking project building please don t add random dependencies without communicating it clearly in group meeting maybe add check on main branch whether package still compiles library packagecache com creaturegenerator runtime metaball meshgenerator cs error the type or namespace name could not be found are you missing a using directive or an assembly reference ,0 3645,14241760948.0,IssuesEvent,2020-11-19 00:08:04,rstudio/rstudio,https://api.github.com/repos/rstudio/rstudio,reopened,Add internal command for closing all buffers without saving,automation,"For automation, it would be very helpful to have a command to close all open buffers (including satellite windows and source columns) without prompting to save modified buffers. Perhaps simply: `.rs.api.closeAllSourceBuffersWithoutSaving()` or something equally ugly. Don't think it should be hooked to a menu or even the command-palette as accidentally triggering would be Bad (tm). ",1.0,"Add internal command for closing all buffers without saving - For automation, it would be very helpful to have a command to close all open buffers (including satellite windows and source columns) without prompting to save modified buffers. Perhaps simply: `.rs.api.closeAllSourceBuffersWithoutSaving()` or something equally ugly. Don't think it should be hooked to a menu or even the command-palette as accidentally triggering would be Bad (tm). 
",1,add internal command for closing all buffers without saving for automation it would be very helpful to have a command to close all open buffers including satellite windows and source columns without prompting to save modified buffers perhaps simply rs api closeallsourcebufferswithoutsaving or something equally ugly don t think it should be hooked to a menu or even the command palette as accidentally triggering would be bad tm ,1 3311,13445670348.0,IssuesEvent,2020-09-08 11:47:50,rudiments-dev/hardcore,https://api.github.com/repos/rudiments-dev/hardcore,opened,Настроить уведомления codecov,automation,"- [ ] Уменьшить количество присылаемых сообщений после генерации отчета codecov.io Настройка: https://docs.codecov.io/docs/notifications#preventing-notifications-until-after-n-builds У нас нет возможности полностью отключить нотификации, так как они привязаны к нотификациям гитхаба, но есть возможность отсылать сообщения только после нескольких билдов для одного коммита: ``` codecov: notify: after_n_builds: 5 ```",1.0,"Настроить уведомления codecov - - [ ] Уменьшить количество присылаемых сообщений после генерации отчета codecov.io Настройка: https://docs.codecov.io/docs/notifications#preventing-notifications-until-after-n-builds У нас нет возможности полностью отключить нотификации, так как они привязаны к нотификациям гитхаба, но есть возможность отсылать сообщения только после нескольких билдов для одного коммита: ``` codecov: notify: after_n_builds: 5 ```",1,настроить уведомления codecov уменьшить количество присылаемых сообщений после генерации отчета codecov io настройка у нас нет возможности полностью отключить нотификации так как они привязаны к нотификациям гитхаба но есть возможность отсылать сообщения только после нескольких билдов для одного коммита codecov notify after n builds ,1 4795,17539912937.0,IssuesEvent,2021-08-12 10:41:46,pnp/powershell,https://api.github.com/repos/pnp/powershell,closed,[BUG] Set-PnPSite -Sharing Disabled command fails each time with a 401 Unauthorized error.,bug azure-automation,"### Reporting an Issue or Missing Feature I used the Set-PnPSite command to disable the sharing settings for the SPO sites associated with Teams' team. The Set-PnPSite-Sharing Disabled command now fails each time with a 401 Unauthorized error. This began to happen suddenly at July 31, 2021. It runs on AzureAutomation and has not changed any code or modules. The app specified in-ClientId is pre-registered in Azure AD, and the ""Sites.Manage.All"" API permission is assigned to the app. It seems to occur under the following conditions - Occurs on SPO sites linked to the microsoft 365 group - Occurs when logging in with a certificate - Occurs when Set-PnPSite is set to the Sharing option - Occurs with Set-PnPSite but not with Set-SPOSite I detected this with SharePointPnPPowerShellOnline 3.24. 2008.1, but it appears to be happening with PnP 1.7. 0 as well. ### Expected behavior Set-PnPSite command should succeed without error. ### Actual behavior Set-PnPSite command fails with 401 Unauthorized. ### Steps to reproduce behavior 1.Connect-PnPOnline -Url 'SPO sites URL associated with Teams' team' -Tenant 'TenantId' -ClientId 'ApplicationId' -Thumbprint 'CertificateThumbprint' 2.Set-PnPSite -Sharing Disabled -ErrorAction Stop ### What is the version of the Cmdlet module you are running? PnP.PowerShell 1.7.0 SharePointPnPPowerShellOnline 3.24.2008.1 ### Which operating system/environment are you running PnP PowerShell on? 
- [X] Windows - [ ] Linux - [ ] MacOS - [ ] Azure Cloud Shell - [ ] Azure Functions - [X] Other : Azure Automation ",1.0,"[BUG] Set-PnPSite -Sharing Disabled command fails each time with a 401 Unauthorized error. - ### Reporting an Issue or Missing Feature I used the Set-PnPSite command to disable the sharing settings for the SPO sites associated with Teams' team. The Set-PnPSite-Sharing Disabled command now fails each time with a 401 Unauthorized error. This began to happen suddenly at July 31, 2021. It runs on AzureAutomation and has not changed any code or modules. The app specified in-ClientId is pre-registered in Azure AD, and the ""Sites.Manage.All"" API permission is assigned to the app. It seems to occur under the following conditions - Occurs on SPO sites linked to the microsoft 365 group - Occurs when logging in with a certificate - Occurs when Set-PnPSite is set to the Sharing option - Occurs with Set-PnPSite but not with Set-SPOSite I detected this with SharePointPnPPowerShellOnline 3.24. 2008.1, but it appears to be happening with PnP 1.7. 0 as well. ### Expected behavior Set-PnPSite command should succeed without error. ### Actual behavior Set-PnPSite command fails with 401 Unauthorized. ### Steps to reproduce behavior 1.Connect-PnPOnline -Url 'SPO sites URL associated with Teams' team' -Tenant 'TenantId' -ClientId 'ApplicationId' -Thumbprint 'CertificateThumbprint' 2.Set-PnPSite -Sharing Disabled -ErrorAction Stop ### What is the version of the Cmdlet module you are running? PnP.PowerShell 1.7.0 SharePointPnPPowerShellOnline 3.24.2008.1 ### Which operating system/environment are you running PnP PowerShell on? - [X] Windows - [ ] Linux - [ ] MacOS - [ ] Azure Cloud Shell - [ ] Azure Functions - [X] Other : Azure Automation ",1, set pnpsite sharing disabled command fails each time with a unauthorized error reporting an issue or missing feature i used the set pnpsite command to disable the sharing settings for the spo sites associated with teams team the set pnpsite sharing disabled command now fails each time with a unauthorized error this began to happen suddenly at july it runs on azureautomation and has not changed any code or modules the app specified in clientid is pre registered in azure ad and the sites manage all api permission is assigned to the app it seems to occur under the following conditions occurs on spo sites linked to the microsoft group occurs when logging in with a certificate occurs when set pnpsite is set to the sharing option occurs with set pnpsite but not with set sposite i detected this with sharepointpnppowershellonline but it appears to be happening with pnp as well expected behavior set pnpsite command should succeed without error actual behavior set pnpsite command fails with unauthorized steps to reproduce behavior connect pnponline  url  spo sites url associated with teams team tenant  tenantid   clientid  applicationid   thumbprint  certificatethumbprint set pnpsite  sharing disabled erroraction stop what is the version of the cmdlet module you are running pnp powershell sharepointpnppowershellonline which operating system environment are you running pnp powershell on windows linux macos azure cloud shell azure functions other azure automation ,1 6095,22149651614.0,IssuesEvent,2022-06-03 15:28:02,elastic/kibana,https://api.github.com/repos/elastic/kibana,closed,[Meta] FTR configurable test users for xpack,Team:QA Meta automation,"fixes: https://github.com/elastic/kibana/issues/26937 Reference for OSS tests: 
https://github.com/elastic/kibana/pull/52431 **Objective** : We should run all CI tests with security enabled and with a user who has the minimal documented privileges to allow them to be successful. **Describe a specific use case for the feature:** The x-pack tests already do run with security enabled but the `test_user` has the superuser role. This issue tries to eliminate the usage of superuser role in the tests and instead use the right set of roles and privileges required to run the tests. Here I have listed the xpack apps which currently run as `superuser`. Each of the tests in these apps need to be run as a `test_user` with the right set of roles and privileges. These tests exclude `feature controls tests` As an example xpack tests under `api_keys`, and `dashboard_view_mode.js` have been modified to run as a `test_user` with the right set of roles and privileges. Please Note: This requires contribution from all teams. ******************************************* ************************************************************************************** - [x] advanced settings ( feature controls) - [x] https://github.com/elastic/kibana/pull/31652 - [x] api_keys #60808 - [x] apm - [x] https://github.com/elastic/kibana/pull/31652 - [x] canvas https://github.com/elastic/kibana/pull/75917 - [x] cross-cluster-replication https://github.com/elastic/kibana/pull/70007 - dashboard - [x] https://github.com/elastic/kibana/pull/76055( dashboard drilldown- WIP) - [x] https://github.com/elastic/kibana/pull/78377 (dashboad_to_dashboard_drilldown) - [x] https://github.com/elastic/kibana/pull/52431 ( dashboard- OSS) - [x] data_views - [x] security - [x] https://github.com/elastic/kibana/pull/31652 - [x] dev_tools - [x] search profiler https://github.com/elastic/kibana/pull/69841 - discover - [x] async_scripted_field https://github.com/elastic/kibana/pull/71988 - [x] graph - [x] https://github.com/elastic/kibana/pull/31652 - [x] grok_debugger - [x] https://github.com/elastic/kibana/pull/108301 - [x] home - The home tests in x-pack are configured for test_user. 
- https://github.com/elastic/kibana/pull/77665 - [x] index-lifecycle-management - [x] https://github.com/elastic/kibana/pull/121262 - [x] index management - [x] https://github.com/elastic/kibana/pull/113078 - [x] infra - [x] https://github.com/elastic/kibana/pull/31652 - [x] ingest pipelines - [x] https://github.com/elastic/kibana/pull/102409 - [x] lens - [x] https://github.com/elastic/kibana/pull/76673 - [x] license_management - [x] https://github.com/elastic/kibana/pull/91097 - [x] logstash - [x] https://github.com/elastic/kibana/pull/126652 - [x] maps - [x] https://github.com/elastic/kibana/pull/70649 - [x] https://github.com/elastic/kibana/pull/75890 - [x] https://github.com/elastic/kibana/pull/75914 - [x] https://github.com/elastic/kibana/pull/75920 - [x] https://github.com/elastic/kibana/pull/84383 - [x] ml - [x] https://github.com/elastic/kibana/pull/31652 - [x] monitoring - [x] https://github.com/elastic/kibana/pull/31652 - [x] remote_clusters - [x] https://github.com/elastic/kibana/pull/77212 - [x] reporting_management - [x] https://github.com/elastic/kibana/pull/111626 - [x] rollup_job - [x] https://github.com/elastic/kibana/issues/84970 (sub-meta) - [x] rollup_job.js - https://github.com/elastic/kibana/pull/79567 - [x] saved object management - [x] https://github.com/elastic/kibana/pull/31652 - [x] security - [x] snapshot_restore ( regular tests) - [x] https://github.com/elastic/kibana/pull/126011 - [x] spaces ( feature controls+ regular tests) - [x] Feature Controls https://github.com/elastic/kibana/pull/38472 - [x] status_page ( regular tests) - Has no login specific functionality - [x] transform ( feature controls+ regular tests) - [x] upgrade_assistant ( feature controls+ regular tests) - [x] https://github.com/elastic/kibana/pull/70071 - [x] uptime ( feature controls only) - [x] https://github.com/elastic/kibana/pull/31652 - [x] visualize ( https://github.com/elastic/kibana/issues/79354 ( sub-meta) - [x] watcher - [x] https://github.com/elastic/kibana/pull/89068 ****************************************************************************************************************************** cc @elastic/kibana-qa ",1.0,"[Meta] FTR configurable test users for xpack - fixes: https://github.com/elastic/kibana/issues/26937 Reference for OSS tests: https://github.com/elastic/kibana/pull/52431 **Objective** : We should run all CI tests with security enabled and with a user who has the minimal documented privileges to allow them to be successful. **Describe a specific use case for the feature:** The x-pack tests already do run with security enabled but the `test_user` has the superuser role. This issue tries to eliminate the usage of superuser role in the tests and instead use the right set of roles and privileges required to run the tests. Here I have listed the xpack apps which currently run as `superuser`. Each of the tests in these apps need to be run as a `test_user` with the right set of roles and privileges. These tests exclude `feature controls tests` As an example xpack tests under `api_keys`, and `dashboard_view_mode.js` have been modified to run as a `test_user` with the right set of roles and privileges. Please Note: This requires contribution from all teams. 
******************************************* ************************************************************************************** - [x] advanced settings ( feature controls) - [x] https://github.com/elastic/kibana/pull/31652 - [x] api_keys #60808 - [x] apm - [x] https://github.com/elastic/kibana/pull/31652 - [x] canvas https://github.com/elastic/kibana/pull/75917 - [x] cross-cluster-replication https://github.com/elastic/kibana/pull/70007 - dashboard - [x] https://github.com/elastic/kibana/pull/76055( dashboard drilldown- WIP) - [x] https://github.com/elastic/kibana/pull/78377 (dashboad_to_dashboard_drilldown) - [x] https://github.com/elastic/kibana/pull/52431 ( dashboard- OSS) - [x] data_views - [x] security - [x] https://github.com/elastic/kibana/pull/31652 - [x] dev_tools - [x] search profiler https://github.com/elastic/kibana/pull/69841 - discover - [x] async_scripted_field https://github.com/elastic/kibana/pull/71988 - [x] graph - [x] https://github.com/elastic/kibana/pull/31652 - [x] grok_debugger - [x] https://github.com/elastic/kibana/pull/108301 - [x] home - The home tests in x-pack are configured for test_user. - https://github.com/elastic/kibana/pull/77665 - [x] index-lifecycle-management - [x] https://github.com/elastic/kibana/pull/121262 - [x] index management - [x] https://github.com/elastic/kibana/pull/113078 - [x] infra - [x] https://github.com/elastic/kibana/pull/31652 - [x] ingest pipelines - [x] https://github.com/elastic/kibana/pull/102409 - [x] lens - [x] https://github.com/elastic/kibana/pull/76673 - [x] license_management - [x] https://github.com/elastic/kibana/pull/91097 - [x] logstash - [x] https://github.com/elastic/kibana/pull/126652 - [x] maps - [x] https://github.com/elastic/kibana/pull/70649 - [x] https://github.com/elastic/kibana/pull/75890 - [x] https://github.com/elastic/kibana/pull/75914 - [x] https://github.com/elastic/kibana/pull/75920 - [x] https://github.com/elastic/kibana/pull/84383 - [x] ml - [x] https://github.com/elastic/kibana/pull/31652 - [x] monitoring - [x] https://github.com/elastic/kibana/pull/31652 - [x] remote_clusters - [x] https://github.com/elastic/kibana/pull/77212 - [x] reporting_management - [x] https://github.com/elastic/kibana/pull/111626 - [x] rollup_job - [x] https://github.com/elastic/kibana/issues/84970 (sub-meta) - [x] rollup_job.js - https://github.com/elastic/kibana/pull/79567 - [x] saved object management - [x] https://github.com/elastic/kibana/pull/31652 - [x] security - [x] snapshot_restore ( regular tests) - [x] https://github.com/elastic/kibana/pull/126011 - [x] spaces ( feature controls+ regular tests) - [x] Feature Controls https://github.com/elastic/kibana/pull/38472 - [x] status_page ( regular tests) - Has no login specific functionality - [x] transform ( feature controls+ regular tests) - [x] upgrade_assistant ( feature controls+ regular tests) - [x] https://github.com/elastic/kibana/pull/70071 - [x] uptime ( feature controls only) - [x] https://github.com/elastic/kibana/pull/31652 - [x] visualize ( https://github.com/elastic/kibana/issues/79354 ( sub-meta) - [x] watcher - [x] https://github.com/elastic/kibana/pull/89068 ****************************************************************************************************************************** cc @elastic/kibana-qa ",1, ftr configurable test users for xpack fixes reference for oss tests objective we should run all ci tests with security enabled and with a user who has the minimal documented privileges to allow them to be successful describe a specific use case 
for the feature the x pack tests already do run with security enabled but the test user has the superuser role this issue tries to eliminate the usage of superuser role in the tests and instead use the right set of roles and privileges required to run the tests here i have listed the xpack apps which currently run as superuser each of the tests in these apps need to be run as a test user with the right set of roles and privileges these tests exclude feature controls tests as an example xpack tests under api keys and dashboard view mode js have been modified to run as a test user with the right set of roles and privileges please note this requires contribution from all teams advanced settings feature controls api keys apm canvas cross cluster replication dashboard dashboard drilldown wip dashboad to dashboard drilldown dashboard oss data views security dev tools search profiler discover async scripted field graph grok debugger home the home tests in x pack are configured for test user index lifecycle management index management infra ingest pipelines lens license management logstash maps ml monitoring remote clusters reporting management rollup job sub meta rollup job js saved object management security snapshot restore regular tests spaces feature controls regular tests feature controls status page regular tests has no login specific functionality transform feature controls regular tests upgrade assistant feature controls regular tests uptime feature controls only visualize sub meta watcher cc elastic kibana qa ,1 2046,11308631293.0,IssuesEvent,2020-01-19 07:23:10,soffes/home,https://api.github.com/repos/soffes/home,opened,Dim lights at night,automation enhancement,"All lights should dim and go towards a reddish tint when the sun starts setting, and vice versa. There are a few exceptions: - outdoor lights - if party mode is on - guest bedroom This will likely require automations for all switches and an automation that runs in a timer loop every ~10 minutes to change to the next tick in the gradient.",1.0,"Dim lights at night - All lights should dim and go towards a reddish tint when the sun starts setting, and vice versa. There are a few exceptions: - outdoor lights - if party mode is on - guest bedroom This will likely require automations for all switches and an automation that runs in a timer loop every ~10 minutes to change to the next tick in the gradient.",1,dim lights at night all lights should dim and go towards a redish tint when the sun starts setting and vice versa there are a few exceptions outdoor lights if party mode is on guest bedroom this will likely require automations for all switches and an automation that runs in a timer loop every minutes to change to the next tick in the gradient ,1 809140,30176474803.0,IssuesEvent,2023-07-04 05:20:56,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,www.google.com - see bug description,priority-critical browser-focus-geckoview engine-gecko," **URL**: https://www.google.com/search?q=adna+Maresa+Villanueva+Cedillo&client=firefox-b-m&source=lnms&tbm=isch&sa=X&ved=0ahUKEwiWw73V4fP_AhUSH0QIHViTBmIQ_AUIBygC&biw=486&bih=955&biw=486&bih=955&biw=486&bih=955#imgrc=0fx922ikGoTfBM **Browser / Version**: Firefox Mobile 115.0 **Operating System**: Android 12 **Tested Another Browser**: No **Problem type**: Something else **Description**: the photo is personal **Steps to Reproduce**: The photo that appears is mine
View the screenshot
Browser Configuration
  • gfx.webrender.all: false
  • gfx.webrender.blob-images: true
  • gfx.webrender.enabled: false
  • image.mem.shared: true
  • buildID: 20230629134642
  • channel: release
  • hasTouchScreen: true
  • mixed active content blocked: false
  • mixed passive content blocked: false
  • tracking content blocked: false
[View console log messages](https://webcompat.com/console_logs/2023/7/f5931510-82af-417a-a6b5-09e7b399e6ab) _From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"www.google.com - see bug description - **URL**: https://www.google.com/search?q=adna+Maresa+Villanueva+Cedillo&client=firefox-b-m&source=lnms&tbm=isch&sa=X&ved=0ahUKEwiWw73V4fP_AhUSH0QIHViTBmIQ_AUIBygC&biw=486&bih=955&biw=486&bih=955&biw=486&bih=955#imgrc=0fx922ikGoTfBM **Browser / Version**: Firefox Mobile 115.0 **Operating System**: Android 12 **Tested Another Browser**: No **Problem type**: Something else **Description**: the photo is personal **Steps to Reproduce**: The photo that appears is mine
View the screenshot
Browser Configuration
  • gfx.webrender.all: false
  • gfx.webrender.blob-images: true
  • gfx.webrender.enabled: false
  • image.mem.shared: true
  • buildID: 20230629134642
  • channel: release
  • hasTouchScreen: true
  • mixed active content blocked: false
  • mixed passive content blocked: false
  • tracking content blocked: false
[View console log messages](https://webcompat.com/console_logs/2023/7/f5931510-82af-417a-a6b5-09e7b399e6ab) _From [webcompat.com](https://webcompat.com/) with ❤️_",0, see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description the photo is personal steps to reproduce the photo that appears is mine view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel release hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ ,0 10062,31473648757.0,IssuesEvent,2023-08-30 09:13:17,PGijsbers/pgijsbers.github.io,https://api.github.com/repos/PGijsbers/pgijsbers.github.io,closed,Benchmark (and possibly add) font subsetting,performance automation,"Test the effect of font subsetting by creating a pipeline that: 1. detects characters used per page 2a. creates a single subset for all pages and 2b. creates a single subset per page 3. compresses the subset to woff2 Compare the effects of 2a to 2b, and use the best alternative. At this point, there should be very large overlap between the pages, so that subsetting to a single file makes more sense for reusability. See also [fonttools](https://pypi.org/project/fonttools/).",1.0,"Benchmark (and possibly add) font subsetting - Test the effect of font subsetting by creating a pipeline that: 1. detects characters used per page 2a. creates a single subset for all pages and 2b. creates a single subset per page 3. compresses the subset to woff2 Compare the effects of 2a to 2b, and use the best alternative. At this point, there should be very large overlap between the pages, so that subsetting to a single file makes more sense for reusability. 
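The font-subsetting record above describes a concrete pipeline: collect the characters used per page, build one or more subsets, and compress to woff2. Below is a minimal sketch of steps 2-3 using the fontTools subsetter (see the fonttools link on the next line); the file names and sample page texts are placeholders, and WOFF2 output additionally requires the brotli package:

```python
from fontTools.subset import Options, Subsetter
from fontTools.ttLib import TTFont

def subset_font(src: str, dst: str, text: str) -> None:
    """Keep only the glyphs needed for `text` and save as WOFF2."""
    options = Options()
    options.flavor = "woff2"       # WOFF2 output needs brotli installed
    font = TTFont(src)
    subsetter = Subsetter(options)
    subsetter.populate(text=text)  # characters to retain
    subsetter.subset(font)
    font.save(dst)

# Variant 2a: one shared subset built from the characters of all pages.
chars: set[str] = set()
for page_text in ["example page one", "example page two"]:  # step 1 stand-in
    chars.update(page_text)
subset_font("site-font.ttf", "site-font.woff2", "".join(sorted(chars)))
```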
See also [fonttools](https://pypi.org/project/fonttools/).",1,benchmark and possibly add font subsetting test the effect of font subsetting by creating a pipeline that detects characters used per page creates a single subset for all pages and creates a single subset per page compresses the subset to compare the effects of to and use the best alternative at this point there should be very large overlap between the pages so that subsetting to a single file makes more sense for reusability see also ,1 29120,8290522412.0,IssuesEvent,2018-09-19 17:37:43,hashicorp/packer,https://api.github.com/repos/hashicorp/packer,closed,Use `DescribeRegions` for region validation,builder/amazon enhancement,"When we use packer for copying an AMI to all public regions (including the newly released eu-west-3), we get a validation error. In our tool, we get regions from the describe-regions api and pass the region list to packer. But packer maintains its own list of regions, which lags behind the actual list. Hence, at present we can't use packer to copy to the new region; at the same time, we need to change our tool (to remove eu-west-3) to make packer validation happy. Code: Packer already has the code change with the describe-regions api, but it is commented out: https://github.com/hashicorp/packer/blob/cb2ad49b21e4fcbf30ee888bee9c28bcb7d066df/builder/amazon/common/regions.go#L3 . 
we understand that describe-regions will list only limited regions (for example, if you are in us-east-1, you won't get us-gov-west-1, etc.), but when you are running packer for a public region (us-east-1) you don't need to know about us-gov-west-1 for validation. Correct me if I am wrong. If you guys are OK, we can change the code and cut a new PR. ",1.0,"Use `DescribeRegions` for region validation - When we use packer for copying an AMI to all public regions (including the newly released eu-west-3), we get a validation error. In our tool, we get regions from the describe-regions api and pass the region list to packer. But packer maintains its own list of regions, which lags behind the actual list. Hence, at present we can't use packer to copy to the new region; at the same time, we need to change our tool (to remove eu-west-3) to make packer validation happy. Code: Packer already has the code change with the describe-regions api, but it is commented out: https://github.com/hashicorp/packer/blob/cb2ad49b21e4fcbf30ee888bee9c28bcb7d066df/builder/amazon/common/regions.go#L3 . 
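For reference, querying the live region list instead of a hard-coded table is a single API call; here is a hedged sketch with boto3, where the client configuration and the requested set are made up. As the next paragraph notes, DescribeRegions only returns regions visible from the caller's partition:

```python
import boto3

def available_regions() -> set[str]:
    """Regions EC2 reports for this account's partition."""
    ec2 = boto3.client("ec2", region_name="us-east-1")
    return {r["RegionName"] for r in ec2.describe_regions()["Regions"]}

# Validate a user-supplied copy list dynamically, so a newly launched
# region such as eu-west-3 passes without waiting for a tool release.
requested = {"us-east-1", "eu-west-3"}
unknown = requested - available_regions()
if unknown:
    raise ValueError(f"unknown region(s): {sorted(unknown)}")
```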
This will involve doing paperwork and setting up the Zenhub workflow for the team.",0,set up initial course tasks and project management description in accordance with the project management rubric we need to set up several things before starting to code this will involve doing paperwork and setting up the zenhub workflow for the team ,0 808285,30054181807.0,IssuesEvent,2023-06-28 04:56:06,woocommerce/woocommerce-ios,https://api.github.com/repos/woocommerce/woocommerce-ios,closed,"Update the ""Cancel"" button in the Domain purchase view",category: accessibility priority: low,"It's unclear why we display the button ""Cancel"" to dismiss the Domain purchase view under the Onboarding list. It would be better to show a button ""Done"" on the top right side of the view.",1.0,"Update the ""Cancel"" button in the Domain purchase view - It's unclear why we display the button ""Cancel"" to dismiss the Domain purchase view under the Onboarding list. It would be better to show a button ""Done"" on the top right side of the view.",0,update the cancel button in the domain purchase view it s unclear why we display the button cancel to dismiss the domain purchase view under the onboarding list it would be better to show a button done on the top right side of the view ,0 2604,12336102217.0,IssuesEvent,2020-05-14 13:05:31,coolOrangeLabs/powerGateTemplate,https://api.github.com/repos/coolOrangeLabs/powerGateTemplate,closed,Businesslogic-Questionnaire: ERP writing back to Vault,Automation,"## Details **First, it is very important to understand that the ERP Information must not be maintained inside Vault!** We should try to avoid writing back to Vault, since this can increase highly the complexity. Anyway, sometimes it makes sense to write data back. ## Question 1. What ERP properties must be written back to Vault? 1. When are these ERP data required inside Vault? ",1.0,"Businesslogic-Questionnaire: ERP writing back to Vault - ## Details **First, it is very important to understand that the ERP Information must not be maintained inside Vault!** We should try to avoid writing back to Vault, since this can increase highly the complexity. Anyway, sometimes it makes sense to write data back. ## Question 1. What ERP properties must be written back to Vault? 1. When are these ERP data required inside Vault? ",1,businesslogic questionnaire erp writing back to vault details first it is very important to understand that the erp information must not be maintained inside vault we should try to avoid writing back to vault since this can increase highly the complexity anyway sometimes it makes sense to write data back question what erp properties must be written back to vault when are these erp data required inside vault ,1 293409,8989606784.0,IssuesEvent,2019-02-01 00:37:45,apache/bookkeeper,https://api.github.com/repos/apache/bookkeeper,closed,"EnsemblePlacementPolicy exposes third party API ""Pair"" from commons-lang3 in a public API",priority/blocker release/4.9.0,"We are going to release a public API which depends on a third party library. This will introduce compatibility problems in the future. EnsemblePlacementPolicy API is an extension point on the client side, we should not depend on APIs from third party providers. The issue was introduced in #1883 This is very like to #67, #300 and #804 ",1.0,"EnsemblePlacementPolicy exposes third party API ""Pair"" from commons-lang3 in a public API - We are going to release a public API which depends on a third party library. 
This will introduce compatibility problems in the future. EnsemblePlacementPolicy API is an extension point on the client side, we should not depend on APIs from third party providers. The issue was introduced in #1883 This is very like to #67, #300 and #804 ",0,ensembleplacementpolicy exposes third party api pair from commons in a public api we are going to release a public api which depends on a third party library this will introduce compatibility problems in the future ensembleplacementpolicy api is an extension point on the client side we should not depend on apis from third party providers the issue was introduced in this is very like to and ,0 51588,13207530861.0,IssuesEvent,2020-08-14 23:28:21,icecube-trac/tix4,https://api.github.com/repos/icecube-trac/tix4,opened,I3Frame should provide method to get keys of certain type (Trac #659),IceTray Incomplete Migration Migrated from Trac defect,"
Migrated from https://code.icecube.wisc.edu/projects/icecube/ticket/659, reported by sboeserand owned by sboeser

```json { ""status"": ""closed"", ""changetime"": ""2015-02-11T21:49:41"", ""_ts"": ""1423691381706959"", ""description"": ""In addition to I3Frame::keys(), it should provide a method like I3Frame::keys_of_type() that only return the names of frame objects that are of type T.\n\nExample implementation: \nhttp://code.icecube.wisc.edu/svn/sandbox/sboeser/IceStars/radioeventbrowser/public/radioeventbrowser/I3FrameTypes.h\n\nUse case:\nE.g in event displays: allow the user to select which of the raw data version/ reconstructions / whatever to show."", ""reporter"": ""sboeser"", ""cc"": ""sboeser@physik.uni-bonn.de"", ""resolution"": ""wontfix"", ""time"": ""2011-11-28T22:44:28"", ""component"": ""IceTray"", ""summary"": ""I3Frame should provide method to get keys of certain type"", ""priority"": ""normal"", ""keywords"": """", ""milestone"": """", ""owner"": ""sboeser"", ""type"": ""defect"" } ```

",1.0,"I3Frame should provide method to get keys of certain type (Trac #659) -
Migrated from https://code.icecube.wisc.edu/projects/icecube/ticket/659, reported by sboeserand owned by sboeser

```json { ""status"": ""closed"", ""changetime"": ""2015-02-11T21:49:41"", ""_ts"": ""1423691381706959"", ""description"": ""In addition to I3Frame::keys(), it should provide a method like I3Frame::keys_of_type() that only return the names of frame objects that are of type T.\n\nExample implementation: \nhttp://code.icecube.wisc.edu/svn/sandbox/sboeser/IceStars/radioeventbrowser/public/radioeventbrowser/I3FrameTypes.h\n\nUse case:\nE.g in event displays: allow the user to select which of the raw data version/ reconstructions / whatever to show."", ""reporter"": ""sboeser"", ""cc"": ""sboeser@physik.uni-bonn.de"", ""resolution"": ""wontfix"", ""time"": ""2011-11-28T22:44:28"", ""component"": ""IceTray"", ""summary"": ""I3Frame should provide method to get keys of certain type"", ""priority"": ""normal"", ""keywords"": """", ""milestone"": """", ""owner"": ""sboeser"", ""type"": ""defect"" } ```

",0, should provide method to get keys of certain type trac migrated from json status closed changetime ts description in addition to keys it should provide a method like keys of type that only return the names of frame objects that are of type t n nexample implementation n case ne g in event displays allow the user to select which of the raw data version reconstructions whatever to show reporter sboeser cc sboeser physik uni bonn de resolution wontfix time component icetray summary should provide method to get keys of certain type priority normal keywords milestone owner sboeser type defect ,0 36341,4728455747.0,IssuesEvent,2016-10-18 15:59:40,scieloorg/opac,https://api.github.com/repos/scieloorg/opac,opened,Reestruturação do menu da esquerda: restrito ao texto do artigo,Design,"Em reunião redefinimos o menu a esquerda: - Saem Figuras, Tabelas, Métricas, Como citar esse artigo - Data de publicação deve aparecer abaixo do DOI. - Alterar label ""Data de publicação"" para ""Datas"". Manter posição atual.",1.0,"Reestruturação do menu da esquerda: restrito ao texto do artigo - Em reunião redefinimos o menu a esquerda: - Saem Figuras, Tabelas, Métricas, Como citar esse artigo - Data de publicação deve aparecer abaixo do DOI. - Alterar label ""Data de publicação"" para ""Datas"". Manter posição atual.",0,reestruturação do menu da esquerda restrito ao texto do artigo em reunião redefinimos o menu a esquerda saem figuras tabelas métricas como citar esse artigo data de publicação deve aparecer abaixo do doi alterar label data de publicação para datas manter posição atual ,0 14588,5716795590.0,IssuesEvent,2017-04-19 15:50:16,MarkPieszak/aspnetcore-angular2-universal,https://api.github.com/repos/MarkPieszak/aspnetcore-angular2-universal,closed,sass-resource-loader: Zone already loaded error,Build system," Hi Mark, I'm using bootstrap 4 and trying to include some scss mixins with sass-resource-loader. The problem is when I try to specify the path to include I get ""Zone already loded error"". I replaced scss loader with this in webpack.common.js: { test: /\.scss$/, use: [ 'to-string-loader', 'css-loader', 'sass-loader', { loader: 'sass-resources-loader', options: { // Provide path to the file with resources resources: root('./node_modules/bootstrap/scss/mixins/mixin.scss'), }, }, ], }, Do you have any idea to solve this? Thanks ",1.0,"sass-resource-loader: Zone already loaded error - Hi Mark, I'm using bootstrap 4 and trying to include some scss mixins with sass-resource-loader. The problem is when I try to specify the path to include I get ""Zone already loded error"". I replaced scss loader with this in webpack.common.js: { test: /\.scss$/, use: [ 'to-string-loader', 'css-loader', 'sass-loader', { loader: 'sass-resources-loader', options: { // Provide path to the file with resources resources: root('./node_modules/bootstrap/scss/mixins/mixin.scss'), }, }, ], }, Do you have any idea to solve this? 
Thanks ",0,sass resource loader zone already loaded error hi mark i m using bootstrap and trying to include some scss mixins with sass resource loader the problem is when i try to specify the path to include i get zone already loded error i replaced scss loader with this in webpack common js test scss use to string loader css loader sass loader loader sass resources loader options provide path to the file with resources resources root node modules bootstrap scss mixins mixin scss do you have any idea to solve this thanks ,0 8201,26442193767.0,IssuesEvent,2023-01-16 02:09:59,jaskirat1208/backtest-platform,https://api.github.com/repos/jaskirat1208/backtest-platform,closed,Automate pypi release procedure,help wanted automation,"Whenever new release is created, the package should automatically be published via setup.py. Refer GitHub actions publish to PyPi section for more details.",1.0,"Automate pypi release procedure - Whenever new release is created, the package should automatically be published via setup.py. Refer GitHub actions publish to PyPi section for more details.",1,automate pypi release procedure whenever new release is created the package should automatically be published via setup py refer github actions publish to pypi section for more details ,1 79363,3534998775.0,IssuesEvent,2016-01-16 05:22:20,Apollo-Community/ApolloStation,https://api.github.com/repos/Apollo-Community/ApolloStation,closed,Artifact Creation,bug exploit priority: medium,"As a regular miner with a normal pickaxe, you can mine and walk away from the strange rocks and keep doing so until it gives you an artifact. It seems to be that the rock reloads each time you hit it, and it can give you infinite artifacts.",1.0,"Artifact Creation - As a regular miner with a normal pickaxe, you can mine and walk away from the strange rocks and keep doing so until it gives you an artifact. It seems to be that the rock reloads each time you hit it, and it can give you infinite artifacts.",0,artifact creation as a regular miner with a normal pickaxe you can mine and walk away from the strange rocks and keep doing so until it gives you an artifact it seems to be that the rock reloads each time you hit it and it can give you infinite artifacts ,0 275500,30250048439.0,IssuesEvent,2023-07-06 19:45:14,UpendoVentures/DNNYAF-to-CommunityForums,https://api.github.com/repos/UpendoVentures/DNNYAF-to-CommunityForums,opened,DotNetNuke.Web-9.3.2.24.dll: 2 vulnerabilities (highest severity is: 7.5),Mend: dependency security vulnerability,"
Vulnerable Library - DotNetNuke.Web-9.3.2.24.dll

DotNetNuke.Web

Library home page: https://api.nuget.org/packages/dotnetnuke.web.9.3.2.nupkg

Path to vulnerable library: /References/DNN/09.03.02/DotNetNuke.Web.dll

Found in HEAD commit: 474c0373c1c9d816e68d571122d2f262c41a0773

## Vulnerabilities

| CVE | Severity | CVSS | Dependency | Type | Fixed in (DotNetNuke.Web version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-40186](https://www.mend.io/vulnerability-database/CVE-2021-40186) | High | 7.5 | DotNetNuke.Web-9.3.2.24.dll | Direct | DotNetNuke.Web - 9.11.0;DotNetNuke.Core - 9.11.0 | ❌ |
| [CVE-2022-2922](https://www.mend.io/vulnerability-database/CVE-2022-2922) | Medium | 4.9 | DotNetNuke.Web-9.3.2.24.dll | Direct | DotNetNuke.Core - 9.11.0, DotNetNuke.Web - 9.11.0 | ❌ |

## Details
**CVE-2021-40186**

### Vulnerable Library - DotNetNuke.Web-9.3.2.24.dll

DotNetNuke.Web

Library home page: https://api.nuget.org/packages/dotnetnuke.web.9.3.2.nupkg

Path to vulnerable library: /References/DNN/09.03.02/DotNetNuke.Web.dll

Dependency Hierarchy: - :x: **DotNetNuke.Web-9.3.2.24.dll** (Vulnerable Library)

Found in HEAD commit: 474c0373c1c9d816e68d571122d2f262c41a0773

Found in base branch: main

### Vulnerability Details

The AppCheck research team identified a Server-Side Request Forgery (SSRF) vulnerability within the DNN CMS platform, formerly known as DotNetNuke. SSRF vulnerabilities allow the attacker to exploit the target system to make network requests on their behalf, allowing a range of possible attacks. In the most common scenario, the attacker exploits SSRF vulnerabilities to attack systems behind the firewall and access sensitive information from Cloud Provider metadata services.

Publish Date: 2022-06-02

URL: CVE-2021-40186

### CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: None
  - Availability Impact: None

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Origin: https://nvd.nist.gov/vuln/detail/CVE-2021-40186

Release Date: 2022-06-02

Fix Resolution: DotNetNuke.Web - 9.11.0;DotNetNuke.Core - 9.11.0

Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
**CVE-2022-2922**

### Vulnerable Library - DotNetNuke.Web-9.3.2.24.dll

DotNetNuke.Web

Library home page: https://api.nuget.org/packages/dotnetnuke.web.9.3.2.nupkg

Path to vulnerable library: /References/DNN/09.03.02/DotNetNuke.Web.dll

Dependency Hierarchy: - :x: **DotNetNuke.Web-9.3.2.24.dll** (Vulnerable Library)

Found in HEAD commit: 474c0373c1c9d816e68d571122d2f262c41a0773

Found in base branch: main

### Vulnerability Details

Relative Path Traversal in GitHub repository dnnsoftware/dnn.platform prior to 9.11.0.

Publish Date: 2022-09-30

URL: CVE-2022-2922

### CVSS 3 Score Details (4.9)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: High
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: None
  - Availability Impact: None

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Origin: https://github.com/advisories/GHSA-9w72-2f23-57gm

Release Date: 2022-09-30

Fix Resolution: DotNetNuke.Core - 9.11.0, DotNetNuke.Web - 9.11.0

Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
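Both suggested fixes amount to raising the DotNetNuke package floor to 9.11.0. As a hedged illustration of how a build script might flag lagging references, here is a small sketch; it assumes SDK-style csproj files without an MSBuild XML namespace, and the project file name is invented:

```python
from xml.etree import ElementTree as ET

FIXED = (9, 11, 0)

def parse_version(v: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def lagging_refs(csproj: str):
    """Yield (package, version) PackageReference entries below the fix."""
    tree = ET.parse(csproj)
    for ref in tree.iter("PackageReference"):
        name = ref.get("Include", "")
        version = ref.get("Version")
        if name.startswith("DotNetNuke.") and version and parse_version(version) < FIXED:
            yield name, version

for name, version in lagging_refs("CommunityForums.csproj"):
    print(f"{name} {version} is below 9.11.0 - upgrade required")
```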
",True,"DotNetNuke.Web-9.3.2.24.dll: 2 vulnerabilities (highest severity is: 7.5) -
Vulnerable Library - DotNetNuke.Web-9.3.2.24.dll

DotNetNuke.Web

Library home page: https://api.nuget.org/packages/dotnetnuke.web.9.3.2.nupkg

Path to vulnerable library: /References/DNN/09.03.02/DotNetNuke.Web.dll

Found in HEAD commit: 474c0373c1c9d816e68d571122d2f262c41a0773

## Vulnerabilities

| CVE | Severity | CVSS | Dependency | Type | Fixed in (DotNetNuke.Web version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-40186](https://www.mend.io/vulnerability-database/CVE-2021-40186) | High | 7.5 | DotNetNuke.Web-9.3.2.24.dll | Direct | DotNetNuke.Web - 9.11.0;DotNetNuke.Core - 9.11.0 | ❌ |
| [CVE-2022-2922](https://www.mend.io/vulnerability-database/CVE-2022-2922) | Medium | 4.9 | DotNetNuke.Web-9.3.2.24.dll | Direct | DotNetNuke.Core - 9.11.0, DotNetNuke.Web - 9.11.0 | ❌ |

## Details
**CVE-2021-40186**

### Vulnerable Library - DotNetNuke.Web-9.3.2.24.dll

DotNetNuke.Web

Library home page: https://api.nuget.org/packages/dotnetnuke.web.9.3.2.nupkg

Path to vulnerable library: /References/DNN/09.03.02/DotNetNuke.Web.dll

Dependency Hierarchy: - :x: **DotNetNuke.Web-9.3.2.24.dll** (Vulnerable Library)

Found in HEAD commit: 474c0373c1c9d816e68d571122d2f262c41a0773

Found in base branch: main

### Vulnerability Details

The AppCheck research team identified a Server-Side Request Forgery (SSRF) vulnerability within the DNN CMS platform, formerly known as DotNetNuke. SSRF vulnerabilities allow the attacker to exploit the target system to make network requests on their behalf, allowing a range of possible attacks. In the most common scenario, the attacker exploits SSRF vulnerabilities to attack systems behind the firewall and access sensitive information from Cloud Provider metadata services.

Publish Date: 2022-06-02

URL: CVE-2021-40186

### CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: None
  - Availability Impact: None

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Origin: https://nvd.nist.gov/vuln/detail/CVE-2021-40186

Release Date: 2022-06-02

Fix Resolution: DotNetNuke.Web - 9.11.0;DotNetNuke.Core - 9.11.0

Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
**CVE-2022-2922**

### Vulnerable Library - DotNetNuke.Web-9.3.2.24.dll

DotNetNuke.Web

Library home page: https://api.nuget.org/packages/dotnetnuke.web.9.3.2.nupkg

Path to vulnerable library: /References/DNN/09.03.02/DotNetNuke.Web.dll

Dependency Hierarchy: - :x: **DotNetNuke.Web-9.3.2.24.dll** (Vulnerable Library)

Found in HEAD commit: 474c0373c1c9d816e68d571122d2f262c41a0773

Found in base branch: main

### Vulnerability Details

Relative Path Traversal in GitHub repository dnnsoftware/dnn.platform prior to 9.11.0.

Publish Date: 2022-09-30

URL: CVE-2022-2922

### CVSS 3 Score Details (4.9)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: High
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: None
  - Availability Impact: None

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Origin: https://github.com/advisories/GHSA-9w72-2f23-57gm

Release Date: 2022-09-30

Fix Resolution: DotNetNuke.Core - 9.11.0, DotNetNuke.Web - 9.11.0

Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
",0,dotnetnuke web dll vulnerabilities highest severity is vulnerable library dotnetnuke web dll dotnetnuke web library home page a href path to vulnerable library references dnn dotnetnuke web dll found in head commit a href vulnerabilities cve severity cvss dependency type fixed in dotnetnuke web version remediation available high dotnetnuke web dll direct dotnetnuke web dotnetnuke core medium dotnetnuke web dll direct dotnetnuke core dotnetnuke web details cve vulnerable library dotnetnuke web dll dotnetnuke web library home page a href path to vulnerable library references dnn dotnetnuke web dll dependency hierarchy x dotnetnuke web dll vulnerable library found in head commit a href found in base branch main vulnerability details the appcheck research team identified a server side request forgery ssrf vulnerability within the dnn cms platform formerly known as dotnetnuke ssrf vulnerabilities allow the attacker to exploit the target system to make network requests on their behalf allowing a range of possible attacks in the most common scenario the attacker exploits ssrf vulnerabilities to attack systems behind the firewall and access sensitive information from cloud provider metadata services publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution dotnetnuke web dotnetnuke core step up your open source security game with mend cve vulnerable library dotnetnuke web dll dotnetnuke web library home page a href path to vulnerable library references dnn dotnetnuke web dll dependency hierarchy x dotnetnuke web dll vulnerable library found in head commit a href found in base branch main vulnerability details relative path traversal in github repository dnnsoftware dnn platform prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution dotnetnuke core dotnetnuke web step up your open source security game with mend ,0 889,8621651498.0,IssuesEvent,2018-11-20 17:55:59,jgyates/genmon,https://api.github.com/repos/jgyates/genmon,closed,Question From Control Engineer,automation - monitoring apps question,"Hello Im an automation engineer, I would like to bring this into my PLC, I understand the serial pins in the connector, but can you tell me what the registers are? Im used to seeing 40100 type registers for MODBUS. Also the machine ID? Thanks so much!",1.0,"Question From Control Engineer - Hello Im an automation engineer, I would like to bring this into my PLC, I understand the serial pins in the connector, but can you tell me what the registers are? Im used to seeing 40100 type registers for MODBUS. Also the machine ID? 
Thanks so much!",1,question from control engineer hello im an automation engineer i would like to bring this into my plc i understand the serial pins in the connector but can you tell me what the registers are im used to seeing type registers for modbus also the machine id thanks so much ,1 7729,25489974950.0,IssuesEvent,2022-11-26 23:36:22,ccodwg/Covid19CanadaBot,https://api.github.com/repos/ccodwg/Covid19CanadaBot,closed,Improvements to update validation,data-validation automation messaging,"- [x] Condense top-line summary to use abbreviations (e.g., for cumulative, 7-day avg) - [x] Align numbers if possible to make it easier to read - [ ] If possible, export as HTML so that formatting can be preserved in email",1.0,"Improvements to update validation - - [x] Condense top-line summary to use abbreviations (e.g., for cumulative, 7-day avg) - [x] Align numbers if possible to make it easier to read - [ ] If possible, export as HTML so that formatting can be preserved in email",1,improvements to update validation condense top line summary to use abbreviations e g for cumulative day avg align numbers if possible to make it easier to read if possible export as html so that formatting can be preserved in email,1 2939,12833808917.0,IssuesEvent,2020-07-07 09:55:50,fc-dev/GitNarwhal,https://api.github.com/repos/fc-dev/GitNarwhal,closed,Gradle: windows installer icon,automation,"On the task windows64installer the icon.ico should be copied from resources to the innosetup folder - add a configurable property in the plugin for the icon path - copy the icon",1.0,"Gradle: windows installer icon - On the task windows64installer the icon.ico should be copied from resources to the innosetup folder - add a configurable property in the plugin for the icon path - copy the icon",1,gradle windows installer icon on the task the icon ico should be copied from resources to the innosetup folder add a configurable property in the plugin for the icon path copy the icon,1 936,8752750261.0,IssuesEvent,2018-12-14 04:59:58,pypa/pip,https://api.github.com/repos/pypa/pip,closed,Travis CI is broken due to a bitbucket deprecation,C: automation,Related to mercurial. The current master's Travis CI logs have the error message which has a suggestion for how to fix.,1.0,Travis CI is broken due to a bitbucket deprecation - Related to mercurial. The current master's Travis CI logs have the error message which has a suggestion for how to fix.,1,travis ci is broken due to a bitbucket deprecation related to mercurial the current master s travis ci logs have the error message which has a suggestion for how to fix ,1 9211,27748970890.0,IssuesEvent,2023-03-15 19:08:26,microsoft/go,https://api.github.com/repos/microsoft/go,opened,Handle upstream change to use include `.0` third digit in major releases,release-automation,"* https://github.com/golang/go/issues/57631 ❤️ this change! Our infra already uses the third digit throughout, with a conversion from `1.20` -> `1.20.0-1` treated as filling in default values. I'm fairly sure our conversion func won't need any changes to work, but we should add tests. On the other hand, I suspect we'll need to update code that takes our 1.20.0-1 version and produces `go1.20` for tag lookup, for example. To make the swap smooth and avoid a broken build when this change gets released, we might want to go ahead and implement a fallback solution (try both) ahead of time.",1.0,"Handle upstream change to use include `.0` third digit in major releases - * https://github.com/golang/go/issues/57631 ❤️ this change! 
Our infra already uses the third digit throughout, with a conversion from `1.20` -> `1.20.0-1` treated as filling in default values. I'm fairly sure our conversion func won't need any changes to work, but we should add tests. On the other hand, I suspect we'll need to update code that takes our 1.20.0-1 version and produces `go1.20` for tag lookup, for example. To make the swap smooth and avoid a broken build when this change gets released, we might want to go ahead and implement a fallback solution (try both) ahead of time.",1,handle upstream change to use include third digit in major releases ❤️ this change our infra already uses the third digit throughout with a conversion from treated as filling in default values i m fairly sure our conversion func won t need any changes to work but we should add tests on the other hand i suspect we ll need to update code that takes our version and produces for tag lookup for example to make the swap smooth and avoid a broken build when this change gets released we might want to go ahead and implement a fallback solution try both ahead of time ,1 6332,22776194832.0,IssuesEvent,2022-07-08 14:40:49,willowtreeapps/vocable-ios,https://api.github.com/repos/willowtreeapps/vocable-ios,closed,Sensitivity settings UI test,automation Ready for Review,"**Description** [test case TC-51](https://vocable.wtadev.com/case/51/): Verify the Sensitivity buttons in Settings work as expected with unit tests. - The default setting is ""Medium"" - Tapping a different button highlights it and removes the highlight from unselected buttons. - User settings persists after leaving and returning to Settings. **Acceptance Criteria**: All of the UI tests pass.",1.0,"Sensitivity settings UI test - **Description** [test case TC-51](https://vocable.wtadev.com/case/51/): Verify the Sensitivity buttons in Settings work as expected with unit tests. - The default setting is ""Medium"" - Tapping a different button highlights it and removes the highlight from unselected buttons. - User settings persists after leaving and returning to Settings. **Acceptance Criteria**: All of the UI tests pass.",1,sensitivity settings ui test description verify the sensitivity buttons in settings work as expected with unit tests the default setting is medium tapping a different button highlights it and removes the highlight from unselected buttons user settings persists after leaving and returning to settings acceptance criteria all of the ui tests pass ,1 3663,14263689451.0,IssuesEvent,2020-11-20 14:46:40,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,closed,"wait-service Service ""heartbeat"" is missing a healthcheck configuration",automation bug,"I pulled the latest version of the APM-integration-test env, and am getting an error starting up. First I had to update my version of docker-compose because the compose version was updated to 2.4. Now I am running into an issue with ""wait-service"" : `ERROR: for wait-service Service ""heartbeat"" is missing a healthcheck configuration` It looks like it's trying to add a wait-service for every container: ``` ""wait-service"": { ""container_name"": ""wait"", ""depends_on"": { ""apm-server"": { ""condition"": ""service_healthy"" }, ""elasticsearch"": { ""condition"": ""service_healthy"" }, ""filebeat"": { ""condition"": ""service_healthy"" }, ""heartbeat"": { ""condition"": ""service_healthy"" }, (...) ``` But none of the *beats have health checks implemented. 
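One way to make the "try both" fallback in the microsoft/go record above concrete: an illustrative sketch, not the repo's actual conversion func, that only relies on the stated convention that `1.20.0-1` historically mapped to the upstream tag `go1.20` while patch releases are tagged like `go1.20.1`:

```python
def candidate_upstream_tags(version: str) -> list[str]:
    """Upstream tag names to try for a '1.20.0-1'-style internal version.

    Once upstream starts tagging major releases as go1.20.0, prefer the
    three-part tag but keep the old two-part spelling as a fallback so
    the build keeps working across the transition.
    """
    core = version.split("-")[0]               # drop the '-1' revision suffix
    major, minor, patch = core.split(".")[:3]
    if patch == "0":
        return [f"go{major}.{minor}.{patch}", f"go{major}.{minor}"]
    return [f"go{major}.{minor}.{patch}"]

assert candidate_upstream_tags("1.20.0-1") == ["go1.20.0", "go1.20"]
assert candidate_upstream_tags("1.20.3-2") == ["go1.20.3"]
```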
Looking at the code I can't see where it's generating that `wait-service` block. Does anyone know where comes from? ### Regression Most likely https://github.com/elastic/apm-integration-testing/issues/964 ### Workaround - Removing the sections from my docker-compose manually starts fine",1.0,"wait-service Service ""heartbeat"" is missing a healthcheck configuration - I pulled the latest version of the APM-integration-test env, and am getting an error starting up. First I had to update my version of docker-compose because the compose version was updated to 2.4. Now I am running into an issue with ""wait-service"" : `ERROR: for wait-service Service ""heartbeat"" is missing a healthcheck configuration` It looks like it's trying to add a wait-service for every container: ``` ""wait-service"": { ""container_name"": ""wait"", ""depends_on"": { ""apm-server"": { ""condition"": ""service_healthy"" }, ""elasticsearch"": { ""condition"": ""service_healthy"" }, ""filebeat"": { ""condition"": ""service_healthy"" }, ""heartbeat"": { ""condition"": ""service_healthy"" }, (...) ``` But none of the *beats have health checks implemented. Looking at the code I can't see where it's generating that `wait-service` block. Does anyone know where comes from? ### Regression Most likely https://github.com/elastic/apm-integration-testing/issues/964 ### Workaround - Removing the sections from my docker-compose manually starts fine",1,wait service service heartbeat is missing a healthcheck configuration i pulled the latest version of the apm integration test env and am getting an error starting up first i had to update my version of docker compose because the compose version was updated to now i am running into an issue with wait service error for wait service service heartbeat is missing a healthcheck configuration it looks like it s trying to add a wait service for every container wait service container name wait depends on apm server condition service healthy elasticsearch condition service healthy filebeat condition service healthy heartbeat condition service healthy but none of the beats have health checks implemented looking at the code i can t see where it s generating that wait service block does anyone know where comes from regression most likely workaround removing the sections from my docker compose manually starts fine,1 253,5037597001.0,IssuesEvent,2016-12-17 19:19:57,rancher/rancher,https://api.github.com/repos/rancher/rancher,closed,Deadlock exception seen in logs when host is updated.,setup/automation,"Rancher sever version - v1.2.0-pre2-rc3 Following Deadlock exception seen in logs when host is updated when cattle validation test is executed. ``` 2016-08-15 22:40:55,457 ERROR [:] [] [] [] [cutorService-24] [o.a.c.m.context.NoExceptionRunnable ] Uncaught exception org.jooq.exception.DataAccessException: SQL [update `host` set `host`.`data` = ? 
where `host`.`id` = ?]; Deadlock found when trying to get lock; try restarting transaction at org.jooq.impl.Utils.translate(Utils.java:1287) ~[jooq-3.3.0.jar:na] at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:495) ~[jooq-3.3.0.jar:na] at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:326) ~[jooq-3.3.0.jar:na] at org.jooq.impl.UpdatableRecordImpl.storeUpdate0(UpdatableRecordImpl.java:296) ~[jooq-3.3.0.jar:na] at org.jooq.impl.UpdatableRecordImpl.access$200(UpdatableRecordImpl.java:90) ~[jooq-3.3.0.jar:na] at org.jooq.impl.UpdatableRecordImpl$3.operate(UpdatableRecordImpl.java:260) ~[jooq-3.3.0.jar:na] at org.jooq.impl.RecordDelegate.operate(RecordDelegate.java:123) ~[jooq-3.3.0.jar:na] at org.jooq.impl.UpdatableRecordImpl.storeUpdate(UpdatableRecordImpl.java:255) ~[jooq-3.3.0.jar:na] at org.jooq.impl.UpdatableRecordImpl.update(UpdatableRecordImpl.java:149) ~[jooq-3.3.0.jar:na] at io.cattle.platform.object.impl.JooqObjectManager.persistRecord(JooqObjectManager.java:223) ~[cattle-framework-object-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.object.impl.JooqObjectManager.setFieldsInternal(JooqObjectManager.java:130) ~[cattle-framework-object-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.object.impl.JooqObjectManager$3.execute(JooqObjectManager.java:118) ~[cattle-framework-object-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.engine.idempotent.Idempotent.change(Idempotent.java:88) ~[cattle-framework-engine-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.object.impl.JooqObjectManager.setFields(JooqObjectManager.java:115) ~[cattle-framework-object-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.object.impl.JooqObjectManager.setFields(JooqObjectManager.java:110) ~[cattle-framework-object-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.object.impl.AbstractObjectManager.setFields(AbstractObjectManager.java:135) ~[cattle-framework-object-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.servicediscovery.service.impl.ServiceDiscoveryServiceImpl.updateObjectEndPoints(ServiceDiscoveryServiceImpl.java:491) ~[cattle-iaas-service-discovery-server-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.servicediscovery.service.impl.ServiceDiscoveryServiceImpl.reconcileHostEndpointsImpl(ServiceDiscoveryServiceImpl.java:508) ~[cattle-iaas-service-discovery-server-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.servicediscovery.service.impl.ServiceDiscoveryServiceImpl$7.run(ServiceDiscoveryServiceImpl.java:781) ~[cattle-iaas-service-discovery-server-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.servicediscovery.service.impl.ServiceDiscoveryServiceImpl.reconcileForHost(ServiceDiscoveryServiceImpl.java:792) ~[cattle-iaas-service-discovery-server-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.servicediscovery.service.impl.ServiceDiscoveryServiceImpl.hostEndpointsUpdate(ServiceDiscoveryServiceImpl.java:776) ~[cattle-iaas-service-discovery-server-0.5.0-SNAPSHOT.jar:na] at sun.reflect.GeneratedMethodAccessor1284.invoke(Unknown Source) ~[na:na] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_101] at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_101] at io.cattle.platform.eventing.annotation.MethodInvokingListener$1.doWithLockNoResult(MethodInvokingListener.java:76) ~[cattle-framework-eventing-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.lock.LockCallbackNoReturn.doWithLock(LockCallbackNoReturn.java:7) ~[cattle-framework-lock-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.lock.LockCallbackNoReturn.doWithLock(LockCallbackNoReturn.java:3) 
~[cattle-framework-lock-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.lock.impl.AbstractLockManagerImpl$3.doWithLock(AbstractLockManagerImpl.java:40) ~[cattle-framework-lock-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.lock.impl.LockManagerImpl.doLock(LockManagerImpl.java:33) ~[cattle-framework-lock-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.lock.impl.AbstractLockManagerImpl.lock(AbstractLockManagerImpl.java:13) ~[cattle-framework-lock-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.lock.impl.AbstractLockManagerImpl.lock(AbstractLockManagerImpl.java:37) ~[cattle-framework-lock-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.eventing.annotation.MethodInvokingListener.onEvent(MethodInvokingListener.java:72) ~[cattle-framework-eventing-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.eventing.impl.AbstractThreadPoolingEventService$2.doRun(AbstractThreadPoolingEventService.java:135) ~[cattle-framework-eventing-0.5.0-SNAPSHOT.jar:na] at org.apache.cloudstack.managed.context.NoExceptionRunnable.runInContext(NoExceptionRunnable.java:15) ~[cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na] at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na] at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na] at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:108) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na] at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na] at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_101] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_101] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_101] Caused by: java.sql.BatchUpdateException: Deadlock found when trying to get lock; try restarting transaction at org.mariadb.jdbc.MariaDbServerPreparedStatement.execute(MariaDbServerPreparedStatement.java:376) ~[mariadb-java-client-1.3.4.jar:na] at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172) ~[commons-dbcp-1.4.jar:1.4] at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172) ~[commons-dbcp-1.4.jar:1.4] at org.jooq.tools.jdbc.DefaultPreparedStatement.execute(DefaultPreparedStatement.java:194) ~[jooq-3.3.0.jar:na] at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:376) ~[jooq-3.3.0.jar:na] at org.jooq.impl.AbstractStoreQuery.execute(AbstractStoreQuery.java:289) ~[jooq-3.3.0.jar:na] at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:322) ~[jooq-3.3.0.jar:na] ... 
39 common frames omitted Caused by: java.sql.SQLTransactionRollbackException: Deadlock found when trying to get lock; try restarting transaction at org.mariadb.jdbc.internal.util.ExceptionMapper.get(ExceptionMapper.java:127) ~[mariadb-java-client-1.3.4.jar:na] at org.mariadb.jdbc.internal.util.ExceptionMapper.throwException(ExceptionMapper.java:69) ~[mariadb-java-client-1.3.4.jar:na] at org.mariadb.jdbc.MariaDbServerPreparedStatement.executeQueryEpilog(MariaDbServerPreparedStatement.java:338) ~[mariadb-java-client-1.3.4.jar:na] at org.mariadb.jdbc.MariaDbServerPreparedStatement.executeInternal(MariaDbServerPreparedStatement.java:293) ~[mariadb-java-client-1.3.4.jar:na] at org.mariadb.jdbc.MariaDbServerPreparedStatement.execute(MariaDbServerPreparedStatement.java:371) ~[mariadb-java-client-1.3.4.jar:na] ... 45 common frames omitted Caused by: org.mariadb.jdbc.internal.util.dao.QueryException: Deadlock found when trying to get lock; try restarting transaction at org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.getResult(AbstractQueryProtocol.java:475) ~[mariadb-java-client-1.3.4.jar:na] at org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.executePreparedQuery(AbstractQueryProtocol.java:588) ~[mariadb-java-client-1.3.4.jar:na] at org.mariadb.jdbc.MariaDbServerPreparedStatement.executeInternal(MariaDbServerPreparedStatement.java:281) ~[mariadb-java-client-1.3.4.jar:na] ... 46 common frames omitted ``` ",1.0,"Deadlock exception seen in logs when host is updated. - Rancher sever version - v1.2.0-pre2-rc3 Following Deadlock exception seen in logs when host is updated when cattle validation test is executed. ``` 2016-08-15 22:40:55,457 ERROR [:] [] [] [] [cutorService-24] [o.a.c.m.context.NoExceptionRunnable ] Uncaught exception org.jooq.exception.DataAccessException: SQL [update `host` set `host`.`data` = ? 
where `host`.`id` = ?]; Deadlock found when trying to get lock; try restarting transaction at org.jooq.impl.Utils.translate(Utils.java:1287) ~[jooq-3.3.0.jar:na] at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:495) ~[jooq-3.3.0.jar:na] at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:326) ~[jooq-3.3.0.jar:na] at org.jooq.impl.UpdatableRecordImpl.storeUpdate0(UpdatableRecordImpl.java:296) ~[jooq-3.3.0.jar:na] at org.jooq.impl.UpdatableRecordImpl.access$200(UpdatableRecordImpl.java:90) ~[jooq-3.3.0.jar:na] at org.jooq.impl.UpdatableRecordImpl$3.operate(UpdatableRecordImpl.java:260) ~[jooq-3.3.0.jar:na] at org.jooq.impl.RecordDelegate.operate(RecordDelegate.java:123) ~[jooq-3.3.0.jar:na] at org.jooq.impl.UpdatableRecordImpl.storeUpdate(UpdatableRecordImpl.java:255) ~[jooq-3.3.0.jar:na] at org.jooq.impl.UpdatableRecordImpl.update(UpdatableRecordImpl.java:149) ~[jooq-3.3.0.jar:na] at io.cattle.platform.object.impl.JooqObjectManager.persistRecord(JooqObjectManager.java:223) ~[cattle-framework-object-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.object.impl.JooqObjectManager.setFieldsInternal(JooqObjectManager.java:130) ~[cattle-framework-object-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.object.impl.JooqObjectManager$3.execute(JooqObjectManager.java:118) ~[cattle-framework-object-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.engine.idempotent.Idempotent.change(Idempotent.java:88) ~[cattle-framework-engine-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.object.impl.JooqObjectManager.setFields(JooqObjectManager.java:115) ~[cattle-framework-object-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.object.impl.JooqObjectManager.setFields(JooqObjectManager.java:110) ~[cattle-framework-object-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.object.impl.AbstractObjectManager.setFields(AbstractObjectManager.java:135) ~[cattle-framework-object-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.servicediscovery.service.impl.ServiceDiscoveryServiceImpl.updateObjectEndPoints(ServiceDiscoveryServiceImpl.java:491) ~[cattle-iaas-service-discovery-server-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.servicediscovery.service.impl.ServiceDiscoveryServiceImpl.reconcileHostEndpointsImpl(ServiceDiscoveryServiceImpl.java:508) ~[cattle-iaas-service-discovery-server-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.servicediscovery.service.impl.ServiceDiscoveryServiceImpl$7.run(ServiceDiscoveryServiceImpl.java:781) ~[cattle-iaas-service-discovery-server-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.servicediscovery.service.impl.ServiceDiscoveryServiceImpl.reconcileForHost(ServiceDiscoveryServiceImpl.java:792) ~[cattle-iaas-service-discovery-server-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.servicediscovery.service.impl.ServiceDiscoveryServiceImpl.hostEndpointsUpdate(ServiceDiscoveryServiceImpl.java:776) ~[cattle-iaas-service-discovery-server-0.5.0-SNAPSHOT.jar:na] at sun.reflect.GeneratedMethodAccessor1284.invoke(Unknown Source) ~[na:na] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_101] at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_101] at io.cattle.platform.eventing.annotation.MethodInvokingListener$1.doWithLockNoResult(MethodInvokingListener.java:76) ~[cattle-framework-eventing-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.lock.LockCallbackNoReturn.doWithLock(LockCallbackNoReturn.java:7) ~[cattle-framework-lock-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.lock.LockCallbackNoReturn.doWithLock(LockCallbackNoReturn.java:3) 
~[cattle-framework-lock-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.lock.impl.AbstractLockManagerImpl$3.doWithLock(AbstractLockManagerImpl.java:40) ~[cattle-framework-lock-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.lock.impl.LockManagerImpl.doLock(LockManagerImpl.java:33) ~[cattle-framework-lock-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.lock.impl.AbstractLockManagerImpl.lock(AbstractLockManagerImpl.java:13) ~[cattle-framework-lock-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.lock.impl.AbstractLockManagerImpl.lock(AbstractLockManagerImpl.java:37) ~[cattle-framework-lock-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.eventing.annotation.MethodInvokingListener.onEvent(MethodInvokingListener.java:72) ~[cattle-framework-eventing-0.5.0-SNAPSHOT.jar:na] at io.cattle.platform.eventing.impl.AbstractThreadPoolingEventService$2.doRun(AbstractThreadPoolingEventService.java:135) ~[cattle-framework-eventing-0.5.0-SNAPSHOT.jar:na] at org.apache.cloudstack.managed.context.NoExceptionRunnable.runInContext(NoExceptionRunnable.java:15) ~[cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na] at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na] at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na] at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:108) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na] at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na] at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_101] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_101] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_101] Caused by: java.sql.BatchUpdateException: Deadlock found when trying to get lock; try restarting transaction at org.mariadb.jdbc.MariaDbServerPreparedStatement.execute(MariaDbServerPreparedStatement.java:376) ~[mariadb-java-client-1.3.4.jar:na] at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172) ~[commons-dbcp-1.4.jar:1.4] at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172) ~[commons-dbcp-1.4.jar:1.4] at org.jooq.tools.jdbc.DefaultPreparedStatement.execute(DefaultPreparedStatement.java:194) ~[jooq-3.3.0.jar:na] at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:376) ~[jooq-3.3.0.jar:na] at org.jooq.impl.AbstractStoreQuery.execute(AbstractStoreQuery.java:289) ~[jooq-3.3.0.jar:na] at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:322) ~[jooq-3.3.0.jar:na] ... 
39 common frames omitted Caused by: java.sql.SQLTransactionRollbackException: Deadlock found when trying to get lock; try restarting transaction at org.mariadb.jdbc.internal.util.ExceptionMapper.get(ExceptionMapper.java:127) ~[mariadb-java-client-1.3.4.jar:na] at org.mariadb.jdbc.internal.util.ExceptionMapper.throwException(ExceptionMapper.java:69) ~[mariadb-java-client-1.3.4.jar:na] at org.mariadb.jdbc.MariaDbServerPreparedStatement.executeQueryEpilog(MariaDbServerPreparedStatement.java:338) ~[mariadb-java-client-1.3.4.jar:na] at org.mariadb.jdbc.MariaDbServerPreparedStatement.executeInternal(MariaDbServerPreparedStatement.java:293) ~[mariadb-java-client-1.3.4.jar:na] at org.mariadb.jdbc.MariaDbServerPreparedStatement.execute(MariaDbServerPreparedStatement.java:371) ~[mariadb-java-client-1.3.4.jar:na] ... 45 common frames omitted Caused by: org.mariadb.jdbc.internal.util.dao.QueryException: Deadlock found when trying to get lock; try restarting transaction at org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.getResult(AbstractQueryProtocol.java:475) ~[mariadb-java-client-1.3.4.jar:na] at org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.executePreparedQuery(AbstractQueryProtocol.java:588) ~[mariadb-java-client-1.3.4.jar:na] at org.mariadb.jdbc.MariaDbServerPreparedStatement.executeInternal(MariaDbServerPreparedStatement.java:281) ~[mariadb-java-client-1.3.4.jar:na] ... 46 common frames omitted ``` ",1,deadlock exception seen in logs when host is updated rancher sever version following deadlock exception seen in logs when host is updated when cattle validation test is executed error uncaught exception org jooq exception dataaccessexception sql deadlock found when trying to get lock try restarting transaction at org jooq impl utils translate utils java at org jooq impl defaultexecutecontext sqlexception defaultexecutecontext java at org jooq impl abstractquery execute abstractquery java at org jooq impl updatablerecordimpl updatablerecordimpl java at org jooq impl updatablerecordimpl access updatablerecordimpl java at org jooq impl updatablerecordimpl operate updatablerecordimpl java at org jooq impl recorddelegate operate recorddelegate java at org jooq impl updatablerecordimpl storeupdate updatablerecordimpl java at org jooq impl updatablerecordimpl update updatablerecordimpl java at io cattle platform object impl jooqobjectmanager persistrecord jooqobjectmanager java at io cattle platform object impl jooqobjectmanager setfieldsinternal jooqobjectmanager java at io cattle platform object impl jooqobjectmanager execute jooqobjectmanager java at io cattle platform engine idempotent idempotent change idempotent java at io cattle platform object impl jooqobjectmanager setfields jooqobjectmanager java at io cattle platform object impl jooqobjectmanager setfields jooqobjectmanager java at io cattle platform object impl abstractobjectmanager setfields abstractobjectmanager java at io cattle platform servicediscovery service impl servicediscoveryserviceimpl updateobjectendpoints servicediscoveryserviceimpl java at io cattle platform servicediscovery service impl servicediscoveryserviceimpl reconcilehostendpointsimpl servicediscoveryserviceimpl java at io cattle platform servicediscovery service impl servicediscoveryserviceimpl run servicediscoveryserviceimpl java at io cattle platform servicediscovery service impl servicediscoveryserviceimpl reconcileforhost servicediscoveryserviceimpl java at io cattle platform servicediscovery service impl servicediscoveryserviceimpl 
hostendpointsupdate servicediscoveryserviceimpl java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at io cattle platform eventing annotation methodinvokinglistener dowithlocknoresult methodinvokinglistener java at io cattle platform lock lockcallbacknoreturn dowithlock lockcallbacknoreturn java at io cattle platform lock lockcallbacknoreturn dowithlock lockcallbacknoreturn java at io cattle platform lock impl abstractlockmanagerimpl dowithlock abstractlockmanagerimpl java at io cattle platform lock impl lockmanagerimpl dolock lockmanagerimpl java at io cattle platform lock impl abstractlockmanagerimpl lock abstractlockmanagerimpl java at io cattle platform lock impl abstractlockmanagerimpl lock abstractlockmanagerimpl java at io cattle platform eventing annotation methodinvokinglistener onevent methodinvokinglistener java at io cattle platform eventing impl abstractthreadpoolingeventservice dorun abstractthreadpoolingeventservice java at org apache cloudstack managed context noexceptionrunnable runincontext noexceptionrunnable java at org apache cloudstack managed context managedcontextrunnable run managedcontextrunnable java at org apache cloudstack managed context impl defaultmanagedcontext call defaultmanagedcontext java at org apache cloudstack managed context impl defaultmanagedcontext callwithcontext defaultmanagedcontext java at org apache cloudstack managed context impl defaultmanagedcontext runwithcontext defaultmanagedcontext java at org apache cloudstack managed context managedcontextrunnable run managedcontextrunnable java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by java sql batchupdateexception deadlock found when trying to get lock try restarting transaction at org mariadb jdbc mariadbserverpreparedstatement execute mariadbserverpreparedstatement java at org apache commons dbcp delegatingpreparedstatement execute delegatingpreparedstatement java at org apache commons dbcp delegatingpreparedstatement execute delegatingpreparedstatement java at org jooq tools jdbc defaultpreparedstatement execute defaultpreparedstatement java at org jooq impl abstractquery execute abstractquery java at org jooq impl abstractstorequery execute abstractstorequery java at org jooq impl abstractquery execute abstractquery java common frames omitted caused by java sql sqltransactionrollbackexception deadlock found when trying to get lock try restarting transaction at org mariadb jdbc internal util exceptionmapper get exceptionmapper java at org mariadb jdbc internal util exceptionmapper throwexception exceptionmapper java at org mariadb jdbc mariadbserverpreparedstatement executequeryepilog mariadbserverpreparedstatement java at org mariadb jdbc mariadbserverpreparedstatement executeinternal mariadbserverpreparedstatement java at org mariadb jdbc mariadbserverpreparedstatement execute mariadbserverpreparedstatement java common frames omitted caused by org mariadb jdbc internal util dao queryexception deadlock found when trying to get lock try restarting transaction at org mariadb jdbc internal protocol abstractqueryprotocol getresult abstractqueryprotocol java at org mariadb jdbc internal protocol abstractqueryprotocol executepreparedquery abstractqueryprotocol java at org mariadb jdbc mariadbserverpreparedstatement 
executeinternal mariadbserverpreparedstatement java common frames omitted ,1 14731,5757414458.0,IssuesEvent,2017-04-26 03:57:14,istio/manager,https://api.github.com/repos/istio/manager,closed,Flaky ingress + TLS test,bug build & test infrastructure,"Reference: https://testing.istio.io/job/manager/job/presubmit/1114/console Here are the important log bits: `[2017-04-12 17:08:24.995][29][critical][main] error initializing configuration '/etc/envoy/envoy-rev2.json': Failed to load certificate chain file /etc/tls.crt` Due to a bug in Envoy, invalid schema breaks the hot restart protocol, and causes the test to fail.",1.0,"Flaky ingress + TLS test - Reference: https://testing.istio.io/job/manager/job/presubmit/1114/console Here are the important log bits: `[2017-04-12 17:08:24.995][29][critical][main] error initializing configuration '/etc/envoy/envoy-rev2.json': Failed to load certificate chain file /etc/tls.crt` Due to a bug in Envoy, invalid schema breaks the hot restart protocol, and causes the test to fail.",0,flaky ingress tls test reference here are the important log bits error initializing configuration etc envoy envoy json failed to load certificate chain file etc tls crt due to a bug in envoy invalid schema breaks the hot restart protocol and causes the test to fail ,0 233365,18976349164.0,IssuesEvent,2021-11-20 03:18:13,onnx/onnx,https://api.github.com/repos/onnx/onnx,closed,Wrong comment in onnx/onnx/backend/test/case/node/argmax.py ,bug test,"# Bug Report ### Describe the bug Some result comment in onnx/onnx/backend/test/case/node/argmax.py are incorrect ### Expected behavior A clear and concise description of what you expected to happen. Here is the patch, but I don't have permission to create a branch and a PR ``` $ git show HEAD commit 61df6e4427d07cd48625522c85f9f315c69ac076 (HEAD -> yilyu/fix_ArgMax_examples) Author: Yi-Hong Lyu Date: Wed Nov 17 16:49:04 2021 -0800 Fix some examples for ArgMax - test_argmax_no_keepdims_example - test_argmax_default_axis_example - test_argmax_no_keepdims_example_select_last_index Signed-off-by: Yi-Hong Lyu diff --git a/onnx/backend/test/case/node/argmax.py b/onnx/backend/test/case/node/argmax.py index 21af504e..fd96b5a9 100644 --- a/onnx/backend/test/case/node/argmax.py +++ b/onnx/backend/test/case/node/argmax.py @@ -41,7 +41,7 @@ class ArgMax(Base): outputs=['result'], axis=axis, keepdims=keepdims) - # result: [[0, 1]] + # result: [0, 1] result = argmax_use_numpy(data, axis=axis, keepdims=keepdims) expect(node, inputs=[data], outputs=[result], name='test_argmax_no_keepdims_example') @@ -80,7 +80,7 @@ class ArgMax(Base): outputs=['result'], keepdims=keepdims) - # result: [[1], [1]] + # result: [[1, 1]] result = argmax_use_numpy(data, keepdims=keepdims) expect(node, inputs=[data], outputs=[result], name='test_argmax_default_axis_example') @@ -121,7 +121,7 @@ class ArgMax(Base): axis=axis, keepdims=keepdims, select_last_index=True) - # result: [[1, 1]] + # result: [1, 1] result = argmax_use_numpy_select_last_index(data, axis=axis, keepdims=keepdims) expect(node, inputs=[data], outputs=[result], name='test_argmax_no_keepdims_example_select_last_index') ``` ",1.0,"Wrong comment in onnx/onnx/backend/test/case/node/argmax.py - # Bug Report ### Describe the bug Some result comment in onnx/onnx/backend/test/case/node/argmax.py are incorrect ### Expected behavior A clear and concise description of what you expected to happen. 
Here is the patch, but I don't have permission to create a branch and a PR ``` $ git show HEAD commit 61df6e4427d07cd48625522c85f9f315c69ac076 (HEAD -> yilyu/fix_ArgMax_examples) Author: Yi-Hong Lyu Date: Wed Nov 17 16:49:04 2021 -0800 Fix some examples for ArgMax - test_argmax_no_keepdims_example - test_argmax_default_axis_example - test_argmax_no_keepdims_example_select_last_index Signed-off-by: Yi-Hong Lyu diff --git a/onnx/backend/test/case/node/argmax.py b/onnx/backend/test/case/node/argmax.py index 21af504e..fd96b5a9 100644 --- a/onnx/backend/test/case/node/argmax.py +++ b/onnx/backend/test/case/node/argmax.py @@ -41,7 +41,7 @@ class ArgMax(Base): outputs=['result'], axis=axis, keepdims=keepdims) - # result: [[0, 1]] + # result: [0, 1] result = argmax_use_numpy(data, axis=axis, keepdims=keepdims) expect(node, inputs=[data], outputs=[result], name='test_argmax_no_keepdims_example') @@ -80,7 +80,7 @@ class ArgMax(Base): outputs=['result'], keepdims=keepdims) - # result: [[1], [1]] + # result: [[1, 1]] result = argmax_use_numpy(data, keepdims=keepdims) expect(node, inputs=[data], outputs=[result], name='test_argmax_default_axis_example') @@ -121,7 +121,7 @@ class ArgMax(Base): axis=axis, keepdims=keepdims, select_last_index=True) - # result: [[1, 1]] + # result: [1, 1] result = argmax_use_numpy_select_last_index(data, axis=axis, keepdims=keepdims) expect(node, inputs=[data], outputs=[result], name='test_argmax_no_keepdims_example_select_last_index') ``` ",0,wrong comment in onnx onnx backend test case node argmax py bug report describe the bug some result comment in onnx onnx backend test case node argmax py are incorrect expected behavior a clear and concise description of what you expected to happen here is the patch but i don t have permission to create a branch and a pr git show head commit head yilyu fix argmax examples author yi hong lyu date wed nov fix some examples for argmax test argmax no keepdims example test argmax default axis example test argmax no keepdims example select last index signed off by yi hong lyu diff git a onnx backend test case node argmax py b onnx backend test case node argmax py index a onnx backend test case node argmax py b onnx backend test case node argmax py class argmax base outputs axis axis keepdims keepdims result result result argmax use numpy data axis axis keepdims keepdims expect node inputs outputs name test argmax no keepdims example class argmax base outputs keepdims keepdims result result result argmax use numpy data keepdims keepdims expect node inputs outputs name test argmax default axis example class argmax base axis axis keepdims keepdims select last index true result result result argmax use numpy select last index data axis axis keepdims keepdims expect node inputs outputs name test argmax no keepdims example select last index ,0 402427,27368909677.0,IssuesEvent,2023-02-27 21:33:54,aws/eks-anywhere,https://api.github.com/repos/aws/eks-anywhere,opened,Document Skip Version Upgrades Not Supported,team/cli area/cli documentation,We should document that skip version CLI upgrades are not supported.,1.0,Document Skip Version Upgrades Not Supported - We should document that skip version CLI upgrades are not supported.,0,document skip version upgrades not supported we should document that skip version cli upgrades are not supported ,0 500769,14514285799.0,IssuesEvent,2020-12-13 07:44:11,opencv/opencv,https://api.github.com/repos/opencv/opencv,closed,VideoCapture has Memory leak in opencv4.0.0,category: 3rdparty category: videoio(camera) 
incomplete needs investigation platform: win32 priority: low," - OpenCV => 4.0.0 - Operating System / Platform => Windows 64 Bit - Compiler => Visual Studio 2015 In the following code, the memory will increase slowly along with the time elapsed. ``` int main() { cv::VideoCapture cap; cv::Mat frame; while (true) { cap.open(0); cap >> frame; cv::imshow(""vidoe"", frame); cv::waitKey(10); cap.release(); } return 0; } ``` ",1.0,"VideoCapture has Memory leak in opencv4.0.0 - - OpenCV => 4.0.0 - Operating System / Platform => Windows 64 Bit - Compiler => Visual Studio 2015 In the following code, the memory will increase slowly along with the time elapsed. ``` int main() { cv::VideoCapture cap; cv::Mat frame; while (true) { cap.open(0); cap >> frame; cv::imshow(""vidoe"", frame); cv::waitKey(10); cap.release(); } return 0; } ``` ",0,videocapture has memory leak in opencv operating system platform windows bit compiler visual studio in the following code the memory will increase slowly along with the time elapsed int main cv videocapture cap cv mat frame while true cap open cap frame cv imshow vidoe frame cv waitkey cap release return ,0 136619,12728147759.0,IssuesEvent,2020-06-25 01:35:54,gofiber/fiber,https://api.github.com/repos/gofiber/fiber,reopened,🐛 WriteTimeout doesn't time out writing the response,Status: Under Investigation Type: Documentation,"**Fiber version** v1.12.0 **Issue description** When a client makes a request and is slow in reading the response I want the connection to be closed. In the stdlib's `http.Server` there's a field called `WriteTimeout` and it says: > WriteTimeout is the maximum duration before timing out writes of the response. And it does the trick: ```go package main import ( ""fmt"" ""log"" ""net/http"" ""time"" ) func main() { http.HandleFunc(""/hello"", func(w http.ResponseWriter, r *http.Request) { time.Sleep(2 * time.Second) // Sleep 2s if _, err := fmt.Fprintf(w, ""hello world""); err != nil { log.Printf(""Error writing response: %v"", err) } }) srv := &http.Server{ Addr: ""localhost:8080"", WriteTimeout: time.Second, // Timeout after 1s } log.Fatal(srv.ListenAndServe()) } ``` Test: ```text $ curl -v localhost:8080/hello * Trying 127.0.0.1... * Connected to localhost (127.0.0.1) port 8080 (#0) > GET /hello HTTP/1.1 > Host: localhost:8080 > User-Agent: curl/7.47.0 > Accept: */* > * Empty reply from server * Connection #0 to host localhost left intact curl: (52) Empty reply from server ``` Now, Fiber has the same field and description in its `fiber.Settings`. *But it doesn't work*. ```go package main import ( ""log"" ""time"" ""github.com/gofiber/fiber"" ) func main() { app := fiber.New(&fiber.Settings{ WriteTimeout: time.Second, // Timeout after 1s }) app.All(""/hello"", func(c *fiber.Ctx) { time.Sleep(2 * time.Second) // Sleep 2s c.SendString(""hello world"") }) log.Fatal(app.Listen(""localhost:8080"")) } ``` Test: ```text $ curl -v localhost:8080/hello * Trying 127.0.0.1... 
* Connected to localhost (127.0.0.1) port 8080 (#0) > GET /hello HTTP/1.1 > Host: localhost:8080 > User-Agent: curl/7.47.0 > Accept: */* > < HTTP/1.1 200 OK < Date: Fri, 19 Jun 2020 19:23:44 GMT < Content-Type: text/plain; charset=utf-8 < Content-Length: 11 < * Connection #0 to host localhost left intact hello world ``` Given that the settings field is named the same and the Godoc is the same, I would also expect the same *behavior*.",1.0,"🐛 WriteTimeout doesn't time out writing the response - **Fiber version** v1.12.0 **Issue description** When a client makes a request and is slow in reading the response I want the connection to be closed. In the stdlib's `http.Server` there's a field called `WriteTimeout` and it says: > WriteTimeout is the maximum duration before timing out writes of the response. And it does the trick: ```go package main import ( ""fmt"" ""log"" ""net/http"" ""time"" ) func main() { http.HandleFunc(""/hello"", func(w http.ResponseWriter, r *http.Request) { time.Sleep(2 * time.Second) // Sleep 2s if _, err := fmt.Fprintf(w, ""hello world""); err != nil { log.Printf(""Error writing response: %v"", err) } }) srv := &http.Server{ Addr: ""localhost:8080"", WriteTimeout: time.Second, // Timeout after 1s } log.Fatal(srv.ListenAndServe()) } ``` Test: ```text $ curl -v localhost:8080/hello * Trying 127.0.0.1... * Connected to localhost (127.0.0.1) port 8080 (#0) > GET /hello HTTP/1.1 > Host: localhost:8080 > User-Agent: curl/7.47.0 > Accept: */* > * Empty reply from server * Connection #0 to host localhost left intact curl: (52) Empty reply from server ``` Now, Fiber has the same field and description in its `fiber.Settings`. *But it doesn't work*. ```go package main import ( ""log"" ""time"" ""github.com/gofiber/fiber"" ) func main() { app := fiber.New(&fiber.Settings{ WriteTimeout: time.Second, // Timeout after 1s }) app.All(""/hello"", func(c *fiber.Ctx) { time.Sleep(2 * time.Second) // Sleep 2s c.SendString(""hello world"") }) log.Fatal(app.Listen(""localhost:8080"")) } ``` Test: ```text $ curl -v localhost:8080/hello * Trying 127.0.0.1... 
* Connected to localhost (127.0.0.1) port 8080 (#0) > GET /hello HTTP/1.1 > Host: localhost:8080 > User-Agent: curl/7.47.0 > Accept: */* > < HTTP/1.1 200 OK < Date: Fri, 19 Jun 2020 19:23:44 GMT < Content-Type: text/plain; charset=utf-8 < Content-Length: 11 < * Connection #0 to host localhost left intact hello world ``` Given that the settings field is named the same and the Godoc is the same, I would also expect the same *behavior*.",0,🐛 writetimeout doesn t time out writing the response fiber version issue description when a client makes a request and is slow in reading the response i want the connection to be closed in the stdlib s http server there s a field called writetimeout and it says writetimeout is the maximum duration before timing out writes of the response and it does the trick go package main import fmt log net http time func main http handlefunc hello func w http responsewriter r http request time sleep time second sleep if err fmt fprintf w hello world err nil log printf error writing response v err srv http server addr localhost writetimeout time second timeout after log fatal srv listenandserve test text curl v localhost hello trying connected to localhost port get hello http host localhost user agent curl accept empty reply from server connection to host localhost left intact curl empty reply from server now fiber has the same field and description in its fiber settings but it doesn t work go package main import log time github com gofiber fiber func main app fiber new fiber settings writetimeout time second timeout after app all hello func c fiber ctx time sleep time second sleep c sendstring hello world log fatal app listen localhost test text curl v localhost hello trying connected to localhost port get hello http host localhost user agent curl accept http ok date fri jun gmt content type text plain charset utf content length connection to host localhost left intact hello world given that the settings field is named the same and the godoc is the same i would also expect the same behavior ,0 133372,18297373229.0,IssuesEvent,2021-10-05 21:54:12,vipinsun/blockchain-carbon-accounting,https://api.github.com/repos/vipinsun/blockchain-carbon-accounting,closed,WS-2016-0075 (Medium) detected in github.com/smartystreets/goconvey-v1.6.4 - autoclosed,security vulnerability,"## WS-2016-0075 - Medium Severity Vulnerability
Vulnerable Library - github.com/smartystreets/goconvey-v1.6.4

Go testing in the browser. Integrates with `go test`. Write behavioral tests in Go.

Dependency Hierarchy:
- github.com/hyperledger/fabric-v1.4.1 (Root Library)
  - github.com/spf13/viper-v1.7.1
    - github.com/go-ini/ini-v1.51.0
      - :x: **github.com/smartystreets/goconvey-v1.6.4** (Vulnerable Library)

Found in HEAD commit: d388e16464e00b9ce84df0d247029f534a429b90

Found in base branch: main

Vulnerability Details

Regular expression denial of service vulnerability in the moment package, triggered by a specific 40-character-long string passed to the ""format"" method.
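The advisory text above describes moment's ""format"" method even though this entry is attached to goconvey, and the exact 40-character trigger string is not reproduced here. As a minimal TypeScript sketch (with a made-up stand-in payload, not the real one), a timing check against a pinned moment version might look like:

```ts
import moment from 'moment';

// Stand-in payload (an assumption): the advisory only says the trigger is a
// specific ~40-character format string, so we use a placeholder of that length.
const payload = 'x'.repeat(40);

console.time('moment.format');
moment().format(payload); // on patched moment (>= 2.15.2) this returns quickly
console.timeEnd('moment.format');
```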

Publish Date: 2016-10-24

URL: WS-2016-0075

CVSS 3 Score Details (5.3)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: Low

(Equivalently, the CVSS:3.x vector AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L, which evaluates to the 5.3 base score above.)

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/moment/moment/pull/3525

Release Date: 2016-10-24

Fix Resolution: 2.15.2

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"WS-2016-0075 (Medium) detected in github.com/smartystreets/goconvey-v1.6.4 - autoclosed - ## WS-2016-0075 - Medium Severity Vulnerability
Vulnerable Library - github.com/smartystreets/goconvey-v1.6.4

Go testing in the browser. Integrates with `go test`. Write behavioral tests in Go.

Dependency Hierarchy:
- github.com/hyperledger/fabric-v1.4.1 (Root Library)
  - github.com/spf13/viper-v1.7.1
    - github.com/go-ini/ini-v1.51.0
      - :x: **github.com/smartystreets/goconvey-v1.6.4** (Vulnerable Library)

Found in HEAD commit: d388e16464e00b9ce84df0d247029f534a429b90

Found in base branch: main

Vulnerability Details

Regular expression denial of service vulnerability in the moment package, triggered by a specific 40-character-long string passed to the ""format"" method.

Publish Date: 2016-10-24

URL: WS-2016-0075

CVSS 3 Score Details (5.3)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: Low

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/moment/moment/pull/3525

Release Date: 2016-10-24

Fix Resolution: 2.15.2

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,ws medium detected in github com smartystreets goconvey autoclosed ws medium severity vulnerability vulnerable library github com smartystreets goconvey go testing in the browser integrates with go test write behavioral tests in go dependency hierarchy github com hyperledger fabric root library github com viper github com go ini ini x github com smartystreets goconvey vulnerable library found in head commit a href found in base branch main vulnerability details regular expression denial of service vulnerability in the moment package by using a specific characters long string in the format method publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0 176805,28209831807.0,IssuesEvent,2023-04-05 02:32:43,fedry/webmon,https://api.github.com/repos/fedry/webmon,closed,🛑 Passion Designs is down,status passion-designs,"In [`efa3a74`](https://github.com/fedry/webmon/commit/efa3a7411a3a44c86b25e9b81abf5a0cf47fe35b ), Passion Designs (https://passiondesigns.co.id) was **down**: - HTTP code: 0 - Response time: 0 ms ",1.0,"🛑 Passion Designs is down - In [`efa3a74`](https://github.com/fedry/webmon/commit/efa3a7411a3a44c86b25e9b81abf5a0cf47fe35b ), Passion Designs (https://passiondesigns.co.id) was **down**: - HTTP code: 0 - Response time: 0 ms ",0,🛑 passion designs is down in passion designs was down http code response time ms ,0 7011,24122353745.0,IssuesEvent,2022-09-20 19:57:15,mlcommons/ck,https://api.github.com/repos/mlcommons/ck,closed,[CK2/CM] Searching a cached entry using extra_tags,enhancement cm-script-automation,"Currently we can save extra_tags in any cached entry. Since script tags are checked first and then the cached tags, these extra_tags are not useful while searching for a cached entry. As discussed during our conf-call a proper solution will be to input `--extra_tags` as a user input during search and use it only while searching the cached entries. ",1.0,"[CK2/CM] Searching a cached entry using extra_tags - Currently we can save extra_tags in any cached entry. Since script tags are checked first and then the cached tags, these extra_tags are not useful while searching for a cached entry. As discussed during our conf-call a proper solution will be to input `--extra_tags` as a user input during search and use it only while searching the cached entries. 
",1, searching a cached entry using extra tags currently we can save extra tags in any cached entry since script tags are checked first and then the cached tags these extra tags are not useful while searching for a cached entry as discussed during our conf call a proper solution will be to input extra tags as a user input during search and use it only while searching the cached entries ,1 7088,24229587980.0,IssuesEvent,2022-09-26 17:01:56,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,FAILED: Automated Tests(116),automation,"Stats: { ""suites"": 44, ""tests"": 322, ""passes"": 206, ""pending"": 0, ""failures"": 116, ""start"": ""2022-09-23T16:08:19.214Z"", ""end"": ""2022-09-23T16:45:38.563Z"", ""duration"": 800655, ""testsRegistered"": 322, ""passPercent"": 63.975155279503106, ""pendingPercent"": 0, ""other"": 0, ""hasOther"": false, ""skipped"": 0, ""hasSkipped"": false } Failed Tests: ""activate the service for Test environment"" ""activate the service for Dev environment"" ""grant namespace access to Mark (access manager)"" ""Grant CredentialIssuer.Admin permission to Janis (API Owner)"" ""authenticates Mark (Access-Manager)"" ""authenticates Mark (Access-Manager)"" ""verify the request details"" ""Add group labels in request details window"" ""approves an access request"" ""Verify that API is accessible with the generated API Key"" ""authenticates Mark (Access-Manager)"" ""Navigate to Consumer page and filter the product"" ""Click on the first consumer"" ""Click on Grant Access button"" ""Grant Access to Test environment"" ""Verify that API is accessible with the generated API Key for Test environment"" ""Verify the rate limiting is applied for free access"" ""authenticates Mark (Access Manager)"" ""Navigate to Consumer page and filter the product"" ""Select the consumer from the list"" ""set IP address that is not accessible in the network as allowed IP and set Route as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""set IP address that is not accessible in the network as allowed IP and set service as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""set IP address that is accessible in the network as allowed IP and set route as scope"" ""set IP address that is accessible in the network as allowed IP and set service as scope"" ""Navigate to Consumer page and filter the product"" ""set api ip-restriction to global service level"" ""set IP address that is not accessible in the network as allowed IP and set service as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""Navigate to Consumer page and filter the product"" ""set api ip-restriction to global service level"" ""set IP address that is not accessible in the network as allowed IP and set service as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""authenticates Mark (Access Manager)"" ""Navigate to Consumer page and filter the product"" ""Select the consumer from the list "" ""set api rate limit as per the test config, Local Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit as per the test config, Local Policy and Scope as Route"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit as per the test config, Redis Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit as per the test config, 
Redis Policy and Scope as Route"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit to global service level"" ""Verify that Rate limiting is set at global service level"" ""set api rate limit as per the test config, Redis Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit to global service level"" ""Verify that Rate limiting is set at global service level"" ""set api rate limit as per the test config, Redis Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""authenticates Mark (Access-Manager)"" ""verify the request details"" ""Add group labels in request details window"" ""approves an access request"" ""authenticates Mark (Access-Manager)"" ""verify that consumers are filters as per given parameter"" ""authenticates Mark (Access-Manager)"" ""Navigate to Consumer page and filter the product"" ""Click on the first consumer"" ""Verify that labels can be deleted"" ""Verify that labels can be updated"" ""Verify that labels can be added"" ""Grant namespace access to access manager(Mark)"" ""Grant CredentialIssuer.Admin permission to credential issuer(Wendy)"" ""Select the namespace created for client credential "" ""Creates authorization profile for Client ID/Secret"" ""Creates authorization profile for JWT - Generated Key Pair"" ""Creates authorization profile for JWKS URL"" ""Adds environment with Client ID/Secret authenticator to product"" ""Adds environment with JWT - Generated Key Pair authenticator to product"" ""Adds environment with JWT - JWKS URL authenticator to product"" ""Applies authorization plugin to service published to Kong Gateway"" ""activate the service for Test environment"" ""Creates an access request"" ""Access Manager logs in"" ""approves an access request"" ""Get access token using client ID and secret; make API request"" ""Creates an access request"" ""Access Manager logs in"" ""approves an access request"" ""Get access token using JWT key pair; make API request"" ""Creates an access request"" ""Access Manager logs in"" ""approves an access request"" ""Get access token using JWT key pair; make API request"" ""Get current API Key"" ""Verify that only one API key(new key) is set to the consumer in Kong gateway"" ""Verify that API is not accessible with the old API Key"" ""Regenrate credential client ID and Secret"" ""Make sure that the old client ID and Secret is disabled"" ""Delete Service Accounts"" ""grant namespace access to Mark (access manager)"" ""Grant permission to Janis (API Owner)"" ""Grant permission to Wendy"" ""Grant \""Access.Manager\"" access to Mark (access manager)"" ""Authenticates Mark (Access-Manager)"" ""Verify that the option to approve request is displayed"" ""Grant only \""Namespace.Manage\"" permission to Wendy"" ""Authenticates Wendy (Credential-Issuer)"" ""Verify that all the namespace options and activities are displayed"" ""Grant only \""CredentialIssuer.Admin\"" access to Wendy (access manager)"" ""Authenticates Wendy (Credential-Issuer)"" ""Verify that only Authorization Profile option is displayed in Namespace page"" ""Verify that authorization profile for Client ID/Secret is generated"" ""Grant only \""Namespace.View\"" permission to Mark"" ""authenticates Mark"" ""Verify that service accounts are not created"" ""Grant \""GatewayConfig.Publish\"" and \""Namespace.View\"" access to Wendy (access manager)"" ""Authenticates Wendy (Credential-Issuer)"" ""Verify that GWA API allows user to publish the 
API to Kong gateway"" ""Delete the product environment and verify the success code in the response"" ""Get the resource and verify that product environment is deleted"" ""Force delete the namespace and verify the success code in the response"" Run Link: https://github.com/bcgov/api-services-portal/actions/runs/3113985598",1.0,"FAILED: Automated Tests(116) - Stats: { ""suites"": 44, ""tests"": 322, ""passes"": 206, ""pending"": 0, ""failures"": 116, ""start"": ""2022-09-23T16:08:19.214Z"", ""end"": ""2022-09-23T16:45:38.563Z"", ""duration"": 800655, ""testsRegistered"": 322, ""passPercent"": 63.975155279503106, ""pendingPercent"": 0, ""other"": 0, ""hasOther"": false, ""skipped"": 0, ""hasSkipped"": false } Failed Tests: ""activate the service for Test environment"" ""activate the service for Dev environment"" ""grant namespace access to Mark (access manager)"" ""Grant CredentialIssuer.Admin permission to Janis (API Owner)"" ""authenticates Mark (Access-Manager)"" ""authenticates Mark (Access-Manager)"" ""verify the request details"" ""Add group labels in request details window"" ""approves an access request"" ""Verify that API is accessible with the generated API Key"" ""authenticates Mark (Access-Manager)"" ""Navigate to Consumer page and filter the product"" ""Click on the first consumer"" ""Click on Grant Access button"" ""Grant Access to Test environment"" ""Verify that API is accessible with the generated API Key for Test environment"" ""Verify the rate limiting is applied for free access"" ""authenticates Mark (Access Manager)"" ""Navigate to Consumer page and filter the product"" ""Select the consumer from the list"" ""set IP address that is not accessible in the network as allowed IP and set Route as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""set IP address that is not accessible in the network as allowed IP and set service as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""set IP address that is accessible in the network as allowed IP and set route as scope"" ""set IP address that is accessible in the network as allowed IP and set service as scope"" ""Navigate to Consumer page and filter the product"" ""set api ip-restriction to global service level"" ""set IP address that is not accessible in the network as allowed IP and set service as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""Navigate to Consumer page and filter the product"" ""set api ip-restriction to global service level"" ""set IP address that is not accessible in the network as allowed IP and set service as scope"" ""verify IP Restriction error when the API calls other than the allowed IP"" ""authenticates Mark (Access Manager)"" ""Navigate to Consumer page and filter the product"" ""Select the consumer from the list "" ""set api rate limit as per the test config, Local Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit as per the test config, Local Policy and Scope as Route"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit as per the test config, Redis Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit as per the test config, Redis Policy and Scope as Route"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit to global service level"" ""Verify that Rate limiting is set at global service level"" ""set 
api rate limit as per the test config, Redis Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""set api rate limit to global service level"" ""Verify that Rate limiting is set at global service level"" ""set api rate limit as per the test config, Redis Policy and Scope as Service"" ""verify rate limit error when the API calls beyond the limit"" ""authenticates Mark (Access-Manager)"" ""verify the request details"" ""Add group labels in request details window"" ""approves an access request"" ""authenticates Mark (Access-Manager)"" ""verify that consumers are filters as per given parameter"" ""authenticates Mark (Access-Manager)"" ""Navigate to Consumer page and filter the product"" ""Click on the first consumer"" ""Verify that labels can be deleted"" ""Verify that labels can be updated"" ""Verify that labels can be added"" ""Grant namespace access to access manager(Mark)"" ""Grant CredentialIssuer.Admin permission to credential issuer(Wendy)"" ""Select the namespace created for client credential "" ""Creates authorization profile for Client ID/Secret"" ""Creates authorization profile for JWT - Generated Key Pair"" ""Creates authorization profile for JWKS URL"" ""Adds environment with Client ID/Secret authenticator to product"" ""Adds environment with JWT - Generated Key Pair authenticator to product"" ""Adds environment with JWT - JWKS URL authenticator to product"" ""Applies authorization plugin to service published to Kong Gateway"" ""activate the service for Test environment"" ""Creates an access request"" ""Access Manager logs in"" ""approves an access request"" ""Get access token using client ID and secret; make API request"" ""Creates an access request"" ""Access Manager logs in"" ""approves an access request"" ""Get access token using JWT key pair; make API request"" ""Creates an access request"" ""Access Manager logs in"" ""approves an access request"" ""Get access token using JWT key pair; make API request"" ""Get current API Key"" ""Verify that only one API key(new key) is set to the consumer in Kong gateway"" ""Verify that API is not accessible with the old API Key"" ""Regenrate credential client ID and Secret"" ""Make sure that the old client ID and Secret is disabled"" ""Delete Service Accounts"" ""grant namespace access to Mark (access manager)"" ""Grant permission to Janis (API Owner)"" ""Grant permission to Wendy"" ""Grant \""Access.Manager\"" access to Mark (access manager)"" ""Authenticates Mark (Access-Manager)"" ""Verify that the option to approve request is displayed"" ""Grant only \""Namespace.Manage\"" permission to Wendy"" ""Authenticates Wendy (Credential-Issuer)"" ""Verify that all the namespace options and activities are displayed"" ""Grant only \""CredentialIssuer.Admin\"" access to Wendy (access manager)"" ""Authenticates Wendy (Credential-Issuer)"" ""Verify that only Authorization Profile option is displayed in Namespace page"" ""Verify that authorization profile for Client ID/Secret is generated"" ""Grant only \""Namespace.View\"" permission to Mark"" ""authenticates Mark"" ""Verify that service accounts are not created"" ""Grant \""GatewayConfig.Publish\"" and \""Namespace.View\"" access to Wendy (access manager)"" ""Authenticates Wendy (Credential-Issuer)"" ""Verify that GWA API allows user to publish the API to Kong gateway"" ""Delete the product environment and verify the success code in the response"" ""Get the resource and verify that product environment is deleted"" ""Force delete the namespace and verify the 
success code in the response"" Run Link: https://github.com/bcgov/api-services-portal/actions/runs/3113985598",1,failed automated tests stats suites tests passes pending failures start end duration testsregistered passpercent pendingpercent other hasother false skipped hasskipped false failed tests activate the service for test environment activate the service for dev environment grant namespace access to mark access manager grant credentialissuer admin permission to janis api owner authenticates mark access manager authenticates mark access manager verify the request details add group labels in request details window approves an access request verify that api is accessible with the generated api key authenticates mark access manager navigate to consumer page and filter the product click on the first consumer click on grant access button grant access to test environment verify that api is accessible with the generated api key for test environment verify the rate limiting is applied for free access authenticates mark access manager navigate to consumer page and filter the product select the consumer from the list set ip address that is not accessible in the network as allowed ip and set route as scope verify ip restriction error when the api calls other than the allowed ip set ip address that is not accessible in the network as allowed ip and set service as scope verify ip restriction error when the api calls other than the allowed ip set ip address that is accessible in the network as allowed ip and set route as scope set ip address that is accessible in the network as allowed ip and set service as scope navigate to consumer page and filter the product set api ip restriction to global service level set ip address that is not accessible in the network as allowed ip and set service as scope verify ip restriction error when the api calls other than the allowed ip navigate to consumer page and filter the product set api ip restriction to global service level set ip address that is not accessible in the network as allowed ip and set service as scope verify ip restriction error when the api calls other than the allowed ip authenticates mark access manager navigate to consumer page and filter the product select the consumer from the list set api rate limit as per the test config local policy and scope as service verify rate limit error when the api calls beyond the limit set api rate limit as per the test config local policy and scope as route verify rate limit error when the api calls beyond the limit set api rate limit as per the test config redis policy and scope as service verify rate limit error when the api calls beyond the limit set api rate limit as per the test config redis policy and scope as route verify rate limit error when the api calls beyond the limit set api rate limit to global service level verify that rate limiting is set at global service level set api rate limit as per the test config redis policy and scope as service verify rate limit error when the api calls beyond the limit set api rate limit to global service level verify that rate limiting is set at global service level set api rate limit as per the test config redis policy and scope as service verify rate limit error when the api calls beyond the limit authenticates mark access manager verify the request details add group labels in request details window approves an access request authenticates mark access manager verify that consumers are filters as per given parameter authenticates mark access manager navigate to 
consumer page and filter the product click on the first consumer verify that labels can be deleted verify that labels can be updated verify that labels can be added grant namespace access to access manager mark grant credentialissuer admin permission to credential issuer wendy select the namespace created for client credential creates authorization profile for client id secret creates authorization profile for jwt generated key pair creates authorization profile for jwks url adds environment with client id secret authenticator to product adds environment with jwt generated key pair authenticator to product adds environment with jwt jwks url authenticator to product applies authorization plugin to service published to kong gateway activate the service for test environment creates an access request access manager logs in approves an access request get access token using client id and secret make api request creates an access request access manager logs in approves an access request get access token using jwt key pair make api request creates an access request access manager logs in approves an access request get access token using jwt key pair make api request get current api key verify that only one api key new key is set to the consumer in kong gateway verify that api is not accessible with the old api key regenrate credential client id and secret make sure that the old client id and secret is disabled delete service accounts grant namespace access to mark access manager grant permission to janis api owner grant permission to wendy grant access manager access to mark access manager authenticates mark access manager verify that the option to approve request is displayed grant only namespace manage permission to wendy authenticates wendy credential issuer verify that all the namespace options and activities are displayed grant only credentialissuer admin access to wendy access manager authenticates wendy credential issuer verify that only authorization profile option is displayed in namespace page verify that authorization profile for client id secret is generated grant only namespace view permission to mark authenticates mark verify that service accounts are not created grant gatewayconfig publish and namespace view access to wendy access manager authenticates wendy credential issuer verify that gwa api allows user to publish the api to kong gateway delete the product environment and verify the success code in the response get the resource and verify that product environment is deleted force delete the namespace and verify the success code in the response run link ,1 221621,24651001746.0,IssuesEvent,2022-10-17 18:38:18,samqws-marketing/walmartlabs-concord,https://api.github.com/repos/samqws-marketing/walmartlabs-concord,opened,CVE-2022-37603 (Medium) detected in loader-utils-2.0.0.tgz,security vulnerability,"## CVE-2022-37603 - Medium Severity Vulnerability
Vulnerable Library - loader-utils-2.0.0.tgz

utils for webpack loaders

Library home page: https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.0.tgz

Path to dependency file: /console2/package.json

Path to vulnerable library: /console2/node_modules/loader-utils/package.json

Dependency Hierarchy:
- react-scripts-4.0.3.tgz (Root Library)
  - css-loader-4.3.0.tgz
    - :x: **loader-utils-2.0.0.tgz** (Vulnerable Library)

Found in HEAD commit: b9420f3b9e73a9d381266ece72f7afb756f35a76

Found in base branch: master

Vulnerability Details

A regular expression denial of service (ReDoS) flaw was found in the interpolateName function of interpolateName.js in webpack loader-utils 2.0.0, via the url variable.
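A minimal TypeScript sketch of the affected call path, assuming loader-utils 2.0.0 is installed: interpolateName is a real loader-utils export, but the long resourcePath below is only an illustrative guess at a ReDoS-shaped input, not the published proof of concept.

```ts
import { interpolateName } from 'loader-utils';

// Illustrative hostile input (an assumption): a resourcePath with thousands of
// repeated separators for the url-handling regex to backtrack over.
const loaderContext: any = { resourcePath: '/src/' + 'a.'.repeat(5000) + 'ts' };

console.time('interpolateName');
interpolateName(loaderContext, '[name].[ext]', { content: '' });
console.timeEnd('interpolateName');
```

On loader-utils 2.0.1 (the fix resolution below), the call should stay fast regardless of the input's shape.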

Publish Date: 2022-10-14

URL: CVE-2022-37603

CVSS 3 Score Details (5.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

(Equivalently, the CVSS:3.x vector AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H, which evaluates to the 5.5 base score above.)

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Release Date: 2022-10-14

Fix Resolution (loader-utils): 2.0.1

Direct dependency fix Resolution (react-scripts): 5.0.0

*** - [ ] Check this box to open an automated fix PR ",True,"CVE-2022-37603 (Medium) detected in loader-utils-2.0.0.tgz - ## CVE-2022-37603 - Medium Severity Vulnerability
Vulnerable Library - loader-utils-2.0.0.tgz

utils for webpack loaders

Library home page: https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.0.tgz

Path to dependency file: /console2/package.json

Path to vulnerable library: /console2/node_modules/loader-utils/package.json

Dependency Hierarchy:
- react-scripts-4.0.3.tgz (Root Library)
  - css-loader-4.3.0.tgz
    - :x: **loader-utils-2.0.0.tgz** (Vulnerable Library)

Found in HEAD commit: b9420f3b9e73a9d381266ece72f7afb756f35a76

Found in base branch: master

Vulnerability Details

A regular expression denial of service (ReDoS) flaw was found in the interpolateName function of interpolateName.js in webpack loader-utils 2.0.0, via the url variable.

Publish Date: 2022-10-14

URL: CVE-2022-37603

CVSS 3 Score Details (5.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Release Date: 2022-10-14

Fix Resolution (loader-utils): 2.0.1

Direct dependency fix Resolution (react-scripts): 5.0.0

*** - [ ] Check this box to open an automated fix PR ",0,cve medium detected in loader utils tgz cve medium severity vulnerability vulnerable library loader utils tgz utils for webpack loaders library home page a href path to dependency file package json path to vulnerable library node modules loader utils package json dependency hierarchy react scripts tgz root library css loader tgz x loader utils tgz vulnerable library found in head commit a href found in base branch master vulnerability details a regular expression denial of service redos flaw was found in function interpolatename in interpolatename js in webpack loader utils via the url variable in interpolatename js publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution loader utils direct dependency fix resolution react scripts check this box to open an automated fix pr ,0 86747,15755866032.0,IssuesEvent,2021-03-31 02:31:27,himalay/jokes,https://api.github.com/repos/himalay/jokes,opened,CVE-2020-15366 (Medium) detected in ajv-6.11.0.tgz,security vulnerability,"## CVE-2020-15366 - Medium Severity Vulnerability
Vulnerable Library - ajv-6.11.0.tgz

Another JSON Schema Validator

Library home page: https://registry.npmjs.org/ajv/-/ajv-6.11.0.tgz

Path to dependency file: jokes/package.json

Path to vulnerable library: jokes/node_modules/ajv/package.json

Dependency Hierarchy:
- request-2.88.2.tgz (Root Library)
  - har-validator-5.1.3.tgz
    - :x: **ajv-6.11.0.tgz** (Vulnerable Library)

Vulnerability Details

An issue was discovered in ajv.validate() in Ajv (aka Another JSON Schema Validator) 6.12.2. A carefully crafted JSON schema could be provided that allows execution of other code by prototype pollution. (While untrusted schemas are recommended against, the worst case of an untrusted schema should be a denial of service, not execution of code.)

Publish Date: 2020-07-15

URL: CVE-2020-15366

CVSS 3 Score Details (5.6)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: High
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: Low
  - Integrity Impact: Low
  - Availability Impact: Low

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/ajv-validator/ajv/releases/tag/v6.12.3

Release Date: 2020-07-15

Fix Resolution: ajv - 6.12.3

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,0,0
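Upgrading to the fix resolution above (ajv 6.12.3) closes the prototype-pollution path, but the advisory's own caveat stands: schemas should be trusted input. A small TypeScript sketch of the intended usage, assuming ajv@6:

```TypeScript
import Ajv from "ajv"; // ajv@6.12.3 or later, per the fix resolution above

const ajv = new Ajv();

// Compile only schemas that ship with the application; a user-supplied
// schema is exactly the attack vector CVE-2020-15366 describes.
const schema = {
  type: "object",
  properties: { name: { type: "string" } },
  required: ["name"],
  additionalProperties: false,
};

const validate = ajv.compile(schema);

console.log(validate({ name: "ok" }));                // true
console.log(validate({ name: 42 }), validate.errors); // false, plus details
```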
1892,11032320849.0,IssuesEvent,2019-12-06 19:53:21,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,closed,Support many more Content-Type in MockWebServer (UI tests),eng:automation 🐞 bug,"Our MockWebServer for UI testing doesn't support many other Content-Type in the header of the MockResponse. Found this while trying to display a simple image on a blank HTML file in a test. If we use something like:

```Kotlin
private fun contentType(path: String): String? {
    if (path.endsWith("".png"")) return ""image/png""
    if (path.endsWith("".jpg"")) return ""image/jpeg""
    if (path.endsWith("".jpeg"")) return ""image/jpeg""
    if (path.endsWith("".gif"")) return ""image/gif""
    if (path.endsWith("".html"")) return ""text/html; charset=utf-8""
    return if (path.endsWith("".txt"")) ""text/plain; charset=utf-8"" else ""application/octet-stream""
}
```

and in the `MockResponse` when you have the right path you can `setHeader`. I'm not familiar with the way this MockResponse was written historically nor the way assets are fetched here. (@rpappalax) – I'm pretty sure once you have the pathing right, you can just set the header.

```Kotlin
override fun dispatch(request: RecordedRequest): MockResponse {
    val assetManager = InstrumentationRegistry.getInstrumentation().context.assets
    val assetContents = try {
        val pathNoLeadingSlash = request.path.drop(1)
        assetManager.open(pathNoLeadingSlash).use { inputStream ->
            inputStream.bufferedReader().use { it.readText() }
        }
    } catch (e: IOException) {
        // e.g. file not found.
        // We're on a background thread so we need to forward the exception to the main thread.
        mainThreadHandler.postAtFrontOfQueue { throw e }
        return MockResponse().setResponseCode(HTTP_NOT_FOUND)
    }
    return MockResponse().setResponseCode(HTTP_OK).setBody(assetContents)
}
```
",1.0,1,1
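For comparison, here is the same extension-to-MIME-type mapping the issue proposes, sketched as a standalone Node file server in TypeScript (hypothetical code, not tied to the OkHttp MockWebServer API; the `assets` directory is an assumption):

```TypeScript
import { createServer } from "node:http";
import { readFile } from "node:fs/promises";
import { extname } from "node:path";

// Same lookup idea as the Kotlin contentType() helper above.
const MIME: Record<string, string> = {
  ".png": "image/png",
  ".jpg": "image/jpeg",
  ".jpeg": "image/jpeg",
  ".gif": "image/gif",
  ".html": "text/html; charset=utf-8",
  ".txt": "text/plain; charset=utf-8",
};

createServer(async (req, res) => {
  const path = "assets" + (req.url ?? "/");
  try {
    const body = await readFile(path);
    const type = MIME[extname(path)] ?? "application/octet-stream";
    res.writeHead(200, { "Content-Type": type });
    res.end(body);
  } catch {
    // File missing or unreadable: mirror the 404 branch of dispatch() above.
    res.writeHead(404, { "Content-Type": "text/plain" });
    res.end("not found");
  }
}).listen(8080);
```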
1874,4524837008.0,IssuesEvent,2016-09-07 01:05:09,PrinceOfAmber/Cyclic,https://api.github.com/repos/PrinceOfAmber/Cyclic,closed,Crash to Desktop when putting tools in crafting grid,bug compatibility critical / crash,"I have already posted this issue on Botania's GitHub, but for completeness I wanted you to also be aware of it so that you can make sure it is not something on your end as well. If I put any of the Emerald tools in a crafting window, the client crashes to desktop with the following error: [http://pastebin.com/Bg5VYC5Z](http://pastebin.com/Bg5VYC5Z) I have also seen this behavior in Mekanism Tools, and Router Reborn's RR Pickaxe. Router Reborn's author was able to fix the issue on his end, due to using an old itemstack method or something; however, the Mekanism author does not believe it is on his end at this point, which prompted me to make the issue on Botania's GitHub, found here (which also has references to the Mekanism issue): [https://github.com/williewillus/Botania/issues/471](https://github.com/williewillus/Botania/issues/471) Maybe through some combined effort everyone involved can work out what is going wrong.",True,0,0
3717,3226784979.0,IssuesEvent,2015-10-10 16:04:12,PolymerElements/polymer-starter-kit,https://api.github.com/repos/PolymerElements/polymer-starter-kit,closed,Service worker is broken in PSK 1.1.0,bug build-process,"To reproduce,
1. Get a fresh install of PSK
2. Follow README.md/Service Worker to enable service worker support
3. `gulp:serve`, and open devtools.

The service worker tries to get the bootstrap file from the root, instead of elements/. This happens because platinum elements is in index.build.js.
To fix this, either move the bootstrap file to the root or, as mentioned in #397, roll back to vulcanize.",True,0,0
231004,17659971222.0,IssuesEvent,2021-08-21 09:37:10,espressif/arduino-esp32,https://api.github.com/repos/espressif/arduino-esp32,closed,"documentation missing about WebServer example ""HelloServer.ino""",Status: Stale Type: Documentation,"a detailed documentation is missing about the WebServer example ""HelloServer.ino"":
- what is #include for?
- what is void handleRoot() for?
- what is void handleNotFound() for?
- what is server.on(""/"", handleRoot) for?
- what is server.on(""/inline"", []() ) for?
- what means: server.send(200, ""text/plain"", ""this works as well""); (i.e., as well as what?)
- what is server.onNotFound(handleNotFound) for?
- what is server.handleClient() for?

If documentation is provided, please insert a link to that description into the source code. Thank you in advance!
```
#include <WiFi.h>
#include <WiFiClient.h>
#include <WebServer.h>
#include <ESPmDNS.h>

const char* ssid = ""........"";
const char* password = ""........"";

// HTTP server listening on port 80.
WebServer server(80);

const int led = 13;

// Handler for GET / -- registered below via server.on(""/"", handleRoot).
void handleRoot() {
  digitalWrite(led, 1);
  server.send(200, ""text/plain"", ""hello from esp8266!""); // BTW: faulty! should be ""esp32"" instead!
  digitalWrite(led, 0);
}

// Fallback handler for any URI without a registered route (404).
void handleNotFound() {
  digitalWrite(led, 1);
  String message = ""File Not Found\n\n"";
  message += ""URI: "";
  message += server.uri();
  message += ""\nMethod: "";
  message += (server.method() == HTTP_GET) ? ""GET"" : ""POST"";
  message += ""\nArguments: "";
  message += server.args();
  message += ""\n"";
  for (uint8_t i = 0; i < server.args(); i++) {
    message += "" "" + server.argName(i) + "": "" + server.arg(i) + ""\n"";
  }
  server.send(404, ""text/plain"", message);
  digitalWrite(led, 0);
}

void setup(void) {
  pinMode(led, OUTPUT);
  digitalWrite(led, 0);
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);
  Serial.println("""");

  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(""."");
  }
  Serial.println("""");
  Serial.print(""Connected to "");
  Serial.println(ssid);
  Serial.print(""IP address: "");
  Serial.println(WiFi.localIP());

  if (MDNS.begin(""esp32"")) {
    Serial.println(""MDNS responder started"");
  }

  // Route registrations: map a URI to the handler that serves it.
  server.on(""/"", handleRoot);
  server.on(""/inline"", []() {
    // Inline lambda handler; works the same as a named handler function.
    server.send(200, ""text/plain"", ""this works as well"");
  });
  server.onNotFound(handleNotFound);
  server.begin();
  Serial.println(""HTTP server started"");
}

void loop(void) {
  // Polls for and dispatches incoming HTTP requests; must run every loop.
  server.handleClient();
}
```
",True,0,0
14818,18264462163.0,IssuesEvent,2021-10-04 06:34:31,vercel/hyper,https://api.github.com/repos/vercel/hyper,closed,Nano editor doesn't close when ssh tunnel killed,help wanted 🤯 Type: Compatibility,"I SSH'd into a server and opened up a log file in nano. I then locked the laptop and left it overnight.
When looking at the terminal this morning, nano was still open, but the text saying the connection had been closed was over the top of nano. I had to return multiple lines to get past nano. (Can't provide a screenshot because of sensitive information.) ",True,0,0
9069,27484065084.0,IssuesEvent,2023-03-04 00:03:13,pingcap/tiflow,https://api.github.com/repos/pingcap/tiflow,closed,"TiCDC changefeed stucks ""Can't create more than max_prepared_stmt_count statements (current value: 16382)""",type/bug severity/major found/automation area/ticdc,"### What did you do?

1. create a changefeed for a MySQL sink
2. run the workload:

```
sysbench --db-driver=mysql --mysql-host=upstream-tidb.cdc-testbed-tps-1651903-1-86 --mysql-port=4000 --mysql-user=root --mysql-db=workload --tables=100 --table-size=500000 --create_secondary=off --time=3600 --threads=100 oltp_update_non_index prepare
sysbench --db-driver=mysql --mysql-host=upstream-tidb.cdc-testbed-tps-1651903-1-86 --mysql-port=4000 --mysql-user=root --mysql-db=workload --tables=100 --table-size=500000 --create_secondary=off --time=3600 --threads=100 oltp_update_non_index run
```

### What did you expect to see?

The PITR changefeed should advance normally.

### What did you see instead?

The PITR changefeed is stuck.
![eed6ba7e-7d02-41f7-80d0-db3aec8ada81](https://user-images.githubusercontent.com/7403864/221518158-2e7e2644-aa2e-48d9-acf6-555f0c013ed3.jpeg)

In the cdc log, we can see a MySQL txn error: Error 1461 (42000): Can't create more than max_prepared_stmt_count statements (current value: 16382)

```
[2023/02/26 10:16:37.314 +00:00] [ERROR] [mysql.go:184] [""execute DMLs failed""] [error=""[CDC:ErrReachMaxTry]reach maximum try: 8, error: [CDC:ErrMySQLTxnError]MySQL txn error: Error 1461 (42000): Can't create more than max_prepared_stmt_count statements (current value: 16382): [CDC:ErrMySQLTxnError]MySQL txn error: Error 1461 (42000): Can't create more than max_prepared_stmt_count statements (current value: 16382)""] [errorVerbose=""[CDC:ErrReachMaxTry]reach maximum try: 8, error: [CDC:ErrMySQLTxnError]MySQL txn error: Error 1461 (42000): Can't create more than max_prepared_stmt_count statements (current value: 16382): [CDC:ErrMySQLTxnError]MySQL txn error: Error 1461 (42000): Can't create more than max_prepared_stmt_count statements (current value: 16382)
github.com/pingcap/errors.AddStack
	github.com/pingcap/errors@v0.11.5-0.20221009092201-b66cddb77c32/errors.go:174
github.com/pingcap/errors.(*Error).GenWithStackByArgs
	github.com/pingcap/errors@v0.11.5-0.20221009092201-b66cddb77c32/normalize.go:164
github.com/pingcap/tiflow/pkg/retry.run
	github.com/pingcap/tiflow/pkg/retry/retry_with_opt.go:69
github.com/pingcap/tiflow/pkg/retry.Do
	github.com/pingcap/tiflow/pkg/retry/retry_with_opt.go:34
github.com/pingcap/tiflow/cdc/sink/dmlsink/txn/mysql.(*mysqlBackend).execDMLWithMaxRetries
	github.com/pingcap/tiflow/cdc/sink/dmlsink/txn/mysql/mysql.go:582
github.com/pingcap/tiflow/cdc/sink/dmlsink/txn/mysql.(*mysqlBackend).Flush
	github.com/pingcap/tiflow/cdc/sink/dmlsink/txn/mysql/mysql.go:182
github.com/pingcap/tiflow/cdc/sink/dmlsink/txn.(*worker).doFlush
	github.com/pingcap/tiflow/cdc/sink/dmlsink/txn/worker.go:214
github.com/pingcap/tiflow/cdc/sink/dmlsink/txn.(*worker).runBackgroundLoop.func1
	github.com/pingcap/tiflow/cdc/sink/dmlsink/txn/worker.go:163
runtime.goexit
	runtime/asm_amd64.s:1598""]
```

### Versions of the cluster

CDC version:

```
[2023/02/26 10:06:56.361 +00:00] [INFO] [version.go:47] [""Welcome to Change Data Capture (CDC)""] [release-version=v6.7.0-alpha] [git-hash=00bfd0d0580d59977adf91ea9c1e237f787d0b6c] [git-branch=heads/refs/tags/v6.7.0-alpha] [utc-build-time=""2023-02-25 11:32:40""] [go-version=""go version go1.20.1 linux/amd64""] [failpoint-build=false]
```
",1.0,1,1
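Error 1461 means the sink database's cap on concurrently prepared statements was reached, i.e., statements are being prepared faster than they are deallocated. A diagnostic and mitigation sketch in TypeScript with the mysql2 driver follows; the host and credentials are placeholders, and raising the cap only buys headroom while the leak itself is the bug being reported:

```TypeScript
import mysql from "mysql2/promise";

async function main() {
  // Placeholder connection settings -- point at the downstream MySQL sink.
  const conn = await mysql.createConnection({ host: "127.0.0.1", user: "root" });

  // Current usage vs. the configured cap.
  const [used] = await conn.query("SHOW GLOBAL STATUS LIKE 'Prepared_stmt_count'");
  const [cap] = await conn.query("SHOW GLOBAL VARIABLES LIKE 'max_prepared_stmt_count'");
  console.log(used, cap);

  // Temporary mitigation: raise the cap (MySQL allows up to 1048576).
  await conn.query("SET GLOBAL max_prepared_stmt_count = 65536");

  await conn.end();
}

main().catch(console.error);
```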
224807,17776118898.0,IssuesEvent,2021-08-30 19:27:16,Wolfst0rm/ArmorStandEditor-Issues,https://api.github.com/repos/Wolfst0rm/ArmorStandEditor-Issues,opened,Can't translate on/off keys,P1: To Be Tested,"### Expected behavior Being able to translate the on/off keys for gravity.
### Observed/Actual behavior

![image](https://user-images.githubusercontent.com/5664135/131393864-8448cbcb-48d8-4d62-a532-9fef4ed180e1.png)
![image](https://user-images.githubusercontent.com/5664135/131393883-3bef3c01-6214-4fd7-896d-5d1ed391ddd0.png)
![image](https://user-images.githubusercontent.com/5664135/131393971-af32178f-d6ba-47a9-a1a6-44362b6fdcd2.png)

### Steps/models to reproduce
-
### Plugin list
-
### Plugin Version
-
### Server Version
-
### Other
-",1.0,0,0
92295,18817659055.0,IssuesEvent,2021-11-10 02:25:08,github/roadmap,https://api.github.com/repos/github/roadmap,closed,Codespaces: Devcontainer composer,cloud github enterprise beta code shipped github team codespaces universe 2021,"### Summary

Crafting a devcontainer can require venturing into unfamiliar territory or cumbersome scripting. Today, users can [add a devcontainer configuration in VS Code](https://docs.github.com/en/codespaces/customizing-your-codespace/configuring-codespaces-for-your-project#using-a-predefined-container-configuration) with some options. Improving this composer will allow users to generate the right configuration files (devcontainer.json + Dockerfile) by selecting easy-to-understand components for their base image, runtimes, and tools.

### Intended Outcome

Building a devcontainer configuration for a repo is necessary to optimize the experience with Codespaces; however, not everyone is a devcontainer expert, and even familiar users benefit from well-defined, commonly used configurations. With a lightweight composer, users can focus on what they need in their container while abstracting the complexity of figuring out how all the pieces fit together in the config file. Allowing users to generate and share their own components also enables us to support a wide range of community and enterprise needs. For example, an enterprise could build a component to install internal tooling as part of their config, and end-users don’t have to worry about how to incorporate this installation into their configuration.

### How will it work?

1. There will be a small set of maintained components that users can optionally add to their devcontainer via the `Add dev container configuration...` flow or `Configure container features` command in VS Code. These features include things like the GitHub CLI and Docker-in-Docker support for the container.
2. Users can create and reference their own components - allowing them to reuse these pieces across their various configurations and share core pieces with others.
3. Components can be published to a registry whereby users can search for a specific component they need and add it to their configuration without needing to understand how to build the component.",2.0,0,0
1802,10816995756.0,IssuesEvent,2019-11-08 08:42:59,big-neon/bn-web,https://api.github.com/repos/big-neon/bn-web,opened,Automation: BigNeon: test 42:Reports: BoxOffice: Report should only contain Box Office transactions,Automation,"**Pre-conditions:**
1. User should have Admin access to Big Neon
2. User should belong to an Organization that has more than 1 event listed
3. Events that have been listed should have Sales from both Online and Box Offices for the day

**Steps:**
1. Log into Big Neon
2. User is on the Big Neon Events landing page
3. View Current organization
4. Verify user is within the correct Organization
5. Navigate to studio via: Box Offices > Studio
6. User should be on the Studio Dashboard page that displays events for selected Organization
7. Select ""Reports"" from the left menu panel
8. User should be redirected to the Organization Reports landing page
9. Select ""Box Office"" sales
10. Box Office Sales Summary report should be displayed to the user, with Sales listed for the current day
11. View the report
12. Verify that only Box Office Sales are reflected within the Report ",1.0,1,1
3797,14616828726.0,IssuesEvent,2020-12-22 13:52:48,keptn/keptn,https://api.github.com/repos/keptn/keptn,closed,Travis CI builds are disabled due to negative credit balance,automation type:critical,"Follow-up of https://github.com/keptn/keptn/issues/2356 At the moment, Travis CI builds seem to be temporarily disabled.
See: https://travis-ci.com/github/keptn/keptn ![2020-11-23_11-01](https://user-images.githubusercontent.com/72415058/99949468-63970100-2d7b-11eb-89e2-d7a462f4398b.png) ",1.0,1,1
7726,25482648085.0,IssuesEvent,2022-11-26 01:06:46,nickytonline/iamdeveloper.com,https://api.github.com/repos/nickytonline/iamdeveloper.com,closed,Automate pulling in newsletter archive,automation,"**Is your feature request related to a problem? Please describe.** Use the newsletter RSS feed, https://buttondown.email/nickytonline/rss, to pull in the content for the newsletter archive.
**Describe the solution you'd like** Create a script in the `/bin` folder that:
- Pulls in the RSS feed
- Generates markdown with grey matter for stuff like title, date, etc.
- Creates and persists markdown to the `./src/newlsetter` directory

GitHub Action that:
- Runs the above script on Sunday evenings to pull in the latest news archives
- Creates a pull request that auto-merges if all the checks pass

**Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered.

**Additional context** Add any other context or screenshots about the feature request here. ",1.0,1,1
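A rough TypeScript sketch of the `/bin` script half of this request. The feed URL comes from the issue; the output directory is copied as spelled there, and rss-parser is just one reasonable choice of fetch/parse library:

```TypeScript
import Parser from "rss-parser";
import { mkdir, writeFile } from "node:fs/promises";

const FEED = "https://buttondown.email/nickytonline/rss";
const OUT_DIR = "./src/newlsetter"; // directory name as written in the issue

async function main() {
  const feed = await new Parser().parseURL(FEED);
  await mkdir(OUT_DIR, { recursive: true });
  for (const item of feed.items) {
    const slug = (item.title ?? "untitled")
      .toLowerCase()
      .replace(/[^a-z0-9]+/g, "-");
    // Front matter (the "grey matter" above) followed by the item body.
    const md = `---\ntitle: "${item.title ?? ""}"\ndate: ${item.isoDate ?? ""}\n---\n\n${item.content ?? ""}\n`;
    await writeFile(`${OUT_DIR}/${slug}.md`, md);
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```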
13811,4776877894.0,IssuesEvent,2016-10-27 14:55:35,dotnet/coreclr,https://api.github.com/repos/dotnet/coreclr,opened,JIT: liveness calculation underestimates heap-live-in,CodeGen,"The liveness calculation in `fgPerNodeLocalVarLiveness` (and elsewhere?) [avoids](https://github.com/dotnet/coreclr/blob/446e4f15e78278ad85b80f3160af88c15a6683fb/src/jit/liveness.cpp#L332-L334) [setting](https://github.com/dotnet/coreclr/blob/446e4f15e78278ad85b80f3160af88c15a6683fb/src/jit/liveness.cpp#L360-L362) `fgCurHeapUse` [whenever](https://github.com/dotnet/coreclr/blob/446e4f15e78278ad85b80f3160af88c15a6683fb/src/jit/liveness.cpp#L385-L387) `fgCurHeapDef` is [already set](https://github.com/dotnet/coreclr/blob/446e4f15e78278ad85b80f3160af88c15a6683fb/src/jit/liveness.cpp#L414-L416). It is doing a block-local forward walk, and `fgCurHeapUse` needs only capture upwards-exposed uses, so this would be the correct logic if those defs must-alias those uses, but since in many cases they don't, this logic is incorrect. I don't have any indication that this can cause bad codegen today -- the missing heap live-in annotation on the block can cause SSA construction to fail to generate a heap phi in that block, which in turn would give the upwards-exposed use an incorrect value number (potentially equal to one elsewhere), but because `GTF_CLS_VAR_ASG_LHS` and `GTF_IND_ASG_LHS` only [get set](https://github.com/dotnet/coreclr/blob/446e4f15e78278ad85b80f3160af88c15a6683fb/src/jit/ssabuilder.cpp#L911-L915) later during SSA construction, if the prior def is an indir or static var assignment, the code will still set the heap live-in annotation on the block; the other opcodes that define the heap (calls) also use the heap, so cannot be a problematic prior def. I haven't looked to see if somehow downstream code is dependent on this apparently-buggy behavior.",1.0,0,0
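A toy model of the dataflow point above, in hypothetical TypeScript rather than JIT code: an upwards-exposed heap use should set heap-use (and hence heap-live-in) unless a prior in-block def is guaranteed to alias it; suppressing the use on any prior def, as described, under-reports live-in:

```TypeScript
interface HeapNode {
  reads: boolean;             // node uses the heap
  writes: boolean;            // node defines the heap
  mustAliasPriorDef: boolean; // the read is guaranteed to see the in-block def
}

// Block-local forward walk computing (heapUse, heapDef).
function blockHeapLiveness(nodes: HeapNode[]) {
  let heapUse = false; // an upwards-exposed heap use exists
  let heapDef = false; // the heap is defined somewhere in this block
  for (const n of nodes) {
    // Buggy variant from the issue: `if (n.reads && !heapDef)` -- any prior
    // def suppresses the use, even one that may not alias this read.
    if (n.reads && !(heapDef && n.mustAliasPriorDef)) heapUse = true;
    if (n.writes) heapDef = true;
  }
  return { heapUse, heapDef };
}

// May-alias def followed by a read: the read is still upwards-exposed, so
// heapUse must be true; the buggy variant would return false here.
console.log(blockHeapLiveness([
  { reads: false, writes: true, mustAliasPriorDef: false },
  { reads: true, writes: false, mustAliasPriorDef: false },
]));
```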
152483,19684482992.0,IssuesEvent,2022-01-11 20:23:40,harrinry/healthz,https://api.github.com/repos/harrinry/healthz,opened,CVE-2021-23343 (High) detected in path-parse-1.0.6.tgz,security vulnerability,"## CVE-2021-23343 - High Severity Vulnerability
Vulnerable Library - path-parse-1.0.6.tgz

Node.js path.parse() ponyfill

Library home page: https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz

Dependency Hierarchy:
- eslint-plugin-import-2.22.0.tgz (Root Library)
  - resolve-1.17.0.tgz
    - :x: **path-parse-1.0.6.tgz** (Vulnerable Library)

Found in HEAD commit: a11c73151912fdc2766e3e8659b9c424044aca74

Found in base branch: master

Vulnerability Details

All versions of package path-parse are vulnerable to Regular Expression Denial of Service (ReDoS) via splitDeviceRe, splitTailRe, and splitPathRe regular expressions. ReDoS exhibits polynomial worst-case time complexity.

Publish Date: 2021-05-04

URL: CVE-2021-23343

CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/jbgutierrez/path-parse/issues/8

Release Date: 2021-05-04

Fix Resolution (path-parse): 1.0.7

Direct dependency fix Resolution (eslint-plugin-import): 2.22.1

*** :rescue_worker_helmet: Automatic Remediation is available for this issue ",True,0,0
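After bumping the direct dependency, it is worth confirming which path-parse actually resolved. A hypothetical TypeScript check, assuming it runs from the project root in an ESM context (for `import.meta.url`):

```TypeScript
import { createRequire } from "node:module";
import { readFileSync } from "node:fs";
import { dirname, join } from "node:path";

const require = createRequire(import.meta.url);

// Resolve the installed package's entry point, then read its manifest.
const pkgDir = dirname(require.resolve("path-parse"));
const { version } = JSON.parse(
  readFileSync(join(pkgDir, "package.json"), "utf8")
);

// 1.0.7 is the fixed release for CVE-2021-23343.
const [maj, min, pat] = version.split(".").map(Number);
const patched = maj > 1 || (maj === 1 && (min > 0 || pat >= 7));
console.log(`path-parse ${version}:`, patched ? "patched" : "VULNERABLE");
```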
4752,17377474993.0,IssuesEvent,2021-07-31 01:58:56,jgyates/genmon,https://api.github.com/repos/jgyates/genmon,closed,Possibility of connecting Genmon to Homebridge (and ultimately Homekit),automation - monitoring apps,"Hi there - I love Genmon, and have been using it now for over a year. Living in Florida, it's really useful to be able to monitor the generator remotely. Being a Mac/iOS household with Homebridge installed to bridge Somfy Shades, Insteon Switches and Nest thermostats into HomeKit, it left me wondering if you have thought about this for Genmon?
",1,possibility of connecting genmon to homebridge and ultimately homekit hi there i love genmon and have been using it now for over a year living in florida its really useful to be able to monitor the generator remotely being a mac ios household with homebridge installed to bridge somfy shades insteon switches and nest thermostats into homekit it left me wondering if you have thought about this for genmon ,1 98671,12344379732.0,IssuesEvent,2020-05-15 06:51:19,microsoft/pyright,https://api.github.com/repos/microsoft/pyright,closed,Inconsistent behavior of NoReturn and try except branch,as designed,"Pyright incorrectly report an error for the following code: ```python class E: def __call__(self): raise ValueError def f(): try: import pathlib except ImportError: E()() a = pathlib.Path() # error: ""pathlib"" is possibly unbound ``` However, the next code is passed: ``` # now error is emitted from function, not class. def e(): raise ValueError def g(): try: import pathlib except ImportError: e() a = pathlib.Path() # this line passes the check ``` ",1.0,"Inconsistent behavior of NoReturn and try except branch - Pyright incorrectly report an error for the following code: ```python class E: def __call__(self): raise ValueError def f(): try: import pathlib except ImportError: E()() a = pathlib.Path() # error: ""pathlib"" is possibly unbound ``` However, the next code is passed: ``` # now error is emitted from function, not class. def e(): raise ValueError def g(): try: import pathlib except ImportError: e() a = pathlib.Path() # this line passes the check ``` ",0,inconsistent behavior of noreturn and try except branch pyright incorrectly report an error for the following code python class e def call self raise valueerror def f try import pathlib except importerror e a pathlib path error pathlib is possibly unbound however the next code is passed now error is emitted from function not class def e raise valueerror def g try import pathlib except importerror e a pathlib path this line passes the check ,0 267770,23318849720.0,IssuesEvent,2022-08-08 14:41:25,RafaelGSS/node-flaky-test-labeler,https://api.github.com/repos/RafaelGSS/node-flaky-test-labeler,closed,PR Title,flaky-test linux,"### Test `test-fs-stat-bigint` ### Platform Linux ARM64, Linux x64, Other ### Console output ```console console output ``` ### Build links - build link ### Additional information additional info",1.0,"PR Title - ### Test `test-fs-stat-bigint` ### Platform Linux ARM64, Linux x64, Other ### Console output ```console console output ``` ### Build links - build link ### Additional information additional info",0,pr title test test fs stat bigint platform linux linux other console output console console output build links build link additional information additional info,0 7544,25098546708.0,IssuesEvent,2022-11-08 12:00:38,kylecorry31/Trail-Sense,https://api.github.com/repos/kylecorry31/Trail-Sense,opened,Automation: Upload to Google Play,automation,"Create a script to create a release on Google Play containing the .aab file and changelog. Should set release as draft, and I can manually publish. Can be merged into the release.py script so both the GitHub and Google Play releases happen at the same time.",1.0,"Automation: Upload to Google Play - Create a script to create a release on Google Play containing the .aab file and changelog. Should set release as draft, and I can manually publish. 
Can be merged into the release.py script so both the GitHub and Google Play releases happen at the same time.",1,automation upload to google play create a script to create a release on google play containing the aab file and changelog should set release as draft and i can manually publish can be merged into the release py script so both the github and google play releases happen at the same time ,1 14376,10151268699.0,IssuesEvent,2019-08-05 19:52:45,wirepas/gateway,https://api.github.com/repos/wirepas/gateway,closed,Improve error strings,dbus error handling sink service transport,"Here's and example of a current exception where the device's role is set incorrectly ``` 2019-08-01 13:20:42,222 | [ERROR] wirepas_gateway@sink_manager.py:212:Cannot set App Config GDBus.Error:com.wirepas.sink.config.error: [WPC_set_app_config_data]: C Mesh Lib ret = 10 Traceback (most recent call last): File ""/home/vladislav/.local/lib/python3.7/site-packages/wirepas_gateway/dbus/sink_manager.py"", line 206, in write_config self.proxy.SetAppConfig(seq, diag, data) File ""/home/vladislav/.local/lib/python3.7/site-packages/pydbus/proxy_method.py"", line 75, in call 0, timeout_to_glib(timeout), None).unpack() gi.repository.GLib.GError: g-io-error-quark: GDBus.Error:com.wirepas.sink.config.error: [WPC_set_app_config_data]: C Mesh Lib ret = 10 (36) ``` The error in the device's role is stated with error 10, which means `APP_RES_NODE_NOT_A_SINK` Please work on getting this information spelled out to the terminal to avoid having to drill down on the code.",1.0,"Improve error strings - Here's and example of a current exception where the device's role is set incorrectly ``` 2019-08-01 13:20:42,222 | [ERROR] wirepas_gateway@sink_manager.py:212:Cannot set App Config GDBus.Error:com.wirepas.sink.config.error: [WPC_set_app_config_data]: C Mesh Lib ret = 10 Traceback (most recent call last): File ""/home/vladislav/.local/lib/python3.7/site-packages/wirepas_gateway/dbus/sink_manager.py"", line 206, in write_config self.proxy.SetAppConfig(seq, diag, data) File ""/home/vladislav/.local/lib/python3.7/site-packages/pydbus/proxy_method.py"", line 75, in call 0, timeout_to_glib(timeout), None).unpack() gi.repository.GLib.GError: g-io-error-quark: GDBus.Error:com.wirepas.sink.config.error: [WPC_set_app_config_data]: C Mesh Lib ret = 10 (36) ``` The error in the device's role is stated with error 10, which means `APP_RES_NODE_NOT_A_SINK` Please work on getting this information spelled out to the terminal to avoid having to drill down on the code.",0,improve error strings here s and example of a current exception where the device s role is set incorrectly wirepas gateway sink manager py cannot set app config gdbus error com wirepas sink config error c mesh lib ret traceback most recent call last file home vladislav local lib site packages wirepas gateway dbus sink manager py line in write config self proxy setappconfig seq diag data file home vladislav local lib site packages pydbus proxy method py line in call timeout to glib timeout none unpack gi repository glib gerror g io error quark gdbus error com wirepas sink config error c mesh lib ret the error in the device s role is stated with error which means app res node not a sink please work on getting this information spelled out to the terminal to avoid having to drill down on the code ,0 4443,16551340357.0,IssuesEvent,2021-05-28 08:56:18,SAP/fundamental-ngx,https://api.github.com/repos/SAP/fundamental-ngx,closed,bug: time picker (platform): time picker input not focused after 
opening and closing via button ,E2E automation Low ariba bug platform,"#### Is this a bug, enhancement, or feature request? bug #### Briefly describe your proposal. the button to expand the time picker is missing the focus state #### If this is a bug, please provide steps for reproducing it. 1. go to https://fundamental-ngx.netlify.app/#/platform/time-picker 2. click the time picker btn twice 3. check focus state of input field AR: none ER: dotted border ![2021-03-25_3](https://user-images.githubusercontent.com/47522152/112433980-a974cd80-8d4b-11eb-9d05-bd169dc14cf9.png) expected: ![2021-05-26_8](https://user-images.githubusercontent.com/47522152/119674293-c3fb1e00-be44-11eb-82c7-55dee0255a80.png) ",1.0,"bug: time picker (platform): time picker input not focused after opening and closing via button - #### Is this a bug, enhancement, or feature request? bug #### Briefly describe your proposal. the button to expand the time picker is missing the focus state #### If this is a bug, please provide steps for reproducing it. 1. go to https://fundamental-ngx.netlify.app/#/platform/time-picker 2. click the time picker btn twice 3. check focus state of input field AR: none ER: dotted border ![2021-03-25_3](https://user-images.githubusercontent.com/47522152/112433980-a974cd80-8d4b-11eb-9d05-bd169dc14cf9.png) expected: ![2021-05-26_8](https://user-images.githubusercontent.com/47522152/119674293-c3fb1e00-be44-11eb-82c7-55dee0255a80.png) ",1,bug time picker platform time picker input not focused after opening and closing via button is this a bug enhancement or feature request bug briefly describe your proposal the button to expand the time picker is missing the focus state if this is a bug please provide steps for reproducing it go to click the time picker btn twice check focus state of input field ar none er dotted border expected ,1 6907,24028770022.0,IssuesEvent,2022-09-15 13:36:06,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,closed,[BUG] The default access mode of a restored RWX volume is RWO,kind/bug area/manager reproduce/always priority/0 require/automation-e2e feature/backup-restore backport-needed/1.3.2,"## Describe the bug It should be always the same as the access mode of the original volume ## To Reproduce Steps to reproduce the behavior: 1. Create a RWX volume 2. Create a backup for the volume 3. Create a volume from the backup without changing any parameters during the restore 4. The access mode of the restore volume is RWO ## Expected behavior The access mode of the restore volume should be RWX ## Environment - Longhorn version: v1.2.3 ## Additional context This issue is found when investigating https://github.com/longhorn/longhorn/issues/3367 ",1.0,"[BUG] The default access mode of a restored RWX volume is RWO - ## Describe the bug It should be always the same as the access mode of the original volume ## To Reproduce Steps to reproduce the behavior: 1. Create a RWX volume 2. Create a backup for the volume 3. Create a volume from the backup without changing any parameters during the restore 4. 
The access mode of the restore volume is RWO ## Expected behavior The access mode of the restore volume should be RWX ## Environment - Longhorn version: v1.2.3 ## Additional context This issue is found when investigating https://github.com/longhorn/longhorn/issues/3367 ",1, the default access mode of a restored rwx volume is rwo describe the bug it should be always the same as the access mode of the original volume to reproduce steps to reproduce the behavior create a rwx volume create a backup for the volume create a volume from the backup without changing any parameters during the restore the access mode of the restore volume is rwo expected behavior the access mode of the restore volume should be rwx environment longhorn version additional context this issue is found when investigating ,1 278698,30702388561.0,IssuesEvent,2023-07-27 01:25:55,Nivaskumark/CVE-2020-0074-frameworks_base,https://api.github.com/repos/Nivaskumark/CVE-2020-0074-frameworks_base,reopened,CVE-2021-0687 (Medium) detected in baseandroid-11.0.0_r39,Mend: dependency security vulnerability,"## CVE-2021-0687 - Medium Severity Vulnerability
Vulnerable Library - baseandroid-11.0.0_r39

Android framework classes and services

Library home page: https://android.googlesource.com/platform/frameworks/base

Found in HEAD commit: f63c00c11df9fe4c62ee2ed7d5f72e3a7ebec027

Found in base branch: master

Vulnerable Source Files (1)

/core/java/android/text/Layout.java

Vulnerability Details

In ellipsize of Layout.java, there is a possible ANR due to improper input validation. This could lead to local denial of service with no additional execution privileges needed. User interaction is needed for exploitation. Product: Android. Versions: Android-9, Android-10, Android-11, Android-8.1. Android ID: A-188913943

Publish Date: 2021-10-06

URL: CVE-2021-0687

CVSS 3 Score Details (5.0)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: Low
  - User Interaction: Required
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://source.android.com/security/bulletin/2021-09-01

Release Date: 2021-10-06

Fix Resolution: android-11.0.0_r43

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2021-0687 (Medium) detected in baseandroid-11.0.0_r39 - ## CVE-2021-0687 - Medium Severity Vulnerability
*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in baseandroid cve medium severity vulnerability vulnerable library baseandroid android framework classes and services library home page a href found in head commit a href found in base branch master vulnerable source files core java android text layout java vulnerability details in ellipsize of layout java there is a possible anr due to improper input validation this could lead to local denial of service with no additional execution privileges needed user interaction is needed for exploitation product androidversions android android android android id a publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution android step up your open source security game with mend ,0 101427,11235416563.0,IssuesEvent,2020-01-09 08:16:58,cake-contrib/Cake.Codecov,https://api.github.com/repos/cake-contrib/Cake.Codecov,opened,Add wyam documentation,Documentation Up-for-grabs,"Similar to other cake addins, we should generate some basic documentation using wyam.",1.0,"Add wyam documentation - Similar to other cake addins, we should generate some basic documentation using wyam.",0,add wyam documentation similar to other cake addins we should generate some basic documentation using wyam ,0 2023,11273567765.0,IssuesEvent,2020-01-14 16:47:38,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,closed,[UI Testst] Add test for Logins and Password menu (sign in with FxA),eng:automation,"There is already a test to check this option in Settings -> Privacy menu[: Logins and Passwords ](https://github.com/mozilla-mobile/fenix/blob/master/app/src/androidTest/java/org/mozilla/fenix/ui/SettingsPrivacyTest.kt#L115)menu but that test only checks the UI without being signed in with a FxAccount, not that a login is actually shown after signing in with. With this new test we check that part so that the coverage for this feature is complete, with and without using FxAccount and so with and without synced logins",1.0,"[UI Testst] Add test for Logins and Password menu (sign in with FxA) - There is already a test to check this option in Settings -> Privacy menu[: Logins and Passwords ](https://github.com/mozilla-mobile/fenix/blob/master/app/src/androidTest/java/org/mozilla/fenix/ui/SettingsPrivacyTest.kt#L115)menu but that test only checks the UI without being signed in with a FxAccount, not that a login is actually shown after signing in with. 
With this new test we check that part so that the coverage for this feature is complete, with and without using FxAccount and so with and without synced logins",1, add test for logins and password menu sign in with fxa there is already a test to check this option in settings privacy menu but that test only checks the ui without being signed in with a fxaccount not that a login is actually shown after signing in with with this new test we check that part so that the coverage for this feature is complete with and without using fxaccount and so with and without synced logins,1 83266,3632764512.0,IssuesEvent,2016-02-11 11:31:04,ddddirk/lotrdb,https://api.github.com/repos/ddddirk/lotrdb,closed,"Feature request: Add ""notes"" section to decks",low priority,"Back when the CardgameDB.com deckbuilder was the only one around, I created many decks with it. It provided a section called ""Strategy"" in the ""Save"" tab, which I made extensive use of to write myself notes on how I designed the deck to work. E.g., ""Try to put Support of the Eagles on Legolas, then get Eagles of the Misty Mountain out and use Vassal of the Windlord to boost it (after attacking, it goes under Eagles of the Misty Mountain). Do that a few times and now Legolas can kill just about anything with 8 Attack."" Now that I've discovered your deckbuilder, I've been transferring my decks to it rather than continue to use CardgameDB's not-quite-as-nice interface. I couldn't transfer them directly, but OCTGN's Deck Converter program has been doing a fine job of turning CardgameDB's deck format into .o8d files, which I could then upload to The Rivendell Councilroom. All is well. Except for my notes, that is. Since The Rivendell Councilroom doesn't have any way to store notes, when I import .o8d files, my notes get lost. (The .o8d format also allows for a sideboard, which The Rivendell Councilroom doesn't, but I've gotten around that by creating two decks, one called ""Eagles deck"" and one called ""Eagles sideboard"".) What I'd like to see is for decks to allow storage for arbitrary text in a ""Notes"" field. That would allow me to import my decks from CardgameDB or OCTGN without losing information. Would that be a lot of work? Or would that be relatively simple?",1.0,"Feature request: Add ""notes"" section to decks - Back when the CardgameDB.com deckbuilder was the only one around, I created many decks with it. It provided a section called ""Strategy"" in the ""Save"" tab, which I made extensive use of to write myself notes on how I designed the deck to work. E.g., ""Try to put Support of the Eagles on Legolas, then get Eagles of the Misty Mountain out and use Vassal of the Windlord to boost it (after attacking, it goes under Eagles of the Misty Mountain). Do that a few times and now Legolas can kill just about anything with 8 Attack."" Now that I've discovered your deckbuilder, I've been transferring my decks to it rather than continue to use CardgameDB's not-quite-as-nice interface. I couldn't transfer them directly, but OCTGN's Deck Converter program has been doing a fine job of turning CardgameDB's deck format into .o8d files, which I could then upload to The Rivendell Councilroom. All is well. Except for my notes, that is. Since The Rivendell Councilroom doesn't have any way to store notes, when I import .o8d files, my notes get lost. 
(The .o8d format also allows for a sideboard, which The Rivendell Councilroom doesn't, but I've gotten around that by creating two decks, one called ""Eagles deck"" and one called ""Eagles sideboard"".) What I'd like to see is for decks to allow storage for arbitrary text in a ""Notes"" field. That would allow me to import my decks from CardgameDB or OCTGN without losing information. Would that be a lot of work? Or would that be relatively simple?",0,feature request add notes section to decks back when the cardgamedb com deckbuilder was the only one around i created many decks with it it provided a section called strategy in the save tab which i made extensive use of to write myself notes on how i designed the deck to work e g try to put support of the eagles on legolas then get eagles of the misty mountain out and use vassal of the windlord to boost it after attacking it goes under eagles of the misty mountain do that a few times and now legolas can kill just about anything with attack now that i ve discovered your deckbuilder i ve been transferring my decks to it rather than continue to use cardgamedb s not quite as nice interface i couldn t transfer them directly but octgn s deck converter program has been doing a fine job of turning cardgamedb s deck format into files which i could then upload to the rivendell councilroom all is well except for my notes that is since the rivendell councilroom doesn t have any way to store notes when i import files my notes get lost the format also allows for a sideboard which the rivendell councilroom doesn t but i ve gotten around that by creating two decks one called eagles deck and one called eagles sideboard what i d like to see is for decks to allow storage for arbitrary text in a notes field that would allow me to import my decks from cardgamedb or octgn without losing information would that be a lot of work or would that be relatively simple ,0 48699,13184720455.0,IssuesEvent,2020-08-12 19:58:19,icecube-trac/tix3,https://api.github.com/repos/icecube-trac/tix3,opened,svn version upgrade needed (Trac #73),Incomplete Migration Migrated from Trac defect infrastructure,"
_Migrated from https://code.icecube.wisc.edu/ticket/73 , reported by blaufuss and owned by cgils_

```json { ""status"": ""closed"", ""changetime"": ""2007-06-26T16:16:23"", ""description"": ""DART nodes need svn version 1.4.x or better for latest\nand greatest dartboard voodoo.\n\n"", ""reporter"": ""blaufuss"", ""cc"": """", ""resolution"": ""wont or cant fix"", ""_ts"": ""1182874583000000"", ""component"": ""infrastructure"", ""summary"": ""svn version upgrade needed"", ""priority"": ""normal"", ""keywords"": """", ""time"": ""2007-06-25T21:20:12"", ""milestone"": """", ""owner"": ""cgils"", ""type"": ""defect"" } ```

",1.0,"svn version upgrade needed (Trac #73) -
",0,svn version upgrade needed trac migrated from reported by blaufuss and owned by cgils json status closed changetime description dart nodes need svn version x or better for latest nand greatest dartboard voodoo n n reporter blaufuss cc resolution wont or cant fix ts component infrastructure summary svn version upgrade needed priority normal keywords time milestone owner cgils type defect ,0 46286,9920956251.0,IssuesEvent,2019-06-30 14:11:43,scorelab/senz,https://api.github.com/repos/scorelab/senz,closed,Setting up tests for the backend,GoogleSummerOfCode2019 testing,"**Description** No tests present for the backend part of senz. **Solution** Adding tests using Mocha and Chai in the test/ directory. ",1.0,"Setting up tests for the backend - **Description** No tests present for the backend part of senz. **Solution** Adding tests using Mocha and Chai in the test/ directory. ",0,setting up tests for the backend description no tests present for the backend part of senz solution adding tests using mocha and chai in the test directory ,0 21466,11660239703.0,IssuesEvent,2020-03-03 02:39:50,cityofaustin/atd-mobility-project-database,https://api.github.com/repos/cityofaustin/atd-mobility-project-database,closed,Discuss GIS <---> Knack integration,Project: Mobility Project Database Service: PM Type: Meeting Workgroup: AMD Workgroup: ATSD,"@amenity commented on [Fri Jun 07 2019](https://github.com/cityofaustin/atd-data-tech/issues/185) ### Objective For MoPed, we'll need to be able to edit features in a GIS application. ### Participants Jaime, John, Nathan, Cole, Amenity ### Agenda ------ - [ ] Schedule meeting - [ ] Meet - [ ] Attach meeting notes to this issue - [ ] Create resulting issues ",1.0,"Discuss GIS <---> Knack integration - @amenity commented on [Fri Jun 07 2019](https://github.com/cityofaustin/atd-data-tech/issues/185) ### Objective For MoPed, we'll need to be able to edit features in a GIS application. ### Participants Jaime, John, Nathan, Cole, Amenity ### Agenda ------ - [ ] Schedule meeting - [ ] Meet - [ ] Attach meeting notes to this issue - [ ] Create resulting issues ",0,discuss gis knack integration amenity commented on objective for moped we ll need to be able to edit features in a gis application participants jaime john nathan cole amenity agenda schedule meeting meet attach meeting notes to this issue create resulting issues ,0 124301,16603020835.0,IssuesEvent,2021-06-01 22:25:08,microsoft/TypeScript,https://api.github.com/repos/microsoft/TypeScript,closed,Inconsistency in types after type inference,Design Limitation,"I'm currently working on providing a nice API in a library where users will be providing implementations of a particular interface. Interface's methods are meant to be executed async in a particular order. Result of calling one method will be transformed in an known way and passed to the next method. In order to not force my users typing signatures again and again for every part of the process in type parameters, I wanted to benefit from type inference and not to have specify types at all, while still having proper type checking. Example below illustrates the intention. While trying things out — faced some inconsistencies in typescript (or VS Code?) which IMHO worth reporting. 
**TypeScript Version:** 2.9.0-dev.201xxxxx **Search Terms:** ""inference"" **Code** ```ts type F = [X]; type G = [Y]; interface IProcess { start(): X; process(opt: F): Y; finish(opt: G): void; } function foo(desc: IProcess): void { const x = desc.start(); const y = desc.process([x]); desc.finish([y]); } foo({ start() { return { name: ""Joe"" }; }, process([x]) { return x.name; }, finish([x]) { console.log(x); } }); ``` **Expected behavior:** In `process` function `x.name` would be of a type `string` **Actual behavior:** * `Property 'name' does not exist on type 'XX'.` * Even though: `var x: XX = { name: string }` ![types](https://user-images.githubusercontent.com/72801/40473581-d9db1690-5f3c-11e8-822a-06cad6e31e91.gif) **Playground Link:** [playground link](http://www.typescriptlang.org/play/#src=type%20F%3CX%3E%20%3D%20%5BX%5D%3B%0D%0Atype%20G%3CY%3E%20%3D%20%5BY%5D%3B%0D%0Ainterface%20IProcess%3CX%2C%20Y%3E%20%7B%0D%0A%20%20start()%3A%20X%3B%0D%0A%20%20process%3CXX%20%3D%20X%3E(opt%3A%20F%3CXX%3E)%3A%20Y%3B%0D%0A%20%20finish(opt%3A%20G%3CY%3E)%3A%20void%3B%0D%0A%7D%0D%0Afunction%20foo%3CX%2C%20Y%3E(desc%3A%20IProcess%3CX%2C%20Y%3E)%3A%20void%20%7B%0D%0A%20%20const%20x%20%3D%20desc.start()%3B%0D%0A%20%20const%20y%20%3D%20desc.process(%5Bx%5D)%3B%0D%0A%20%20desc.finish(%5By%5D)%3B%0D%0A%7D%0D%0Afoo(%7B%0D%0A%20%20start()%20%7B%0D%0A%20%20%20%20return%20%7B%0D%0A%20%20%20%20%20%20name%3A%20%22Joe%22%0D%0A%20%20%20%20%7D%3B%0D%0A%20%20%7D%2C%0D%0A%20%20process(%5Bx%5D)%20%7B%0D%0A%20%20%20%20return%20x.name%3B%0D%0A%20%20%7D%2C%0D%0A%20%20finish(%5Bx%5D)%20%7B%0D%0A%20%20%20%20console.log(x)%3B%0D%0A%20%20%7D%0D%0A%7D)%3B%0D%0A) **Related Issues:** didn't find any ",1.0,"Inconsistency in types after type inference - I'm currently working on providing a nice API in a library where users will be providing implementations of a particular interface. Interface's methods are meant to be executed async in a particular order. Result of calling one method will be transformed in an known way and passed to the next method. In order to not force my users typing signatures again and again for every part of the process in type parameters, I wanted to benefit from type inference and not to have specify types at all, while still having proper type checking. Example below illustrates the intention. While trying things out — faced some inconsistencies in typescript (or VS Code?) which IMHO worth reporting. 
**TypeScript Version:** 2.9.0-dev.201xxxxx **Search Terms:** ""inference"" **Code** ```ts type F = [X]; type G = [Y]; interface IProcess { start(): X; process(opt: F): Y; finish(opt: G): void; } function foo(desc: IProcess): void { const x = desc.start(); const y = desc.process([x]); desc.finish([y]); } foo({ start() { return { name: ""Joe"" }; }, process([x]) { return x.name; }, finish([x]) { console.log(x); } }); ``` **Expected behavior:** In `process` function `x.name` would be of a type `string` **Actual behavior:** * `Property 'name' does not exist on type 'XX'.` * Even though: `var x: XX = { name: string }` ![types](https://user-images.githubusercontent.com/72801/40473581-d9db1690-5f3c-11e8-822a-06cad6e31e91.gif) **Playground Link:** [playground link](http://www.typescriptlang.org/play/#src=type%20F%3CX%3E%20%3D%20%5BX%5D%3B%0D%0Atype%20G%3CY%3E%20%3D%20%5BY%5D%3B%0D%0Ainterface%20IProcess%3CX%2C%20Y%3E%20%7B%0D%0A%20%20start()%3A%20X%3B%0D%0A%20%20process%3CXX%20%3D%20X%3E(opt%3A%20F%3CXX%3E)%3A%20Y%3B%0D%0A%20%20finish(opt%3A%20G%3CY%3E)%3A%20void%3B%0D%0A%7D%0D%0Afunction%20foo%3CX%2C%20Y%3E(desc%3A%20IProcess%3CX%2C%20Y%3E)%3A%20void%20%7B%0D%0A%20%20const%20x%20%3D%20desc.start()%3B%0D%0A%20%20const%20y%20%3D%20desc.process(%5Bx%5D)%3B%0D%0A%20%20desc.finish(%5By%5D)%3B%0D%0A%7D%0D%0Afoo(%7B%0D%0A%20%20start()%20%7B%0D%0A%20%20%20%20return%20%7B%0D%0A%20%20%20%20%20%20name%3A%20%22Joe%22%0D%0A%20%20%20%20%7D%3B%0D%0A%20%20%7D%2C%0D%0A%20%20process(%5Bx%5D)%20%7B%0D%0A%20%20%20%20return%20x.name%3B%0D%0A%20%20%7D%2C%0D%0A%20%20finish(%5Bx%5D)%20%7B%0D%0A%20%20%20%20console.log(x)%3B%0D%0A%20%20%7D%0D%0A%7D)%3B%0D%0A) **Related Issues:** didn't find any ",0,inconsistency in types after type inference i m currently working on providing a nice api in a library where users will be providing implementations of a particular interface interface s methods are meant to be executed async in a particular order result of calling one method will be transformed in an known way and passed to the next method in order to not force my users typing signatures again and again for every part of the process in type parameters i wanted to benefit from type inference and not to have specify types at all while still having proper type checking example below illustrates the intention while trying things out — faced some inconsistencies in typescript or vs code which imho worth reporting typescript version dev search terms inference code ts type f type g interface iprocess start x process opt f y finish opt g void function foo desc iprocess void const x desc start const y desc process desc finish foo start return name joe process return x name finish console log x expected behavior in process function x name would be of a type string actual behavior property name does not exist on type xx even though var x xx name string playground link related issues didn t find any ,0 155021,24391428391.0,IssuesEvent,2022-10-04 15:30:29,Qiskit/platypus,https://api.github.com/repos/Qiskit/platypus,closed,Lesson 1 midway animation,content design motion & animation,Halfway through the first lesson there is a natural transition that would benefit from animation. ,1.0,Lesson 1 midway animation - Halfway through the first lesson there is a natural transition that would benefit from animation. 
,0,lesson midway animation halfway through the first lesson there is a natural transition that would benefit from animation ,0 6617,23551096051.0,IssuesEvent,2022-08-21 20:55:54,submariner-io/shipyard,https://api.github.com/repos/submariner-io/shipyard,closed,Test day tooling (OpenShift),automation size:medium priority:medium confirmed,"We're having regular test days but the process of standing up clusters for testing is manual (except KIND based), it could be beneficial to develop some automated tooling (eg some scripts or a binary) that will do a lot of the manual work for standing up the clusters. E.G. to stand up a test env on AWS one needs to: 1. Deploy a couple of clusters on AWS, using the desired OpenShift version 2. Alter kubeconfigs for properly working with multiple clusters 3. Prepare the cloud for submariner deployment 4. Deploy submariner properly (e.g. globalnet if clusters were deployed with overlapping IPs, specific cable driver is needed) 5. Run `subctl verify` 6. Open an issue labeled with the testday label with `subctl gather` output in case of fasilure (of the verify or manual testing) Seems like most of these steps can be rather easily automated in one or a few commands which will reduce user-errors and streamline test days. It also seems that these steps will be rather similar for different clouds when deploying with OpenShift. This might also ease the testing process, thus encouraging participation in the test days.",1.0,"Test day tooling (OpenShift) - We're having regular test days but the process of standing up clusters for testing is manual (except KIND based), it could be beneficial to develop some automated tooling (eg some scripts or a binary) that will do a lot of the manual work for standing up the clusters. E.G. to stand up a test env on AWS one needs to: 1. Deploy a couple of clusters on AWS, using the desired OpenShift version 2. Alter kubeconfigs for properly working with multiple clusters 3. Prepare the cloud for submariner deployment 4. Deploy submariner properly (e.g. globalnet if clusters were deployed with overlapping IPs, specific cable driver is needed) 5. Run `subctl verify` 6. Open an issue labeled with the testday label with `subctl gather` output in case of fasilure (of the verify or manual testing) Seems like most of these steps can be rather easily automated in one or a few commands which will reduce user-errors and streamline test days. It also seems that these steps will be rather similar for different clouds when deploying with OpenShift. 
This might also ease the testing process, thus encouraging participation in the test days.",1,test day tooling openshift we re having regular test days but the process of standing up clusters for testing is manual except kind based it could be beneficial to develop some automated tooling eg some scripts or a binary that will do a lot of the manual work for standing up the clusters e g to stand up a test env on aws one needs to deploy a couple of clusters on aws using the desired openshift version alter kubeconfigs for properly working with multiple clusters prepare the cloud for submariner deployment deploy submariner properly e g globalnet if clusters were deployed with overlapping ips specific cable driver is needed run subctl verify open an issue labeled with the testday label with subctl gather output in case of fasilure of the verify or manual testing seems like most of these steps can be rather easily automated in one or a few commands which will reduce user errors and streamline test days it also seems that these steps will be rather similar for different clouds when deploying with openshift this might also ease the testing process thus encouraging participation in the test days ,1 78966,9811768613.0,IssuesEvent,2019-06-13 01:20:22,pwa-builder/PWABuilder,https://api.github.com/repos/pwa-builder/PWABuilder,reopened,Code tags [34],design,"**Describe the bug** Code tags doesn't match design **Expected behavior** - [x] missing border radius - [ ] missing font treatment - [x] 8px padding left/right _code_ ``` /* Rectangle 2 */ position: absolute; height: 24px; left: 0%; right: 0%; top: 0px; background: rgba(60, 60, 60, 0.05); border-radius: 4px; /* app_name */ position: absolute; height: 24px; left: 8px; right: 8px; top: calc(50% - 24px/2); font-family: Fira Code; font-style: normal; font-weight: normal; font-size: 12px; line-height: 14px; display: flex; align-items: center; text-align: center; color: #000000; ``` **Screenshots** In current build: ![image](https://user-images.githubusercontent.com/47799119/58820239-e73cbb00-85e6-11e9-9325-d1c879acc64a.png) Design treatment: ![image](https://user-images.githubusercontent.com/47799119/58820283-063b4d00-85e7-11e9-858d-ea985917a6a7.png) **Additional info (please complete the following information):** - _OS_ | Windows 10 OS Build 18362.113 - _Browser_ | Google Chrome - _Browser version:_ 74.0.3729.169 (Official Build) (64-bit) (cohort: Stable) _Revision_ | 78e4f8db3ce38f6c26cf56eed7ae9b331fc67ada-refs/branch-heads/3729@{#1013} ",1.0,"Code tags [34] - **Describe the bug** Code tags doesn't match design **Expected behavior** - [x] missing border radius - [ ] missing font treatment - [x] 8px padding left/right _code_ ``` /* Rectangle 2 */ position: absolute; height: 24px; left: 0%; right: 0%; top: 0px; background: rgba(60, 60, 60, 0.05); border-radius: 4px; /* app_name */ position: absolute; height: 24px; left: 8px; right: 8px; top: calc(50% - 24px/2); font-family: Fira Code; font-style: normal; font-weight: normal; font-size: 12px; line-height: 14px; display: flex; align-items: center; text-align: center; color: #000000; ``` **Screenshots** In current build: ![image](https://user-images.githubusercontent.com/47799119/58820239-e73cbb00-85e6-11e9-9325-d1c879acc64a.png) Design treatment: ![image](https://user-images.githubusercontent.com/47799119/58820283-063b4d00-85e7-11e9-858d-ea985917a6a7.png) **Additional info (please complete the following information):** - _OS_ | Windows 10 OS Build 18362.113 - _Browser_ | Google Chrome - _Browser version:_ 
74.0.3729.169 (Official Build) (64-bit) (cohort: Stable) _Revision_ | 78e4f8db3ce38f6c26cf56eed7ae9b331fc67ada-refs/branch-heads/3729@{#1013} ",0,code tags describe the bug code tags doesn t match design expected behavior missing border radius missing font treatment padding left right code rectangle position absolute height left right top background rgba border radius app name position absolute height left right top calc font family fira code font style normal font weight normal font size line height display flex align items center text align center color screenshots in current build design treatment additional info please complete the following information os windows  os build browser google chrome browser version   official build   bit   cohort stable revision refs branch heads ,0 3700,14369697600.0,IssuesEvent,2020-12-01 10:06:01,elastic/integrations,https://api.github.com/repos/elastic/integrations,closed,[CI] Introduce concurrency in package testing,team:automation,"Currently we run pipeline tests for all packages first, followed by system tests for all packages. As the number of packages grows, as we define pipeline and system tests for more packages, and when we add a third type of test ([asset loading tests](https://github.com/elastic/elastic-package/issues/115)), this serial ordering will slow down CI jobs. An initial optimization might be to run each type of test for all packages concurrently. So if we have N packages and M test types, there will be M concurrent sub-jobs, one for running each type of test on the N packages. A further optimization might be to run each type of test (pipeline, system, etc.) for each package concurrently. So if we have N packages and M test types, there will be N x M concurrent sub-jobs, one for running each type of test on each package.",1.0,"[CI] Introduce concurrency in package testing - Currently we run pipeline tests for all packages first, followed by system tests for all packages. As the number of packages grows, as we define pipeline and system tests for more packages, and when we add a third type of test ([asset loading tests](https://github.com/elastic/elastic-package/issues/115)), this serial ordering will slow down CI jobs. An initial optimization might be to run each type of test for all packages concurrently. So if we have N packages and M test types, there will be M concurrent sub-jobs, one for running each type of test on the N packages. A further optimization might be to run each type of test (pipeline, system, etc.) for each package concurrently. 
So if we have N packages and M test types, there will be N x M concurrent sub-jobs, one for running each type of test on each package.",1, introduce concurrency in package testing currently we run pipeline tests for all packages first followed by system tests for all packages as the number of packages grows as we define pipeline and system tests for more packages and when we add a third type of test this serial ordering will slow down ci jobs an initial optimization might be to run each type of test for all packages concurrently so if we have n packages and m test types there will be m concurrent sub jobs one for running each type of test on the n packages a further optimization might be to run each type of test pipeline system etc for each package concurrently so if we have n packages and m test types there will be n x m concurrent sub jobs one for running each type of test on each package ,1 76019,21103720574.0,IssuesEvent,2022-04-04 16:38:01,envoyproxy/envoy,https://api.github.com/repos/envoyproxy/envoy,closed,deps: Adopt patch free Chromium URL library,area/build help wanted,"When we have upstream Chromium to give us an IDN-free optional build we can adopt Chromium URL library without patching it. Context: https://github.com/envoyproxy/envoy/pull/14583#discussion_r559660170",1.0,"deps: Adopt patch free Chromium URL library - When we have upstream Chromium to give us an IDN-free optional build we can adopt Chromium URL library without patching it. Context: https://github.com/envoyproxy/envoy/pull/14583#discussion_r559660170",0,deps adopt patch free chromium url library when we have upstream chromium to give us an idn free optional build we can adopt chromium url library without patching it context ,0 831,8343188285.0,IssuesEvent,2018-09-30 01:09:24,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Question on Contributor role,automation/svc cxp product-question triaged,"For the Contributor role it says: Action: Microsoft.Automation/automationAccounts/ Description: Create and manage resources of all types I wanted to check if Action could contain the asterisk at the end to indicate access to all resources below this level? For example ""Microsoft.Automation/automationAccounts/*"" --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 0db74982-1ecc-bfed-5cd1-e68450963636 * Version Independent ID: 75190263-b53d-6346-b3ad-0c0c2715bc5b * Content: [Role-based access control in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-role-based-access-control) * Content Source: [articles/automation/automation-role-based-access-control.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-role-based-access-control.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1.0,"Question on Contributor role - For the Contributor role it says: Action: Microsoft.Automation/automationAccounts/ Description: Create and manage resources of all types I wanted to check if Action could contain the asterisk at the end to indicate access to all resources below this level? For example ""Microsoft.Automation/automationAccounts/*"" --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 0db74982-1ecc-bfed-5cd1-e68450963636 * Version Independent ID: 75190263-b53d-6346-b3ad-0c0c2715bc5b * Content: [Role-based access control in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-role-based-access-control) * Content Source: [articles/automation/automation-role-based-access-control.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-role-based-access-control.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1,question on contributor role for the contributor role it says action microsoft automation automationaccounts description create and manage resources of all types i wanted to check if action could contain the asterisk at the end to indicate access to all resources below this level for example microsoft automation automationaccounts document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id bfed version independent id content content source service automation github login georgewallace microsoft alias gwallace ,1 1301,9853123957.0,IssuesEvent,2019-06-19 14:12:15,spacemeshos/go-spacemesh,https://api.github.com/repos/spacemeshos/go-spacemesh,closed,Split system tests in travis,CI automation,"# Overview / Motivation Travis runs system tests as 1 step. In order to bring more visibility on tests we wich to split the system tests for each protocol # The Task Split system tests for: p2p mining hare sync # Implementation Notes TODO: Add links to relevant resources, specs, related issues, etc... # Contribution Guidelines Important: Issue assignment to developers will be by the order of their application and proficiency level according to the tasks complexity. We will not assign tasks to developers who have'nt introduced themselves on our Gitter [dev channel](https://gitter.im/spacemesh-os/Lobby) 1. Introduce yourself on go-spacemesh [dev chat channel](https://gitter.im/spacemesh-os/Lobby) - ask our team any question you may have about this task 2. Fork branch `develop` to your own repo and work in your repo 3. You must document all methods, enums and types with [godoc comments](https://blog.golang.org/godoc-documenting-go-code) 4. You must write go unit tests for all types and methods when submitting a component, and integration tests if you submit a feature 5. When ready for code review, submit a PR from your repo back to branch `develop` 6. Attach relevant issue to PR ",1.0,"Split system tests in travis - # Overview / Motivation Travis runs system tests as 1 step. In order to bring more visibility on tests we wich to split the system tests for each protocol # The Task Split system tests for: p2p mining hare sync # Implementation Notes TODO: Add links to relevant resources, specs, related issues, etc... # Contribution Guidelines Important: Issue assignment to developers will be by the order of their application and proficiency level according to the tasks complexity. We will not assign tasks to developers who have'nt introduced themselves on our Gitter [dev channel](https://gitter.im/spacemesh-os/Lobby) 1. Introduce yourself on go-spacemesh [dev chat channel](https://gitter.im/spacemesh-os/Lobby) - ask our team any question you may have about this task 2. Fork branch `develop` to your own repo and work in your repo 3. You must document all methods, enums and types with [godoc comments](https://blog.golang.org/godoc-documenting-go-code) 4. 
You must write go unit tests for all types and methods when submitting a component, and integration tests if you submit a feature 5. When ready for code review, submit a PR from your repo back to branch `develop` 6. Attach relevant issue to PR ",1,split system tests in travis overview motivation travis runs system tests as step in order to bring more visibility on tests we wich to split the system tests for each protocol the task split system tests for mining hare sync implementation notes todo add links to relevant resources specs related issues etc contribution guidelines important issue assignment to developers will be by the order of their application and proficiency level according to the tasks complexity we will not assign tasks to developers who have nt introduced themselves on our gitter introduce yourself on go spacemesh ask our team any question you may have about this task fork branch develop to your own repo and work in your repo you must document all methods enums and types with you must write go unit tests for all types and methods when submitting a component and integration tests if you submit a feature when ready for code review submit a pr from your repo back to branch develop attach relevant issue to pr ,1 2621,12345272820.0,IssuesEvent,2020-05-15 08:39:15,jcallaghan/home-assistant-config,https://api.github.com/repos/jcallaghan/home-assistant-config,opened,Fresh milk delivery,calendar household integration: automation routine,"### Background I have recently started getting fresh milk and eggs from the local dairy. This morning I completely forgot that it gets delivered Mondays and Fridays. So my milk was sat on the doorstep as I started my day. Luckily after making my first mug I tea, I remembered I needed to fetch it in. This got me thinking, and here we are issue #. ### Objective Create an unobtrusive sensor/thing to remind me to place out clean empty bottles the night before milk day but also bring the fresh bottles in on milk day as soon as possible. ### Ideas * The automation could leverage motion in the hallway, front door contact or the disarming of the alarm and using TTS to provide an audible reminder. * Using some kind of tray where the milk bottles can be placed will allow me to use a sensor to provide insight if the task has completed. * I think an Aqara vibration or contact sensor would be discrete and mobile allow me to retrofit it onto the milk bottle holder.",1.0,"Fresh milk delivery - ### Background I have recently started getting fresh milk and eggs from the local dairy. This morning I completely forgot that it gets delivered Mondays and Fridays. So my milk was sat on the doorstep as I started my day. Luckily after making my first mug I tea, I remembered I needed to fetch it in. This got me thinking, and here we are issue #. ### Objective Create an unobtrusive sensor/thing to remind me to place out clean empty bottles the night before milk day but also bring the fresh bottles in on milk day as soon as possible. ### Ideas * The automation could leverage motion in the hallway, front door contact or the disarming of the alarm and using TTS to provide an audible reminder. * Using some kind of tray where the milk bottles can be placed will allow me to use a sensor to provide insight if the task has completed. 
* I think an Aqara vibration or contact sensor would be discrete and mobile allow me to retrofit it onto the milk bottle holder.",1,fresh milk delivery background i have recently started getting fresh milk and eggs from the local dairy this morning i completely forgot that it gets delivered mondays and fridays so my milk was sat on the doorstep as i started my day luckily after making my first mug i tea i remembered i needed to fetch it in this got me thinking and here we are issue objective create an unobtrusive sensor thing to remind me to place out clean empty bottles the night before milk day but also bring the fresh bottles in on milk day as soon as possible ideas the automation could leverage motion in the hallway front door contact or the disarming of the alarm and using tts to provide an audible reminder using some kind of tray where the milk bottles can be placed will allow me to use a sensor to provide insight if the task has completed i think an aqara vibration or contact sensor would be discrete and mobile allow me to retrofit it onto the milk bottle holder ,1 16353,31023720511.0,IssuesEvent,2023-08-10 07:41:38,malkitbenning/Coursework-Planner,https://api.github.com/repos/malkitbenning/Coursework-Planner,opened,[TECH ED] Module Project: Level 250,🏕 Priority Mandatory 🎯 Topic Requirements 🎯 Topic Delivery 🎯 Topic Iteration 🐋 Size X-Large Week 2 📅 Databases,"From Module-Databases created by [SallyMcGrath](https://github.com/SallyMcGrath): CodeYourFuture/Module-Databases#6 ### Link to the coursework https://github.com/CodeYourFuture/Full-Stack-Project-Assessment ### Why are we doing this? Continue building your full stack project. You must reach level 250 by the end of this week and seek code review. Make sure you complete each level before moving on to the next stage. Your project can only be assessed as reaching a level when all the requirements for that level are met. Read the requirements carefully. When you get stuck, open a draft PR and explain your blocker. Get help from colleagues and mentors. As a professional developer, you will often encounter blockers in your daily work. It can feel frustrating. Learning to share your blockers productively and resolve them collaboratively is an important step in becoming a good developer. ### Maximum time in hours 8 ### How to get help Share your blockers in your class channel. Use the opportunity to refine your skill in [Asking Questions](https://syllabus.codeyourfuture.io/guides/asking-questions) like a developer. ### How to submit 1. Fork to your Github account. 2. Make regular small commits with clear messages. 3. When you are ready, open a PR to the CYF repo, following the instructions in the PR template. ### How to review 1. Complete your PR template 2. Ask for review from a classmate or mentor 3. Make changes based on their feedback 4. Review and refactor again next week ### Anything else? _No response_",1.0,"[TECH ED] Module Project: Level 250 - From Module-Databases created by [SallyMcGrath](https://github.com/SallyMcGrath): CodeYourFuture/Module-Databases#6 ### Link to the coursework https://github.com/CodeYourFuture/Full-Stack-Project-Assessment ### Why are we doing this? Continue building your full stack project. You must reach level 250 by the end of this week and seek code review. Make sure you complete each level before moving on to the next stage. Your project can only be assessed as reaching a level when all the requirements for that level are met. Read the requirements carefully. 
When you get stuck, open a draft PR and explain your blocker. Get help from colleagues and mentors. As a professional developer, you will often encounter blockers in your daily work. It can feel frustrating. Learning to share your blockers productively and resolve them collaboratively is an important step in becoming a good developer. ### Maximum time in hours 8 ### How to get help Share your blockers in your class channel. Use the opportunity to refine your skill in [Asking Questions](https://syllabus.codeyourfuture.io/guides/asking-questions) like a developer. ### How to submit 1. Fork to your Github account. 2. Make regular small commits with clear messages. 3. When you are ready, open a PR to the CYF repo, following the instructions in the PR template. ### How to review 1. Complete your PR template 2. Ask for review from a classmate or mentor 3. Make changes based on their feedback 4. Review and refactor again next week ### Anything else? _No response_",0, module project level from module databases created by codeyourfuture module databases link to the coursework why are we doing this continue building your full stack project you must reach level by the end of this week and seek code review make sure you complete each level before moving on to the next stage your project can only be assessed as reaching a level when all the requirements for that level are met read the requirements carefully when you get stuck open a draft pr and explain your blocker get help from colleagues and mentors as a professional developer you will often encounter blockers in your daily work it can feel frustrating learning to share your blockers productively and resolve them collaboratively is an important step in becoming a good developer maximum time in hours how to get help share your blockers in your class channel use the opportunity to refine your skill in like a developer how to submit fork to your github account make regular small commits with clear messages when you are ready open a pr to the cyf repo following the instructions in the pr template how to review complete your pr template ask for review from a classmate or mentor make changes based on their feedback review and refactor again next week anything else no response ,0 152268,23941465882.0,IssuesEvent,2022-09-11 23:52:52,boostorg/url,https://api.github.com/repos/boostorg/url,closed,Discuss set_scheme vs set_scheme_id,Design,Discuss whether it should be `url_base::set_scheme` or `url_base::set_scheme_id`,1.0,Discuss set_scheme vs set_scheme_id - Discuss whether it should be `url_base::set_scheme` or `url_base::set_scheme_id`,0,discuss set scheme vs set scheme id discuss whether it should be url base set scheme or url base set scheme id ,0 3778,14548754692.0,IssuesEvent,2020-12-16 02:04:03,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,Page is scrolled down when I type a text with the space symbol (Safari 11),BROWSER: Safari STATE: Need improvement STATE: Stale SYSTEM: automations,"### Are you requesting a feature or reporting a bug? bug ### What is the current behavior? Page is scrolled down when I type a text with the space symbol into a content editable element. ### What is the expected behavior? Page should not be scrolled. ### How would you reproduce the current behavior (if this is a bug)? Start the client test into the Safari 11. 
#### Provide the test code and the tested page URL (if applicable) Tested page URL: `test\client\fixtures\automation\content-editable\api-actions-content-editable-test\index-test.js` ### Specify your * operating system: win 8.1 * testcafe version: 0.18.6-dev20171211 * node.js version: 9.0.0",1.0,"Page is scrolled down when I type a text with the space symbol (Safari 11) - ### Are you requesting a feature or reporting a bug? bug ### What is the current behavior? Page is scrolled down when I type a text with the space symbol into a content editable element. ### What is the expected behavior? Page should not be scrolled. ### How would you reproduce the current behavior (if this is a bug)? Start the client test into the Safari 11. #### Provide the test code and the tested page URL (if applicable) Tested page URL: `test\client\fixtures\automation\content-editable\api-actions-content-editable-test\index-test.js` ### Specify your * operating system: win 8.1 * testcafe version: 0.18.6-dev20171211 * node.js version: 9.0.0",1,page is scrolled down when i type a text with the space symbol safari are you requesting a feature or reporting a bug bug what is the current behavior page is scrolled down when i type a text with the space symbol into a content editable element what is the expected behavior page should not be scrolled how would you reproduce the current behavior if this is a bug start the client test into the safari provide the test code and the tested page url if applicable tested page url test client fixtures automation content editable api actions content editable test index test js specify your operating system win testcafe version node js version ,1 7564,25133909692.0,IssuesEvent,2022-11-09 17:02:57,kubefirst/kubefirst,https://api.github.com/repos/kubefirst/kubefirst,opened, Improve Release Process - CI/Process,automation,"## Improve Release Process In order to make sure our releases are covered by automation validations we could have a choice of making a ""safe-release"", meaning if choose so before we trigger the release creation process we trigger also some automation to verify the release. ```bash - Define a `commit_id` - TEST_INSTALL: In parallel: - CHECK_BINARY: build code and run `kubefirst info` - smoke test - LOCAL_TEST: IF `CHECK_BINARY` works: trigger a pod to run a local install(check if works) - smoke test - AWS_GITLAB_TEST: IF `LOCAL_TEST` works: trigger a pod to run a aws-gitlab - `domain gitlab.ci.domain.com` - AWS_GITHUB_TEST: IF `LOCAL_TEST` works: trigger a pod to run a aws-github - `domain github.ci.domain.com` > `AWS_GITLAB_TEST` and `AWS_GITHUB_TEST` could work in parallel. - IF `TEST_INSTALL` works: Generate release from a `commit_id` -> As we do today ``` ",1.0," Improve Release Process - CI/Process - ## Improve Release Process In order to make sure our releases are covered by automation validations we could have a choice of making a ""safe-release"", meaning if choose so before we trigger the release creation process we trigger also some automation to verify the release. 
```bash - Define a `commit_id` - TEST_INSTALL: In parallel: - CHECK_BINARY: build code and run `kubefirst info` - smoke test - LOCAL_TEST: IF `CHECK_BINARY` works: trigger a pod to run a local install(check if works) - smoke test - AWS_GITLAB_TEST: IF `LOCAL_TEST` works: trigger a pod to run a aws-gitlab - `domain gitlab.ci.domain.com` - AWS_GITHUB_TEST: IF `LOCAL_TEST` works: trigger a pod to run a aws-github - `domain github.ci.domain.com` > `AWS_GITLAB_TEST` and `AWS_GITHUB_TEST` could work in parallel. - IF `TEST_INSTALL` works: Generate release from a `commit_id` -> As we do today ``` ",1, improve release process ci process improve release process in order to make sure our releases are covered by automation validations we could have a choice of making a safe release meaning if choose so before we trigger the release creation process we trigger also some automation to verify the release bash define a commit id test install in parallel check binary build code and run kubefirst info smoke test local test if check binary works trigger a pod to run a local install check if works smoke test aws gitlab test if local test works trigger a pod to run a aws gitlab domain gitlab ci domain com aws github test if local test works trigger a pod to run a aws github domain github ci domain com aws gitlab test and aws github test could work in parallel if test install works generate release from a commit id as we do today ,1 9753,30489498401.0,IssuesEvent,2023-07-18 06:35:07,jakubfiglak/digestor-5000,https://api.github.com/repos/jakubfiglak/digestor-5000,opened,Automation around submitting a resource,:robot: automation,"- get page metadata (use something like https://jsonlink.io/ or https://urlmeta.org/ perhaps) and save it to the database - potentially run through some AI engine and generate relevant tags?",1.0,"Automation around submitting a resource - - get page metadata (use something like https://jsonlink.io/ or https://urlmeta.org/ perhaps) and save it to the database - potentially run through some AI engine and generate relevant tags?",1,automation around submitting a resource get page metadata use something like or perhaps and save it to the database potentially run through some ai engine and generate relevant tags ,1 1459,10164366912.0,IssuesEvent,2019-08-07 11:30:13,elastic/apm-server,https://api.github.com/repos/elastic/apm-server,opened,[Automation][ci] more resilience to environmental issues ,automation ci enhancement,"Some broken builds were related to some issues when accessing to dockerhub, in order to minimize this particular glitches we will retry twice with sleep time, it might take a bit longer the builds but ideally, it might reduce the broken builds when there are issues related to accessing dockerhub for instance. An example of the typical environmental issue is: ![image](https://user-images.githubusercontent.com/2871786/62619571-fe619700-b90e-11e9-91d9-22db50e127a0.png) ",1.0,"[Automation][ci] more resilience to environmental issues - Some broken builds were related to some issues when accessing to dockerhub, in order to minimize this particular glitches we will retry twice with sleep time, it might take a bit longer the builds but ideally, it might reduce the broken builds when there are issues related to accessing dockerhub for instance. 
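For the digestor-5000 resource-submission idea above (fetch page metadata, then store it): rather than committing to jsonlink.io or urlmeta.org, the lookup can be sketched with the Python standard library alone. This is only an illustration, not the project's implementation; persisting the result to the database is left out.

```python
from html.parser import HTMLParser
from urllib.request import urlopen


class MetaParser(HTMLParser):
    """Collect <title> text and <meta> name/property -> content pairs."""

    def __init__(self):
        super().__init__()
        self.meta = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta":
            key = attrs.get("property") or attrs.get("name")
            if key and "content" in attrs:
                self.meta[key] = attrs["content"]

    def handle_data(self, data):
        if self._in_title:
            self.meta.setdefault("title", data.strip())
            self._in_title = False


def fetch_metadata(url: str) -> dict:
    """Return a dict of page metadata (title, og:* tags, description...)."""
    with urlopen(url, timeout=10) as resp:
        parser = MetaParser()
        parser.feed(resp.read().decode("utf-8", errors="replace"))
    return parser.meta
```

The `og:title`/`og:description` pairs this collects are also what a tagging step (the "AI engine" idea) would most naturally consume.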
An example of the typical environmental issue is: ![image](https://user-images.githubusercontent.com/2871786/62619571-fe619700-b90e-11e9-91d9-22db50e127a0.png) ",1, more resilience to environmental issues some broken builds were related to some issues when accessing to dockerhub in order to minimize this particular glitches we will retry twice with sleep time it might take a bit longer the builds but ideally it might reduce the broken builds when there are issues related to accessing dockerhub for instance an example of the typical environmental issue is ,1 50137,7569098463.0,IssuesEvent,2018-04-23 02:09:59,dpiessens/specbind,https://api.github.com/repos/dpiessens/specbind,closed,"Typos on ""Waiting Steps"" wiki page",Documentation bug,"Hi. I noticed a few typos on the [Waiting Steps](https://github.com/dpiessens/specbind/wiki/Waiting-Steps) wiki page. They're all in the ""Check to see if ajax calls are completed"" section, at the very bottom. 1. I noticed there are 4 tables when there should only be 2. 2. All the `Given` steps should read `I waited for ...` 3. All the `When` and `Then` steps should read `I wait for ...` Thank you.",1.0,"Typos on ""Waiting Steps"" wiki page - Hi. I noticed a few typos on the [Waiting Steps](https://github.com/dpiessens/specbind/wiki/Waiting-Steps) wiki page. They're all in the ""Check to see if ajax calls are completed"" section, at the very bottom. 1. I noticed there are 4 tables when there should only be 2. 2. All the `Given` steps should read `I waited for ...` 3. All the `When` and `Then` steps should read `I wait for ...` Thank you.",0,typos on waiting steps wiki page hi i noticed a few typos on the wiki page they re all in the check to see if ajax calls are completed section at the very bottom i noticed there are tables when there should only be all the given steps should read i waited for all the when and then steps should read i wait for thank you ,0 5368,19341880863.0,IssuesEvent,2021-12-15 06:09:49,tikv/pd,https://api.github.com/repos/tikv/pd,closed,PD does not initiate scheduling normally when set a tiflash replica leading to tiflash table can not be synchronized,type/bug severity/moderate found/automation,"## Bug Report ### What did you do? In [tiflash_regression_test_daily](https://ci.pingcap.net/blue/organizations/jenkins/tiflash_regression_test_daily/detail/tiflash_regression_test_daily/940/pipeline/), PD does not initiate scheduling normally when set a tiflash replica pd log: [_tmp_ti_ci_release_pd_pd.log](https://github.com/tikv/pd/files/7578127/_tmp_ti_ci_release_pd_pd.log) ### What did you expect to see? 当设置 replica 副本后 在 pd.log 同步过程会有如下调度日志 : `[""add operator""] [region-id=...] [operator=""\""rule-split-region ... keys [ ]} ... ]` `[""add operator""] [region-id=...] [operator=""\""add-rule-peer ..., steps:[add learner peer ... on store ` ### What did you see instead? 
在 pd.log 观察并没有出现调度日志,实际检查中表也没有成功同步 tiflash replica `[2021/11/21 10:59:29.586 +00:00] [INFO] [rule_manager.go:243] [""placement rule updated""] [rule=""{\""group_id\"":\""tiflash\"",\""id\"":\""table-65-r\"",\""override\"":true,\""start_key\"":\""7480000000000000FF415F720000000000FA\"",\""end_key\"":\""7480000000000000FF4200000000000000F8\"",\""role\"":\""learner\"",\""count\"":1,\""label_constraints\"":[{\""key\"":\""engine\"",\""op\"":\""in\"",\""values\"":[\""tiflash\""]}],\""create_timestamp\"":1637492369}""] [2021/11/21 10:59:29.675 +00:00] [INFO] [operator_controller.go:437] [""add operator""] [region-id=25] [operator=""\""balance-region {mv peer: store [1] to [4]} (kind:region,leader, region:25(12,1), createAt:2021-11-21 10:59:29.675239984 +0000 UTC m=+249.320532592, startAt:0001-01-01 00:00:00 +0000 UTC, currentStep:0, steps:[add learner peer 86 on store 4, use joint consensus, promote learner peer 86 on store 4 to voter, demote voter peer 26 on store 1 to learner, transfer leader from store 1 to store 4, leave joint state, promote learner peer 86 on store 4 to voter, demote voter peer 26 on store 1 to learner, remove peer on store 1])\""""] [additional-info=""{\""sourceScore\"":\""125.24\"",\""targetScore\"":\""91.40\""}""] [2021/11/21 10:59:29.675 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=25] [step=""add learner peer 86 on store 4""] [source=create] [2021/11/21 10:59:29.677 +00:00] [INFO] [region.go:531] [""region ConfVer changed""] [region-id=25] [detail=""Add peer:{id:86 store_id:4 role:Learner }""] [old-confver=1] [new-confver=2] [2021/11/21 10:59:29.677 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=25] [step=""add learner peer 86 on store 4""] [source=heartbeat] [2021/11/21 10:59:30.679 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=25] [step=""use joint consensus, promote learner peer 86 on store 4 to voter, demote voter peer 26 on store 1 to learner""] [source=heartbeat] [2021/11/21 10:59:30.680 +00:00] [INFO] [region.go:531] [""region ConfVer changed""] [region-id=25] [detail=""Remove peer:{id:26 store_id:1 },Remove peer:{id:86 store_id:4 role:Learner },Add peer:{id:26 store_id:1 role:DemotingVoter },Add peer:{id:86 store_id:4 role:IncomingVoter }""] [old-confver=2] [new-confver=4] [2021/11/21 10:59:30.680 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=25] [step=""transfer leader from store 1 to store 4""] [source=heartbeat] [2021/11/21 10:59:30.681 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=25] [step=""transfer leader from store 1 to store 4""] [source=heartbeat] [2021/11/21 10:59:30.684 +00:00] [INFO] [region.go:543] [""leader changed""] [region-id=25] [from=1] [to=4] [2021/11/21 10:59:30.684 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=25] [step=""leave joint state, promote learner peer 86 on store 4 to voter, demote voter peer 26 on store 1 to learner""] [source=heartbeat] [2021/11/21 10:59:30.685 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=25] [step=""leave joint state, promote learner peer 86 on store 4 to voter, demote voter peer 26 on store 1 to learner""] [source=heartbeat] [2021/11/21 10:59:30.686 +00:00] [INFO] [region.go:531] [""region ConfVer changed""] [region-id=25] [detail=""Remove peer:{id:26 store_id:1 role:DemotingVoter },Remove peer:{id:86 store_id:4 role:IncomingVoter },Add peer:{id:26 
store_id:1 role:Learner },Add peer:{id:86 store_id:4 }""] [old-confver=4] [new-confver=6] [2021/11/21 10:59:30.686 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=25] [step=""remove peer on store 1""] [source=heartbeat] [2021/11/21 10:59:30.688 +00:00] [INFO] [region.go:531] [""region ConfVer changed""] [region-id=25] [detail=""Remove peer:{id:26 store_id:1 role:Learner }""] [old-confver=6] [new-confver=7] [2021/11/21 10:59:30.688 +00:00] [INFO] [operator_controller.go:552] [""operator finish""] [region-id=25] [takes=1.012679281s] [operator=""\""balance-region {mv peer: store [1] to [4]} (kind:region,leader, region:25(12,1), createAt:2021-11-21 10:59:29.675239984 +0000 UTC m=+249.320532592, startAt:2021-11-21 10:59:29.675433739 +0000 UTC m=+249.320726316, currentStep:5, steps:[add learner peer 86 on store 4, use joint consensus, promote learner peer 86 on store 4 to voter, demote voter peer 26 on store 1 to learner, transfer leader from store 1 to store 4, leave joint state, promote learner peer 86 on store 4 to voter, demote voter peer 26 on store 1 to learner, remove peer on store 1]) finished\""""] [additional-info=""{\""sourceScore\"":\""125.24\"",\""targetScore\"":\""91.40\""}""] [2021/11/21 10:59:31.313 +00:00] [INFO] [cluster_worker.go:128] [""alloc ids for region split""] [region-id=87] [peer-ids=""[88,89]""] [2021/11/21 10:59:31.315 +00:00] [INFO] [region.go:522] [""region Version changed""] [region-id=2] [detail=""StartKey Changed:{7480000000000000FF4100000000000000F8} -> {7480000000000000FF4400000000000000F8}, EndKey:{}""] [old-version=31] [new-version=32] [2021/11/21 10:59:31.315 +00:00] [INFO] [cluster_worker.go:220] [""region batch split, generate new regions""] [region-id=2] [origin=""id:87 start_key:\""7480000000000000FF4100000000000000F8\"" end_key:\""7480000000000000FF4400000000000000F8\"" region_epoch: peers: peers:""] [total=1] [2021/11/21 10:59:31.315 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=2] [step=""transfer leader from store 1 to store 4""] [source=heartbeat] [2021/11/21 10:59:31.318 +00:00] [INFO] [region.go:543] [""leader changed""] [region-id=2] [from=1] [to=4] [2021/11/21 10:59:31.318 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=2] [step=""leave joint state, promote learner peer 85 on store 4 to voter, demote voter peer 75 on store 1 to learner""] [source=heartbeat] [2021/11/21 10:59:31.319 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=2] [step=""leave joint state, promote learner peer 85 on store 4 to voter, demote voter peer 75 on store 1 to learner""] [source=heartbeat] [2021/11/21 10:59:31.320 +00:00] [INFO] [region.go:531] [""region ConfVer changed""] [region-id=2] [detail=""Remove peer:{id:75 store_id:1 role:DemotingVoter },Remove peer:{id:85 store_id:4 role:IncomingVoter },Add peer:{id:75 store_id:1 role:Learner },Add peer:{id:85 store_id:4 }""] [old-confver=16] [new-confver=18] [2021/11/21 10:59:31.320 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=2] [step=""remove peer on store 1""] [source=heartbeat] [2021/11/21 10:59:31.321 +00:00] [INFO] [region.go:531] [""region ConfVer changed""] [region-id=2] [detail=""Remove peer:{id:75 store_id:1 role:Learner }""] [old-confver=18] [new-confver=19] [2021/11/21 10:59:31.321 +00:00] [INFO] [operator_controller.go:552] [""operator finish""] [region-id=2] [takes=4.117323726s] [operator=""\""balance-region {mv peer: store [1] to 
[4]} (kind:region,leader, region:2(31,13), createAt:2021-11-21 10:59:27.203934286 +0000 UTC m=+246.849226941, startAt:2021-11-21 10:59:27.204187668 +0000 UTC m=+246.849480286, currentStep:5, steps:[add learner peer 85 on store 4, use joint consensus, promote learner peer 85 on store 4 to voter, demote voter peer 75 on store 1 to learner, transfer leader from store 1 to store 4, leave joint state, promote learner peer 85 on store 4 to voter, demote voter peer 75 on store 1 to learner, remove peer on store 1]) finished\""""] [additional-info=""{\""sourceScore\"":\""97.03\"",\""targetScore\"":\""90.27\""}""]` there is no scheduling behavior leading to tiflash table can not be synchronized. ### What version of PD are you using (`pd-server -V`)? 5.3.0",1.0,"PD does not initiate scheduling normally when set a tiflash replica leading to tiflash table can not be synchronized - ## Bug Report ### What did you do? In [tiflash_regression_test_daily](https://ci.pingcap.net/blue/organizations/jenkins/tiflash_regression_test_daily/detail/tiflash_regression_test_daily/940/pipeline/), PD does not initiate scheduling normally when set a tiflash replica pd log: [_tmp_ti_ci_release_pd_pd.log](https://github.com/tikv/pd/files/7578127/_tmp_ti_ci_release_pd_pd.log) ### What did you expect to see? 当设置 replica 副本后 在 pd.log 同步过程会有如下调度日志 : `[""add operator""] [region-id=...] [operator=""\""rule-split-region ... keys [ ]} ... ]` `[""add operator""] [region-id=...] [operator=""\""add-rule-peer ..., steps:[add learner peer ... on store ` ### What did you see instead? 在 pd.log 观察并没有出现调度日志,实际检查中表也没有成功同步 tiflash replica `[2021/11/21 10:59:29.586 +00:00] [INFO] [rule_manager.go:243] [""placement rule updated""] [rule=""{\""group_id\"":\""tiflash\"",\""id\"":\""table-65-r\"",\""override\"":true,\""start_key\"":\""7480000000000000FF415F720000000000FA\"",\""end_key\"":\""7480000000000000FF4200000000000000F8\"",\""role\"":\""learner\"",\""count\"":1,\""label_constraints\"":[{\""key\"":\""engine\"",\""op\"":\""in\"",\""values\"":[\""tiflash\""]}],\""create_timestamp\"":1637492369}""] [2021/11/21 10:59:29.675 +00:00] [INFO] [operator_controller.go:437] [""add operator""] [region-id=25] [operator=""\""balance-region {mv peer: store [1] to [4]} (kind:region,leader, region:25(12,1), createAt:2021-11-21 10:59:29.675239984 +0000 UTC m=+249.320532592, startAt:0001-01-01 00:00:00 +0000 UTC, currentStep:0, steps:[add learner peer 86 on store 4, use joint consensus, promote learner peer 86 on store 4 to voter, demote voter peer 26 on store 1 to learner, transfer leader from store 1 to store 4, leave joint state, promote learner peer 86 on store 4 to voter, demote voter peer 26 on store 1 to learner, remove peer on store 1])\""""] [additional-info=""{\""sourceScore\"":\""125.24\"",\""targetScore\"":\""91.40\""}""] [2021/11/21 10:59:29.675 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=25] [step=""add learner peer 86 on store 4""] [source=create] [2021/11/21 10:59:29.677 +00:00] [INFO] [region.go:531] [""region ConfVer changed""] [region-id=25] [detail=""Add peer:{id:86 store_id:4 role:Learner }""] [old-confver=1] [new-confver=2] [2021/11/21 10:59:29.677 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=25] [step=""add learner peer 86 on store 4""] [source=heartbeat] [2021/11/21 10:59:30.679 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=25] [step=""use joint consensus, promote learner peer 86 on store 4 to voter, demote voter peer 26 
on store 1 to learner""] [source=heartbeat] [2021/11/21 10:59:30.680 +00:00] [INFO] [region.go:531] [""region ConfVer changed""] [region-id=25] [detail=""Remove peer:{id:26 store_id:1 },Remove peer:{id:86 store_id:4 role:Learner },Add peer:{id:26 store_id:1 role:DemotingVoter },Add peer:{id:86 store_id:4 role:IncomingVoter }""] [old-confver=2] [new-confver=4] [2021/11/21 10:59:30.680 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=25] [step=""transfer leader from store 1 to store 4""] [source=heartbeat] [2021/11/21 10:59:30.681 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=25] [step=""transfer leader from store 1 to store 4""] [source=heartbeat] [2021/11/21 10:59:30.684 +00:00] [INFO] [region.go:543] [""leader changed""] [region-id=25] [from=1] [to=4] [2021/11/21 10:59:30.684 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=25] [step=""leave joint state, promote learner peer 86 on store 4 to voter, demote voter peer 26 on store 1 to learner""] [source=heartbeat] [2021/11/21 10:59:30.685 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=25] [step=""leave joint state, promote learner peer 86 on store 4 to voter, demote voter peer 26 on store 1 to learner""] [source=heartbeat] [2021/11/21 10:59:30.686 +00:00] [INFO] [region.go:531] [""region ConfVer changed""] [region-id=25] [detail=""Remove peer:{id:26 store_id:1 role:DemotingVoter },Remove peer:{id:86 store_id:4 role:IncomingVoter },Add peer:{id:26 store_id:1 role:Learner },Add peer:{id:86 store_id:4 }""] [old-confver=4] [new-confver=6] [2021/11/21 10:59:30.686 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=25] [step=""remove peer on store 1""] [source=heartbeat] [2021/11/21 10:59:30.688 +00:00] [INFO] [region.go:531] [""region ConfVer changed""] [region-id=25] [detail=""Remove peer:{id:26 store_id:1 role:Learner }""] [old-confver=6] [new-confver=7] [2021/11/21 10:59:30.688 +00:00] [INFO] [operator_controller.go:552] [""operator finish""] [region-id=25] [takes=1.012679281s] [operator=""\""balance-region {mv peer: store [1] to [4]} (kind:region,leader, region:25(12,1), createAt:2021-11-21 10:59:29.675239984 +0000 UTC m=+249.320532592, startAt:2021-11-21 10:59:29.675433739 +0000 UTC m=+249.320726316, currentStep:5, steps:[add learner peer 86 on store 4, use joint consensus, promote learner peer 86 on store 4 to voter, demote voter peer 26 on store 1 to learner, transfer leader from store 1 to store 4, leave joint state, promote learner peer 86 on store 4 to voter, demote voter peer 26 on store 1 to learner, remove peer on store 1]) finished\""""] [additional-info=""{\""sourceScore\"":\""125.24\"",\""targetScore\"":\""91.40\""}""] [2021/11/21 10:59:31.313 +00:00] [INFO] [cluster_worker.go:128] [""alloc ids for region split""] [region-id=87] [peer-ids=""[88,89]""] [2021/11/21 10:59:31.315 +00:00] [INFO] [region.go:522] [""region Version changed""] [region-id=2] [detail=""StartKey Changed:{7480000000000000FF4100000000000000F8} -> {7480000000000000FF4400000000000000F8}, EndKey:{}""] [old-version=31] [new-version=32] [2021/11/21 10:59:31.315 +00:00] [INFO] [cluster_worker.go:220] [""region batch split, generate new regions""] [region-id=2] [origin=""id:87 start_key:\""7480000000000000FF4100000000000000F8\"" end_key:\""7480000000000000FF4400000000000000F8\"" region_epoch: peers: peers:""] [total=1] [2021/11/21 10:59:31.315 +00:00] [INFO] [operator_controller.go:635] [""send 
schedule command""] [region-id=2] [step=""transfer leader from store 1 to store 4""] [source=heartbeat] [2021/11/21 10:59:31.318 +00:00] [INFO] [region.go:543] [""leader changed""] [region-id=2] [from=1] [to=4] [2021/11/21 10:59:31.318 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=2] [step=""leave joint state, promote learner peer 85 on store 4 to voter, demote voter peer 75 on store 1 to learner""] [source=heartbeat] [2021/11/21 10:59:31.319 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=2] [step=""leave joint state, promote learner peer 85 on store 4 to voter, demote voter peer 75 on store 1 to learner""] [source=heartbeat] [2021/11/21 10:59:31.320 +00:00] [INFO] [region.go:531] [""region ConfVer changed""] [region-id=2] [detail=""Remove peer:{id:75 store_id:1 role:DemotingVoter },Remove peer:{id:85 store_id:4 role:IncomingVoter },Add peer:{id:75 store_id:1 role:Learner },Add peer:{id:85 store_id:4 }""] [old-confver=16] [new-confver=18] [2021/11/21 10:59:31.320 +00:00] [INFO] [operator_controller.go:635] [""send schedule command""] [region-id=2] [step=""remove peer on store 1""] [source=heartbeat] [2021/11/21 10:59:31.321 +00:00] [INFO] [region.go:531] [""region ConfVer changed""] [region-id=2] [detail=""Remove peer:{id:75 store_id:1 role:Learner }""] [old-confver=18] [new-confver=19] [2021/11/21 10:59:31.321 +00:00] [INFO] [operator_controller.go:552] [""operator finish""] [region-id=2] [takes=4.117323726s] [operator=""\""balance-region {mv peer: store [1] to [4]} (kind:region,leader, region:2(31,13), createAt:2021-11-21 10:59:27.203934286 +0000 UTC m=+246.849226941, startAt:2021-11-21 10:59:27.204187668 +0000 UTC m=+246.849480286, currentStep:5, steps:[add learner peer 85 on store 4, use joint consensus, promote learner peer 85 on store 4 to voter, demote voter peer 75 on store 1 to learner, transfer leader from store 1 to store 4, leave joint state, promote learner peer 85 on store 4 to voter, demote voter peer 75 on store 1 to learner, remove peer on store 1]) finished\""""] [additional-info=""{\""sourceScore\"":\""97.03\"",\""targetScore\"":\""90.27\""}""]` there is no scheduling behavior leading to tiflash table can not be synchronized. ### What version of PD are you using (`pd-server -V`)? 
5.3.0",1,pd does not initiate scheduling normally when set a tiflash replica leading to tiflash table can not be synchronized bug report what did you do in pd does not initiate scheduling normally when set a tiflash replica pd log what did you expect to see 当设置 replica 副本后 在 pd log 同步过程会有如下调度日志 : operator add rule peer steps add learner peer on store what did you see instead 在 pd log 观察并没有出现调度日志,实际检查中表也没有成功同步 tiflash replica create timestamp to kind region leader region createat utc m startat utc currentstep steps to kind region leader region createat utc m startat utc m currentstep steps finished to kind region leader region createat utc m startat utc m currentstep steps finished there is no scheduling behavior leading to tiflash table can not be synchronized what version of pd are you using pd server v ,1 265825,8359637497.0,IssuesEvent,2018-10-03 08:55:04,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,www.instagram.com - site is not usable,browser-firefox-mobile priority-critical," **URL**: https://www.instagram.com/ **Browser / Version**: Firefox Mobile 64.0 **Operating System**: Android 7.0 **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: unstable site menu **Steps to Reproduce**: _From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"www.instagram.com - site is not usable - **URL**: https://www.instagram.com/ **Browser / Version**: Firefox Mobile 64.0 **Operating System**: Android 7.0 **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: unstable site menu **Steps to Reproduce**: _From [webcompat.com](https://webcompat.com/) with ❤️_",0, site is not usable url browser version firefox mobile operating system android tested another browser yes problem type site is not usable description unstable site menu steps to reproduce from with ❤️ ,0 8456,26968445227.0,IssuesEvent,2023-02-09 01:19:52,influxdata/ui,https://api.github.com/repos/influxdata/ui,closed,Query Context Updates for the UI,team/ui epic team/unity team/automation,There's no global query context in the UI and we have to keep/sync the states of the query in multiple places/components. The solution is to move towards having a global query object/context which keeps track on all the queries and we don't have to rely on different mechanisms to store query and their states.,1.0,Query Context Updates for the UI - There's no global query context in the UI and we have to keep/sync the states of the query in multiple places/components. The solution is to move towards having a global query object/context which keeps track on all the queries and we don't have to rely on different mechanisms to store query and their states.,1,query context updates for the ui there s no global query context in the ui and we have to keep sync the states of the query in multiple places components the solution is to move towards having a global query object context which keeps track on all the queries and we don t have to rely on different mechanisms to store query and their states ,1 6771,23884544883.0,IssuesEvent,2022-09-08 06:24:43,appsmithorg/appsmith,https://api.github.com/repos/appsmithorg/appsmith,closed,[Bug]: Reset action is failing intermittently with an exception,Bug Needs Triaging Automation medium,"### Is there an existing issue for this? - [X] I have searched the existing issues ### Description Test automation involving reset action of all widgets is seen failing for specific tests. 
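One way to narrow down the PD report above is to ask PD whether the TiFlash placement rule was ever registered before blaming the scheduler. A sketch against PD's placement-rules HTTP API, assuming a local PD client endpoint; the exact route is based on PD's v1 config API and should be treated as an assumption here. The `tiflash` group and `table-<id>-r` rule IDs follow the pattern visible in the log above.

```python
import json
from urllib.request import urlopen

PD = "http://127.0.0.1:2379"  # assumed PD client endpoint


def tiflash_rules():
    # Placement rules registered under the "tiflash" group; if the
    # table's rule is missing here, scheduling was never requested.
    with urlopen(f"{PD}/pd/api/v1/config/rules/group/tiflash") as resp:
        return json.load(resp)


for rule in tiflash_rules():
    print(rule["id"], rule["role"], rule["count"])
```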
### Steps To Reproduce 1.Make sure you have Text Widget / CheckboxGroup widget / Submit widget 2.Bind Text box to display the selected valur in checkbox i.e {{CheckboxGroup1.selectedValues}} 3.Similarly bind Submit button on click to reset the checkboxgroup widget to reset. i.e. {{resetWidget(""CheckboxGroup1"",true).then(() => showAlert(""success""))}} Expected: Widget should reset on click of submit button Actual: Intermittently the reset is not happening and below issue is seen in attachment Note : this is not related to CheckboxGroup alone its seen with other widgets as well ### Public Sample App _No response_ ### Version v1.7.15",1.0,"[Bug]: Reset action is failing intermittently with an exception - ### Is there an existing issue for this? - [X] I have searched the existing issues ### Description Test automation involving reset action of all widgets is seen failing for specific tests. ### Steps To Reproduce 1.Make sure you have Text Widget / CheckboxGroup widget / Submit widget 2.Bind Text box to display the selected valur in checkbox i.e {{CheckboxGroup1.selectedValues}} 3.Similarly bind Submit button on click to reset the checkboxgroup widget to reset. i.e. {{resetWidget(""CheckboxGroup1"",true).then(() => showAlert(""success""))}} Expected: Widget should reset on click of submit button Actual: Intermittently the reset is not happening and below issue is seen in attachment Note : this is not related to CheckboxGroup alone its seen with other widgets as well ### Public Sample App _No response_ ### Version v1.7.15",1, reset action is failing intermittently with an exception is there an existing issue for this i have searched the existing issues description test automation involving reset action of all widgets is seen failing for specific tests steps to reproduce make sure you have text widget checkboxgroup widget submit widget bind text box to display the selected valur in checkbox i e selectedvalues similarly bind submit button on click to reset the checkboxgroup widget to reset i e resetwidget true then showalert success expected widget should reset on click of submit button actual intermittently the reset is not happening and below issue is seen in attachment img width alt screenshot at pm src note this is not related to checkboxgroup alone its seen with other widgets as well public sample app no response version ,1 159034,13754332593.0,IssuesEvent,2020-10-06 16:47:38,laravel/nova-issues,https://api.github.com/repos/laravel/nova-issues,closed,[Feature Request] Add withoutConfirmation() to actions,documentation,"I understand that it's generally bad UX to have something performed immediately after selecting an item from a dropdown, but I have a use case where this is precisely what I want. I am using inline actions through showOnTableRow() and I want to perform non-destructive actions immediately upon clicking on them, without having to see a modal and click the 'confirm' button. It's fine and dandy that this remains the default behaviour but adding a simple withoutConfirmation() option to the actions would add more flexibility and reduce the number of necessary clicks in some cases where it makes sense. Thanks",1.0,"[Feature Request] Add withoutConfirmation() to actions - I understand that it's generally bad UX to have something performed immediately after selecting an item from a dropdown, but I have a use case where this is precisely what I want. 
I am using inline actions through showOnTableRow() and I want to perform non-destructive actions immediately upon clicking on them, without having to see a modal and click the 'confirm' button. It's fine and dandy that this remains the default behaviour but adding a simple withoutConfirmation() option to the actions would add more flexibility and reduce the number of necessary clicks in some cases where it makes sense. Thanks",0, add withoutconfirmation to actions i understand that it s generally bad ux to have something performed immediately after selecting an item from a dropdown but i have a use case where this is precisely what i want i am using inline actions through showontablerow and i want to perform non destructive actions immediately upon clicking on them without having to see a modal and click the confirm button it s fine and dandy that this remains the default behaviour but adding a simple withoutconfirmation option to the actions would add more flexibility and reduce the number of necessary clicks in some cases where it makes sense thanks,0 37447,12479233558.0,IssuesEvent,2020-05-29 17:53:04,jgeraigery/azure-iot-platform-dotnet,https://api.github.com/repos/jgeraigery/azure-iot-platform-dotnet,opened,CVE-2020-7656 (Medium) detected in jquery-1.7.1.min.js,security vulnerability,"## CVE-2020-7656 - Medium Severity Vulnerability
Vulnerable Library - jquery-1.7.1.min.js

JavaScript library for DOM operations

Library home page: https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js

Path to dependency file: /tmp/ws-scm/azure-iot-platform-dotnet/src/webui/azure-iot-ux-fluent-controls/node_modules/sockjs/examples/express/index.html

Path to vulnerable library: /azure-iot-platform-dotnet/src/webui/azure-iot-ux-fluent-controls/node_modules/sockjs/examples/express/index.html,/azure-iot-platform-dotnet/src/webui/azure-iot-ux-fluent-controls/node_modules/sockjs/examples/hapi/html/index.html,/azure-iot-platform-dotnet/src/webui/node_modules/sockjs/examples/multiplex/index.html,/azure-iot-platform-dotnet/src/webui/node_modules/sockjs/examples/hapi/html/index.html,/azure-iot-platform-dotnet/src/webui/azure-iot-ux-fluent-controls/node_modules/sockjs/examples/multiplex/index.html,/azure-iot-platform-dotnet/src/webui/node_modules/sockjs/examples/echo/index.html,/azure-iot-platform-dotnet/src/webui/azure-iot-ux-fluent-controls/node_modules/sockjs/examples/express-3.x/index.html,/azure-iot-platform-dotnet/src/webui/node_modules/sockjs/examples/express-3.x/index.html,/azure-iot-platform-dotnet/src/webui/azure-iot-ux-fluent-controls/node_modules/sockjs/examples/echo/index.html

Dependency Hierarchy: - :x: **jquery-1.7.1.min.js** (Vulnerable Library)

Found in HEAD commit: 5e199ac49eaf3d57e4aa1095f8e2069a6ef4c3c9

Vulnerability Details

jquery prior to 1.9.0 allows Cross-site Scripting attacks via the load method. The load method fails to recognize and remove "<script>" HTML tags that contain a whitespace character, i.e. "</script >", which results in the enclosed script logic to be executed.

Publish Date: 2020-05-19

URL: CVE-2020-7656

CVSS 3 Score Details (6.1)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Changed
- Impact Metrics:
  - Confidentiality Impact: Low
  - Integrity Impact: Low
  - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7656

Release Date: 2020-05-19

Fix Resolution: 1.9.0b1

",True,"CVE-2020-7656 (Medium) detected in jquery-1.7.1.min.js - ## CVE-2020-7656 - Medium Severity Vulnerability
Vulnerable Library - jquery-1.7.1.min.js

JavaScript library for DOM operations

Library home page: https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js

Path to dependency file: /tmp/ws-scm/azure-iot-platform-dotnet/src/webui/azure-iot-ux-fluent-controls/node_modules/sockjs/examples/express/index.html

Path to vulnerable library: /azure-iot-platform-dotnet/src/webui/azure-iot-ux-fluent-controls/node_modules/sockjs/examples/express/index.html,/azure-iot-platform-dotnet/src/webui/azure-iot-ux-fluent-controls/node_modules/sockjs/examples/hapi/html/index.html,/azure-iot-platform-dotnet/src/webui/node_modules/sockjs/examples/multiplex/index.html,/azure-iot-platform-dotnet/src/webui/node_modules/sockjs/examples/hapi/html/index.html,/azure-iot-platform-dotnet/src/webui/azure-iot-ux-fluent-controls/node_modules/sockjs/examples/multiplex/index.html,/azure-iot-platform-dotnet/src/webui/node_modules/sockjs/examples/echo/index.html,/azure-iot-platform-dotnet/src/webui/azure-iot-ux-fluent-controls/node_modules/sockjs/examples/express-3.x/index.html,/azure-iot-platform-dotnet/src/webui/node_modules/sockjs/examples/express-3.x/index.html,/azure-iot-platform-dotnet/src/webui/azure-iot-ux-fluent-controls/node_modules/sockjs/examples/echo/index.html

Dependency Hierarchy: - :x: **jquery-1.7.1.min.js** (Vulnerable Library)

Found in HEAD commit: 5e199ac49eaf3d57e4aa1095f8e2069a6ef4c3c9

Vulnerability Details

jquery prior to 1.9.0 allows Cross-site Scripting attacks via the load method. The load method fails to recognize and remove "<script>" HTML tags that contain a whitespace character, i.e. "</script >", which results in the enclosed script logic to be executed.

Publish Date: 2020-05-19

URL: CVE-2020-7656

CVSS 3 Score Details (6.1)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Changed
- Impact Metrics:
  - Confidentiality Impact: Low
  - Integrity Impact: Low
  - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7656

Release Date: 2020-05-19

Fix Resolution: 1.9.0b1

",0,cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm azure iot platform dotnet src webui azure iot ux fluent controls node modules sockjs examples express index html path to vulnerable library azure iot platform dotnet src webui azure iot ux fluent controls node modules sockjs examples express index html azure iot platform dotnet src webui azure iot ux fluent controls node modules sockjs examples hapi html index html azure iot platform dotnet src webui node modules sockjs examples multiplex index html azure iot platform dotnet src webui node modules sockjs examples hapi html index html azure iot platform dotnet src webui azure iot ux fluent controls node modules sockjs examples multiplex index html azure iot platform dotnet src webui node modules sockjs examples echo index html azure iot platform dotnet src webui azure iot ux fluent controls node modules sockjs examples express x index html azure iot platform dotnet src webui node modules sockjs examples express x index html azure iot platform dotnet src webui azure iot ux fluent controls node modules sockjs examples echo index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery prior to allows cross site scripting attacks via the load method the load method fails to recognize and remove html tags that contain a whitespace character i e which results in the enclosed script logic to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails jquery prior to allows cross site scripting attacks via the load method the load method fails to recognize and remove html tags that contain a whitespace character i e script which results in the enclosed script logic to be executed vulnerabilityurl ,0 9874,30688713481.0,IssuesEvent,2023-07-26 13:56:45,celestiaorg/devops,https://api.github.com/repos/celestiaorg/devops,closed,feat: deploy a common jaeger-cluster,enhancement kubernetes celestia-app celestia-node automation devops knuu,"based on this thread: https://celestia-team.slack.com/archives/C04MKGKSU9H/p1690288259644599 we are gonna have one central jaeger-cluster to use it from all the knuu tests",1.0,"feat: deploy a common jaeger-cluster - based on this thread: https://celestia-team.slack.com/archives/C04MKGKSU9H/p1690288259644599 we are gonna have one central jaeger-cluster to use it from all the knuu tests",1,feat deploy a common jaeger cluster based on this thread we are gonna have one central jaeger cluster to use it from all the knuu tests,1 3180,13166084019.0,IssuesEvent,2020-08-11 07:55:30,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,reopened,FNX3-15646 ⁃ Make *GreenfieldRaptorRelease builds not use `org.mozilla.fenix` application ID,eng:automation eng:task,"Post #1321, we can finally use the (greenfield) Raptor release builds. But they use the `org.mozilla.fenix` application ID _and_ are signed with a special ""dep"" key. 
That means you can't pave over your existing `org.mozilla.fenix` Nightly vehicle, making it very difficult to do automation testing _and_ dogfood Fenix at the same time. Let's make the Raptor/automation builds have a different application ID.",1.0,"FNX3-15646 ⁃ Make *GreenfieldRaptorRelease builds not use `org.mozilla.fenix` application ID - Post #1321, we can finally use the (greenfield) Raptor release builds. But they use the `org.mozilla.fenix` application ID _and_ are signed with a special ""dep"" key. That means you can't pave over your existing `org.mozilla.fenix` Nightly vehicle, making it very difficult to do automation testing _and_ dogfood Fenix at the same time. Let's make the Raptor/automation builds have a different application ID.",1, ⁃ make greenfieldraptorrelease builds not use org mozilla fenix application id post we can finally use the greenfield raptor release builds but they use the org mozilla fenix application id and are signed with a special dep key that means you can t pave over your existing org mozilla fenix nightly vehicle making it very difficult to do automation testing and dogfood fenix at the same time let s make the raptor automation builds have a different application id ,1 737784,25531426608.0,IssuesEvent,2022-11-29 08:45:30,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,usbank.com - site is not usable,browser-firefox priority-normal os-mac engine-gecko," **URL**: https://usbank.com **Browser / Version**: Firefox 107.0 **Operating System**: Mac OS X 10.15 **Tested Another Browser**: Yes Chrome **Problem type**: Site is not usable **Description**: Browser unsupported **Steps to Reproduce**: When I open the US Bank website and log-in, after the authentication, then the page goes blank. The person on US Bank phone said it started last week with some, but not all, customers since Firefox did an update. Please fix this ASAP
Browser Configuration
  • None
_From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"usbank.com - site is not usable - **URL**: https://usbank.com **Browser / Version**: Firefox 107.0 **Operating System**: Mac OS X 10.15 **Tested Another Browser**: Yes Chrome **Problem type**: Site is not usable **Description**: Browser unsupported **Steps to Reproduce**: When I open the US Bank website and log-in, after the authentication, then the page goes blank. The person on US Bank phone said it started last week with some, but not all, customers since Firefox did an update. Please fix this ASAP
Browser Configuration
  • None
_From [webcompat.com](https://webcompat.com/) with ❤️_",0,usbank com site is not usable url browser version firefox operating system mac os x tested another browser yes chrome problem type site is not usable description browser unsupported steps to reproduce when i open the us bank website and log in after the authentication then the page goes blank the person on us bank phone said it started last week with some but not all customers since firefox did an update please fix this asap browser configuration none from with ❤️ ,0 539415,15788274917.0,IssuesEvent,2021-04-01 20:32:00,iterative/dvc.org,https://api.github.com/repos/iterative/dvc.org,closed,Exp Use Case,doc-content priority-p1,"Child issue of #2266 Proposed requirements - Identify potential opportunities to use experiments - Show value added by experiments Questions - Is this needed as a separate section? - Are example projects needed for this? - Who is the target user to read this over other exp docs?",1.0,"Exp Use Case - Child issue of #2266 Proposed requirements - Identify potential opportunities to use experiments - Show value added by experiments Questions - Is this needed as a separate section? - Are example projects needed for this? - Who is the target user to read this over other exp docs?",0,exp use case child issue of proposed requirements identify potential opportunities to use experiments show value added by experiments questions is this needed as a separate section are example projects needed for this who is the target user to read this over other exp docs ,0 6613,23518202417.0,IssuesEvent,2022-08-19 00:57:27,returntocorp/semgrep,https://api.github.com/repos/returntocorp/semgrep,closed,--config=auto detects wrong project URL,bug priority:medium feature:config-automation,"**Describe the bug** The --config=auto mode detects the project of the current working directory, not the project in the target repository. **Expected behavior** --config=auto should detect the project URL of the target specified, and possibly enforce that only a single path target is supplied. **What is the priority of the bug to you?** - [ ] P0: blocking your adoption of Semgrep or workflow - [x] P1: important to fix or quite annoying - [ ] P2: regular bug that should get fixed",1.0,"--config=auto detects wrong project URL - **Describe the bug** The --config=auto mode detects the project of the current working directory, not the project in the target repository. **Expected behavior** --config=auto should detect the project URL of the target specified, and possibly enforce that only a single path target is supplied. 
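The expected behavior in the semgrep report above boils down to resolving the git remote of the *target* path instead of the scanner's working directory. A sketch of that resolution in Python, not semgrep's actual code:

```python
import subprocess


def project_url(target: str) -> str:
    """Return the remote URL of the repository containing `target`."""
    # `git -C <path>` runs the command as if started in that path,
    # so the CWD of the scanner itself never matters.
    return subprocess.check_output(
        ["git", "-C", target, "remote", "get-url", "origin"],
        text=True,
    ).strip()
```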
**What is the priority of the bug to you?** - [ ] P0: blocking your adoption of Semgrep or workflow - [x] P1: important to fix or quite annoying - [ ] P2: regular bug that should get fixed",1, config auto detects wrong project url describe the bug the config auto mode detects the project of the current working directory not the project in the target repository expected behavior config auto should detect the project url of the target specified and possibly enforce that only a single path target is supplied what is the priority of the bug to you blocking your adoption of semgrep or workflow important to fix or quite annoying regular bug that should get fixed,1 7283,24576082762.0,IssuesEvent,2022-10-13 12:28:00,elastic/apm-agent-python,https://api.github.com/repos/elastic/apm-agent-python,closed,[META 555] Add automated span type/subtype checking against shared spec,automation chore agent-python stretch,"Spec PR: https://github.com/elastic/apm/pull/443 To start, we would just ensure that all span types/subtypes appear in the spec. In the future we will work on cross-agent alignment. ",1.0,"[META 555] Add automated span type/subtype checking against shared spec - Spec PR: https://github.com/elastic/apm/pull/443 To start, we would just ensure that all span types/subtypes appear in the spec. In the future we will work on cross-agent alignment. ",1, add automated span type subtype checking against shared spec spec pr to start we would just ensure that all span types subtypes appear in the spec in the future we will work on cross agent alignment ,1 429519,12425765216.0,IssuesEvent,2020-05-24 17:52:07,GeyserMC/Geyser,https://api.github.com/repos/GeyserMC/Geyser,closed,Standing In Bubble Column Creates Mass Amount Of Incorrect Particles,Priority: Low Unconfirmed Bug,"**Describe the bug** When standing in an upwards bubble column, there's tons of incorrect black potion effect / potion bottle particles created. **To Reproduce** Create/find an upwards bubble column and stand in it. **Expected behavior** The correct water particles should show instead. **Screenshots / Videos** ![image](https://user-images.githubusercontent.com/65837019/82745768-14cdc780-9d56-11ea-8c96-a209f309695d.png) ![image](https://user-images.githubusercontent.com/65837019/82745790-4b0b4700-9d56-11ea-97fb-0b052a29da9b.png) **Server Version** CraftBukkit version git-Spigot-2040c4c-77fd87e (MC: 1.15.2) (Implementing API version 1.15.2-R0.1-SNAPSHOT) **Geyser Version** build # 185 **Minecraft: Bedrock Edition Version** v1.14.60",1.0,"Standing In Bubble Column Creates Mass Amount Of Incorrect Particles - **Describe the bug** When standing in an upwards bubble column, there's tons of incorrect black potion effect / potion bottle particles created. **To Reproduce** Create/find an upwards bubble column and stand in it. **Expected behavior** The correct water particles should show instead. 
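For the apm-agent-python task above, the first milestone (every span type/subtype used by the agent appears in the shared spec) is naturally a unit test run in CI. A sketch, where the spec filename, its JSON shape, and the `SPANS_IN_AGENT` registry are all hypothetical stand-ins:

```python
import json

# Hypothetical list of (type, subtype) pairs the agent emits.
SPANS_IN_AGENT = [("db", "postgresql"), ("external", "http")]


def test_span_types_are_in_shared_spec():
    with open("span_types.json") as fh:  # assumed local copy of the spec
        spec = json.load(fh)
    for span_type, subtype in SPANS_IN_AGENT:
        assert span_type in spec, f"unknown span type: {span_type}"
        subtypes = spec[span_type].get("subtypes", {})
        assert subtype in subtypes, f"unknown subtype: {span_type}.{subtype}"
```

Cross-agent alignment, the stretch goal, would then be the same check run against each agent's registry.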
**Screenshots / Videos** ![image](https://user-images.githubusercontent.com/65837019/82745768-14cdc780-9d56-11ea-8c96-a209f309695d.png) ![image](https://user-images.githubusercontent.com/65837019/82745790-4b0b4700-9d56-11ea-97fb-0b052a29da9b.png) **Server Version** CraftBukkit version git-Spigot-2040c4c-77fd87e (MC: 1.15.2) (Implementing API version 1.15.2-R0.1-SNAPSHOT) **Geyser Version** build # 185 **Minecraft: Bedrock Edition Version** v1.14.60",0,standing in bubble column creates mass amount of incorrect particles describe the bug when standing in an upwards bubble column there s tons of incorrect black potion effect potion bottle particles created to reproduce create find an upwards bubble column and stand in it expected behavior the correct water particles should show instead screenshots videos server version craftbukkit version git spigot mc implementing api version snapshot geyser version build minecraft bedrock edition version ,0 740,7966082209.0,IssuesEvent,2018-07-14 17:16:07,Cacti/cacti,https://api.github.com/repos/Cacti/cacti,closed,Email notification for Automation Network discovery process,automation enhancement,"Hi Everyone, It would be nice to have an email report for network discovery results, somehow it may be enough or a starting point to send the same table of the **discovered devices** tab but as an email report (either text or HTML formatted). ",1.0,"Email notification for Automation Network discovery process - Hi Everyone, It would be nice to have an email report for network discovery results, somehow it may be enough or a starting point to send the same table of the **discovered devices** tab but as an email report (either text or HTML formatted). ",1,email notification for automation network discovery process hi everyone it would be nice to have an email report for network discovery results somehow it may be enough or a starting point to send the same table of the discovered devices tab but as an email report either text or html formatted ,1 626227,19804259257.0,IssuesEvent,2022-01-19 03:38:22,quickwit-inc/quickwit,https://api.github.com/repos/quickwit-inc/quickwit,closed,Return error on AcquireError in uploader,bug low-priority,"Currently we have the following code in the uploader: ``` let permit_guard = { let _guard = ctx.protect_zone(); Semaphore::acquire_owned(self.concurrent_upload_permits.clone()).await }; ``` We don't handle the case where the `acquire_owned` returns an `AcquireError` which seems weird to me. I'm not super familiar with this kind of stuff so not sure how to report that as a bug or not. ",1.0,"Return error on AcquireError in uploader - Currently we have the following code in the uploader: ``` let permit_guard = { let _guard = ctx.protect_zone(); Semaphore::acquire_owned(self.concurrent_upload_permits.clone()).await }; ``` We don't handle the case where the `acquire_owned` returns an `AcquireError` which seems weird to me. I'm not super familiar with this kind of stuff so not sure how to report that as a bug or not. 
",0,return error on acquireerror in uploader currently we have the following code in the uploader let permit guard let guard ctx protect zone semaphore acquire owned self concurrent upload permits clone await we don t handle the case where the acquire owned returns an acquireerror which seems weird to me i m not super familiar with this kind of stuff so not sure how to report that as a bug or not ,0 198148,22617938689.0,IssuesEvent,2022-06-30 01:24:55,faizulho/sanity-gatsby-blog,https://api.github.com/repos/faizulho/sanity-gatsby-blog,opened,CVE-2022-2216 (High) detected in parse-url-6.0.0.tgz,security vulnerability,"## CVE-2022-2216 - High Severity Vulnerability
Vulnerable Library - parse-url-6.0.0.tgz

An advanced url parser supporting git urls too.

Library home page: https://registry.npmjs.org/parse-url/-/parse-url-6.0.0.tgz

Path to dependency file: /web/package.json

Path to vulnerable library: /web/node_modules/parse-url/package.json

Dependency Hierarchy:
- gatsby-3.13.0.tgz (Root Library)
  - gatsby-telemetry-2.13.0.tgz
    - git-up-4.0.5.tgz
      - :x: **parse-url-6.0.0.tgz** (Vulnerable Library)

Found in base branch: master

Vulnerability Details

Server-Side Request Forgery (SSRF) in GitHub repository ionicabizau/parse-url prior to 7.0.0.

Publish Date: 2022-06-27

URL: CVE-2022-2216

CVSS 3 Score Details (9.4)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: Low
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://huntr.dev/bounties/505a3d39-2723-4a06-b1f7-9b2d133c92e1/

Release Date: 2022-06-27

Fix Resolution: parse-url - 6.0.1
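Because parse-url is a transitive dependency here (gatsby → gatsby-telemetry → git-up → parse-url), applying the fix resolution usually means forcing the nested version from /web/package.json. A hedged sketch — npm 8+ calls the field `overrides`, while Yarn uses an equivalent `resolutions` field:

```json
{
  "overrides": {
    "parse-url": "6.0.1"
  }
}
```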

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-2216 (High) detected in parse-url-6.0.0.tgz - ## CVE-2022-2216 - High Severity Vulnerability
Vulnerable Library - parse-url-6.0.0.tgz

An advanced url parser supporting git urls too.

Library home page: https://registry.npmjs.org/parse-url/-/parse-url-6.0.0.tgz

Path to dependency file: /web/package.json

Path to vulnerable library: /web/node_modules/parse-url/package.json

Dependency Hierarchy:
- gatsby-3.13.0.tgz (Root Library)
  - gatsby-telemetry-2.13.0.tgz
    - git-up-4.0.5.tgz
      - :x: **parse-url-6.0.0.tgz** (Vulnerable Library)

Found in base branch: master

Vulnerability Details

Server-Side Request Forgery (SSRF) in GitHub repository ionicabizau/parse-url prior to 7.0.0.

Publish Date: 2022-06-27

URL: CVE-2022-2216

CVSS 3 Score Details (9.4)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: Low
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://huntr.dev/bounties/505a3d39-2723-4a06-b1f7-9b2d133c92e1/

Release Date: 2022-06-27

Fix Resolution: parse-url - 6.0.1

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in parse url tgz cve high severity vulnerability vulnerable library parse url tgz an advanced url parser supporting git urls too library home page a href path to dependency file web package json path to vulnerable library web node modules parse url package json dependency hierarchy gatsby tgz root library gatsby telemetry tgz git up tgz x parse url tgz vulnerable library found in base branch master vulnerability details server side request forgery ssrf in github repository ionicabizau parse url prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution parse url step up your open source security game with mend ,0 211811,16459362534.0,IssuesEvent,2021-05-21 16:32:34,knadh/listmonk,https://api.github.com/repos/knadh/listmonk,closed,API Lists documentation,documentation,"Hi, I'm working with the API from a script for a Google Sheet, and in the endpoint POST /api/lists the required parameter ""optin"" is missing from the documentation. Can you add it for other users? Documentation: https://listmonk.app/docs/apis/lists/ TY! [Screenshot_7](https://user-images.githubusercontent.com/1880969/119156679-c16b8380-ba54-11eb-8e91-1854a1fcf0b7.jpg) ",1.0,"API Lists documentation - Hi, I'm working with the API from a script for a Google Sheet, and in the endpoint POST /api/lists the required parameter ""optin"" is missing from the documentation. Can you add it for other users? Documentation: https://listmonk.app/docs/apis/lists/ TY! 
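For reference, a sketch of the create-list request with the parameter the docs omit — the host and credentials are placeholders, and the field values (`optin` being "single" or "double") are assumed from listmonk's list model rather than the linked documentation page:

```js
// Hypothetical request against a listmonk instance; listmonk's API
// authenticates with HTTP BasicAuth (an API user and token).
const res = await fetch("https://listmonk.example.com/api/lists", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Basic " + Buffer.from("api_user:api_token").toString("base64"),
  },
  body: JSON.stringify({
    name: "Weekly digest",
    type: "public",  // "public" or "private"
    optin: "double", // the required parameter missing from the docs
  }),
});
console.log(res.status, await res.json());
```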
[Screenshot_7](https://user-images.githubusercontent.com/1880969/119156679-c16b8380-ba54-11eb-8e91-1854a1fcf0b7.jpg) ",0,api lists documentation hi i m work with api on one script for one google sheet and in the endpoint post api lists there is missed the required parameter optin can you add it for the other users documentation ty ,0 158248,13728015220.0,IssuesEvent,2020-10-04 09:33:35,afrihost/BaseCommandBundle,https://api.github.com/repos/afrihost/BaseCommandBundle,closed,Create (basic) CONTRIBUTING.md file,Easy Pick documentation hacktoberfest,"It doesn't have to be amazing, it can merely be how to install it, and then run the tests to see if it works.",1.0,"Create (basic) CONTRIBUTING.md file - It doesn't have to be amazing, it can merely be how to install it, and then run the tests to see if it works.",0,create basic contributing md file it doesn t have to be amazing it can merely be how to install it and then run the tests to see if it works ,0 7409,24795217145.0,IssuesEvent,2022-10-24 16:39:23,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,opened,[DocDB] Master unreachable after 2.0 -> 2.11 -> 2.13 -> 2.15 upgrade chain,area/docdb status/awaiting-triage qa_automation,"### Description Noticed in itest-system: ``` testupgradechain-aws-rf3-odd-upgrade-2.0.11.0_23: Start ( 0.639s) User Login : Success ( 0.162s) Refresh YB Version : Success ( 90.452s) Setup Provider : Success ( 0.075s) Updating Health Check Interval to 60000 sec : Success ( 330.911s) Create universe dfel-isd2128-b62491ddef-20221024-143446 : Success ( 37.306s) Start sample workloads : Success ( 353.686s) Upgrade Software to 2.11.2.0-b89 : Success ( 453.196s) Executing yb-admin upgrade_ysql : Success ( 274.272s) Upgrade Software to 2.13.2.0-b135 : Success ( 443.400s) Executing yb-admin upgrade_ysql : Success ( 550.804s) Upgrade Software to 2.15.2.1-b1 : >>> Integration Test Failed <<< wait_for_task: Failed task with errors in 360.78408575057983s: Failed to execute task {""sleepAfterMasterRestartMillis"":180000,""sleepAfterTServerRestartMillis"":180000,""nodeExporterUser"":""prometheus"",""universeUUID"":""683463eb-bd4f-4c35-9e0d-dfdab4c42e3a"",""enableYbc"":false,""installYbc"":false,""ybcInstalled"":false,""encryptionAtRestConfig"":{""encryptionAtRestEnabled"":false,""opType"":""UNDEFINED"",""type"":""DATA_KEY""},""communicationPorts"":{""masterHttpPort"":7000,""masterRpcPort"":7100,""tserverHttpPort"":9000,""tserverRpcPort"":9100,""ybControllerHttpPort"":14000,""ybControllerrRpcPort"":18018,""redisS..., hit error: WaitForServer(683463eb-bd4f-4c35-9e0d-dfdab4c42e3a, yb-itest-dfel-isd2128-b62491ddef-20221024-143446-n1, type=MASTER) did not respond in the set time.. ( 29.470s) Saved server log files and keys at /share/jenkins/workspace/itest-system-developer/logs/2.17.1.0_testupgradechain-aws-rf3-odd-upgrade-2.0.11.0_23_20221024_161152 : Success ( 90.651s) Saved server log files and keys at /share/jenkins/workspace/itest-system-developer/logs/2.17.1.0_testupgradechain-aws-rf3-odd-upgrade-2.0.11.0_23_20221024_161226 : Success ( 0.000s) Saved server log files and keys at /share/jenkins/workspace/itest-system-developer/logs/2.17.1.0_testupgradechain-aws-rf3-odd-upgrade-2.0.11.0_23_20221024_161226 : Hit error before destroying universe 'dfel-isd2128-b62491ddef-20221024-143446': 'cat /home/yugabyte/master/version_metadata.json' returned error code 1. 
( 90.204s) Destroy universe : Success testupgradechain-aws-rf3-odd-upgrade-2.0.11.0_23: End ``` https://jenkins.dev.yugabyte.com/job/itest-system-developer/2128/testReport/junit/(root)/TestUpgradeChain/testupgradechain_aws_rf3_odd_upgrade_2_0_11_0_23/ Based on the master logs something seems strange: ``` W1024 16:04:19.173660 40984 async_rpc_tasks.cc:646] RunLeaderElection RPC for tablet 60505c7eb7d74d6c8b2cec3c7d3e8a3b (upgrade_mv_index [id=0000431f00003000800000000000432d]) on TS=fe4713f8f98149fbab9af307182b4646 (task=0x000000000cab68d8, state=kRunning): RunLeaderElection RPC for tablet 60505c7eb7d74d6c8b2cec3c7d3e8a3b on TS fe4713f8f98149fbab9af307182b4646 failed: Illegal state (yb/tserver/service_util.cc:253): Tablet 60505c7eb7d74d6c8b2cec3c7d3e8a3b not RUNNING: BOOTSTRAPPING (tablet server error 12) (raft group state error 0) W1024 16:04:22.000214 35183 catalog_manager.cc:10037] Expected replicas 3 but found 1 for tablet 4f3e6baa034c4d4c9e41ae7f65b35eab: tablet_id: ""4f3e6baa034c4d4c9e41ae7f65b35eab"" replicas { ts_info { permanent_uuid: ""fe4713f8f98149fbab9af307182b4646"" private_rpc_addresses { host: ""10.9.128.67"" port: 9100 } cloud_info { placement_cloud: ""aws"" placement_region: ""us-west-2"" placement_zone: ""us-west-2b"" } placement_uuid: ""d0b4e238-b9cd-4793-b1e5-8252b2a90ddf"" capabilities: 2189743739 capabilities: 1427296937 capabilities: 2980225056 } role: LEADER member_type: VOTER } stale: false partition { partition_key_start: ""\225S"" partition_key_end: ""\252\250"" } table_id: ""f1c4286c213b4ff7a1fb0bffc22509c2"" table_ids: ""f1c4286c213b4ff7a1fb0bffc22509c2"" split_depth: 0 expected_live_replicas: 3 expected_read_replicas: 0 split_parent_tablet_id: """" [suppressed 1 similar messages] W1024 16:04:22.218209 41021 async_rpc_tasks.cc:646] RunLeaderElection RPC for tablet a52790798e384210896d31417b00641b (pg_temp_17189 [id=0000431f00003000800000000000432e]) on TS=c89fcd1a3eed4a048d83507a746c3736 (task=0x000000000c3d9dd8, state=kRunning): RunLeaderElection RPC for tablet a52790798e384210896d31417b00641b on TS c89fcd1a3eed4a048d83507a746c3736 failed: Illegal state (yb/tserver/service_util.cc:253): Tablet a52790798e384210896d31417b00641b not RUNNING: NOT_STARTED (tablet server error 12) (raft group state error 5) W1024 16:04:22.222407 41022 async_rpc_tasks.cc:646] RunLeaderElection RPC for tablet 770160d9225d44918c7d9227552b8ad5 (pg_temp_17189 [id=0000431f00003000800000000000432e]) on TS=fe4713f8f98149fbab9af307182b4646 (task=0x0000000007ab56d8, state=kRunning): RunLeaderElection RPC for tablet 770160d9225d44918c7d9227552b8ad5 on TS fe4713f8f98149fbab9af307182b4646 failed: Illegal state (yb/tserver/service_util.cc:253): Tablet 770160d9225d44918c7d9227552b8ad5 not RUNNING: BOOTSTRAPPING (tablet server error 12) (raft group state error 0) W1024 16:04:31.768427 40677 env.cc:87] Failed to cleanup /mnt/d0/yb-data/master/data/rocksdb/table-sys.catalog.uuid/tablet-00000000000000000000000000000000.intents/000020.sst: IO error (yb/rocksdb/util/env_posix.cc:238): /mnt/d0/yb-data/master/data/rocksdb/table-sys.catalog.uuid/tablet-00000000000000000000000000000000.intents/000020.sst: No such file or directory W1024 16:04:31.768445 40677 env.cc:87] Failed to cleanup /mnt/d0/yb-data/master/data/rocksdb/table-sys.catalog.uuid/tablet-00000000000000000000000000000000.intents/000020.sst.sblock.0: IO error (yb/rocksdb/util/env_posix.cc:238): /mnt/d0/yb-data/master/data/rocksdb/table-sys.catalog.uuid/tablet-00000000000000000000000000000000.intents/000020.sst.sblock.0: No such file or directory 
W1024 16:05:01.278363 41122 async_rpc_tasks.cc:646] RunLeaderElection RPC for tablet 2bd464b527b94454bb0f863ee02543f0 (idx_arr [id=00004339000030008000000000004340]) on TS=15a5ae821602407489bb04b90fc4dc97 (task=0x000000000e69c1d8, state=kRunning): RunLeaderElection RPC for tablet 2bd464b527b94454bb0f863ee02543f0 on TS 15a5ae821602407489bb04b90fc4dc97 failed: Illegal state (yb/tserver/service_util.cc:253): Tablet 2bd464b527b94454bb0f863ee02543f0 not RUNNING: BOOTSTRAPPING (tablet server error 12) (raft group state error 0) W1024 16:05:28.379812 35356 catalog_manager.cc:10037] Expected replicas 3 but found 1 for tablet 701e889453b94a17a25c8b921bb9147d: tablet_id: ""701e889453b94a17a25c8b921bb9147d"" replicas { ts_info { permanent_uuid: ""fe4713f8f98149fbab9af307182b4646"" private_rpc_addresses { host: ""10.9.128.67"" port: 9100 } cloud_info { placement_cloud: ""aws"" placement_region: ""us-west-2"" placement_zone: ""us-west-2b"" } placement_uuid: ""d0b4e238-b9cd-4793-b1e5-8252b2a90ddf"" capabilities: 2189743739 capabilities: 1427296937 capabilities: 2980225056 } role: LEADER member_type: VOTER } stale: false partition { partition_key_start: ""j\251"" partition_key_end: ""\177\376"" } table_id: ""f1c4286c213b4ff7a1fb0bffc22509c2"" table_ids: ""f1c4286c213b4ff7a1fb0bffc22509c2"" split_depth: 0 expected_live_replicas: 3 expected_read_replicas: 0 split_parent_tablet_id: """" W1024 16:06:32.396144 35208 consensus_peers.cc:543] T 00000000000000000000000000000000 P a99121c9df5a442c9258ec3bb950ff83 -> Peer aa11ef40a1104d74a723c893393abfcf ([host: ""10.9.128.67"" port: 7100], []): Couldn't send request. Status: Network error (yb/util/net/socket.cc:540): recvmsg error: Connection refused (system error 111). Retrying in the next heartbeat period. Already tried 1 times. State: 2 ``` Full logs: https://drive.google.com/file/d/1qsHnCa9o-NKVH_L3_vcKnqIh9qc-YPdh/view?usp=sharing",1.0,"[DocDB] Master unreachable after 2.0 -> 2.11 -> 2.13 -> 2.15 upgrade chain - ### Description Noticed in itest-system: ``` testupgradechain-aws-rf3-odd-upgrade-2.0.11.0_23: Start ( 0.639s) User Login : Success ( 0.162s) Refresh YB Version : Success ( 90.452s) Setup Provider : Success ( 0.075s) Updating Health Check Interval to 60000 sec : Success ( 330.911s) Create universe dfel-isd2128-b62491ddef-20221024-143446 : Success ( 37.306s) Start sample workloads : Success ( 353.686s) Upgrade Software to 2.11.2.0-b89 : Success ( 453.196s) Executing yb-admin upgrade_ysql : Success ( 274.272s) Upgrade Software to 2.13.2.0-b135 : Success ( 443.400s) Executing yb-admin upgrade_ysql : Success ( 550.804s) Upgrade Software to 2.15.2.1-b1 : >>> Integration Test Failed <<< wait_for_task: Failed task with errors in 360.78408575057983s: Failed to execute task {""sleepAfterMasterRestartMillis"":180000,""sleepAfterTServerRestartMillis"":180000,""nodeExporterUser"":""prometheus"",""universeUUID"":""683463eb-bd4f-4c35-9e0d-dfdab4c42e3a"",""enableYbc"":false,""installYbc"":false,""ybcInstalled"":false,""encryptionAtRestConfig"":{""encryptionAtRestEnabled"":false,""opType"":""UNDEFINED"",""type"":""DATA_KEY""},""communicationPorts"":{""masterHttpPort"":7000,""masterRpcPort"":7100,""tserverHttpPort"":9000,""tserverRpcPort"":9100,""ybControllerHttpPort"":14000,""ybControllerrRpcPort"":18018,""redisS..., hit error: WaitForServer(683463eb-bd4f-4c35-9e0d-dfdab4c42e3a, yb-itest-dfel-isd2128-b62491ddef-20221024-143446-n1, type=MASTER) did not respond in the set time.. 
( 29.470s) Saved server log files and keys at /share/jenkins/workspace/itest-system-developer/logs/2.17.1.0_testupgradechain-aws-rf3-odd-upgrade-2.0.11.0_23_20221024_161152 : Success ( 90.651s) Saved server log files and keys at /share/jenkins/workspace/itest-system-developer/logs/2.17.1.0_testupgradechain-aws-rf3-odd-upgrade-2.0.11.0_23_20221024_161226 : Success ( 0.000s) Saved server log files and keys at /share/jenkins/workspace/itest-system-developer/logs/2.17.1.0_testupgradechain-aws-rf3-odd-upgrade-2.0.11.0_23_20221024_161226 : Hit error before destroying universe 'dfel-isd2128-b62491ddef-20221024-143446': 'cat /home/yugabyte/master/version_metadata.json' returned error code 1. ( 90.204s) Destroy universe : Success testupgradechain-aws-rf3-odd-upgrade-2.0.11.0_23: End ``` https://jenkins.dev.yugabyte.com/job/itest-system-developer/2128/testReport/junit/(root)/TestUpgradeChain/testupgradechain_aws_rf3_odd_upgrade_2_0_11_0_23/ Based on the master logs something seems strange: ``` W1024 16:04:19.173660 40984 async_rpc_tasks.cc:646] RunLeaderElection RPC for tablet 60505c7eb7d74d6c8b2cec3c7d3e8a3b (upgrade_mv_index [id=0000431f00003000800000000000432d]) on TS=fe4713f8f98149fbab9af307182b4646 (task=0x000000000cab68d8, state=kRunning): RunLeaderElection RPC for tablet 60505c7eb7d74d6c8b2cec3c7d3e8a3b on TS fe4713f8f98149fbab9af307182b4646 failed: Illegal state (yb/tserver/service_util.cc:253): Tablet 60505c7eb7d74d6c8b2cec3c7d3e8a3b not RUNNING: BOOTSTRAPPING (tablet server error 12) (raft group state error 0) W1024 16:04:22.000214 35183 catalog_manager.cc:10037] Expected replicas 3 but found 1 for tablet 4f3e6baa034c4d4c9e41ae7f65b35eab: tablet_id: ""4f3e6baa034c4d4c9e41ae7f65b35eab"" replicas { ts_info { permanent_uuid: ""fe4713f8f98149fbab9af307182b4646"" private_rpc_addresses { host: ""10.9.128.67"" port: 9100 } cloud_info { placement_cloud: ""aws"" placement_region: ""us-west-2"" placement_zone: ""us-west-2b"" } placement_uuid: ""d0b4e238-b9cd-4793-b1e5-8252b2a90ddf"" capabilities: 2189743739 capabilities: 1427296937 capabilities: 2980225056 } role: LEADER member_type: VOTER } stale: false partition { partition_key_start: ""\225S"" partition_key_end: ""\252\250"" } table_id: ""f1c4286c213b4ff7a1fb0bffc22509c2"" table_ids: ""f1c4286c213b4ff7a1fb0bffc22509c2"" split_depth: 0 expected_live_replicas: 3 expected_read_replicas: 0 split_parent_tablet_id: """" [suppressed 1 similar messages] W1024 16:04:22.218209 41021 async_rpc_tasks.cc:646] RunLeaderElection RPC for tablet a52790798e384210896d31417b00641b (pg_temp_17189 [id=0000431f00003000800000000000432e]) on TS=c89fcd1a3eed4a048d83507a746c3736 (task=0x000000000c3d9dd8, state=kRunning): RunLeaderElection RPC for tablet a52790798e384210896d31417b00641b on TS c89fcd1a3eed4a048d83507a746c3736 failed: Illegal state (yb/tserver/service_util.cc:253): Tablet a52790798e384210896d31417b00641b not RUNNING: NOT_STARTED (tablet server error 12) (raft group state error 5) W1024 16:04:22.222407 41022 async_rpc_tasks.cc:646] RunLeaderElection RPC for tablet 770160d9225d44918c7d9227552b8ad5 (pg_temp_17189 [id=0000431f00003000800000000000432e]) on TS=fe4713f8f98149fbab9af307182b4646 (task=0x0000000007ab56d8, state=kRunning): RunLeaderElection RPC for tablet 770160d9225d44918c7d9227552b8ad5 on TS fe4713f8f98149fbab9af307182b4646 failed: Illegal state (yb/tserver/service_util.cc:253): Tablet 770160d9225d44918c7d9227552b8ad5 not RUNNING: BOOTSTRAPPING (tablet server error 12) (raft group state error 0) W1024 16:04:31.768427 40677 env.cc:87] Failed to cleanup 
/mnt/d0/yb-data/master/data/rocksdb/table-sys.catalog.uuid/tablet-00000000000000000000000000000000.intents/000020.sst: IO error (yb/rocksdb/util/env_posix.cc:238): /mnt/d0/yb-data/master/data/rocksdb/table-sys.catalog.uuid/tablet-00000000000000000000000000000000.intents/000020.sst: No such file or directory W1024 16:04:31.768445 40677 env.cc:87] Failed to cleanup /mnt/d0/yb-data/master/data/rocksdb/table-sys.catalog.uuid/tablet-00000000000000000000000000000000.intents/000020.sst.sblock.0: IO error (yb/rocksdb/util/env_posix.cc:238): /mnt/d0/yb-data/master/data/rocksdb/table-sys.catalog.uuid/tablet-00000000000000000000000000000000.intents/000020.sst.sblock.0: No such file or directory W1024 16:05:01.278363 41122 async_rpc_tasks.cc:646] RunLeaderElection RPC for tablet 2bd464b527b94454bb0f863ee02543f0 (idx_arr [id=00004339000030008000000000004340]) on TS=15a5ae821602407489bb04b90fc4dc97 (task=0x000000000e69c1d8, state=kRunning): RunLeaderElection RPC for tablet 2bd464b527b94454bb0f863ee02543f0 on TS 15a5ae821602407489bb04b90fc4dc97 failed: Illegal state (yb/tserver/service_util.cc:253): Tablet 2bd464b527b94454bb0f863ee02543f0 not RUNNING: BOOTSTRAPPING (tablet server error 12) (raft group state error 0) W1024 16:05:28.379812 35356 catalog_manager.cc:10037] Expected replicas 3 but found 1 for tablet 701e889453b94a17a25c8b921bb9147d: tablet_id: ""701e889453b94a17a25c8b921bb9147d"" replicas { ts_info { permanent_uuid: ""fe4713f8f98149fbab9af307182b4646"" private_rpc_addresses { host: ""10.9.128.67"" port: 9100 } cloud_info { placement_cloud: ""aws"" placement_region: ""us-west-2"" placement_zone: ""us-west-2b"" } placement_uuid: ""d0b4e238-b9cd-4793-b1e5-8252b2a90ddf"" capabilities: 2189743739 capabilities: 1427296937 capabilities: 2980225056 } role: LEADER member_type: VOTER } stale: false partition { partition_key_start: ""j\251"" partition_key_end: ""\177\376"" } table_id: ""f1c4286c213b4ff7a1fb0bffc22509c2"" table_ids: ""f1c4286c213b4ff7a1fb0bffc22509c2"" split_depth: 0 expected_live_replicas: 3 expected_read_replicas: 0 split_parent_tablet_id: """" W1024 16:06:32.396144 35208 consensus_peers.cc:543] T 00000000000000000000000000000000 P a99121c9df5a442c9258ec3bb950ff83 -> Peer aa11ef40a1104d74a723c893393abfcf ([host: ""10.9.128.67"" port: 7100], []): Couldn't send request. Status: Network error (yb/util/net/socket.cc:540): recvmsg error: Connection refused (system error 111). Retrying in the next heartbeat period. Already tried 1 times. 
State: 2 ``` Full logs: https://drive.google.com/file/d/1qsHnCa9o-NKVH_L3_vcKnqIh9qc-YPdh/view?usp=sharing",1, master unreachable after upgrade chain description noticed in itest system testupgradechain aws odd upgrade start user login success refresh yb version success setup provider success updating health check interval to sec success create universe dfel success start sample workloads success upgrade software to success executing yb admin upgrade ysql success upgrade software to success executing yb admin upgrade ysql success upgrade software to integration test failed wait for task failed task with errors in failed to execute task sleepaftermasterrestartmillis sleepaftertserverrestartmillis nodeexporteruser prometheus universeuuid enableybc false installybc false ybcinstalled false encryptionatrestconfig encryptionatrestenabled false optype undefined type data key communicationports masterhttpport masterrpcport tserverhttpport tserverrpcport ybcontrollerhttpport ybcontrollerrrpcport rediss hit error waitforserver yb itest dfel type master did not respond in the set time saved server log files and keys at share jenkins workspace itest system developer logs testupgradechain aws odd upgrade success saved server log files and keys at share jenkins workspace itest system developer logs testupgradechain aws odd upgrade success saved server log files and keys at share jenkins workspace itest system developer logs testupgradechain aws odd upgrade hit error before destroying universe dfel cat home yugabyte master version metadata json returned error code destroy universe success testupgradechain aws odd upgrade end based on the master logs something seems strange async rpc tasks cc runleaderelection rpc for tablet upgrade mv index on ts task state krunning runleaderelection rpc for tablet on ts failed illegal state yb tserver service util cc tablet not running bootstrapping tablet server error raft group state error catalog manager cc expected replicas but found for tablet tablet id replicas ts info permanent uuid private rpc addresses host port cloud info placement cloud aws placement region us west placement zone us west placement uuid capabilities capabilities capabilities role leader member type voter stale false partition partition key start partition key end table id table ids split depth expected live replicas expected read replicas split parent tablet id async rpc tasks cc runleaderelection rpc for tablet pg temp on ts task state krunning runleaderelection rpc for tablet on ts failed illegal state yb tserver service util cc tablet not running not started tablet server error raft group state error async rpc tasks cc runleaderelection rpc for tablet pg temp on ts task state krunning runleaderelection rpc for tablet on ts failed illegal state yb tserver service util cc tablet not running bootstrapping tablet server error raft group state error env cc failed to cleanup mnt yb data master data rocksdb table sys catalog uuid tablet intents sst io error yb rocksdb util env posix cc mnt yb data master data rocksdb table sys catalog uuid tablet intents sst no such file or directory env cc failed to cleanup mnt yb data master data rocksdb table sys catalog uuid tablet intents sst sblock io error yb rocksdb util env posix cc mnt yb data master data rocksdb table sys catalog uuid tablet intents sst sblock no such file or directory async rpc tasks cc runleaderelection rpc for tablet idx arr on ts task state krunning runleaderelection rpc for tablet on ts failed illegal state yb tserver service util 
cc tablet not running bootstrapping tablet server error raft group state error catalog manager cc expected replicas but found for tablet tablet id replicas ts info permanent uuid private rpc addresses host port cloud info placement cloud aws placement region us west placement zone us west placement uuid capabilities capabilities capabilities role leader member type voter stale false partition partition key start j partition key end table id table ids split depth expected live replicas expected read replicas split parent tablet id consensus peers cc t p peer couldn t send request status network error yb util net socket cc recvmsg error connection refused system error retrying in the next heartbeat period already tried times state full logs ,1 4895,17949516224.0,IssuesEvent,2021-09-12 13:04:39,jasonericdavis/FF_Picker,https://api.github.com/repos/jasonericdavis/FF_Picker,opened,Save Parsed Data to Supabase,automation,"After parsing the data, save it to Supabase so that it can be retrieved later. - [ ] save player data - [ ] save parsed offensive data - [ ] save parsed defensive data",1.0,"Save Parsed Data to Supabase - After parsing the data, save it to Supabase so that it can be retrieved later. - [ ] save player data - [ ] save parsed offensive data - [ ] save parsed defensive data",1,save parsed data to supabase after parsing the data save it to supabase so that it can be retrieved later save player data save parsed offensive data save parsed defensive data,1 1397,10036716537.0,IssuesEvent,2019-07-18 11:23:53,home-assistant/home-assistant,https://api.github.com/repos/home-assistant/home-assistant,closed,automation entity name consistency,integration: automation stale waiting-for-reply,"**Home Assistant release (`hass --version`):** 0.60.0 **Python release (`python3 --version`):** official docker image: `homeassistant/home-assistant:latest` **Component/platform:** HA on docker **Description of problem:** when defining a script, the syntax is: ```yaml script: script_name: alias: Human Readable Script Name sequence: - service: light.turn_on entity_id: light.livingroom ``` this creates a `script.script_name` entity. For automations, the syntax is: ```yaml automation: - alias: Some automation name here - shows up in UI trigger: - platform: time hours: 8 minutes: 30 seconds: 00 action: - service: light.turn_on entity_id: light.livingroom ``` This creates an `automation.some_automation_name_here__shows_up_in_ui` entity **Expected:** This is not consistent. There is no way to set a more structured entity ID for automations without using alias, which is what sets the UI name. I CAN use alias, and then customize a `friendly_name`, but I think it would be better to go for consistency. I propose to allow the script syntax as well as the existing one for automation: ```yaml automation: clean_automation_id: alias: Some automation name here - shows up in UI trigger: - platform: time hours: 8 minutes: 30 seconds: 00 action: - service: light.turn_on entity_id: light.livingroom ``` this would let us have `automation.clean_automation_id` and have the alias set the UI name, just like with the script component. 
",1.0,"automation entity name consistency - **Home Assistant release (`hass --version`):** 0.60.0 **Python release (`python3 --version`):** official docker image: `homeassistant/home-assistant:latest` **Component/platform:** HA on docker **Description of problem:** when defining a script, the syntax is: ```yaml script: script_name: alias: Human Readable Script Name sequence: - service: light.turn_on entity_id: light.livingroom ``` this creates a `script.script_name` entity for automations the syntax is: ```yaml automation: - alias: Some automation name here - shows up in UI trigger: - platform: time hours: 8 minutes: 30 seconds: 00 action: - service: light.turn_on entity_id: light.livingroom ``` This creates a `automation.some_automation_name_here__shows_up_in_ui` entity **Expected:** This is not consistent. there is no way to set a more structured entity ID for automations without using alias, which is what sets the UI name. I CAN use alias, and then customize a `friendly_name`, but I think it would be better to go for consistency. I propose to allow the script syntax as well as the existing one for automation: ```yaml automation: clean_automation_id: alias: Some automation name here - shows up in UI trigger: - platform: time hours: 8 minutes: 30 seconds: 00 action: - service: light.turn_on entity_id: light.livingroom ``` this would let us have `automation.clean_automation_id` and have the alias set the UI name, just like with the script component. ",1,automation entity name consistency home assistant release hass version python release version official docker image homeassistant home assistant latest component platform ha on docker description of problem when defining a script the syntax is yaml script script name alias human readable script name sequence service light turn on entity id light livingroom this creates a script script name entity for automations the syntax is yaml automation alias some automation name here shows up in ui trigger platform time hours minutes seconds action service light turn on entity id light livingroom this creates a automation some automation name here shows up in ui entity expected this is not consistent there is no way to set a more structured entity id for automations without using alias which is what sets the ui name i can use alias and then customize a friendly name but i think it would be better to go for consistency i propose to allow the script syntax as well as the existing one for automation yaml automation clean automation id alias some automation name here shows up in ui trigger platform time hours minutes seconds action service light turn on entity id light livingroom this would let us have automation clean automation id and have the alias set the ui name just like with the script component ,1 36672,8056358516.0,IssuesEvent,2018-08-02 12:28:13,primefaces/primeng,https://api.github.com/repos/primefaces/primeng,closed,InputGroup buttons look compressed with Firefox,defect,An icon only button inside an inputgroup look compressed with Firefox,1.0,InputGroup buttons look compressed with Firefox - An icon only button inside an inputgroup look compressed with Firefox,0,inputgroup buttons look compressed with firefox an icon only button inside an inputgroup look compressed with firefox,0 759397,26592644454.0,IssuesEvent,2023-01-23 09:59:11,wso2/api-manager,https://api.github.com/repos/wso2/api-manager,opened,[4.2.0][Dependency Upgrade] Analyze Trivy scan reports,Type/Task Priority/Normal Component/APIM Component/MI Component/SI,"### Description This 
issue is created to track progress on trivy scan report analysis on APIM, MI and SI 4.2.0. ### Affected Component None ### Version _No response_ ### Related Issues _No response_ ### Suggested Labels _No response_",1.0,"[4.2.0][Dependency Upgrade] Analyze Trivy scan reports - ### Description This issue is created to track progress on trivy scan report analysis on APIM, MI and SI 4.2.0. ### Affected Component None ### Version _No response_ ### Related Issues _No response_ ### Suggested Labels _No response_",0, analyze trivy scan reports description this issue is created to track progress on trivy scan report analysis on apim mi and si affected component none version no response related issues no response suggested labels no response ,0 7016,24125729476.0,IssuesEvent,2022-09-21 00:02:49,o3de/o3de,https://api.github.com/repos/o3de/o3de,closed,PhysX Ball Joint Component returns a memory access violation when getting its Component Property Tree with types,kind/bug triage/accepted priority/major kind/automation sig/simulation,"**Describe the bug** When attempting to get the **Component Property Tree** from a **PhysX Ball Joint Component** a memory access violation is returned **Steps to reproduce** Steps to reproduce the behavior: 1. Create a Python Editor Test that makes a call to get the **Component Property Tree** from the **PhysX Ball Joint Component**. ``` test_entity = EditorEntity.create_editor_entity(""Test"") test_component = test_entity.add_component(""PhysX Ball Joint"") print(test_component.get_property_type_visibility()) ``` or ``` test_entity = hydra.Entity(""test"") entity.create_entity(position, [""PhysX Ball Joint""]) component = test_entity.components[0] print(hydra.get_property_tree(component) ``` 2. Run automation **Expected behavior** A property tree with paths is returned and printed to the stream **Actual behavior** A Read Access Memory exception is returned **Callstack** ``` |Editor.log| <16:01:59> (Exit) - Exception with exit code: 0xc0000005 E |Editor.log| <16:02:03> (Exit) - E:\gws\o3de\Code\Legacy\CrySystem\DebugCallStack.cpp (223) : DebugCallStack::handleException E |Editor.log| <16:02:03> (Exit) - 00007FF8EF5D0057 (KERNELBASE) : UnhandledExceptionFilter E |Editor.log| <16:02:03> (Exit) - 00007FF8F1B753B0 (ntdll) : memset E |Editor.log| <16:02:03> (Exit) - 00007FF8F1B5C766 (ntdll) : _C_specific_handler E |Editor.log| <16:02:03> (Exit) - 00007FF8F1B7229F (ntdll) : _chkstk E |Editor.log| <16:02:03> (Exit) - 00007FF8F1B21454 (ntdll) : RtlRaiseException E |Editor.log| <16:02:03> (Exit) - 00007FF8F1B70DCE (ntdll) : KiUserExceptionDispatcher E |Editor.log| <16:02:03> (Exit) - E:\gws\o3de\Gems\PhysX\Code\Editor\EditorJointConfiguration.cpp (337) : PhysX::EditorJointConfig::IsInComponentMode E |Editor.log| <16:02:03> () - 00007FF8EF5D0057 (KERNELBASE) : UnhandledExceptionFilter E |Editor.log| <16:02:03> () - 00007FF8F1B753B0 (ntdll) : memset E |Editor.log| <16:02:03> () - 00007FF8F1B5C766 (ntdll) : _C_specific_handler E |Editor.log| <16:02:03> () - 00007FF8F1B7229F (ntdll) : _chkstk E |Editor.log| <16:02:03> () - 00007FF8F1B21454 (ntdll) : RtlRaiseException E |Editor.log| <16:02:03> () - 00007FF8F1B70DCE (ntdll) : KiUserExceptionDispatcher E |Editor.log| <16:02:03> () - E:\gws\o3de\Gems\PhysX\Code\Editor\EditorJointConfiguration.cpp (337) : PhysX::EditorJointConfig::IsInComponentMode ``` ",1.0,"PhysX Ball Joint Component returns a memory access violation when getting its Component Property Tree with types - **Describe the bug** When attempting to get the **Component Property 
Tree** from a **PhysX Ball Joint Component** a memory access violation is returned **Steps to reproduce** Steps to reproduce the behavior: 1. Create a Python Editor Test that makes a call to get the **Component Property Tree** from the **PhysX Ball Joint Component**. ``` test_entity = EditorEntity.create_editor_entity(""Test"") test_component = test_entity.add_component(""PhysX Ball Joint"") print(test_component.get_property_type_visibility()) ``` or ``` test_entity = hydra.Entity(""test"") entity.create_entity(position, [""PhysX Ball Joint""]) component = test_entity.components[0] print(hydra.get_property_tree(component) ``` 2. Run automation **Expected behavior** A property tree with paths is returned and printed to the stream **Actual behavior** A Read Access Memory exception is returned **Callstack** ``` |Editor.log| <16:01:59> (Exit) - Exception with exit code: 0xc0000005 E |Editor.log| <16:02:03> (Exit) - E:\gws\o3de\Code\Legacy\CrySystem\DebugCallStack.cpp (223) : DebugCallStack::handleException E |Editor.log| <16:02:03> (Exit) - 00007FF8EF5D0057 (KERNELBASE) : UnhandledExceptionFilter E |Editor.log| <16:02:03> (Exit) - 00007FF8F1B753B0 (ntdll) : memset E |Editor.log| <16:02:03> (Exit) - 00007FF8F1B5C766 (ntdll) : _C_specific_handler E |Editor.log| <16:02:03> (Exit) - 00007FF8F1B7229F (ntdll) : _chkstk E |Editor.log| <16:02:03> (Exit) - 00007FF8F1B21454 (ntdll) : RtlRaiseException E |Editor.log| <16:02:03> (Exit) - 00007FF8F1B70DCE (ntdll) : KiUserExceptionDispatcher E |Editor.log| <16:02:03> (Exit) - E:\gws\o3de\Gems\PhysX\Code\Editor\EditorJointConfiguration.cpp (337) : PhysX::EditorJointConfig::IsInComponentMode E |Editor.log| <16:02:03> () - 00007FF8EF5D0057 (KERNELBASE) : UnhandledExceptionFilter E |Editor.log| <16:02:03> () - 00007FF8F1B753B0 (ntdll) : memset E |Editor.log| <16:02:03> () - 00007FF8F1B5C766 (ntdll) : _C_specific_handler E |Editor.log| <16:02:03> () - 00007FF8F1B7229F (ntdll) : _chkstk E |Editor.log| <16:02:03> () - 00007FF8F1B21454 (ntdll) : RtlRaiseException E |Editor.log| <16:02:03> () - 00007FF8F1B70DCE (ntdll) : KiUserExceptionDispatcher E |Editor.log| <16:02:03> () - E:\gws\o3de\Gems\PhysX\Code\Editor\EditorJointConfiguration.cpp (337) : PhysX::EditorJointConfig::IsInComponentMode ``` ",1,physx ball joint component returns a memory access violation when getting its component property tree with types describe the bug when attempting to get the component property tree from a physx ball joint component a memory access violation is returned steps to reproduce steps to reproduce the behavior create a python editor test that makes a call to get the component property tree from the physx ball joint component test entity editorentity create editor entity test test component test entity add component physx ball joint print test component get property type visibility or test entity hydra entity test entity create entity position component test entity components print hydra get property tree component run automation expected behavior a property tree with paths is returned and printed to the stream actual behavior a read access memory exception is returned callstack editor log exit exception with exit code e editor log exit e gws code legacy crysystem debugcallstack cpp debugcallstack handleexception e editor log exit kernelbase unhandledexceptionfilter e editor log exit ntdll memset e editor log exit ntdll c specific handler e editor log exit ntdll chkstk e editor log exit ntdll rtlraiseexception e editor log exit ntdll kiuserexceptiondispatcher e editor log exit e 
gws gems physx code editor editorjointconfiguration cpp physx editorjointconfig isincomponentmode e editor log kernelbase unhandledexceptionfilter e editor log ntdll memset e editor log ntdll c specific handler e editor log ntdll chkstk e editor log ntdll rtlraiseexception e editor log ntdll kiuserexceptiondispatcher e editor log e gws gems physx code editor editorjointconfiguration cpp physx editorjointconfig isincomponentmode ,1 8593,27156335079.0,IssuesEvent,2023-02-17 08:11:00,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,Problem on CLICK action,TYPE: bug SYSTEM: automations STATE: Need clarification FREQUENCY: level 1 HAS WORKAROUND," ### What is your Test Scenario? I want to perform a ""CLICK"" on a button. ### What is the Current behavior? The Button is visible but the CLICK action is not performed ### What is the Expected behavior? If the click is correctly submitted the behavior is the page change ### What is your web application and your TestCafe test code? https://rent.decathlon.it/ main.js ```js export const startPage = (uri) => { fixture `Getting Started from ${uri}` .page `https://rent.decathlon.net${uri}`; }; ``` selectors.js ```js import { Selector } from 'testcafe'; let selectorByDay = {}; for(let i = 1; i < 34; i++) { selectorByDay[i] = Selector(`.days:nth-child(${i+3}) p`) } export const home = { accept_cookies: Selector('#content > app-startup > ion-content > div > app-footer > div > app-cookie-law-policy-banner > ion-toolbar > ion-grid > ion-row > ion-col:nth-child(2) > ion-button'), main_title: Selector('.sc-ion-label-md-h:nth-child(1) > h2'), place_open_modal: Selector('.ng-pristine > .native-input'), place_modal: { input: Selector('.searchbar-input'), first: Selector('.list > .in-list:nth-child(1) h2') }, sport_open_modal: Selector('.hydrated:nth-child(2) > .item-lines-none .native-input'), sport_modal: { first: Selector('.in-list.item.ion-focusable.item-label.hydrated').nth(8).find('.sc-ion-label-md-h.sc-ion-label-md-s.hydrated') }, date_open_modal: Selector('.hydrated:nth-child(3) .native-input'), date_modal: { days: selectorByDay }, submit: Selector('#content > app-startup > ion-content > div > div > div > ion-grid > ion-row > ion-col:nth-child(4) > ion-button') }; export const stores = { navigation_place: Selector('.info-item:nth-child(1)') }; ``` home-compile-form.e2e.spec.js ```js import {startPage} from ""../main""; import {home, stores} from ""../selectors""; import {Selector} from ""testcafe""; startPage('/'); test('Home test', async t => { await t // Use the assertion to check if the actual header text is equal to the expected one .expect(home.main_title.innerText).contains('Rent') .click(home.accept_cookies) .click(home.place_open_modal) .click(home.place_modal.input) .wait(500) .typeText(home.place_modal.input, 'Milano') .click(home.place_modal.first) .click(home.sport_open_modal) .wait(500) .click(home.sport_modal.first) .click(home.date_open_modal) .click(home.date_modal.days[30]) .click(home.date_modal.days[30]) .wait(1500) .expect(home.submit.exists).ok() .click(Selector('#content > app-startup > ion-content > div > div > div > ion-grid > ion-row > ion-col:nth-child(4) > ion-button')) .wait(500) .expect(stores.navigation_place.innerText).contains('MI'); }); ```
Your complete configuration file (if any): No configuration is used, only the command to run the test.
Your complete test report: ``` 1) Cannot obtain information about the node because the specified selector does not match any node in the DOM tree.  > | Selector('.info-item:nth-child(1)') Browser: Chrome 76.0.3809 / Mac OS X 10.14.5 22 | .click(home.date_modal.days[30]) 23 | .wait(1500) 24 | .expect(home.submit.exists).ok() 25 | .click(Selector('#content > app-startup > ion-content > div > div > div > ion-grid > ion-row > ion-col:nth-child(4) > ion-button')) 26 | .wait(500) > 27 | .expect(stores.navigation_place.innerText).contains('MI'); 28 |}); 29 | at contains (/Users/acando14/Sites/dktrent-front-customer/e2e/home/home-compile-form.e2e.spec.js:27:48) at test (/Users/acando14/Sites/dktrent-front-customer/e2e/home/home-compile-form.e2e.spec.js:7:1) at (/Users/acando14/Sites/dktrent-front-customer/node_modules/testcafe/src/api/wrap-test-function.js:17:28) at TestRun._executeTestFn (/Users/acando14/Sites/dktrent-front-customer/node_modules/testcafe/src/test-run/index.js:288:19) at TestRun.start (/Users/acando14/Sites/dktrent-front-customer/node_modules/testcafe/src/test-run/index.js:337:24) ```
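The report shows the last assertion failing because `.info-item:nth-child(1)` never appears, which suggests the click fires before the next page has rendered. Since TestCafe assertions retry until a timeout, one hedged adjustment (selectors as defined in the snippets above; the timeout value is illustrative) is to wait on element existence instead of a fixed `wait(500)`:

```js
// Sketch: let the existence assertion retry until the store page renders,
// rather than relying on a fixed 500 ms pause after the click.
await t
    .click(home.submit)
    .expect(stores.navigation_place.exists).ok({ timeout: 10000 })
    .expect(stores.navigation_place.innerText).contains('MI');
```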
### Steps to Reproduce: 1. Create the test suite as in the snippet reported above 2. Execute this command to run testcafe ` testcafe chrome e2e ` 3. See that the second page is not loaded, so the click action is not performed. ### Your Environment details: * testcafe version: 1.4.0 * node.js version: v10.15.2 * command-line arguments: chrome * browser name and version: 76.0.3809.100/Google Chrome * platform and version: 10.14.5 * other: I also use the TestCafe IDE but it also fails. Thank you, regards ",1.0,"Problem on CLICK action - ### What is your Test Scenario? I want to perform a ""CLICK"" on a button. ### What is the Current behavior? The Button is visible but the CLICK action is not performed ### What is the Expected behavior? If the click is correctly submitted the behavior is the page change ### What is your web application and your TestCafe test code? https://rent.decathlon.it/ main.js ```js export const startPage = (uri) => { fixture `Getting Started from ${uri}` .page `https://rent.decathlon.net${uri}`; }; ``` selectors.js ```js import { Selector } from 'testcafe'; let selectorByDay = {}; for(let i = 1; i < 34; i++) { selectorByDay[i] = Selector(`.days:nth-child(${i+3}) p`) } export const home = { accept_cookies: Selector('#content > app-startup > ion-content > div > app-footer > div > app-cookie-law-policy-banner > ion-toolbar > ion-grid > ion-row > ion-col:nth-child(2) > ion-button'), main_title: Selector('.sc-ion-label-md-h:nth-child(1) > h2'), place_open_modal: Selector('.ng-pristine > .native-input'), place_modal: { input: Selector('.searchbar-input'), first: Selector('.list > .in-list:nth-child(1) h2') }, sport_open_modal: Selector('.hydrated:nth-child(2) > .item-lines-none .native-input'), sport_modal: { first: Selector('.in-list.item.ion-focusable.item-label.hydrated').nth(8).find('.sc-ion-label-md-h.sc-ion-label-md-s.hydrated') }, date_open_modal: Selector('.hydrated:nth-child(3) .native-input'), date_modal: { days: selectorByDay }, submit: Selector('#content > app-startup > ion-content > div > div > div > ion-grid > ion-row > ion-col:nth-child(4) > ion-button') }; export const stores = { navigation_place: Selector('.info-item:nth-child(1)') }; ``` home-compile-form.e2e.spec.js ```js import {startPage} from ""../main""; import {home, stores} from ""../selectors""; import {Selector} from ""testcafe""; startPage('/'); test('Home test', async t => { await t // Use the assertion to check if the actual header text is equal to the expected one .expect(home.main_title.innerText).contains('Rent') .click(home.accept_cookies) .click(home.place_open_modal) .click(home.place_modal.input) .wait(500) .typeText(home.place_modal.input, 'Milano') .click(home.place_modal.first) .click(home.sport_open_modal) .wait(500) .click(home.sport_modal.first) .click(home.date_open_modal) .click(home.date_modal.days[30]) .click(home.date_modal.days[30]) .wait(1500) .expect(home.submit.exists).ok() .click(Selector('#content > app-startup > ion-content > div > div > div > ion-grid > ion-row > ion-col:nth-child(4) > ion-button')) .wait(500) .expect(stores.navigation_place.innerText).contains('MI'); }); ```
Your complete configuration file (if any): No configuration is used, only the command to run the test.
Your complete test report: ``` 1) Cannot obtain information about the node because the specified selector does not match any node in the DOM tree.  > | Selector('.info-item:nth-child(1)') Browser: Chrome 76.0.3809 / Mac OS X 10.14.5 22 | .click(home.date_modal.days[30]) 23 | .wait(1500) 24 | .expect(home.submit.exists).ok() 25 | .click(Selector('#content > app-startup > ion-content > div > div > div > ion-grid > ion-row > ion-col:nth-child(4) > ion-button')) 26 | .wait(500) > 27 | .expect(stores.navigation_place.innerText).contains('MI'); 28 |}); 29 | at contains (/Users/acando14/Sites/dktrent-front-customer/e2e/home/home-compile-form.e2e.spec.js:27:48) at test (/Users/acando14/Sites/dktrent-front-customer/e2e/home/home-compile-form.e2e.spec.js:7:1) at (/Users/acando14/Sites/dktrent-front-customer/node_modules/testcafe/src/api/wrap-test-function.js:17:28) at TestRun._executeTestFn (/Users/acando14/Sites/dktrent-front-customer/node_modules/testcafe/src/test-run/index.js:288:19) at TestRun.start (/Users/acando14/Sites/dktrent-front-customer/node_modules/testcafe/src/test-run/index.js:337:24) ```
### Steps to Reproduce: 1. Create the test suite as in the snippet reported above 2. Execute this command to run testcafe ` testcafe chrome e2e ` 3. See that the second page is not loaded, so the click action is not performed. ### Your Environment details: * testcafe version: 1.4.0 * node.js version: v10.15.2 * command-line arguments: chrome * browser name and version: 76.0.3809.100/Google Chrome * platform and version: 10.14.5 * other: I also use the TestCafe IDE but it also fails. Thank you, regards ",1,problem on click action what is your test scenario i want to perform a click on a button what is the current behavior the button is visible but the click action is not performed what is the expected behavior if the click is correctly submitted the behavior is the page change what is your web application and your testcafe test code main js js export const startpage uri fixture getting started from uri page selectors js js import selector from testcafe let selectorbyday for let i i i selectorbyday selector days nth child i p export const home accept cookies selector content app startup ion content div app footer div app cookie law policy banner ion toolbar ion grid ion row ion col nth child ion button main title selector sc ion label md h nth child place open modal selector ng pristine native input place modal input selector searchbar input first selector list in list nth child sport open modal selector hydrated nth child item lines none native input sport modal first selector in list item ion focusable item label hydrated nth find sc ion label md h sc ion label md s hydrated date open modal selector hydrated nth child native input date modal days selectorbyday submit selector content app startup ion content div div div ion grid ion row ion col nth child ion button export const stores navigation place selector info item nth child home compile form spec js js import startpage from main import home stores from selectors import selector from testcafe startpage test home test async t await t use the assertion to check if the actual header text is equal to the expected one expect home main title innertext contains rent click home accept cookies click home place open modal click home place modal input wait typetext home place modal input milano click home place modal first click home sport open modal wait click home sport modal first click home date open modal click home date modal days click home date modal days wait expect home submit exists ok click selector content app startup ion content div div div ion grid ion row ion col nth child ion button wait expect stores navigation place innertext contains mi your complete configuration file if any no configuration is used only the command to run the test your complete test report cannot obtain information about the node because the specified selector does not match any node in the dom tree selector info item nth child browser chrome mac os x click home date modal days wait expect home submit exists ok click selector content app startup ion content div div div ion grid ion row ion col nth child ion button wait expect stores navigation place
innertext contains mi at contains users sites dktrent front customer home home compile form spec js at test users sites dktrent front customer home home compile form spec js at users sites dktrent front customer node modules testcafe src api wrap test function js at testrun executetestfn users sites dktrent front customer node modules testcafe src test run index js at testrun start users sites dktrent front customer node modules testcafe src test run index js steps to reproduce create the test suite as the snippet reported execute this command to run testcafe testcafe chrome see that the second page is not charged so the click action is not performed your environment details testcafe version node js version command line arguments chrome browser name and version google chrome platform and version other i also use the testcafe ide but it also fail thank you regards ,1 100827,21525375479.0,IssuesEvent,2022-04-28 17:52:47,astro-projects/astro,https://api.github.com/repos/astro-projects/astro,reopened,Refactor Module Structure (including MyPy and Unit tests),code-quality,"We should refactor our module structure so we don't dump many things in the `utils` directory. We should also follow the same directory structure for the `tests` directory so navigating the `tests` directory is predictable. Proposed Structure (details in [this link](https://docs.google.com/document/d/1coJwrqm3wbCcgop_9fQSiQDdZOMLZ9r7_ENDOyO8JrQ/edit?usp=sharing)): ``` astro/ ├── __init__.py ├── constants.py # mostly internal, but also exposes what we support ├── settings.py # input from users to define the lib behavior ├── exceptions.py # this would potentially be exposed to the end-user ├── databases/ # use `importlib.import_module` to avoid MissingPackage │ ├── __init__.py │ ├── base.py │ ├── bigquery.py │ ├── postgres.py │ ├── sqlite.py │ └── snowflake.py |── files/ ├── types │ ├── __init__.py │ ├── base.py │ ├── csv.py │ ├── json.py │ ├── … │ └── parquet.py ├── locations │ ├── __init__.py │ ├── base.py │ ├── local.py │ ├── s3.py # would include de credentials │ ├── http.py │ └── gcs.py # would include de credentials ├── sql/ ├── __init__.py ├── base.py ├── tables.py ├── load/ │ ├── __init__.py │ ├── load_file.py │ └── load_dataframe.py ├── export/ │ ├── __init__.py │ ├── export_file.py │ └── export_dataframe.py ├── transform/ │ ├── __init__.py │ ├── merge.py # will include both merge and append │ ├── render.py │ ├── transform.py │ └── truncate.py └── check/ ├── __init__.py ├── aggregate.py ├── boolean.py └── stats.py ``` The following items should also be covered when refactoring (Moved to this ticket from https://github.com/astro-projects/astro/issues/290 because of overlap): - Replace string constants by grouping them into Enums or Dataclasses - Single `get_connection` method given a `conn_id` that replaces `BaseHook.get_connection(table.conn_id)` - Remove shadowing of built-in names - Usage of `isinstance` instead of `type(obj)` for checking if object is instance of the class and it's subclass. We don't need to change anything if we really want to only check if object is equal to a class and not it's inherited class. Ref: https://stackoverflow.com/questions/1549801/what-are-the-differences-between-type-and-isinstance - Should we replace `if/else` with Classes for each Database (& a base/metaclass Database for interface), File similar to SqlAlchemy and have attributes on them for load, exporting etc so the code is extensible. 
This way we can have `merge`, `append` and other methods on the Base class and throw a `NotImplementedError` and let classes inheriting the base class add DB specific logic - Replace usage of `from distutils import log as logger` with `import logging` and creating logger - Single way of running queries if possible, instead of `hook.run` vs `connection.execute` (SQLAlchemy). And in general decide on single usage hook vs SQLAlchemy ",1.0,"Refactor Module Structure (including MyPy and Unit tests) - We should refactor our module structure so we don't dump many things in the `utils` directory. We should also follow the same directory structure for the `tests` directory so navigating the `tests` directory is predictable. Proposed Structure (details in [this link](https://docs.google.com/document/d/1coJwrqm3wbCcgop_9fQSiQDdZOMLZ9r7_ENDOyO8JrQ/edit?usp=sharing)): ``` astro/ ├── __init__.py ├── constants.py # mostly internal, but also exposes what we support ├── settings.py # input from users to define the lib behavior ├── exceptions.py # this would potentially be exposed to the end-user ├── databases/ # use `importlib.import_module` to avoid MissingPackage │ ├── __init__.py │ ├── base.py │ ├── bigquery.py │ ├── postgres.py │ ├── sqlite.py │ └── snowflake.py |── files/ ├── types │ ├── __init__.py │ ├── base.py │ ├── csv.py │ ├── json.py │ ├── … │ └── parquet.py ├── locations │ ├── __init__.py │ ├── base.py │ ├── local.py │ ├── s3.py # would include de credentials │ ├── http.py │ └── gcs.py # would include de credentials ├── sql/ ├── __init__.py ├── base.py ├── tables.py ├── load/ │ ├── __init__.py │ ├── load_file.py │ └── load_dataframe.py ├── export/ │ ├── __init__.py │ ├── export_file.py │ └── export_dataframe.py ├── transform/ │ ├── __init__.py │ ├── merge.py # will include both merge and append │ ├── render.py │ ├── transform.py │ └── truncate.py └── check/ ├── __init__.py ├── aggregate.py ├── boolean.py └── stats.py ``` The following items should also be covered when refactoring (Moved to this ticket from https://github.com/astro-projects/astro/issues/290 because of overlap): - Replace string constants by grouping them into Enums or Dataclasses - Single `get_connection` method given a `conn_id` that replaces `BaseHook.get_connection(table.conn_id)` - Remove shadowing of built-in names - Usage of `isinstance` instead of `type(obj)` for checking if object is instance of the class and it's subclass. We don't need to change anything if we really want to only check if object is equal to a class and not it's inherited class. Ref: https://stackoverflow.com/questions/1549801/what-are-the-differences-between-type-and-isinstance - Should we replace `if/else` with Classes for each Database (& a base/metaclass Database for interface), File similar to SqlAlchemy and have attributes on them for load, exporting etc so the code is extensible. This way we can have `merge`, `append` and other methods on the Base class and throw a `NotImplementedError` and let classes inheriting the base class add DB specific logic - Replace usage of `from distutils import log as logger` with `import logging` and creating logger - Single way of running queries if possible, instead of `hook.run` vs `connection.execute` (SQLAlchemy). 
And in general decide on single usage hook vs SQLAlchemy ",0,refactor module structure including mypy and unit tests we should refactor our module structure so we don t dump many things in the utils directory we should also follow the same directory structure for the tests directory so navigating the tests directory is predictable proposed structure details in astro ├── init py ├── constants py mostly internal but also exposes what we support ├── settings py input from users to define the lib behavior ├── exceptions py this would potentially be exposed to the end user ├── databases use importlib import module to avoid missingpackage │ ├── init py │ ├── base py │ ├── bigquery py │ ├── postgres py │ ├── sqlite py │ └── snowflake py ── files ├── types │ ├── init py │ ├── base py │ ├── csv py │ ├── json py │ ├── … │ └── parquet py ├── locations │ ├── init py │ ├── base py │ ├── local py │ ├── py would include de credentials │ ├── http py │ └── gcs py would include de credentials ├── sql ├── init py ├── base py ├── tables py ├── load │ ├── init py │ ├── load file py │ └── load dataframe py ├── export │ ├── init py │ ├── export file py │ └── export dataframe py ├── transform │ ├── init py │ ├── merge py will include both merge and append │ ├── render py │ ├── transform py │ └── truncate py └── check ├── init py ├── aggregate py ├── boolean py └── stats py the following items should also be covered when refactoring moved to this ticket from because of overlap replace string constants by grouping them into enums or dataclasses single get connection method given a conn id that replaces basehook get connection table conn id remove shadowing of built in names usage of isinstance instead of type obj for checking if object is instance of the class and it s subclass we don t need to change anything if we really want to only check if object is equal to a class and not it s inherited class ref should we replace if else with classes for each database a base metaclass database for interface file similar to sqlalchemy and have attributes on them for load exporting etc so the code is extensible this way we can have merge append and other methods on the base class and throw a notimplementederror and let classes inheriting the base class add db specific logic replace usage of from distutils import log as logger with import logging and creating logger single way of running queries if possible instead of hook run vs connection execute sqlalchemy and in general decide on single usage hook vs sqlalchemy ,0 42519,17173361893.0,IssuesEvent,2021-07-15 08:22:34,PrestaShop/PrestaShop,https://api.github.com/repos/PrestaShop/PrestaShop,closed,Module without description in webservice ressource break ressources listing on /api/,1.7.6.4 Bug Modules NMI WS Webservice," #### Describe the bug If installed modules use the hook `hookAddWebserviceResources` without providing ""description"" the list on the root /api/ path fails with `Undefined index: description` errors. 
For instance: ```php /** * @param array $params * @return array */ public function hookAddWebserviceResources($params) { return array( 'colissimo_custom_products' => array( 'class' => 'ColissimoCustomProduct', 'forbidden_method' => array('HEAD', 'POST', 'PUT', 'DELETE'), ), 'colissimo_custom_categories' => array( 'class' => 'ColissimoCustomCategory', 'forbidden_method' => array('HEAD', 'POST', 'PUT', 'DELETE'), ), 'colissimo_ace' => array( 'class' => 'ColissimoACE', 'forbidden_method' => array('GET', 'HEAD', 'PUT', 'DELETE'), ), ); } ``` The error is raised [here](https://github.com/PrestaShop/PrestaShop/blob/57894f9e9acb1f75088ec4ae63aaf6c468eac4b5/classes/webservice/WebserviceOutputBuilder.php#L331) #### Expected behavior Being able to list webservice resources even if no description has been provided. #### Steps to Reproduce Steps to reproduce the behavior: 1. Install the official Colissimo module, for instance 2. Create a webservice API key 3. Open 'https://yourawesomeshop.com/api/' 4. See error ![image](https://user-images.githubusercontent.com/4567538/123407152-ec914780-d5ab-11eb-8908-1e671c127f49.png) #### Additional information * PrestaShop version: 1.7.6.4 * PHP version: 7.2 ",1.0,"Module without description in webservice ressource break ressources listing on /api/ - #### Describe the bug If installed modules use the hook `hookAddWebserviceResources` without providing ""description"" the list on the root /api/ path fails with `Undefined index: description` errors. For instance: ```php /** * @param array $params * @return array */ public function hookAddWebserviceResources($params) { return array( 'colissimo_custom_products' => array( 'class' => 'ColissimoCustomProduct', 'forbidden_method' => array('HEAD', 'POST', 'PUT', 'DELETE'), ), 'colissimo_custom_categories' => array( 'class' => 'ColissimoCustomCategory', 'forbidden_method' => array('HEAD', 'POST', 'PUT', 'DELETE'), ), 'colissimo_ace' => array( 'class' => 'ColissimoACE', 'forbidden_method' => array('GET', 'HEAD', 'PUT', 'DELETE'), ), ); } ``` The error is raised [here](https://github.com/PrestaShop/PrestaShop/blob/57894f9e9acb1f75088ec4ae63aaf6c468eac4b5/classes/webservice/WebserviceOutputBuilder.php#L331) #### Expected behavior Being able to list webservice resources even if no description has been provided. #### Steps to Reproduce Steps to reproduce the behavior: 1. Install the official Colissimo module, for instance 2. Create a webservice API key 3. Open 'https://yourawesomeshop.com/api/' 4. 
See error ![image](https://user-images.githubusercontent.com/4567538/123407152-ec914780-d5ab-11eb-8908-1e671c127f49.png) #### Additional information * PrestaShop version: 1.7.6.4 * PHP version: 7.2 ",0,module without description in webservice ressource break ressources listing on api do not disclose security issues here contact security prestashop com instead describe the bug if installed modules use the hook hookaddwebserviceresources without providing description the list on the root api path fails with undefined index description errors for instance php param array params return array public function hookaddwebserviceresources params return array colissimo custom products array class colissimocustomproduct forbidden method array head post put delete colissimo custom categories array class colissimocustomcategory forbidden method array head post put delete colissimo ace array class colissimoace forbidden method array get head put delete the error is raised expected behavior being able to list webservice ressources even if no description as been provided steps to reproduce steps to reproduce the behavior install colissimo offical module for instance create a webservice api key open see error additional information prestashop version php version ,0 133,4059057478.0,IssuesEvent,2016-05-25 08:08:22,eclipse/smarthome,https://api.github.com/repos/eclipse/smarthome,closed,RuleEngineCallbackImpl throws NPE during unit tests,Automation bug,"A unit test fails because of an NPE, see https://hudson.eclipse.org/smarthome/job/SmartHomeDistribution-Nightly/1003/testReport/junit/org.eclipse.smarthome.automation.module.timer.internal/RuntimeRuleTest/assert_that_timerTrigger_works/: ``` 16:09:49.016 [DefaultQuartzScheduler_Worker-2] ERROR org.quartz.core.JobRunShell - Job DEFAULT.TimerTrigger52848043-4d61-40f7-96ca-ed4eac51aa00 threw an unhandled Exception: java.lang.NullPointerException: null at org.eclipse.smarthome.automation.core.internal.RuleEngineCallbackImpl.triggered(RuleEngineCallbackImpl.java:44) ~[na:na] at org.eclipse.smarthome.automation.module.timer.handler.CallbackJob.execute(CallbackJob.java:53) ~[na:na] at org.quartz.core.JobRunShell.run(JobRunShell.java:202) ~[quartz-2.2.1.jar:na] at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573) [quartz-2.2.1.jar:na] 16:09:49.019 [DefaultQuartzScheduler_Worker-2] ERROR org.quartz.core.ErrorLogger - Job (DEFAULT.TimerTrigger52848043-4d61-40f7-96ca-ed4eac51aa00 threw an exception. org.quartz.SchedulerException: Job threw an unhandled exception. at org.quartz.core.JobRunShell.run(JobRunShell.java:213) ~[quartz-2.2.1.jar:na] at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573) [quartz-2.2.1.jar:na] Caused by: java.lang.NullPointerException: null at org.eclipse.smarthome.automation.core.internal.RuleEngineCallbackImpl.triggered(RuleEngineCallbackImpl.java:44) ~[na:na] at org.eclipse.smarthome.automation.module.timer.handler.CallbackJob.execute(CallbackJob.java:53) ~[na:na] at org.quartz.core.JobRunShell.run(JobRunShell.java:202) ~[quartz-2.2.1.jar:na] ... 
1 common frames omitted ```",1.0,"RuleEngineCallbackImpl throws NPE during unit tests - A unit test fails because of an NPE, see https://hudson.eclipse.org/smarthome/job/SmartHomeDistribution-Nightly/1003/testReport/junit/org.eclipse.smarthome.automation.module.timer.internal/RuntimeRuleTest/assert_that_timerTrigger_works/: ``` 16:09:49.016 [DefaultQuartzScheduler_Worker-2] ERROR org.quartz.core.JobRunShell - Job DEFAULT.TimerTrigger52848043-4d61-40f7-96ca-ed4eac51aa00 threw an unhandled Exception: java.lang.NullPointerException: null at org.eclipse.smarthome.automation.core.internal.RuleEngineCallbackImpl.triggered(RuleEngineCallbackImpl.java:44) ~[na:na] at org.eclipse.smarthome.automation.module.timer.handler.CallbackJob.execute(CallbackJob.java:53) ~[na:na] at org.quartz.core.JobRunShell.run(JobRunShell.java:202) ~[quartz-2.2.1.jar:na] at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573) [quartz-2.2.1.jar:na] 16:09:49.019 [DefaultQuartzScheduler_Worker-2] ERROR org.quartz.core.ErrorLogger - Job (DEFAULT.TimerTrigger52848043-4d61-40f7-96ca-ed4eac51aa00 threw an exception. org.quartz.SchedulerException: Job threw an unhandled exception. at org.quartz.core.JobRunShell.run(JobRunShell.java:213) ~[quartz-2.2.1.jar:na] at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573) [quartz-2.2.1.jar:na] Caused by: java.lang.NullPointerException: null at org.eclipse.smarthome.automation.core.internal.RuleEngineCallbackImpl.triggered(RuleEngineCallbackImpl.java:44) ~[na:na] at org.eclipse.smarthome.automation.module.timer.handler.CallbackJob.execute(CallbackJob.java:53) ~[na:na] at org.quartz.core.JobRunShell.run(JobRunShell.java:202) ~[quartz-2.2.1.jar:na] ... 1 common frames omitted ```",1,ruleenginecallbackimpl throws npe during unit tests a unit test fails because of an npe see error org quartz core jobrunshell job default threw an unhandled exception java lang nullpointerexception null at org eclipse smarthome automation core internal ruleenginecallbackimpl triggered ruleenginecallbackimpl java at org eclipse smarthome automation module timer handler callbackjob execute callbackjob java at org quartz core jobrunshell run jobrunshell java at org quartz simpl simplethreadpool workerthread run simplethreadpool java error org quartz core errorlogger job default threw an exception org quartz schedulerexception job threw an unhandled exception at org quartz core jobrunshell run jobrunshell java at org quartz simpl simplethreadpool workerthread run simplethreadpool java caused by java lang nullpointerexception null at org eclipse smarthome automation core internal ruleenginecallbackimpl triggered ruleenginecallbackimpl java at org eclipse smarthome automation module timer handler callbackjob execute callbackjob java at org quartz core jobrunshell run jobrunshell java common frames omitted ,1 183765,14952451375.0,IssuesEvent,2021-01-26 15:33:10,opendistro-for-elasticsearch/data-prepper,https://api.github.com/repos/opendistro-for-elasticsearch/data-prepper,opened,Update setup/deployment instructions,documentation,"**Is your feature request related to a problem? Please describe.** We have documentation on the overall project structure and how to setup a dev environment, however we still need guidelines on how to configure and deploy a Data Prepper instance for the trace analytics usecase. 
**Describe the solution you'd like** We'll need to provide generic setup instructions in the [existing setup readme](https://github.com/opendistro-for-elasticsearch/data-prepper/blob/master/docs/readme/trace_setup.md). **Additional context** Should be generic configuration and deployment steps w/ the container. #284 is already covering deployment via CloudFormation. ",1.0,"Update setup/deployment instructions - **Is your feature request related to a problem? Please describe.** We have documentation on the overall project structure and how to setup a dev environment, however we still need guidelines on how to configure and deploy a Data Prepper instance for the trace analytics usecase. **Describe the solution you'd like** We'll need to provide generic setup instructions in the [existing setup readme](https://github.com/opendistro-for-elasticsearch/data-prepper/blob/master/docs/readme/trace_setup.md). **Additional context** Should be generic configuration and deployment steps w/ the container. #284 is already covering deployment via CloudFormation. ",0,update setup deployment instructions is your feature request related to a problem please describe we have documentation on the overall project structure and how to setup a dev environment however we still need guidelines on how to configure and deploy a data prepper instance for the trace analytics usecase describe the solution you d like we ll need to provide generic setup instructions in the additional context should be generic configuration and deployment steps w the container is already covering deployment via cloudformation ,0 5551,20049084672.0,IssuesEvent,2022-02-03 02:33:08,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,opened,Resolve Issue with Cypress Coverage Plugin + JWKS URL Test,automation,"It appears the recently implemented code coverage plugin is causing an issue with the JWKS URL Client Credential test. This appears to be the case because all tests pass when the plugin is disabled, but this one test fails when the plugin is enabled. The error occurs with the `Generates, saves keys` test inside `07-jwks-url-gen-keys-access-rqst.ts`. It may be due to the asynchronous nature of the `keyStore.generate` function, since async functions can cause unexpected behaviour with Cypress, though more investigation is needed. Here is the message upon failure: ``` CypressError: Cypress detected that you returned a promise from a command while also invoking one or more cy commands in that promise. The command that returned the promise was: > `cy.readFile()` The cy command you invoked inside the promise was: > `cy.log()` Because Cypress commands are already promise-like, you don't need to wrap them or return your own promise. Cypress will resolve your command with whatever the final Cypress command yields. The reason this is an error instead of a warning is because Cypress internally queues commands serially whereas Promises execute as soon as they are invoked. Attempting to reconcile this would prevent Cypress from ever resolving. https://on.cypress.io/returning-promise-and-commands-in-another-command Because this error occurred during a `after each` hook we are skipping all of the remaining tests. at $Cy.cy. 
[as log] (http://oauth2proxy.localtest.me:4180/__cypress/runner/cypress_runner.js:151788:23) From Your Spec Code: at Context.eval (http://oauth2proxy.localtest.me:4180/__cypress/tests?p=cypress/support/index.ts:38651:12) ------------- let keyStore = jose.JWK.createKeyStore(); keyStore.generate('RSA', 2048, { alg: 'RS256', use: 'sig' }).then((result) => { cy.saveState('jwksurlkeys', JSON.stringify(keyStore.toJSON(true), null, ' ')); }); ```",1.0,"Resolve Issue with Cypress Coverage Plugin + JWKS URL Test - It appears the recently implemented code coverage plugin is causing an issue with the JWKS URL Client Credential test. This appears to be the case because all tests pass when the plugin is disabled, but this one test fails when the plugin is enabled. The error occurs with the `Generates, saves keys` test inside `07-jwks-url-gen-keys-access-rqst.ts`. It may be due to the asynchronous nature of the `keyStore.generate` function, since async functions can cause unexpected behaviour with Cypress, though more investigation is needed. Here is the message upon failure: ``` CypressError: Cypress detected that you returned a promise from a command while also invoking one or more cy commands in that promise. The command that returned the promise was: > `cy.readFile()` The cy command you invoked inside the promise was: > `cy.log()` Because Cypress commands are already promise-like, you don't need to wrap them or return your own promise. Cypress will resolve your command with whatever the final Cypress command yields. The reason this is an error instead of a warning is because Cypress internally queues commands serially whereas Promises execute as soon as they are invoked. Attempting to reconcile this would prevent Cypress from ever resolving. https://on.cypress.io/returning-promise-and-commands-in-another-command Because this error occurred during a `after each` hook we are skipping all of the remaining tests. at $Cy.cy. 
[as log] (http://oauth2proxy.localtest.me:4180/__cypress/runner/cypress_runner.js:151788:23) From Your Spec Code: at Context.eval (http://oauth2proxy.localtest.me:4180/__cypress/tests?p=cypress/support/index.ts:38651:12) ------------- let keyStore = jose.JWK.createKeyStore(); keyStore.generate('RSA', 2048, { alg: 'RS256', use: 'sig' }).then((result) => { cy.saveState('jwksurlkeys', JSON.stringify(keyStore.toJSON(true), null, ' ')); }); ```",1,resolve issue with cypress coverage plugin jwks url test it appears the recently implemented code coverage plugin is causing an issue with the jwks url client credential test this appears to be the case because all tests pass when the plugin is disabled but this one test fails when the plugin is enabled the error occurs with the generates saves keys test inside jwks url gen keys access rqst ts it may be due to the asynchronous nature of the keystore generate function since async functions can cause unexpected behaviour with cypress though more investigation is needed here is the message upon failure cypresserror cypress detected that you returned a promise from a command while also invoking one or more cy commands in that promise the command that returned the promise was cy readfile the cy command you invoked inside the promise was cy log because cypress commands are already promise like you don t need to wrap them or return your own promise cypress will resolve your command with whatever the final cypress command yields the reason this is an error instead of a warning is because cypress internally queues commands serially whereas promises execute as soon as they are invoked attempting to reconcile this would prevent cypress from ever resolving because this error occurred during a after each hook we are skipping all of the remaining tests at cy cy from your spec code at context eval let keystore jose jwk createkeystore keystore generate rsa alg use sig then result cy savestate jwksurlkeys json stringify keystore tojson true null ,1 178238,29522166653.0,IssuesEvent,2023-06-05 03:35:31,decidim/decidim,https://api.github.com/repos/decidim/decidim,closed,Provide the isolated Abide functionality to the forms,contract: redesign,"User should not see an error before they have done something that led to an error. See [more](https://www.npmjs.com/package/vanilla-abide) Foundation was the original handler for such logic, see [original discussion](https://github.com/decidim/decidim/pull/9469#discussion_r947676152) Originally posted in https://github.com/decidim/decidim/issues/9674",1.0,"Provide the isolated Abide functionality to the forms - User should not see an error before they have done something that led to an error. See [more](https://www.npmjs.com/package/vanilla-abide) Foundation was the original handler for such logic, see [original discussion](https://github.com/decidim/decidim/pull/9469#discussion_r947676152) Originally posted in https://github.com/decidim/decidim/issues/9674",0,provide the isolated abide functionality to the forms user should not see an error before they have done something that led to an error see foundation was the original handler for such logic see originally posted in ,0 103793,4185880805.0,IssuesEvent,2016-06-23 12:48:20,LearningLocker/learninglocker,https://api.github.com/repos/LearningLocker/learninglocker,closed,Statement button in report generates a MongoDB error,priority:low status:unconfirmed type:bug,"**Version** 1.12.1 **Steps to reproduce the bug** 1. Generate tests statements using the tincanapi.com generator. 2. 
Start working with reports. 3. Press the statements button in the report. **Expected behaviour** The display should be a list of statements matching the report filters. **Actual behaviour** The display is an error screen (with details given below). ``` Pressing the Statements button results in MongoCollection::find(): expects parameter 2 to be an array or object, null given. /var/www/learninglocker/vendor/jenssegers/mongodb/src/Jenssegers/Mongodb/Collection.php line 59 - $result = call_user_func_array(array($this->collection, $method), $parameters); ``` **Server information** Ubuntu 14.04 64 bit. Mongo Community 3.2.7 Mongo PHP driver 1.6.14 **Client information** OS: Windows 7 Browser: Firefox version 46",1.0,"Statement button in report generates a MongoDB error - **Version** 1.12.1 **Steps to reproduce the bug** 1. Generate tests statements using the tincanapi.com generator. 2. Start working with reports. 3. Press the statements button in the report. **Expected behaviour** The display should be a list of statements matching the report filters. **Actual behaviour** The display is an error screen (with details given below). ``` Pressing the Statements button results in MongoCollection::find(): expects parameter 2 to be an array or object, null given. /var/www/learninglocker/vendor/jenssegers/mongodb/src/Jenssegers/Mongodb/Collection.php line 59 - $result = call_user_func_array(array($this->collection, $method), $parameters); ``` **Server information** Ubuntu 14.04 64 bit. Mongo Community 3.2.7 Mongo PHP driver 1.6.14 **Client information** OS: Windows 7 Browser: Firefox version 46",0,statement button in report generates a mongodb error version steps to reproduce the bug generate tests statements using the tincanapi com generator start working with reports press the statements button in the report expected behaviour the display should be a list of statements matching the report filters actual behaviour the display is an error screen with details given below pressing the statements button results in mongocollection find expects parameter to be an array or object null given var www learninglocker vendor jenssegers mongodb src jenssegers mongodb collection php line result call user func array array this collection method parameters server information ubuntu bit mongo community mongo php driver client information os windows browser firefox version ,0 1398,10036847124.0,IssuesEvent,2019-07-18 11:45:16,redhat-performance/quads,https://api.github.com/repos/redhat-performance/quads,closed,Build in automated public vlan culling,automation critical network vlan,"When systems that are utilizing the automated public vlan allocation for assignments (`quads-cli --define-cloud cloud0X --vlan $vlanid`) we currently do not have a culling mechanism to remove unused vlan associations to cloud assignments when they expire. We can utilize the same logic used for `provisioned` flag at the cloud metadata level for this.",1.0,"Build in automated public vlan culling - When systems that are utilizing the automated public vlan allocation for assignments (`quads-cli --define-cloud cloud0X --vlan $vlanid`) we currently do not have a culling mechanism to remove unused vlan associations to cloud assignments when they expire. 
We can utilize the same logic used for `provisioned` flag at the cloud metadata level for this.",1,build in automated public vlan culling when systems that are utilizing the automated public vlan allocation for assignments quads cli define cloud vlan vlanid we currently do not have a culling mechanism to remove unused vlan associations to cloud assignments when they expire we can utilize the same logic used for provisioned flag at the cloud metadata level for this ,1 50030,20999210645.0,IssuesEvent,2022-03-29 15:51:47,angular/angular,https://api.github.com/repos/angular/angular,closed,Non-indicated compile time (type) error when using generic component in template,comp: language-service comp: compiler," # 🐞 bug report ### Is this a regression? I don't know ### Description A clear and concise description of the problem... I have a generic component that looks something like this ```typescript export class SearchListingComponent implements OnInit { @Input() type!: T; page$!: Observable[]>; } ``` where the type `Names` is in this case just the String Literal Union `'user' | 'group' | 'servicePrincipal'` The problem is that, when I try to do the following in a template, the compiler correctly errors, but this behavior does not get indicated in the editor. ```html ``` In contrast, if the component or rather the input type is not generic, it works as expected ```typescript export class SearchListingComponent implements OnInit { @Input() type!: Names; page$!: Observable[]>; ``` ```html ``` Unfortunately, even when it compiles, the component also gets initialized with `any` instead of the correct type, thus rendering the type annotation on the exposed observable useless, but I assume that is more likely to be an issue with the compiler not being able to infer the type correctly and has nothing to do with the language service. ## Bug Type What does this bug affect - [x] Angular Language Service VSCode extension - [] Angular Language Service server **Expected behavior** A clear and concise description of what you expected to happen. ## 🌍 Your Environment **Angular Version:**
Angular CLI: 13.2.4
Node: 14.17.5
Package Manager: npm 6.14.14
OS: win32 x64

Angular: 13.2.3
... animations, common, compiler, compiler-cli, core, forms
... platform-browser, platform-browser-dynamic, router

Package                         Version
@angular-devkit/architect       0.1302.4
@angular-devkit/build-angular   13.2.4
@angular-devkit/core            13.2.4
@angular-devkit/schematics      13.2.4
@angular/cli                    13.2.4
@schematics/angular             13.2.4
rxjs                            7.5.4
typescript                      4.5.5

Extension Version:
v13.3.0

VSCode Version:
Version: 1.65.2 (user setup)
Commit: c722ca6c7eed3d7987c0d5c3df5c45f6b15e77d1
Date: 2022-03-10T14:33:55.248Z
Electron: 13.5.2
Chromium: 91.0.4472.164
Node.js: 14.16.0
V8: 9.1.269.39-electron.0
OS: Windows_NT x64 10.0.19044

Operating System:
Microsoft Windows 10 Enterprise
Version 10.0.19044 Build 19044

Extension options:
none
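The type parameters in the two class snippets earlier in this record were evidently swallowed during extraction (a declaration along the lines of `<T extends Names>` reads as an HTML tag, which also explains the emptied `html` blocks where the failing bindings were). Below is a hedged reconstruction of the reported setup; `Result`, the component metadata, and the `ngOnInit` body are assumptions, since the original payload type and template were elided.

```typescript
import { Component, Input, OnInit } from '@angular/core';
import { Observable, of } from 'rxjs';

// `Names` comes from the report; `Result` stands in for the elided payload type.
type Names = 'user' | 'group' | 'servicePrincipal';
interface Result<T extends Names> {
  kind: T;
  id: string;
}

@Component({
  selector: 'app-search-listing',
  template: '<ng-content></ng-content>',
})
export class SearchListingComponent<T extends Names> implements OnInit {
  // With the generic parameter, a mistyped [type] binding fails the
  // template type check at build time but is reportedly not flagged in
  // the editor, and T is inferred as `any` at the instantiation site.
  @Input() type!: T;
  page$!: Observable<Result<T>[]>;

  ngOnInit(): void {
    this.page$ = of<Result<T>[]>([]);
  }
}
```

The non-generic variant from the report (`@Input() type!: Names;`) behaves as expected in both the compiler and the editor, which is what points to generic instantiation in the template type-checker as the culprit.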
**Anything else relevant?** ",1.0,"Non-indicated compile time (type) error when using generic component in template - # 🐞 bug report ### Is this a regression? I don't know ### Description A clear and concise description of the problem... I have a generic component that looks something like this ```typescript export class SearchListingComponent implements OnInit { @Input() type!: T; page$!: Observable[]>; } ``` where the type `Names` is in this case just the String Literal Union `'user' | 'group' | 'servicePrincipal'` The problem is that, when I try to do the following in a template, the compiler correctly errors, but this behavior does not get indicated in the editor. ```html ``` In contrast, if the component or rather the input type is not generic, it works as expected ```typescript export class SearchListingComponent implements OnInit { @Input() type!: Names; page$!: Observable[]>; ``` ```html ``` Unfortunately, even when it compiles, the component also gets initialized with `any` instead of the correct type, thus rendering the type annotation on the exposed observable useless, but I assume that is more likely to be an issue with the compiler not being able to infer the type correctly and has nothing to do with the language service. ## Bug Type What does this bug affect - [x] Angular Language Service VSCode extension - [] Angular Language Service server **Expected behavior** A clear and concise description of what you expected to happen. ## 🌍 Your Environment **Angular Version:**
Angular CLI: 13.2.4
Node: 14.17.5
Package Manager: npm 6.14.14
OS: win32 x64

Angular: 13.2.3
... animations, common, compiler, compiler-cli, core, forms
... platform-browser, platform-browser-dynamic, router

Package                         Version
@angular-devkit/architect       0.1302.4
@angular-devkit/build-angular   13.2.4
@angular-devkit/core            13.2.4
@angular-devkit/schematics      13.2.4
@angular/cli                    13.2.4
@schematics/angular             13.2.4
rxjs                            7.5.4
typescript                      4.5.5

Extension Version:
v13.3.0

VSCode Version:
Version: 1.65.2 (user setup)
Commit: c722ca6c7eed3d7987c0d5c3df5c45f6b15e77d1
Date: 2022-03-10T14:33:55.248Z
Electron: 13.5.2
Chromium: 91.0.4472.164
Node.js: 14.16.0
V8: 9.1.269.39-electron.0
OS: Windows_NT x64 10.0.19044

Operating System:
Microsoft Windows 10 Enterprise
Version 10.0.19044 Build 19044

Extension options:
none
**Anything else relevant?** ",0,non indicated compile time type error when using generic component in template 🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅 oh hi there 😄 to expedite issue processing please search open and closed issues before submitting a new one existing issues often contain information about workarounds resolution or progress updates 🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅 🐞 bug report is this a regression i don t know description a clear and concise description of the problem i have a generic component that looks something like this typescript export class searchlistingcomponent implements oninit input type t page observable where the type names is in this case just the string literal union user group serviceprincipal the problem is that when i try to do the following in a template the compiler correctly errors but this behavior does not get indicated in the editor html in contrast if the component or rather the input type is not generic it works as expected typescript export class searchlistingcomponent implements oninit input type names page observable html unfortunately even when it compiles the component also gets initialized with any instead of the correct type thus rendering the type annotation on the exposed observable useless but i assume that is more likely to be an issue with the compiler not being able to infer the type correctly and has nothing to do with the language service bug type what does this bug affect angular language service vscode extension angular language service server expected behavior a clear and concise description of what you expected to happen 🌍 your environment angular version angular cli node package manager npm os angular animations common compiler compiler cli core forms platform browser platform browser dynamic router package version angular devkit architect angular devkit build angular angular devkit core angular devkit schematics angular cli schematics angular rxjs typescript extension version vscode version version user setup commit date electron chromium node js electron os windows nt operating system microsoft windows enterprise version build extension options provide any workspace or user settings configured for the extension in the settings json files these are prefixed with the angular namespace example angular log terse angular view engine true none anything else relevant ,0 6830,23958077949.0,IssuesEvent,2022-09-12 16:30:39,smcnab1/op-question-mark,https://api.github.com/repos/smcnab1/op-question-mark,closed,[FR] Implement Movie Pause Automation,Status: Confirmed Type: Feature Priority: Low Priority: Medium For: Automations,"Use status of TV for Pause to change lights to slightly brighter to move around. - [ ] Bedroom - [ ] Living Room",1.0,"[FR] Implement Movie Pause Automation - Use status of TV for Pause to change lights to slightly brighter to move around. 
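The guard the reporter is asking for is small: a value that is purely a `${VAR}` reference carries no secret and should be left alone by the scrubber. The sketch below shows that check, in TypeScript for illustration only; it is not Sourcegraph's actual implementation, and `scrubNpmrcLine` and the redaction placeholder are hypothetical names rather than the product's API.

```typescript
// Illustrative sketch only. npm expands ${VAR} references in .npmrc
// values at runtime, so such values contain no secret material.
const ENV_REFERENCE = /^\$\{[A-Za-z_][A-Za-z0-9_]*\}$/;

function scrubNpmrcLine(line: string): string {
  const eq = line.indexOf('=');
  if (eq === -1) {
    return line; // not a key = value entry
  }
  const key = line.slice(0, eq).trim();
  const value = line.slice(eq + 1).trim();

  // `${NPM_TOKEN}` is a reference, not a credential: keep it verbatim.
  if (ENV_REFERENCE.test(value)) {
    return line;
  }
  // Only literal tokens get replaced.
  if (key.endsWith(':_authToken')) {
    return `${key}=<REDACTED>`;
  }
  return line;
}

console.log(scrubNpmrcLine('//registry.npmjs.org/:_authToken=${NPM_TOKEN}')); // unchanged
console.log(scrubNpmrcLine('//registry.npmjs.org/:_authToken=npm_abc123'));   // redacted
```

Anchoring the pattern to the whole value is deliberate: a value that merely embeds `${...}` alongside literal characters could still contain secret material and stays eligible for redaction.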
- [ ] Bedroom - [ ] Living Room",1, implement movie pause automation use status of tv for pause to change lights to slightly brighter to move around bedroom living room,1 165357,13999617323.0,IssuesEvent,2020-10-28 11:05:32,serverless/serverless,https://api.github.com/repos/serverless/serverless,closed,Document logical id generation for path variables in urls,cat/aws-event-api-gateway documentation good first issue help wanted,"# This is a Feature Proposal ## Description The resource reference in the docs (https://serverless.com/framework/docs/providers/aws/guide/resources/#aws-cloudformation-resource-reference) should be updated to reflect the normalization of path variables in the URL (as brought up by @HyperBrain --> https://github.com/serverless/serverless/issues/2359#issuecomment-285294811) --- In other words: we should document resolution rules for `normalizedPath`, which will clarify that e.g. `POST /users/{user_id}` will be normalized to `UsersUseridVarTestPost` ",1.0,"Document logical id generation for path variables in urls - # This is a Feature Proposal ## Description The resource reference in the docs (https://serverless.com/framework/docs/providers/aws/guide/resources/#aws-cloudformation-resource-reference) should be updated to reflect the normalization of path variables in the URL (as brought up by @HyperBrain --> https://github.com/serverless/serverless/issues/2359#issuecomment-285294811) --- In other words: we should document resolution rules for `normalizedPath`, which will clarify that e.g. `POST /users/{user_id}` will be normalized to `UsersUseridVarTestPost` ",0,document logical id generation for path variables in urls this is a feature proposal description the resource reference in the docs should be updated to reflect the normalization of path variables in the url as brought up by hyperbrain in other words we should document resolution rules for normalizedpath which will clarify that e g post users user id will be normalized to usersuseridvartestpost ,0 1971,11208138777.0,IssuesEvent,2020-01-06 06:48:14,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,closed,NPM campaign removes valid use-case,automation bug,"- **Sourcegraph version:** 3.11, master #### Steps to reproduce: 1. Create a credentials campaign 2. Look at the result (sourcegraph/automation-testing has such a case for example) #### Expected behavior: The line with `${NPM_TOKEN}` is **not replaced** As this is legitimate usage of the `.npmrc` file as per the [docs](https://docs.npmjs.com/files/npmrc#files) > All npm config files are an ini-formatted list of key = value parameters. Environment variables can be replaced using ${VARIABLE_NAME}. #### Actual behavior: The line with `${NPM_TOKEN}` is **replaced** A cherry-pick into 3.11 might be useful to make this campaign usable",1.0,"NPM campaign removes valid use-case - - **Sourcegraph version:** 3.11, master #### Steps to reproduce: 1. Create a credentials campaign 2. Look at the result (sourcegraph/automation-testing has such a case for example) #### Expected behavior: The line with `${NPM_TOKEN}` is **not replaced** As this is legitimate usage of the `.npmrc` file as per the [docs](https://docs.npmjs.com/files/npmrc#files) > All npm config files are an ini-formatted list of key = value parameters. Environment variables can be replaced using ${VARIABLE_NAME}. 
#### Actual behavior: The line with `${NPM_TOKEN}` is **replaced** A cherry-pick into 3.11 might be useful to make this campaign usable",1,npm campaign removes valid use case sourcegraph version master steps to reproduce create a credentials campaign look at the result sourcegraph automation testing has such a case for example expected behavior the line with npm token is not replaced as this is legitimate usage of the npmrc file as per the all npm config files are an ini formatted list of key value parameters environment variables can be replaced using variable name actual behavior the line with npm token is replaced a cherry pick into might be useful to make this campaign usable,1 76649,7543058909.0,IssuesEvent,2018-04-17 14:35:38,raiden-network/raiden-pathfinding-service,https://api.github.com/repos/raiden-network/raiden-pathfinding-service,closed,Make REST API optional,backlog enhancement testing,In #6 we agreed that a REST API is useful for testing but might be unnecessary once the matrix transport works. It should then be made optional and disabled by default.,1.0,Make REST API optional - In #6 we agreed that a REST API is useful for testing but might be unnecessary once the matrix transport works. It should then be made optional and disabled by default.,0,make rest api optional in we agreed that a rest api is useful for testing but might be unnecessary once the matrix transport works it should then be made optional and disabled by default ,0 10212,31988092346.0,IssuesEvent,2023-09-21 02:07:56,pingcap/tiflow,https://api.github.com/repos/pingcap/tiflow,closed,data in consistency seen when scale tikv and ticdc,type/bug severity/minor found/automation area/ticdc may-affects-5.2 may-affects-5.3 may-affects-5.4 may-affects-6.1 may-affects-6.5 may-affects-7.1,"### What did you do? 1. create 2 kafka changefeed one for open protocol, one for canal json 2. run kafka consumer 3. run sysbench prepare ""sysbench --db-driver=mysql --mysql-host=nslookup upstream-tidb.cdc-testbed-tps-1816831-1-572 | awk -F: '{print $2}' | awk 'NR==5' | sed s/[[:space:]]//g --mysql-port=4000 --mysql-user=root --mysql-db=workload --tables=30 --table-size=1000000 --create_secondary=off --debug=true --threads=20 --mysql-ignore-errors=2013,1213,1105,1205,8022,8027,8028,9004,9007,1062 oltp_write_only prepare"" 4. scale tikv from 3 to 6, and scale cdc from 3 to 1 5. scale tikv from 6 to 3 and cdc from 1 to 3 (in parallel), when running sysbench workload ""sysbench --db-driver=mysql --mysql-host=nslookup upstream-tidb.cdc-testbed-tps-1816831-1-572 | awk -F: '{print $2}' | awk 'NR==5' | sed s/[[:space:]]//g --mysql-port=4000 --mysql-user=root --mysql-db=workload --tables=30 --table-size=1000000 --create_secondary=off --time=7200 --debug=true --threads=20 --mysql-ignore-errors=2013,1213,1105,1205,8022,8027,8028,9004,9007,1062 oltp_write_only run"" 6. send finishmark, and do data consistency for open protocal when cdc sync is done. ### What did you expect to see? data should be consistent ### What did you see instead? Data inconsistency is seen. 
1 out of total 30 tables has inconsistency data, consistency data sample: ![image](https://github.com/pingcap/tiflow/assets/7403864/f539f513-1006-4a3e-99fd-39699f911c78) ### Versions of the cluster bash-5.1# /cdc version Release Version: v7.2.0-master-dirty Git Commit Hash: f08f125847d3e9ba16f90836f4a5f1a6b93af307 Git Branch: master UTC Build Time: 2023-07-05 09:50:00 Go Version: go version go1.20.1 linux/amd64 Failpoint Build: false ",1.0,"data in consistency seen when scale tikv and ticdc - ### What did you do? 1. create 2 kafka changefeed one for open protocol, one for canal json 2. run kafka consumer 3. run sysbench prepare ""sysbench --db-driver=mysql --mysql-host=nslookup upstream-tidb.cdc-testbed-tps-1816831-1-572 | awk -F: '{print $2}' | awk 'NR==5' | sed s/[[:space:]]//g --mysql-port=4000 --mysql-user=root --mysql-db=workload --tables=30 --table-size=1000000 --create_secondary=off --debug=true --threads=20 --mysql-ignore-errors=2013,1213,1105,1205,8022,8027,8028,9004,9007,1062 oltp_write_only prepare"" 4. scale tikv from 3 to 6, and scale cdc from 3 to 1 5. scale tikv from 6 to 3 and cdc from 1 to 3 (in parallel), when running sysbench workload ""sysbench --db-driver=mysql --mysql-host=nslookup upstream-tidb.cdc-testbed-tps-1816831-1-572 | awk -F: '{print $2}' | awk 'NR==5' | sed s/[[:space:]]//g --mysql-port=4000 --mysql-user=root --mysql-db=workload --tables=30 --table-size=1000000 --create_secondary=off --time=7200 --debug=true --threads=20 --mysql-ignore-errors=2013,1213,1105,1205,8022,8027,8028,9004,9007,1062 oltp_write_only run"" 6. send finishmark, and do data consistency for open protocal when cdc sync is done. ### What did you expect to see? data should be consistent ### What did you see instead? Data inconsistency is seen. 1 out of total 30 tables has inconsistency data, consistency data sample: ![image](https://github.com/pingcap/tiflow/assets/7403864/f539f513-1006-4a3e-99fd-39699f911c78) ### Versions of the cluster bash-5.1# /cdc version Release Version: v7.2.0-master-dirty Git Commit Hash: f08f125847d3e9ba16f90836f4a5f1a6b93af307 Git Branch: master UTC Build Time: 2023-07-05 09:50:00 Go Version: go version go1.20.1 linux/amd64 Failpoint Build: false ",1,data in consistency seen when scale tikv and ticdc what did you do create kafka changefeed one for open protocol one for canal json run kafka consumer run sysbench prepare sysbench db driver mysql mysql host nslookup upstream tidb cdc testbed tps awk f print awk nr sed s g mysql port mysql user root mysql db workload tables table size create secondary off debug true threads mysql ignore errors oltp write only prepare scale tikv from to and scale cdc from to scale tikv from to and cdc from to in parallel when running sysbench workload sysbench db driver mysql mysql host nslookup upstream tidb cdc testbed tps awk f print awk nr sed s g mysql port mysql user root mysql db workload tables table size create secondary off time debug true threads mysql ignore errors oltp write only run send finishmark and do data consistency for open protocal when cdc sync is done what did you expect to see data should be consistent what did you see instead data inconsistency is seen out of total tables has inconsistency data consistency data sample versions of the cluster bash cdc version release version master dirty git commit hash git branch master utc build time go version go version linux failpoint build false ,1 3665,14267976987.0,IssuesEvent,2020-11-20 21:30:42,PastVu/pastvu,https://api.github.com/repos/PastVu/pastvu,opened,Connect 
build workflows / Combine the build workflows,Automation CI/CD Todo,"The following workflows are responsible for the builds: - [pastvu/pastvu application build](https://github.com/PastVu/pastvu/blob/master/.github/workflows/docker-image.yml) - [pastvu/nginx build](https://github.com/PastVu/nginx/blob/master/.github/workflows/docker-publish.yml) At the moment the builds in [pastvu/pastvu:en](https://github.com/PastVu/pastvu/tree/en) and [pastvu/nginx](https://github.com/PastVu/nginx) are started manually ([workflow_dispatch](https://docs.github.com/en/free-pro-team@latest/actions/reference/events-that-trigger-workflows#workflow_dispatch)). We need to set things up so that, after a push to master and the [automerge into 'en'](https://github.com/PastVu/pastvu/blob/master/.github/workflows/automerge.yml), the master and en builds run, and pastvu/nginx is then built on top of the resulting images.",1.0,"Connect build workflows / Combine the build workflows - The following workflows are responsible for the builds: - [pastvu/pastvu application build](https://github.com/PastVu/pastvu/blob/master/.github/workflows/docker-image.yml) - [pastvu/nginx build](https://github.com/PastVu/nginx/blob/master/.github/workflows/docker-publish.yml) At the moment the builds in [pastvu/pastvu:en](https://github.com/PastVu/pastvu/tree/en) and [pastvu/nginx](https://github.com/PastVu/nginx) are started manually ([workflow_dispatch](https://docs.github.com/en/free-pro-team@latest/actions/reference/events-that-trigger-workflows#workflow_dispatch)). We need to set things up so that, after a push to master and the [automerge into 'en'](https://github.com/PastVu/pastvu/blob/master/.github/workflows/automerge.yml), the master and en builds run, and pastvu/nginx is then built on top of the resulting images.",1,connect build workflows combine the build workflows the following workflows are responsible for the builds at the moment the builds in and are started manually we need to set things up so that after a push to master and the master and en builds run and pastvu nginx is then built on top of the resulting images ,1 6365,23033235012.0,IssuesEvent,2022-07-22 15:49:11,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,API Automation Test - View public documentation and publish documentation,automation,"1. Get the user session token 1.1 authenticates Janis (api owner) to get the user session token 2. 
API Tests for Updating documentation 2.1 Prepare the Request Specification for the API 2.2 Put the resource and verify the success code in the response (to create new documentation) 3. API Tests for Fetching documentation 3.1 Prepare the Request Specification for the API 3.2 Get the resource and verify the success code in the response 3.3 Compare the values in response against the values passed in the request 4. API Tests for Deleting documentation 4.1 Prepare the Request Specification for the API 4.2 Delete the documentation 5. API Tests to verify no value in Get call after deleting document content 5.1 Prepare the Request Specification for the API 5.2 Delete the documentation 6. API Tests to verify Get documentation content 6.1 Prepare the Request Specification for the API 6.2 Put the resource and verify the success code in the response 6.3 Prepare the Request Specification for the API 6.4 Verify that document content is displayed for GET /documentation 6.5 Verify that document content is fetched by slug ID ",1.0,"API Automation Test - View public documentation and publish documentation - 1. Get the user session token 1.1 authenticates Janis (api owner) to get the user session token 2. API Tests for Updating documentation 2.1 Prepare the Request Specification for the API 2.2 Put the resource and verify the success code in the response (to create new documentation) 3. API Tests for Fetching documentation 3.1 Prepare the Request Specification for the API 3.2 Get the resource and verify the success code in the response 3.3 Compare the values in response against the values passed in the request 4. API Tests for Deleting documentation 4.1 Prepare the Request Specification for the API 4.2 Delete the documentation 5. API Tests to verify no value in Get call after deleting document content 5.1 Prepare the Request Specification for the API 5.2 Delete the documentation 6. API Tests to verify Get documentation content 6.1 Prepare the Request Specification for the API 6.2 Put the resource and verify the success code in the response 6.3 Prepare the Request Specification for the API 6.4 Verify that document content is displayed for GET /documentation 6.5 Verify that document content is fetched by slug ID ",1,api automation test view public documentation and publish documentation get the user session token authenticates janis api owner to get the user session token api tests for updating documentation prepare the request specification for the api put the resource and verify the success code in the response to create new documentation api tests for fetching documentation prepare the request specification for the api get the resource and verify the success code in the response compare the values in response against the values passed in the request api tests for deleting documentation prepare the request specification for the api delete the documentation api tests to verify no value in get call after deleting document content prepare the request specification for the api delete the documentation api tests to verify get documentation content prepare the request specification for the api put the resource and verify the success code in the response prepare the request specification for the api verify that document content is displayed for get documentation verify that document content is fetched by slug id ,1 322958,27656253416.0,IssuesEvent,2023-03-12 01:06:54,backend-br/vagas,https://api.github.com/repos/backend-br/vagas,closed,[Remote or Brasília] [Mid-level or Senior] Tech Lead Back-end Developer NodeJS - Jazida.com,PJ Sênior Remoto Testes automatizados SQL Stale,"## TL;DR Tech Lead NodeJs -> https://forms.office.com/r/xEF0mKnUUq ## Job description We are looking for a skilled and experienced Technical Lead to head our backend development team in a fast-paced environment. As a key member of the technology team, the ideal candidate will have solid knowledge of systems architecture, software development and team leadership. If you are passionate about technology and want to be part of a team that is transforming the market, this is the perfect opening for you! This position is a great opportunity to be part of the development team of a complex system, delivering to Jazida.com customers tools capable of changing the way thousands of people work every day. This work has a strategic focus and nationwide responsibility. We are pioneers in automating the management of mining claim processes, and we have changed the way mining in Brazil manages its processes. 
MAIN ACTIVITIES - Design, code, test, operate and troubleshoot - Responsible for bringing new ideas for solving problems - Development of automated tests - Lead and mentor a team of backend developers to reach project goals - Propose and implement scalable, innovative technical solutions - Ensure code quality and the security of deliveries - Work in close collaboration with other departments, including design, product and customer support - Actively participate in defining and rolling out efficient and effective software development processes ## Location Office located in Brasília - DF, or remote ## Benefits **What Jazida offers** Listed here is just a sample of our benefits package - Home office - A competitive salary package with an annual cash incentive bonus (PLR) - Career development and educational assistance to further your goals - Educational assistance for English - A comprehensive leave policy that covers all of life's important moments (annual vacation, paid parental leave, sick leave, paid time off) - Support with access to a health plan and gym - Ongoing individual well-being support #### Perks 🤘 Work environment free of formalities ⏰ Flexible hours and a flat hierarchy 🦾 Freedom to make technical decisions ## Requirements To succeed in this position you should have - Strong knowledge of JS, TS, NodeJs - Knowledge of systems architecture, databases (SQL and NoSQL) and data security - Ability to work in an agile, fast-paced team environment - Clear and effective communication, the ability to work well in a team and to lead technical projects - Knowledge of Dockerized environments - Automated tests - Basic frontend knowledge to follow up on demands # Nice to have - Previous experience as a technical lead or senior backend developer ## Contract PJ (contractor), terms to be agreed ## How to apply -> https://forms.office.com/r/xEF0mKnUUq ## Average feedback time If you are selected, you will be contacted by our team. ",1.0,"[Remote or Brasília] [Mid-level or Senior] Tech Lead Back-end Developer NodeJS - Jazida.com - ## TL;DR Tech Lead NodeJs -> https://forms.office.com/r/xEF0mKnUUq ## Job description We are looking for a skilled and experienced Technical Lead to head our backend development team in a fast-paced environment. As a key member of the technology team, the ideal candidate will have solid knowledge of systems architecture, software development and team leadership. If you are passionate about technology and want to be part of a team that is transforming the market, this is the perfect opening for you! This position is a great opportunity to be part of the development team of a complex system, delivering to Jazida.com customers tools capable of changing the way thousands of people work every day. This work has a strategic focus and nationwide responsibility. We are pioneers in automating the management of mining claim processes, and we have changed the way mining in Brazil manages its processes. 
MAIN ACTIVITIES - Design, code, test, operate and troubleshoot - Responsible for bringing new ideas for solving problems - Development of automated tests - Lead and mentor a team of backend developers to reach project goals - Propose and implement scalable, innovative technical solutions - Ensure code quality and the security of deliveries - Work in close collaboration with other departments, including design, product and customer support - Actively participate in defining and rolling out efficient and effective software development processes ## Location Office located in Brasília - DF, or remote ## Benefits **What Jazida offers** Listed here is just a sample of our benefits package - Home office - A competitive salary package with an annual cash incentive bonus (PLR) - Career development and educational assistance to further your goals - Educational assistance for English - A comprehensive leave policy that covers all of life's important moments (annual vacation, paid parental leave, sick leave, paid time off) - Support with access to a health plan and gym - Ongoing individual well-being support #### Perks 🤘 Work environment free of formalities ⏰ Flexible hours and a flat hierarchy 🦾 Freedom to make technical decisions ## Requirements To succeed in this position you should have - Strong knowledge of JS, TS, NodeJs - Knowledge of systems architecture, databases (SQL and NoSQL) and data security - Ability to work in an agile, fast-paced team environment - Clear and effective communication, the ability to work well in a team and to lead technical projects - Knowledge of Dockerized environments - Automated tests - Basic frontend knowledge to follow up on demands # Nice to have - Previous experience as a technical lead or senior backend developer ## Contract PJ (contractor), terms to be agreed ## How to apply -> https://forms.office.com/r/xEF0mKnUUq ## Average feedback time If you are selected, you will be contacted by our team. 
",0, tech lead back end developer nodejs jazida com tl dr tech lead nodejs descrição da vaga estamos em busca de um a líder técnico capacitado e experiente para liderar a equipe de desenvolvimento de backend em nosso ambiente de alta velocidade como membro chave da equipe de tecnologia o candidato ideal terá um conhecimento sólido em arquitetura de sistemas desenvolvimento de software e liderança de equipe se você é apaixonado por tecnologia e quer fazer parte de uma equipe que está transformando o mercado então essa é a vaga perfeita para você este cargo é uma grande oportunidade de fazer parte do time de desenvolvimento de um sistema complexo entregando aos clientes do jazida com ferramentas capazes de mudar a maneira como milhares de pessoas trabalham diariamente este trabalho tem um foco estratégico e responsabilidade em todo território nacional somos pioneiros na automatização da gestão de processos minerários e mudamos a forma que a mineração do brasil gerencia seus processos atividades principais projetar codificar testar operar e resolver problemas responsável por trazer novas ideias para soluções de problemas desenvolvido de testes automatizados liderar e orientar uma equipe de desenvolvedores de backend para atingir metas de projeto propor e implementar soluções técnicas escaláveis e inovadoras garantir a qualidade do código e segurança das entregas trabalhar em estreita colaboração com outros departamentos incluindo design produto e suporte ao cliente participar ativamente na definição e implantação de processos de desenvolvimento de software eficientes e efetivos local escritório localizado em brasília df ou remoto benefícios o que o jazida oferece aqui está listado apenas uma amostra do nosso pacote de benefícios home office um pacote de salário competitivo com prêmio salarial de incentivo anual em dinheiro plr desenvolvimento de carreira e assistência educacional para promover seus objetivos assistência educacional para inglês uma política abrangente de licenças que cobre todos os momentos importantes da vida férias anuais licença parental remunerada licença médica férias remuneradas suporte no acesso a plano de saúde e academia apoio contínuo de bem estar individual diferenciais 🤘 ambiente de trabalho sem formalidades ⏰ horário flexível e hierarquia plana 🦾 ambiente livre para tomada de decisões técnicas requisitos para ter sucesso neste cargo você deve forte conhecimento em js ts nodejs conhecimento em arquitetura de sistemas banco de dados sql e nosql e segurança de dados capacidade de trabalhar em um ambiente de equipe ágil e de alta velocidade comunicação clara e eficaz capacidade de trabalhar bem em equipe e liderar projetos técnicos conhecimento em ambientes dockerizados testes automatizados conhecimento básico de frontend para acompanhar demandas diferencial experiência anterior como líder técnico ou desenvolvedor sênior de backend contratação pj a combinar como se candidatar tempo médio de feedbacks caso você seja selecionado receberá um contato da nossa equipe ,0 9782,30512802153.0,IssuesEvent,2023-07-18 22:36:50,rpopuc/gha-tests,https://api.github.com/repos/rpopuc/gha-tests,closed,Deploy,deploy-automation,"## Description Realiza deploy automatizado da aplicação. ## Environments environment_1 ## Branches feat/list",1.0,"Deploy - ## Description Realiza deploy automatizado da aplicação. 
## Environments environment_1 ## Branches feat/list",1,deploy description realiza deploy automatizado da aplicação environments environment branches feat list,1 4061,15306886526.0,IssuesEvent,2021-02-24 20:06:08,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,opened,[automation] - Automation API fails with CLI error if CLI version if out of date,area/automation-api kind/bug," Automation API should have a mechanism for alerting the user if the minimum required version of the CLI isn't installed. ## Expected behavior Some sort of ""This version of automation API requires minimum CLI version [v.x.x.x]"" ## Current behavior Error raised by the CLI when a command is executed. ``` Error: code: 1 stdout: stderr: Error: unknown flag: --page-size ``` ",1.0,"[automation] - Automation API fails with CLI error if CLI version if out of date - Automation API should have a mechanism for alerting the user if the minimum required version of the CLI isn't installed. ## Expected behavior Some sort of ""This version of automation API requires minimum CLI version [v.x.x.x]"" ## Current behavior Error raised by the CLI when a command is executed. ``` Error: code: 1 stdout: stderr: Error: unknown flag: --page-size ``` ",1, automation api fails with cli error if cli version if out of date automation api should have a mechanism for alerting the user if the minimum required version of the cli isn t installed expected behavior some sort of this version of automation api requires minimum cli version current behavior error raised by the cli when a command is executed error code stdout stderr error unknown flag page size ,1 1792,3469982604.0,IssuesEvent,2015-12-23 02:48:18,kubernetes/kubernetes,https://api.github.com/repos/kubernetes/kubernetes,opened,Add Auth to the controller manager and scheduler endpoints,area/cluster-lifecycle area/introspection area/security component/controller-manager component/scheduler team/control-plane,"https://github.com/kubernetes/kubernetes/pull/18357 exposes the controller manager and scheduler (which both use host networking) by binding them to `0.0.0.0` instead of `127.0.0.1` by default. I'm creating a tracking bug to add auth to these endpoints, as this could otherwise become a security risk in the future. For now we believe it's ok as they only expose debugging data (pprof and metrics). /cc @gmarek @stephenR @davidopp @fgrzadkowski ",True,"Add Auth to the controller manager and scheduler endpoints - https://github.com/kubernetes/kubernetes/pull/18357 exposes the controller manager and scheduler (which both use host networking) by binding them to `0.0.0.0` instead of `127.0.0.1` by default. I'm creating a tracking bug to add auth to these endpoints, as this could otherwise become a security risk in the future. For now we believe it's ok as they only expose debugging data (pprof and metrics). 
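Stepping back to the Pulumi Automation API report earlier in this block: the fix it asks for is a preflight version gate. A minimal sketch of such a gate, assuming `pulumi version` output in semver form and a purely hypothetical minimum of v2.0.0 — this is illustrative, not Pulumi's actual implementation:

```ts
// Preflight gate: refuse to run if the installed Pulumi CLI is older than
// the minimum this Automation API version supports. `pulumi version` is a
// real CLI command; the minimum version below is illustrative only.
import { execFileSync } from "node:child_process";

const MIN_CLI = [2, 0, 0]; // hypothetical minimum supported CLI version

function cliVersion(): number[] {
  const raw = execFileSync("pulumi", ["version"], { encoding: "utf8" });
  return raw.trim().replace(/^v/, "").split(".").map(Number);
}

function assertCliSupported(): void {
  const v = cliVersion();
  for (let i = 0; i < MIN_CLI.length; i++) {
    if ((v[i] ?? 0) > MIN_CLI[i]) return; // strictly newer at this position
    if ((v[i] ?? 0) < MIN_CLI[i]) {
      throw new Error(
        `This version of the Automation API requires minimum CLI version v${MIN_CLI.join(".")}; found v${v.join(".")}`
      );
    }
  }
}

assertCliSupported();
```

Running the check up front turns the opaque `unknown flag: --page-size` failure into exactly the explicit message the reporter asks for.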
/cc @gmarek @stephenR @davidopp @fgrzadkowski ",0,add auth to the controller manager and scheduler endpoints exposes the controller manager and scheduler which both use host networking by binding them to instead of by default i m creating a tracking bug to add auth to these endpoints as this could otherwise become a security risk in the future for now we believe it s ok as they only expose debugging data pprof and metrics cc gmarek stephenr davidopp fgrzadkowski ,0 28639,8196385357.0,IssuesEvent,2018-08-31 09:40:01,facebook/osquery,https://api.github.com/repos/facebook/osquery,closed,Create tests for the table `sandboxes`,build/test good-first-issue macOS,"## Create tests for the table `sandboxes` - Create header file for the table implementation, if one is not exists. - In test, query the table and check if retrieved columns (name and types) match the columns from table spec. - If there is any guarantee to number of rows (e.g. only 1 record in every query result, more than 3 records or something else) check it. - Test the implementation details of the table, if it possible. Table spec: `specs/darwin/sandboxes.table` Source files: - `osquery/tables/system/darwin/sandboxes.cpp` Table generating function: `genSandboxContainers()` ",1.0,"Create tests for the table `sandboxes` - ## Create tests for the table `sandboxes` - Create header file for the table implementation, if one is not exists. - In test, query the table and check if retrieved columns (name and types) match the columns from table spec. - If there is any guarantee to number of rows (e.g. only 1 record in every query result, more than 3 records or something else) check it. - Test the implementation details of the table, if it possible. Table spec: `specs/darwin/sandboxes.table` Source files: - `osquery/tables/system/darwin/sandboxes.cpp` Table generating function: `genSandboxContainers()` ",0,create tests for the table sandboxes create tests for the table sandboxes create header file for the table implementation if one is not exists in test query the table and check if retrieved columns name and types match the columns from table spec if there is any guarantee to number of rows e g only record in every query result more than records or something else check it test the implementation details of the table if it possible table spec specs darwin sandboxes table source files osquery tables system darwin sandboxes cpp table generating function gensandboxcontainers ,0 116302,14941216895.0,IssuesEvent,2021-01-25 19:25:44,cfpb/design-system,https://api.github.com/repos/cfpb/design-system,closed,Pagination: Needs content written,Needs content written Size: 3 design-system-day help wanted: design,"**Which page is this about?** https://cfpb.github.io/design-system/patterns/pagination **What kind of issue is this?** - Needs content written (page is missing key information, e.g. use cases) **Describe your issue** * Content guidelines are only for tables, and not illuminating * Needs more use case information and content guidelines (e.g. how many results do we typically show per page, what does this look like at mobile breakpoint) **Size this request (1=tiny, 5=enormous)** ",2.0,"Pagination: Needs content written - **Which page is this about?** https://cfpb.github.io/design-system/patterns/pagination **What kind of issue is this?** - Needs content written (page is missing key information, e.g. 
use cases) **Describe your issue** * Content guidelines are only for tables, and not illuminating * Needs more use case information and content guidelines (e.g. how many results do we typically show per page, what does this look like at mobile breakpoint) **Size this request (1=tiny, 5=enormous)** ",0,pagination needs content written which page is this about what kind of issue is this needs content written page is missing key information e g use cases describe your issue content guidelines are only for tables and not illuminating needs more use case information and content guidelines e g how many results do we typically show per page what does this look like at mobile breakpoint size this request tiny enormous ,0 316185,9638348493.0,IssuesEvent,2019-05-16 10:55:49,unitystation/unitystation,https://api.github.com/repos/unitystation/unitystation,closed,Clientside NRE when touching electric devices B: 40,Bounty Bug High Priority Non-Intrusive,"# Description ## Category BUG ## Detailed Bug Report ``` NullReferenceException: Object reference not set to an instance of an object PowerSupplyControlInheritance.ConstructionInteraction (UnityEngine.GameObject originator, UnityEngine.Vector3 position, System.String hand) (at Assets/Scripts/Electricity/Inheritance/PowerSupplyControlInheritance.cs:137) PowerSupplyControlInheritance.Interact (UnityEngine.GameObject originator, UnityEngine.Vector3 position, System.String hand) (at Assets/Scripts/Electricity/Inheritance/PowerSupplyControlInheritance.cs:129) InputTrigger.Interact (UnityEngine.Vector3 position) (at Assets/Scripts/Input System/Triggers/InputTrigger.cs:50) InputTrigger.Trigger (UnityEngine.Vector3 position) (at Assets/Scripts/Input System/Triggers/InputTrigger.cs:35) ``` ## Steps to Reproduce Please enter the steps to reproduce the bug or behaviour: 1. Host from build, join from editor as an engineer 2. Try touching SMES ",1.0,"Clientside NRE when touching electric devices B: 40 - # Description ## Category BUG ## Detailed Bug Report ``` NullReferenceException: Object reference not set to an instance of an object PowerSupplyControlInheritance.ConstructionInteraction (UnityEngine.GameObject originator, UnityEngine.Vector3 position, System.String hand) (at Assets/Scripts/Electricity/Inheritance/PowerSupplyControlInheritance.cs:137) PowerSupplyControlInheritance.Interact (UnityEngine.GameObject originator, UnityEngine.Vector3 position, System.String hand) (at Assets/Scripts/Electricity/Inheritance/PowerSupplyControlInheritance.cs:129) InputTrigger.Interact (UnityEngine.Vector3 position) (at Assets/Scripts/Input System/Triggers/InputTrigger.cs:50) InputTrigger.Trigger (UnityEngine.Vector3 position) (at Assets/Scripts/Input System/Triggers/InputTrigger.cs:35) ``` ## Steps to Reproduce Please enter the steps to reproduce the bug or behaviour: 1. Host from build, join from editor as an engineer 2. 
Try touching SMES ",0,clientside nre when touching electric devices b description category bug detailed bug report nullreferenceexception object reference not set to an instance of an object powersupplycontrolinheritance constructioninteraction unityengine gameobject originator unityengine position system string hand at assets scripts electricity inheritance powersupplycontrolinheritance cs powersupplycontrolinheritance interact unityengine gameobject originator unityengine position system string hand at assets scripts electricity inheritance powersupplycontrolinheritance cs inputtrigger interact unityengine position at assets scripts input system triggers inputtrigger cs inputtrigger trigger unityengine position at assets scripts input system triggers inputtrigger cs steps to reproduce please enter the steps to reproduce the bug or behaviour host from build join from editor as an engineer try touching smes ,0 61109,8488884028.0,IssuesEvent,2018-10-26 18:04:24,TTUSDC/ttuacm-backend,https://api.github.com/repos/TTUSDC/ttuacm-backend,closed,Need Issue Templates and PR Templates,Documentation Hacktoberfest Priority: Medium,We want people to be able to submit these in a formatted matter so that us developers can have an easier time and deliver it faster,1.0,Need Issue Templates and PR Templates - We want people to be able to submit these in a formatted matter so that us developers can have an easier time and deliver it faster,0,need issue templates and pr templates we want people to be able to submit these in a formatted matter so that us developers can have an easier time and deliver it faster,0 6833,23962307596.0,IssuesEvent,2022-09-12 20:19:34,OWASP/MASTG-Hacking-Playground,https://api.github.com/repos/OWASP/MASTG-Hacking-Playground,closed,Validate Android Kotlin App and Setup GitHub Actions Build Pipeline,Android-Kotlin automation,"It has been a while since the latest update, let's validate the apps first locally and then setup a GitHub action workflow to build and release the Android Kotlin App as-is.",1.0,"Validate Android Kotlin App and Setup GitHub Actions Build Pipeline - It has been a while since the latest update, let's validate the apps first locally and then setup a GitHub action workflow to build and release the Android Kotlin App as-is.",1,validate android kotlin app and setup github actions build pipeline it has been a while since the latest update let s validate the apps first locally and then setup a github action workflow to build and release the android kotlin app as is ,1 1302,9853129555.0,IssuesEvent,2019-06-19 14:12:53,spacemeshos/go-spacemesh,https://api.github.com/repos/spacemeshos/go-spacemesh,closed,Scale down in automation may fail the tests,automation,"Scale down in automation following a change on the number of nodes may cause bugs. ",1.0,"Scale down in automation may fail the tests - Scale down in automation following a change on the number of nodes may cause bugs. 
",1,scale down in automation may fail the tests scale down in automation following a change on the number of nodes may cause bugs ,1 4691,17259261948.0,IssuesEvent,2021-07-22 03:57:02,appsmithorg/appsmith,https://api.github.com/repos/appsmithorg/appsmith,closed,[Bug] Need to replace docker with server in external workflow,Actions Actions Pod Automation Bug,"## Description Need to replace docker with server in external-client-test.yml ### Steps to reproduce the behaviour: run PR using ok to test sha ### Important Details ",1.0,"[Bug] Need to replace docker with server in external workflow - ## Description Need to replace docker with server in external-client-test.yml ### Steps to reproduce the behaviour: run PR using ok to test sha ### Important Details ",1, need to replace docker with server in external workflow description need to replace docker with server in external client test yml steps to reproduce the behaviour run pr using ok to test sha important details ,1 191331,6828149501.0,IssuesEvent,2017-11-08 19:25:00,GoogleCloudPlatform/google-cloud-ruby,https://api.github.com/repos/GoogleCloudPlatform/google-cloud-ruby,closed,Add max_staleness/bounded_staleness to Client#snapshot method,api: spanner priority: p2 type: feature request,"I'm not currently able to define a max_staleness/bounded_staleness when creating a snapshot as a parameter, but [Python](https://googlecloudplatform.github.io/google-cloud-python/latest/spanner/snapshot-api.html) for example does include this parameter in their snapshot method. This parameter is surfaced in the `Client#read` method, and I think it's better understood if surfaced as part of `Client#snapshot` method as well. The request is to add a parameter for max_staleness/bounded_staleness to the `Client#snapshot` method. I'm open to interpretation if there's reasons for not adding this parameter to `Client#snapshot`.",1.0,"Add max_staleness/bounded_staleness to Client#snapshot method - I'm not currently able to define a max_staleness/bounded_staleness when creating a snapshot as a parameter, but [Python](https://googlecloudplatform.github.io/google-cloud-python/latest/spanner/snapshot-api.html) for example does include this parameter in their snapshot method. This parameter is surfaced in the `Client#read` method, and I think it's better understood if surfaced as part of `Client#snapshot` method as well. The request is to add a parameter for max_staleness/bounded_staleness to the `Client#snapshot` method. 
I'm open to interpretation if there's reasons for not adding this parameter to `Client#snapshot`.",0,add max staleness bounded staleness to client snapshot method i m not currently able to define a max staleness bounded staleness when creating a snapshot as a parameter but for example does include this parameter in their snapshot method this parameter is surfaced in the client read method and i think it s better understood if surfaced as part of client snapshot method as well the request is to add a parameter for max staleness bounded staleness to the client snapshot method i m open to interpretation if there s reasons for not adding this parameter to client snapshot ,0 1211,9673852545.0,IssuesEvent,2019-05-22 08:34:46,gergelytakacs/AutomationShield,https://api.github.com/repos/gergelytakacs/AutomationShield,opened,Sampling: Timer 2 in conflict with PWM on pins 3 and 11,AutomationShield common bug,"Using Timer 2 for the Uno causes a conflict for PWM usage on pins 3 and (i think 11), The issue is, that Opto uses pin 3 and now Opto experiments don't work correctly. Must use Timer 1 with opto and only use timer 2 when really needed. This is quite a serious issue.",1.0,"Sampling: Timer 2 in conflict with PWM on pins 3 and 11 - Using Timer 2 for the Uno causes a conflict for PWM usage on pins 3 and (i think 11), The issue is, that Opto uses pin 3 and now Opto experiments don't work correctly. Must use Timer 1 with opto and only use timer 2 when really needed. This is quite a serious issue.",1,sampling timer in conflict with pwm on pins and using timer for the uno causes a conflict for pwm usage on pins and i think the issue is that opto uses pin and now opto experiments don t work correctly must use timer with opto and only use timer when really needed this is quite a serious issue ,1 39257,5060353156.0,IssuesEvent,2016-12-22 11:35:50,OAButton/backend,https://api.github.com/repos/OAButton/backend,opened,Anonymity for users during requests,Blocked: Copy Blocked: Design Blocked: Development Blocked: Test enhancement JISC question / discussion,I've had several people suggest that they'd like to see us implement anonymity as part of the request system & on the site generally. This is an issue to think about the implications of that & how to make it work. ,1.0,Anonymity for users during requests - I've had several people suggest that they'd like to see us implement anonymity as part of the request system & on the site generally. This is an issue to think about the implications of that & how to make it work. ,0,anonymity for users during requests i ve had several people suggest that they d like to see us implement anonymity as part of the request system on the site generally this is an issue to think about the implications of that how to make it work ,0 8811,27172285557.0,IssuesEvent,2023-02-17 20:38:05,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Incosistent parentReference in search results,type:bug Needs: Triage :mag: automation:Closed,"When doing search driveId in parentReference isn't consistent over time. ## Category - [ ] Question - [ ] Documentation issue - [x] Bug #### Expected or Desired Behavior parentReference -> driveId should return the same value for the same object. #### Observed Behavior parentReference -> driveId differs on consequent requests. 
Example response 1 (driveId is fine): ` { ""@odata.context"": ""https://graph.microsoft.com/v1.0/$metadata#Collection(driveItem)"", ""value"": [ { ""@odata.type"": ""#microsoft.graph.driveItem"", ""createdDateTime"": ""2020-01-24T06:11:27.107Z"", ""cTag"": ""adDo5RENFNERCQjAwN0Y4Q0VFITQ1OS42MzcxODE5NjY2NzgyNzAwMDA"", ""eTag"": ""aOURDRTREQkIwMDdGOENFRSE0NTkuMg"", ""id"": ""9DCE4DBB007F8CEE!459"", ""lastModifiedDateTime"": ""2020-02-25T03:04:27.827Z"", ""name"": ""1.108 re"", ""size"": 305401, ""webUrl"": ""https://1drv.ms/f/s!AO6MfwC7Tc6dg0s"", ""reactions"": { ""commentCount"": 0 }, ""createdBy"": { ""application"": { ""displayName"": ""ARES Kudo"", ""id"": ""481f0597"" }, ""user"": { ""displayName"": ""ANUJ KESARWANI"", ""id"": ""9dce4dbb007f8cee"" } }, ""lastModifiedBy"": { ""application"": { ""displayName"": ""ARES Kudo"", ""id"": ""481f0597"" }, ""user"": { ""displayName"": ""ANUJ KESARWANI"", ""id"": ""9dce4dbb007f8cee"" } }, ""parentReference"": { ""driveId"": ""9dce4dbb007f8cee"", ""driveType"": ""personal"", ""id"": ""9DCE4DBB007F8CEE!101"", ""path"": ""/drive/root:"" }, ""fileSystemInfo"": { ""createdDateTime"": ""2020-01-24T06:11:27.106Z"", ""lastModifiedDateTime"": ""2020-01-24T06:20:10.033Z"" }, ""folder"": { ""childCount"": 4, ""view"": { ""viewType"": ""thumbnails"", ""sortBy"": ""name"", ""sortOrder"": ""ascending"" } }, ""searchResult"": { ""onClickTelemetryUrl"": ""https://www.bing.com/personalsearchclick?IG=31F7FF3E4F6C4B68977C139117CE2ADE&CID=9DCE4DBB007F8CEE0000000000000000&ID=DevEx%2c5024&q=%7b1.108%7d&resid=9DCE4DBB007F8CEE%21459"" } } ] } ` Response 2 - same request, but note driveId is different: ` { ""@odata.context"": ""https://graph.microsoft.com/v1.0/$metadata#Collection(driveItem)"", ""value"": [ { ""@odata.type"": ""#microsoft.graph.driveItem"", ""createdDateTime"": ""2020-01-24T06:11:27.107Z"", ""cTag"": ""adDo5RENFNERCQjAwN0Y4Q0VFITQ1OS42MzcxODE5NjY2NzgyNzAwMDA"", ""eTag"": ""aOURDRTREQkIwMDdGOENFRSE0NTkuMg"", ""id"": ""9DCE4DBB007F8CEE!459"", ""lastModifiedDateTime"": ""2020-02-25T03:04:27.827Z"", ""name"": ""1.108 re"", ""size"": 305401, ""webUrl"": ""https://1drv.ms/f/s!AO6MfwC7Tc6dg0s"", ""reactions"": { ""commentCount"": 0 }, ""createdBy"": { ""application"": { ""displayName"": ""ARES Kudo"", ""id"": ""481f0597"" }, ""user"": { ""displayName"": ""ANUJ KESARWANI"", ""id"": ""9dce4dbb007f8cee"" } }, ""lastModifiedBy"": { ""application"": { ""displayName"": ""ARES Kudo"", ""id"": ""481f0597"" }, ""user"": { ""displayName"": ""ANUJ KESARWANI"", ""id"": ""9dce4dbb007f8cee"" } }, ""parentReference"": { ""driveId"": ""8000000000000000"", ""driveType"": ""personal"", ""id"": ""9DCE4DBB007F8CEE!101"", ""path"": ""/drive/root:"" }, ""fileSystemInfo"": { ""createdDateTime"": ""2020-01-24T06:11:27.106Z"", ""lastModifiedDateTime"": ""2020-01-24T06:20:10.033Z"" }, ""folder"": { ""childCount"": 4, ""view"": { ""viewType"": ""thumbnails"", ""sortBy"": ""name"", ""sortOrder"": ""ascending"" } }, ""searchResult"": { ""onClickTelemetryUrl"": ""https://www.bing.com/personalsearchclick?IG=5E286AF1CD9A4D369E823A6DEE4F6796&CID=9DCE4DBB007F8CEE0000000000000000&ID=DevEx%2c5024&q=%7b1.108%7d&resid=9DCE4DBB007F8CEE%21459"" } } ] } ` As a workaround, I think we can use user's Id as a driveId if driveType=personal, but I'm not entirely sure if this is always true. Can you, please, confirm? #### Steps to Reproduce Just search for some folder/file several times and observe the parentReference. 
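Before the caveat that follows, a sketch of the workaround the reporter proposes above — normalizing `parentReference.driveId` for personal drives by falling back to the item owner's user id. The field paths follow the responses quoted above; treating the user id as the drive id when `driveType` is `personal` is the reporter's assumption, not documented Graph behavior:

```ts
// Normalize a driveItem's parent driveId so that transient values like
// "8000000000000000" don't break logic keyed on driveId.
interface DriveItemLike {
  parentReference?: { driveId?: string; driveType?: string };
  createdBy?: { user?: { id?: string } };
}

function stableDriveId(item: DriveItemLike): string | undefined {
  const ref = item.parentReference;
  if (ref?.driveType === "personal") {
    // Assumption from the report: for a personal OneDrive, the owner's
    // user id matches the drive id (e.g. "9dce4dbb007f8cee" above).
    return item.createdBy?.user?.id ?? ref.driveId;
  }
  return ref?.driveId;
}
```

Note the fallback leans on `createdBy.user.id`; for shared items the creator may not be the drive owner, which is exactly why the reporter asks for confirmation.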
Unfortunately, it doesn't reproduce consistently for all accounts, only for some, but for those accounts the issue is consistent. Let me know if I can provide any other info. Thank you. ",1.0,"Incosistent parentReference in search results - When doing search driveId in parentReference isn't consistent over time. ## Category - [ ] Question - [ ] Documentation issue - [x] Bug #### Expected or Desired Behavior parentReference -> driveId should return the same value for the same object. #### Observed Behavior parentReference -> driveId differs on consequent requests. Example response 1 (driveId is fine): ` { ""@odata.context"": ""https://graph.microsoft.com/v1.0/$metadata#Collection(driveItem)"", ""value"": [ { ""@odata.type"": ""#microsoft.graph.driveItem"", ""createdDateTime"": ""2020-01-24T06:11:27.107Z"", ""cTag"": ""adDo5RENFNERCQjAwN0Y4Q0VFITQ1OS42MzcxODE5NjY2NzgyNzAwMDA"", ""eTag"": ""aOURDRTREQkIwMDdGOENFRSE0NTkuMg"", ""id"": ""9DCE4DBB007F8CEE!459"", ""lastModifiedDateTime"": ""2020-02-25T03:04:27.827Z"", ""name"": ""1.108 re"", ""size"": 305401, ""webUrl"": ""https://1drv.ms/f/s!AO6MfwC7Tc6dg0s"", ""reactions"": { ""commentCount"": 0 }, ""createdBy"": { ""application"": { ""displayName"": ""ARES Kudo"", ""id"": ""481f0597"" }, ""user"": { ""displayName"": ""ANUJ KESARWANI"", ""id"": ""9dce4dbb007f8cee"" } }, ""lastModifiedBy"": { ""application"": { ""displayName"": ""ARES Kudo"", ""id"": ""481f0597"" }, ""user"": { ""displayName"": ""ANUJ KESARWANI"", ""id"": ""9dce4dbb007f8cee"" } }, ""parentReference"": { ""driveId"": ""9dce4dbb007f8cee"", ""driveType"": ""personal"", ""id"": ""9DCE4DBB007F8CEE!101"", ""path"": ""/drive/root:"" }, ""fileSystemInfo"": { ""createdDateTime"": ""2020-01-24T06:11:27.106Z"", ""lastModifiedDateTime"": ""2020-01-24T06:20:10.033Z"" }, ""folder"": { ""childCount"": 4, ""view"": { ""viewType"": ""thumbnails"", ""sortBy"": ""name"", ""sortOrder"": ""ascending"" } }, ""searchResult"": { ""onClickTelemetryUrl"": ""https://www.bing.com/personalsearchclick?IG=31F7FF3E4F6C4B68977C139117CE2ADE&CID=9DCE4DBB007F8CEE0000000000000000&ID=DevEx%2c5024&q=%7b1.108%7d&resid=9DCE4DBB007F8CEE%21459"" } } ] } ` Response 2 - same request, but note driveId is different: ` { ""@odata.context"": ""https://graph.microsoft.com/v1.0/$metadata#Collection(driveItem)"", ""value"": [ { ""@odata.type"": ""#microsoft.graph.driveItem"", ""createdDateTime"": ""2020-01-24T06:11:27.107Z"", ""cTag"": ""adDo5RENFNERCQjAwN0Y4Q0VFITQ1OS42MzcxODE5NjY2NzgyNzAwMDA"", ""eTag"": ""aOURDRTREQkIwMDdGOENFRSE0NTkuMg"", ""id"": ""9DCE4DBB007F8CEE!459"", ""lastModifiedDateTime"": ""2020-02-25T03:04:27.827Z"", ""name"": ""1.108 re"", ""size"": 305401, ""webUrl"": ""https://1drv.ms/f/s!AO6MfwC7Tc6dg0s"", ""reactions"": { ""commentCount"": 0 }, ""createdBy"": { ""application"": { ""displayName"": ""ARES Kudo"", ""id"": ""481f0597"" }, ""user"": { ""displayName"": ""ANUJ KESARWANI"", ""id"": ""9dce4dbb007f8cee"" } }, ""lastModifiedBy"": { ""application"": { ""displayName"": ""ARES Kudo"", ""id"": ""481f0597"" }, ""user"": { ""displayName"": ""ANUJ KESARWANI"", ""id"": ""9dce4dbb007f8cee"" } }, ""parentReference"": { ""driveId"": ""8000000000000000"", ""driveType"": ""personal"", ""id"": ""9DCE4DBB007F8CEE!101"", ""path"": ""/drive/root:"" }, ""fileSystemInfo"": { ""createdDateTime"": ""2020-01-24T06:11:27.106Z"", ""lastModifiedDateTime"": ""2020-01-24T06:20:10.033Z"" }, ""folder"": { ""childCount"": 4, ""view"": { ""viewType"": ""thumbnails"", ""sortBy"": ""name"", ""sortOrder"": ""ascending"" } }, 
""searchResult"": { ""onClickTelemetryUrl"": ""https://www.bing.com/personalsearchclick?IG=5E286AF1CD9A4D369E823A6DEE4F6796&CID=9DCE4DBB007F8CEE0000000000000000&ID=DevEx%2c5024&q=%7b1.108%7d&resid=9DCE4DBB007F8CEE%21459"" } } ] } ` As a workaround, I think we can use user's Id as a driveId if driveType=personal, but I'm not entirely sure if this is always true. Can you, please, confirm? #### Steps to Reproduce Just search for some folder/file several times and observe the parentReference. Unfortunately, it doesn't reproduce consistently for all accounts, only for some, but for those accounts the issue is consistent. Let me know if I can provide any other info. Thank you. ",1,incosistent parentreference in search results when doing search driveid in parentreference isn t consistent over time category question documentation issue bug expected or desired behavior parentreference driveid should return the same value for the same object observed behavior parentreference driveid differs on consequent requests example response driveid is fine odata context value odata type microsoft graph driveitem createddatetime ctag etag id lastmodifieddatetime name re size weburl reactions commentcount createdby application displayname ares kudo id user displayname anuj kesarwani id lastmodifiedby application displayname ares kudo id user displayname anuj kesarwani id parentreference driveid drivetype personal id path drive root filesysteminfo createddatetime lastmodifieddatetime folder childcount view viewtype thumbnails sortby name sortorder ascending searchresult onclicktelemetryurl response same request but note driveid is different odata context value odata type microsoft graph driveitem createddatetime ctag etag id lastmodifieddatetime name re size weburl reactions commentcount createdby application displayname ares kudo id user displayname anuj kesarwani id lastmodifiedby application displayname ares kudo id user displayname anuj kesarwani id parentreference driveid drivetype personal id path drive root filesysteminfo createddatetime lastmodifieddatetime folder childcount view viewtype thumbnails sortby name sortorder ascending searchresult onclicktelemetryurl as a workaround i think we can use user s id as a driveid if drivetype personal but i m not entirely sure if this is always true can you please confirm steps to reproduce just search for some folder file several times and observe the parentreference unfortunately it doesn t reproduce consistently for all accounts only for some but for those accounts the issue is consistent let me know if i can provide any other info thank you ,1 322669,9821443683.0,IssuesEvent,2019-06-14 07:13:55,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,accounts.firefox.com - site is not usable,browser-firefox-mobile engine-gecko priority-normal," **URL**: 
https://accounts.firefox.com/complete_reset_password?uid=f029469bc935401cab737a7fc11c5ebc&token=dff0de94541efe694ffe0b65f75c6ff6fe594b53015315423f24f114fdc6af43&code=34aca3ec07bef74fe88eb68b47bf6dab&email=reubenrowland64%40gmail.com&service=sync&resume=eyJkZXZpY2VJZCI6IjlmNGE5NzU4NmUxMjRhMmE5OTgxYTFhODczZmY0YzJlIiwiZW1haWwiOiJyZXViZW5yb3dsYW5kNjRAZ21haWwuY29tIiwiZW50cnlwb2ludCI6Im1vemlsbGEub3JnLWdsb2JhbG5hdiIsImVudHJ5cG9pbnRFeHBlcmltZW50IjpudWxsLCJlbnRyeXBvaW50VmFyaWF0aW9uIjpudWxsLCJmbG93QmVnaW4iOjE1NjAwOTk2MDI1NTEsImZsb3dJZCI6IjUwODBmYmVhZjJlY2MyZTc4ZjJiZDNlZDVjNmI5NWVjZWQzMzY3MjViYWZiNjMzMjc2ZmIzZWNhMGRlMWZkNWQiLCJyZXNldFBhc3N3b3JkQ29uZmlybSI6dHJ1ZSwic3R5bGUiOm51bGwsInVuaXF1ZVVzZXJJZCI6ImNmYzVkOTI1LTkzYmUtNGI0My1iMGJhLWY4MTlhYWUxZjM0MyIsInV0bUNhbXBhaWduIjoiZ2xvYmFsbmF2IiwidXRtQ29udGVudCI6ImdldC1maXJlZm94LWFjY291bnQiLCJ1dG1NZWRpdW0iOiJyZWZlcnJhbCIsInV0bVNvdXJjZSI6Ind3dy5tb3ppbGxhLm9yZyIsInV0bVRlcm0iOm51bGx9&emailToHashWith=reubenrowland64%40gmail.com&utm_medium=email&utm_campaign=fx-forgot-password&utm_content=fx-reset-password **Browser / Version**: Firefox Mobile 68.0 **Operating System**: Android 8.0.0 **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: 505 error **Steps to Reproduce**: Won't load [![Screenshot Description](https://webcompat.com/uploads/2019/6/416874e1-6e73-49fe-b426-33b384ce1f39-thumb.jpeg)](https://webcompat.com/uploads/2019/6/416874e1-6e73-49fe-b426-33b384ce1f39.jpeg)
Browser Configuration
  • mixed active content blocked: false
  • image.mem.shared: true
  • buildID: 20190603181408
  • tracking content blocked: false
  • gfx.webrender.blob-images: true
  • hasTouchScreen: true
  • mixed passive content blocked: false
  • gfx.webrender.enabled: false
  • gfx.webrender.all: false
  • channel: beta

Console Messages:

[u'[console.error(SecurityError: The operation is insecure.) https://accounts-static.cdn.mozilla.net/bundle-e0b6d3a609074d8cd90f62f5d5b500b1b55713df/appDependencies.bundle.js:65:79710]']
_From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"accounts.firefox.com - site is not usable - **URL**: https://accounts.firefox.com/complete_reset_password?uid=f029469bc935401cab737a7fc11c5ebc&token=dff0de94541efe694ffe0b65f75c6ff6fe594b53015315423f24f114fdc6af43&code=34aca3ec07bef74fe88eb68b47bf6dab&email=reubenrowland64%40gmail.com&service=sync&resume=eyJkZXZpY2VJZCI6IjlmNGE5NzU4NmUxMjRhMmE5OTgxYTFhODczZmY0YzJlIiwiZW1haWwiOiJyZXViZW5yb3dsYW5kNjRAZ21haWwuY29tIiwiZW50cnlwb2ludCI6Im1vemlsbGEub3JnLWdsb2JhbG5hdiIsImVudHJ5cG9pbnRFeHBlcmltZW50IjpudWxsLCJlbnRyeXBvaW50VmFyaWF0aW9uIjpudWxsLCJmbG93QmVnaW4iOjE1NjAwOTk2MDI1NTEsImZsb3dJZCI6IjUwODBmYmVhZjJlY2MyZTc4ZjJiZDNlZDVjNmI5NWVjZWQzMzY3MjViYWZiNjMzMjc2ZmIzZWNhMGRlMWZkNWQiLCJyZXNldFBhc3N3b3JkQ29uZmlybSI6dHJ1ZSwic3R5bGUiOm51bGwsInVuaXF1ZVVzZXJJZCI6ImNmYzVkOTI1LTkzYmUtNGI0My1iMGJhLWY4MTlhYWUxZjM0MyIsInV0bUNhbXBhaWduIjoiZ2xvYmFsbmF2IiwidXRtQ29udGVudCI6ImdldC1maXJlZm94LWFjY291bnQiLCJ1dG1NZWRpdW0iOiJyZWZlcnJhbCIsInV0bVNvdXJjZSI6Ind3dy5tb3ppbGxhLm9yZyIsInV0bVRlcm0iOm51bGx9&emailToHashWith=reubenrowland64%40gmail.com&utm_medium=email&utm_campaign=fx-forgot-password&utm_content=fx-reset-password **Browser / Version**: Firefox Mobile 68.0 **Operating System**: Android 8.0.0 **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: 505 error **Steps to Reproduce**: Won't load [![Screenshot Description](https://webcompat.com/uploads/2019/6/416874e1-6e73-49fe-b426-33b384ce1f39-thumb.jpeg)](https://webcompat.com/uploads/2019/6/416874e1-6e73-49fe-b426-33b384ce1f39.jpeg)
Browser Configuration
  • mixed active content blocked: false
  • image.mem.shared: true
  • buildID: 20190603181408
  • tracking content blocked: false
  • gfx.webrender.blob-images: true
  • hasTouchScreen: true
  • mixed passive content blocked: false
  • gfx.webrender.enabled: false
  • gfx.webrender.all: false
  • channel: beta

Console Messages:

[u'[console.error(SecurityError: The operation is insecure.) https://accounts-static.cdn.mozilla.net/bundle-e0b6d3a609074d8cd90f62f5d5b500b1b55713df/appDependencies.bundle.js:65:79710]']
_From [webcompat.com](https://webcompat.com/) with ❤️_",0,accounts firefox com site is not usable url browser version firefox mobile operating system android tested another browser yes problem type site is not usable description error steps to reproduce won t load browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen true mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel beta console messages from with ❤️ ,0 3756,14516096924.0,IssuesEvent,2020-12-13 14:47:31,domoticafacilconjota/capitulos,https://api.github.com/repos/domoticafacilconjota/capitulos,closed,Manejo Iluminación Dormitorio ,Automation a Node RED,"**Código de la automatización** ``` - id: '1603846567316' alias: 'Mesita #1' description: 'Enciende y apaga Mesita de noche #1' trigger: - platform: device domain: mqtt device_id: d0c248fc0a7311eb8ac4bd3e4b327c07 type: action subtype: single discovery_id: 0x00158d000450b798 action_single condition: [] action: - type: toggle device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light#1 domain: light mode: single - id: '1603846764866' alias: 'Mesita #2' description: 'Enciende y apaga Mesita de noche #2' trigger: - platform: device domain: mqtt device_id: 531d50760bee11ebabb7f953763782dc type: action subtype: single discovery_id: 0x00158d000450b761 action_single condition: [] action: - type: toggle device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light#2 domain: light mode: single - id: '1603846964929' alias: 'Mesita #1 y Mesita #2' description: 'Enciende y apaga Mesita de noche #1 y Mesita de noche #2' trigger: - platform: device domain: mqtt device_id: d0c248fc0a7311eb8ac4bd3e4b327c07 type: action subtype: double discovery_id: 0x00158d000450b798 action_double - platform: device domain: mqtt device_id: 531d50760bee11ebabb7f953763782dc type: action subtype: double discovery_id: 0x00158d000450b761 action_double condition: [] action: - type: toggle device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light#1 domain: light - type: toggle device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light#2 domain: light mode: single - id: '1603847207830' alias: 'Mesita #1, Mesita #2 y Lampara' description: 'Enciende y apaga Mesita de noche #1, Mesita de noche #2 y Lampara de techo' trigger: - platform: device domain: mqtt device_id: d0c248fc0a7311eb8ac4bd3e4b327c07 type: action subtype: hold discovery_id: 0x00158d000450b798 action_hold - platform: device domain: mqtt device_id: 531d50760bee11ebabb7f953763782dc type: action subtype: hold discovery_id: 0x00158d000450b761 action_hold condition: [] action: - type: toggle device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light#1 domain: light - type: toggle device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light#2 domain: light - type: toggle device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light#3 domain: light mode: single ``` **Explicación de lo que hace actualmente la automatización** Un clip ""single"" en el interruptor inalambrico de la mesita #1 enciende la lampara de noche de esa mesita, un doble clip enciende ambas lamparas en la mesita #1 y la mesita #2 o mantener el interruptor ""hold"" enciende ambas lamparas de las mesitas y la lampara principal de la habitacion, lo mismo si esto se realiza con el interruptor de la mesita #2. 
To turn them off, just perform the same action on either of the switches. (single, double, or hold.) **Author's notes** The hardware used is two Aqara wireless switches, model WXKG11LM, and three Xiaomi bulbs, model ZNLDP12LM, all on the Zigbee protocol.",1.0,"Manejo Iluminación Dormitorio - **Automation code** ``` - id: '1603846567316' alias: 'Mesita #1' description: 'Enciende y apaga Mesita de noche #1' trigger: - platform: device domain: mqtt device_id: d0c248fc0a7311eb8ac4bd3e4b327c07 type: action subtype: single discovery_id: 0x00158d000450b798 action_single condition: [] action: - type: toggle device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light#1 domain: light mode: single - id: '1603846764866' alias: 'Mesita #2' description: 'Enciende y apaga Mesita de noche #2' trigger: - platform: device domain: mqtt device_id: 531d50760bee11ebabb7f953763782dc type: action subtype: single discovery_id: 0x00158d000450b761 action_single condition: [] action: - type: toggle device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light#2 domain: light mode: single - id: '1603846964929' alias: 'Mesita #1 y Mesita #2' description: 'Enciende y apaga Mesita de noche #1 y Mesita de noche #2' trigger: - platform: device domain: mqtt device_id: d0c248fc0a7311eb8ac4bd3e4b327c07 type: action subtype: double discovery_id: 0x00158d000450b798 action_double - platform: device domain: mqtt device_id: 531d50760bee11ebabb7f953763782dc type: action subtype: double discovery_id: 0x00158d000450b761 action_double condition: [] action: - type: toggle device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light#1 domain: light - type: toggle device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light#2 domain: light mode: single - id: '1603847207830' alias: 'Mesita #1, Mesita #2 y Lampara' description: 'Enciende y apaga Mesita de noche #1, Mesita de noche #2 y Lampara de techo' trigger: - platform: device domain: mqtt device_id: d0c248fc0a7311eb8ac4bd3e4b327c07 type: action subtype: hold discovery_id: 0x00158d000450b798 action_hold - platform: device domain: mqtt device_id: 531d50760bee11ebabb7f953763782dc type: action subtype: hold discovery_id: 0x00158d000450b761 action_hold condition: [] action: - type: toggle device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light#1 domain: light - type: toggle device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light#2 domain: light - type: toggle device_id: d21be7620a7311eba8b1978a04c92df0 entity_id: light.lampara_light#3 domain: light mode: single ``` **Explanation of what the automation currently does** A ""single"" click on the wireless switch at nightstand #1 turns on that nightstand's bedside lamp, a double click turns on both lamps at nightstand #1 and nightstand #2, and holding the switch (""hold"") turns on both nightstand lamps plus the room's main ceiling lamp; the same applies when using the switch at nightstand #2. To turn them off, just perform the same action on either of the switches. (single, double, or hold.) 
**Notas del autor** El hardware usado es dos interruptores inalambricos aquara modelo WXKG11LM y tres ampolletas xiaomi modelo ZNLDP12LM todos con protocolo zigbee.",1,manejo iluminación dormitorio código de la automatización id alias mesita description enciende y apaga mesita de noche trigger platform device domain mqtt device id type action subtype single discovery id action single condition action type toggle device id entity id light lampara light domain light mode single id alias mesita description enciende y apaga mesita de noche trigger platform device domain mqtt device id type action subtype single discovery id action single condition action type toggle device id entity id light lampara light domain light mode single id alias mesita y mesita description enciende y apaga mesita de noche y mesita de noche trigger platform device domain mqtt device id type action subtype double discovery id action double platform device domain mqtt device id type action subtype double discovery id action double condition action type toggle device id entity id light lampara light domain light type toggle device id entity id light lampara light domain light mode single id alias mesita mesita y lampara description enciende y apaga mesita de noche mesita de noche y lampara de techo trigger platform device domain mqtt device id type action subtype hold discovery id action hold platform device domain mqtt device id type action subtype hold discovery id action hold condition action type toggle device id entity id light lampara light domain light type toggle device id entity id light lampara light domain light type toggle device id entity id light lampara light domain light mode single explicación de lo que hace actualmente la automatización un clip single en el interruptor inalambrico de la mesita enciende la lampara de noche de esa mesita un doble clip enciende ambas lamparas en la mesita y la mesita o mantener el interruptor hold enciende ambas lamparas de las mesitas y la lampara principal de la habitacion lo mismo si esto se realiza con el interruptor de la mesita para apagar solo se debe realizar la misma accion en cualquiera de los interruptores single doble o hold notas del autor el hardware usado es dos interruptores inalambricos aquara modelo y tres ampolletas xiaomi modelo todos con protocolo zigbee ,1 689,7804222972.0,IssuesEvent,2018-06-11 06:28:36,neon-bindings/neon,https://api.github.com/repos/neon-bindings/neon,opened,Streamlined release automation,Triaged automation,"It would be good to get the release automation down to a single command, perhaps with one prompt for the GitHub access token. We can do this once we: - [ ] Trust the `bump.sh` script enough to run unaided without verifying the diffs. - [ ] Change `neon new` to run without prompts by default so `validate.sh` can run unaided. - [ ] Create a master `release.sh` script to coordinate all these intermediate steps. ",1.0,"Streamlined release automation - It would be good to get the release automation down to a single command, perhaps with one prompt for the GitHub access token. We can do this once we: - [ ] Trust the `bump.sh` script enough to run unaided without verifying the diffs. - [ ] Change `neon new` to run without prompts by default so `validate.sh` can run unaided. - [ ] Create a master `release.sh` script to coordinate all these intermediate steps. 
",1,streamlined release automation it would be good to get the release automation down to a single command perhaps with one prompt for the github access token we can do this once we trust the bump sh script enough to run unaided without verifying the diffs change neon new to run without prompts by default so validate sh can run unaided create a master release sh script to coordinate all these intermediate steps ,1 17312,5382889653.0,IssuesEvent,2017-02-24 03:40:21,Microsoft/TypeScript,https://api.github.com/repos/Microsoft/TypeScript,opened,Path Suggestions include Excluded Folders and Files ,VS Code Tracked," From https://github.com/Microsoft/vscode/issues/20557 **TypeScript Version:** 2.2.1 ```json { ""exclude"": [ ""node_modules"" ] } ``` ```ts import * as x from './|' ``` Request `completions` from TSServer at the `|` **Expected behavior:** No folder suggestion for `node_modules` is returned since the folder has been explicitly excluded **Actual behavior:** An entry for `node_modules` is included in the results",1.0,"Path Suggestions include Excluded Folders and Files - From https://github.com/Microsoft/vscode/issues/20557 **TypeScript Version:** 2.2.1 ```json { ""exclude"": [ ""node_modules"" ] } ``` ```ts import * as x from './|' ``` Request `completions` from TSServer at the `|` **Expected behavior:** No folder suggestion for `node_modules` is returned since the folder has been explicitly excluded **Actual behavior:** An entry for `node_modules` is included in the results",0,path suggestions include excluded folders and files from typescript version json exclude node modules ts import as x from request completions from tsserver at the expected behavior no folder suggestion for node modules is returned since the folder has been explicitly excluded actual behavior an entry for node modules is included in the results,0 54730,23322160240.0,IssuesEvent,2022-08-08 17:25:58,elastic/kibana,https://api.github.com/repos/elastic/kibana,reopened,[Data view] Default geo-field ,Team:Geo enhancement usability loe:hours Feature:Data Views impact:low Team:AppServicesUx,"**Describe the feature:** Data views have a default time-field. It would be great if users could configure a default geo field. **Describe a specific use case for the feature:** This would be incredibly useful to be able to create default Maps (similar how Kibana can show default time-series visualizations). e.g. automatically show a map in Lens, Discover, .... ",1.0,"[Data view] Default geo-field - **Describe the feature:** Data views have a default time-field. It would be great if users could configure a default geo field. **Describe a specific use case for the feature:** This would be incredibly useful to be able to create default Maps (similar how Kibana can show default time-series visualizations). e.g. automatically show a map in Lens, Discover, .... 
",0, default geo field describe the feature data views have a default time field it would be great if users could configure a default geo field describe a specific use case for the feature this would be incredibly useful to be able to create default maps similar how kibana can show default time series visualizations e g automatically show a map in lens discover ,0 805470,29520837102.0,IssuesEvent,2023-06-05 01:34:26,aws/eks-anywhere,https://api.github.com/repos/aws/eks-anywhere,closed,Add V9 logging for mystery kubectl manifest,priority/p2 stale team/ce," **What would you like to be added**: I would like v9 logging to be added for the manifest that is being applied via stdin (`kubectl apply -f -`) during cluster upgrade/create. **Why is this needed**: The current failure being logged is difficult to troubleshoot be cause I cannot see what the manifest actually was. ``` {""T"":1672876356742319224,""M"":""Executing command"",""cmd"":""/usr/bin/docker exec -i eksa_1672875864202180228 kubectl apply -f - --namespace eksa-system --kubeconfig test/generated/test.kind.kubeconfig""} {""T"":1672876357496487511,""M"":""docker"",""stderr"":""error: error validating \""STDIN\"": error validating data: [ValidationError(KubeadmControlPlane.spec.kubeadmConfigSpec.files[1]): missing required field \""path\"" in io.x-k8s.cluster.controlplane.v1beta1.KubeadmControlPlane.spec.kubeadmConfigSpec.files, ValidationError(KubeadmControlPlane.spec): missing required field \""machineTemplate\"" in io.x-k8s.cluster.controlplane.v1beta1.KubeadmControlPlane.spec, ValidationError(KubeadmControlPlane.spec): missing required field \""version\"" in io.x-k8s.cluster.controlplane.v1beta1.KubeadmControlPlane.spec]; if you choose to ignore these errors, turn validation off with --validate=false\n""} ``` ",1.0,"Add V9 logging for mystery kubectl manifest - **What would you like to be added**: I would like v9 logging to be added for the manifest that is being applied via stdin (`kubectl apply -f -`) during cluster upgrade/create. **Why is this needed**: The current failure being logged is difficult to troubleshoot be cause I cannot see what the manifest actually was. 
``` {""T"":1672876356742319224,""M"":""Executing command"",""cmd"":""/usr/bin/docker exec -i eksa_1672875864202180228 kubectl apply -f - --namespace eksa-system --kubeconfig test/generated/test.kind.kubeconfig""} {""T"":1672876357496487511,""M"":""docker"",""stderr"":""error: error validating \""STDIN\"": error validating data: [ValidationError(KubeadmControlPlane.spec.kubeadmConfigSpec.files[1]): missing required field \""path\"" in io.x-k8s.cluster.controlplane.v1beta1.KubeadmControlPlane.spec.kubeadmConfigSpec.files, ValidationError(KubeadmControlPlane.spec): missing required field \""machineTemplate\"" in io.x-k8s.cluster.controlplane.v1beta1.KubeadmControlPlane.spec, ValidationError(KubeadmControlPlane.spec): missing required field \""version\"" in io.x-k8s.cluster.controlplane.v1beta1.KubeadmControlPlane.spec]; if you choose to ignore these errors, turn validation off with --validate=false\n""} ``` ",0,add logging for mystery kubectl manifest what would you like to be added i would like logging to be added for the manifest that is being applied via stdin kubectl apply f during cluster upgrade create why is this needed the current failure being logged is difficult to troubleshoot be cause i cannot see what the manifest actually was t m executing command cmd usr bin docker exec i eksa kubectl apply f namespace eksa system kubeconfig test generated test kind kubeconfig t m docker stderr error error validating stdin error validating data missing required field path in io x cluster controlplane kubeadmcontrolplane spec kubeadmconfigspec files validationerror kubeadmcontrolplane spec missing required field machinetemplate in io x cluster controlplane kubeadmcontrolplane spec validationerror kubeadmcontrolplane spec missing required field version in io x cluster controlplane kubeadmcontrolplane spec if you choose to ignore these errors turn validation off with validate false n ,0 4007,15158884744.0,IssuesEvent,2021-02-12 02:27:40,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,closed,[Bug] testStrictVisitDisableExceptionToggle test possibly flaky on Firebase,eng:automation wontfix 🐞 bug,"Spotted testStrictVisitDisableExceptionToggle failing on Nexus 6: https://console.firebase.google.com/u/0/project/moz-fenix/testlab/histories/bh.66b7091e15d53d45/matrices/7727347515092753227/executions/bs.280da37d4a1af1ad - if it keeps failing will need to be disabled ",1.0,"[Bug] testStrictVisitDisableExceptionToggle test possibly flaky on Firebase - Spotted testStrictVisitDisableExceptionToggle failing on Nexus 6: https://console.firebase.google.com/u/0/project/moz-fenix/testlab/histories/bh.66b7091e15d53d45/matrices/7727347515092753227/executions/bs.280da37d4a1af1ad - if it keeps failing will need to be disabled ",1, teststrictvisitdisableexceptiontoggle test possibly flaky on firebase spotted teststrictvisitdisableexceptiontoggle failing on nexus if it keeps failing will need to be disabled ,1 278258,21058242157.0,IssuesEvent,2022-04-01 06:55:00,lchokhoe/ped,https://api.github.com/repos/lchokhoe/ped,opened,"Documentation Bug - Typo found in the ""Edit a Procedure of your Client: `editProc`"" subsection",severity.VeryLow type.DocumentationBug,"Documentation Bug - Typo found in the ""Edit a Procedure of your Client: `editProc`"" subsection Typo was found in the mentioned section. 
![image.png](https://raw.githubusercontent.com/lchokhoe/ped/main/files/1ffa4fb3-8c11-4613-91c5-fb1568c3833b.png) ",1.0,"Documentation Bug - Typo found in the ""Edit a Procedure of your Client: `editProc`"" subsection - Documentation Bug - Typo found in the ""Edit a Procedure of your Client: `editProc`"" subsection Typo was found in the mentioned section. ![image.png](https://raw.githubusercontent.com/lchokhoe/ped/main/files/1ffa4fb3-8c11-4613-91c5-fb1568c3833b.png) ",0,documentation bug typo found in the edit a procedure of your client editproc subsection documentation bug typo found in the edit a procedure of your client editproc subsection typo was found in the mentioned section ,0 638157,20713614120.0,IssuesEvent,2022-03-12 09:20:33,ISPP-AparkApp/aparkapp,https://api.github.com/repos/ISPP-AparkApp/aparkapp,opened,Crear modelos,enhancement high priority,"En base al modelado UML, se deberán crear en Django todos los modelos necesarios junto con sus atributos y relaciones correspondientes.",1.0,"Crear modelos - En base al modelado UML, se deberán crear en Django todos los modelos necesarios junto con sus atributos y relaciones correspondientes.",0,crear modelos en base al modelado uml se deberán crear en django todos los modelos necesarios junto con sus atributos y relaciones correspondientes ,0 37464,12479487286.0,IssuesEvent,2020-05-29 18:21:43,chef/chef-analyze,https://api.github.com/repos/chef/chef-analyze,opened,Add `anonymize` flag to `chef-analyze report`,Aspect: Security Triage: Confirmed,"Add an anonymize flag so that we can allow customers to provide us with meaningful reports without revealing sensitive internal data. Make this configurable so that it can be enabled by default. This work exists and is complete (except for policyfile) in the branch `mp/anon. The branch will need to be brought up to date, or the changes re-applied in a new branch. Output format examples pending. ",True,"Add `anonymize` flag to `chef-analyze report` - Add an anonymize flag so that we can allow customers to provide us with meaningful reports without revealing sensitive internal data. Make this configurable so that it can be enabled by default. This work exists and is complete (except for policyfile) in the branch `mp/anon. The branch will need to be brought up to date, or the changes re-applied in a new branch. Output format examples pending. ",0,add anonymize flag to chef analyze report add an anonymize flag so that we can allow customers to provide us with meaningful reports without revealing sensitive internal data make this configurable so that it can be enabled by default this work exists and is complete except for policyfile in the branch mp anon the branch will need to be brought up to date or the changes re applied in a new branch output format examples pending ,0 114817,24672663651.0,IssuesEvent,2022-10-18 14:45:40,FerretDB/FerretDB,https://api.github.com/repos/FerretDB/FerretDB,opened,Allow dashes (`-`) in collection names,code/enhancement,"### What should be done? Currently, we reject collection names with dashes (`-`). We should allow them because they are common in real-life apps. We should update code, tests, diff tests in the dance repo, and documentation.",1.0,"Allow dashes (`-`) in collection names - ### What should be done? Currently, we reject collection names with dashes (`-`). We should allow them because they are common in real-life apps. 
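For illustration only — this is not FerretDB's actual validator — the requested change amounts to widening the accepted character class in the name check:

```ts
// Hypothetical before/after of a collection-name check. MongoDB itself
// mainly restricts "$", the empty name, and "system."-prefixed names;
// the exact rules FerretDB enforces are not reproduced here.
const before = /^[a-zA-Z_][a-zA-Z0-9_]*$/;
const after  = /^[a-zA-Z_][a-zA-Z0-9_-]*$/; // dashes now allowed past the first char

for (const name of ["orders", "user-profiles", "-leading-dash", "$bad"]) {
  console.log(name, "before:", before.test(name), "after:", after.test(name));
}
```

With the widened class, "user-profiles" flips from rejected to accepted while "-leading-dash" and "$bad" stay rejected.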
We should update code, tests, diff tests in the dance repo, and documentation.",0,allow dashes in collection names what should be done currently we reject collection names with dashes we should allow them because they are common in real life apps we should update code tests diff tests in the dance repo and documentation ,0 6259,22618382333.0,IssuesEvent,2022-06-30 02:12:43,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,IE11 Input Typing Problem,TYPE: bug AREA: client SYSTEM: automations FREQUENCY: level 1 STATE: Stale," ### What is your Test Scenario? Given a webpage with 2 input fields i'm trying to type text into both. Problem is presented when using IE11 on saucelabs with testcafe. Typetext only inputs text to the first of both inputs, the second time i call typeText method it doesn't seem to work correctly. ### What is the Current behavior? When using typeText method on first input (could be any of the two inputs mentioned) it types perfectly the text passed. Later when i try to switch to the other input and type text into it it refuses to type the text or even select the input requested. I tried both clicking and then typeText methods but nothing seems to work in IE11. Taking in consideration this works without any problem on chrome and firefox it's possibly a problem fully related to IE11. ### What is the Expected behavior? typeText method should switch between these two input fields typing the correct text. Using click and then triggering the typeText method or just by using typeText method. ### What is your web application and your TestCafe test code? Your website URL (or attach your complete example): https://anypoint.mulesoft.com/login/
Your complete test code (or attach your test files): ``` // Please note all code below works on chrome and firefox and selectors are properly initialized await t.click(loginPage.userNameInput); await t.typeText(loginPage.userNameInput,username,{replace:true,caretPos:0,paste:true}); await t.click(loginPage.passwordInput); await t.typeText(loginPage.passwordInput,password,{replace:true,caretPos:0,paste:true}); await t.click(loginPage.signInButton); ```
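An aside on the snippet above: a self-contained repro in TestCafe's standard fixture/test form can help rule out page-object issues. A sketch with hypothetical selectors (`#username` and `#password` are placeholders, not the real page's ids):

```ts
// repro.ts — minimal TestCafe test typing into two fields in sequence.
// fixture/test/Selector and t.click/t.typeText/t.expect are real TestCafe
// APIs; the selectors and credentials here are illustrative.
import { Selector } from "testcafe";

fixture("IE11 typing repro").page("https://anypoint.mulesoft.com/login/");

test("types into both inputs", async (t) => {
  const user = Selector("#username"); // hypothetical selector
  const pass = Selector("#password"); // hypothetical selector

  await t
    .click(user)
    .typeText(user, "someUser", { replace: true, paste: true })
    .click(pass)
    .typeText(pass, "somePass", { replace: true, paste: true })
    .expect(pass.value).eql("somePass"); // fails if the second field never receives text
});
```

If this minimal form also fails only on IE11, the problem sits in the browser automation layer rather than in the page objects.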
Your complete configuration file (if any): ``` ```
Your complete test report: ``` ```
Screenshots: ``` ```
### Steps to Reproduce: 1. Go to my website ... 2. Try to fill in username and password with any value... 3. Second input field tried to access isn't filled up ### Your Environment details: * testcafe version: 1.3.3 * node.js version: 10.15.1 * command-line arguments: * browser name and version: IE11 on Saucelabs * platform and version: Windows 8.1 * other: Using saucelabs provider ",1.0,"IE11 Input Typing Problem - ### What is your Test Scenario? Given a webpage with 2 input fields i'm trying to type text into both. Problem is presented when using IE11 on saucelabs with testcafe. Typetext only inputs text to the first of both inputs, the second time i call typeText method it doesn't seem to work correctly. ### What is the Current behavior? When using typeText method on first input (could be any of the two inputs mentioned) it types perfectly the text passed. Later when i try to switch to the other input and type text into it it refuses to type the text or even select the input requested. I tried both clicking and then typeText methods but nothing seems to work in IE11. Taking in consideration this works without any problem on chrome and firefox it's possibly a problem fully related to IE11. ### What is the Expected behavior? typeText method should switch between these two input fields typing the correct text. Using click and then triggering the typeText method or just by using typeText method. ### What is your web application and your TestCafe test code? Your website URL (or attach your complete example): https://anypoint.mulesoft.com/login/
### Steps to Reproduce: 1. Go to my website ... 2. Try to fill in username and password with any value... 3. Second input field tried to access isn't filled up ### Your Environment details: * testcafe version: 1.3.3 * node.js version: 10.15.1 * command-line arguments: * browser name and version: IE11 on Saucelabs * platform and version: Windows 8.1 * other: Using saucelabs provider ",1, input typing problem if you have all reproduction steps with a complete sample app please share as many details as possible in the sections below make sure that you tried using the latest testcafe version where this behavior might have been already addressed before submitting an issue please check contributing md and existing issues in this repository  in case a similar issue exists or was already addressed this may save your time and ours what is your test scenario given a webpage with input fields i m trying to type text into both problem is presented when using on saucelabs with testcafe typetext only inputs text to the first of both inputs the second time i call typetext method it doesn t seem to work correctly what is the current behavior when using typetext method on first input could be any of the two inputs mentioned it types perfectly the text passed later when i try to switch to the other input and type text into it it refuses to type the text or even select the input requested i tried both clicking and then typetext methods but nothing seems to work in taking in consideration this works without any problem on chrome and firefox it s possibly a problem fully related to what is the expected behavior typetext method should switch between these two input fields typing the correct text using click and then triggering the typetext method or just by using typetext method what is your web application and your testcafe test code your website url or attach your complete example your complete test code or attach your test files please note all code below works on chrome and firefox and selectors are properly initialized await t click loginpage usernameinput await t typetext loginpage usernameinput username replace true caretpos paste true await t click loginpage passwordinpu await t typetext loginpage passwordinput password replace true caretpos paste true await t click loginpage signinbutton your complete configuration file if any your complete test report screenshots steps to reproduce go to my website try to fill in username and password with any value second input field tried to access isn t filled up your environment details testcafe version node js version command line arguments browser name and version on saucelabs platform and version windows other using saucelabs provider ,1 469363,13507363663.0,IssuesEvent,2020-09-14 05:47:20,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,www.google.com - see bug description,browser-firefox engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical," **URL**: https://www.google.com/ **Browser / Version**: Firefox 81.0 **Operating System**: Windows 7 **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: google **Steps to Reproduce**: site not opening
View the screenshot
Browser Configuration
  • gfx.webrender.all: false
  • gfx.webrender.blob-images: true
  • gfx.webrender.enabled: false
  • image.mem.shared: true
  • buildID: 20200910180444
  • channel: beta
  • hasTouchScreen: false
  • mixed active content blocked: false
  • mixed passive content blocked: false
  • tracking content blocked: false
[View console log messages](https://webcompat.com/console_logs/2020/9/276c577c-feaa-40d1-be1d-5ae55bd3571e) _From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"www.google.com - see bug description - **URL**: https://www.google.com/ **Browser / Version**: Firefox 81.0 **Operating System**: Windows 7 **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: google **Steps to Reproduce**: site not opening
[View console log messages](https://webcompat.com/console_logs/2020/9/276c577c-feaa-40d1-be1d-5ae55bd3571e) _From [webcompat.com](https://webcompat.com/) with ❤️_",0, see bug description url browser version firefox operating system windows tested another browser yes chrome problem type something else description google steps to reproduce site not opening view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ ,0 6284,22686930723.0,IssuesEvent,2022-07-04 14:56:28,submariner-io/get.submariner.io,https://api.github.com/repos/submariner-io/get.submariner.io,closed,Add periodic get.submariner.io subctl install tests,help wanted automation S next-version-candidate,"As shown by https://github.com/submariner-io/get.submariner.io/pull/13, changes elsewhere can break get.submariner.io-based installs. Add periodic tests to verify subctl installs work, report any failures.",1.0,"Add periodic get.submariner.io subctl install tests - As shown by https://github.com/submariner-io/get.submariner.io/pull/13, changes elsewhere can break get.submariner.io-based installs. Add periodic tests to verify subctl installs work, report any failures.",1,add periodic get submariner io subctl install tests as shown by changes elsewhere can break get submariner io based installs add periodic tests to verify subctl installs work report any failures ,1 43861,11309693922.0,IssuesEvent,2020-01-19 14:52:27,rust-lang/docs.rs,https://api.github.com/repos/rust-lang/docs.rs,closed,Crate `grex` did not build,build-failure," **Crate name:** grex **Build failure link:** https://docs.rs/crate/grex/0.3.1/builds/210840 **Additional details:** The problem is dependent crate `matrixmultiply v0.2.3`. The error is `array lengths can't depend on generic parameters`. This bug has been reported already in [bluss/matrixmultiply#50](https://github.com/bluss/matrixmultiply/issues/50). Seems to be a problem with the current Rust nightly as reported in [rust-lang/rust#67743](https://github.com/rust-lang/rust/issues/67743).",1.0,"Crate `grex` did not build - **Crate name:** grex **Build failure link:** https://docs.rs/crate/grex/0.3.1/builds/210840 **Additional details:** The problem is dependent crate `matrixmultiply v0.2.3`. The error is `array lengths can't depend on generic parameters`. This bug has been reported already in [bluss/matrixmultiply#50](https://github.com/bluss/matrixmultiply/issues/50). 
Seems to be a problem with the current Rust nightly as reported in [rust-lang/rust#67743](https://github.com/rust-lang/rust/issues/67743).",0,crate grex did not build if you need a system dependency added for your crate to build consider making a pr to instead of opening an issue here there are detailed instructions for this at crate name grex build failure link additional details the problem is dependent crate matrixmultiply the error is array lengths can t depend on generic parameters this bug has been reported already in seems to be a problem with the current rust nightly as reported in ,0 232845,7680667987.0,IssuesEvent,2018-05-16 03:01:56,turenar/mayfes2018-pikyou,https://api.github.com/repos/turenar/mayfes2018-pikyou,closed,[Blockly/enchantjs] コードの実行に eval を使用している。,component:game priority:low type:enhancement,"Blocklyで生成したコードは文字列で得られるためこれを実行する必要がある。現在は、`eval(code);`として実行しているが、`eval`は任意のコードが実行できたり、遅かったりとあまりよろしくない(?)らしい。 もし代替手段があればそちらに切り替えたい。",1.0,"[Blockly/enchantjs] コードの実行に eval を使用している。 - Blocklyで生成したコードは文字列で得られるためこれを実行する必要がある。現在は、`eval(code);`として実行しているが、`eval`は任意のコードが実行できたり、遅かったりとあまりよろしくない(?)らしい。 もし代替手段があればそちらに切り替えたい。",0, コードの実行に eval を使用している。 blocklyで生成したコードは文字列で得られるためこれを実行する必要がある。現在は、 eval code として実行しているが、 eval は任意のコードが実行できたり、遅かったりとあまりよろしくない(?)らしい。 もし代替手段があればそちらに切り替えたい。,0 465257,13369351508.0,IssuesEvent,2020-09-01 08:42:58,grpc/grpc,https://api.github.com/repos/grpc/grpc,opened,Image missing in documentation for C++ Completion Queue,kind/bug priority/P2," I found a bug in the documentation for Grpc Core at . The image address is incorrect so the image is not displayed. It is currently . Looking at the raw markdown article for that documentation at I see the image and the address used there is .",1.0,"Image missing in documentation for C++ Completion Queue - I found a bug in the documentation for Grpc Core at . The image address is incorrect so the image is not displayed. It is currently . Looking at the raw markdown article for that documentation at I see the image and the address used there is .",0,image missing in documentation for c completion queue please do not post a question here this form is for bug reports and feature requests only for general questions and troubleshooting please ask look for answers at stackoverflow with grpc tag for questions that specifically need to be answered by grpc team members please ask look for answers at grpc io mailing list issues specific to grpc java grpc go grpc node grpc dart grpc web should be created in the repository they belong to e g i found a bug in the documentation for grpc core at the image address is incorrect so the image is not displayed it is currently looking at the raw markdown article for that documentation at i see the image and the address used there is ,0 8420,26950258489.0,IssuesEvent,2023-02-08 11:07:27,gchq/Gaffer,https://api.github.com/repos/gchq/Gaffer,closed,Improve code analysis CI in Gaffer,automation,"https://github.com/gchq/Gaffer/issues/39 added [FindBugs](https://gleclaire.github.io/findbugs-maven-plugin/) to Gaffer. FindBugs is no longer maintained and has been replaced by [SpotBugs](https://spotbugs.github.io/spotbugs-maven-plugin/). This could be upgraded, or a more modern tool could be used to replace this entirely. 
For example: - [CodeQL](https://codeql.github.com/) - [semgrep](https://semgrep.dev/) - [Codacy](https://github.com/marketplace/codacy) - [sonarqube](https://www.sonarqube.org/) Some of these would also replace the need for other plugins such as checkstyle and code coverage as they handle those too. ",1.0,"Improve code analysis CI in Gaffer - https://github.com/gchq/Gaffer/issues/39 added [FindBugs](https://gleclaire.github.io/findbugs-maven-plugin/) to Gaffer. FindBugs is no longer maintained and has been replaced by [SpotBugs](https://spotbugs.github.io/spotbugs-maven-plugin/). This could be upgraded, or a more modern tool could be used to replace this entirely. For example: - [CodeQL](https://codeql.github.com/) - [semgrep](https://semgrep.dev/) - [Codacy](https://github.com/marketplace/codacy) - [sonarqube](https://www.sonarqube.org/) Some of these would also replace the need for other plugins such as checkstyle and code coverage as they handle those too. ",1,improve code analysis ci in gaffer added to gaffer findbugs is no longer maintained and has been replaced by this could be upgraded or a more modern tool could be used to replace this entirely for example some of these would also replace the need for other plugins such as checkstyle and code coverage as they handle those too ,1 286794,24785848767.0,IssuesEvent,2022-10-24 09:41:41,QubesOS/updates-status,https://api.github.com/repos/QubesOS/updates-status,closed,linux-kernel-latest v6.0.2-1-latest (r4.1),buggy r4.1-dom0-cur-test,"Update of linux-kernel-latest to v6.0.2-1-latest for Qubes r4.1, see comments below for details. Built from: https://github.com/QubesOS/qubes-linux-kernel/commit/fb45fcbcc3adfe1fcb40de48aeb7a48e265d16c2 [Changes since previous version](https://github.com/QubesOS/qubes-linux-kernel/compare/v5.18.16-1-latest...v6.0.2-1-latest): QubesOS/qubes-linux-kernel@fb45fcb version 6.0.2-1 QubesOS/qubes-linux-kernel@073ed83 Refresh packaging for -rc kernels QubesOS/qubes-linux-kernel@4752f09 Fix modules.img relabeling for .0 mainline release QubesOS/qubes-linux-kernel@dcbf75e Disable building macbook12-spi-driver QubesOS/qubes-linux-kernel@5ead150 Switch download urls to 6.x QubesOS/qubes-linux-kernel@1e94b79 Merge branch 'update-v5.19.12' QubesOS/qubes-linux-kernel@758b538 version 5.19.14-1 QubesOS/qubes-linux-kernel@f694058 Disable CONFIG_XEN_VIRTIO QubesOS/qubes-linux-kernel@8c14396 Revert ""Include patch fixing Ryzen 6000 Keyboard"" QubesOS/qubes-linux-kernel@8b21eb7 Update to kernel-5.19.12 QubesOS/qubes-linux-kernel@b8477b1 version 5.19.9-1 QubesOS/qubes-linux-kernel@64d6d29 Revert ""Backport fix for persistent grants negotiation in blkfront/back"" QubesOS/qubes-linux-kernel@f081722 version 5.19.6-1 QubesOS/qubes-linux-kernel@c538914 Revert ""Include patch fixing Xorg crash"" QubesOS/qubes-linux-kernel@5fcfe0f Workaround iwlwifi load issue for AX200-series when MSI-X is unavailable QubesOS/qubes-linux-kernel@b1bc9ec Backport fix for persistent grants negotiation in blkfront/back QubesOS/qubes-linux-kernel@04cc7d6 Drop disabling CONFIG_LEGACY_VSYSCALL_EMULATE QubesOS/qubes-linux-kernel@43b9b30 ci: drop R4.0 build QubesOS/qubes-linux-kernel@3ba0d17 Update to kernel-5.19.2 Referenced issues: QubesOS/qubes-issues#5615 If you're release manager, you can issue GPG-inline signed command: * `Upload linux-kernel-latest fb45fcbcc3adfe1fcb40de48aeb7a48e265d16c2 r4.1 current repo` (available 7 days from now) * `Upload linux-kernel-latest fb45fcbcc3adfe1fcb40de48aeb7a48e265d16c2 r4.1 current (dists) repo`, you can choose subset 
of distributions, like `vm-fc24 vm-fc25` (available 7 days from now) * `Upload linux-kernel-latest fb45fcbcc3adfe1fcb40de48aeb7a48e265d16c2 r4.1 security-testing repo` Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it). ",1.0,"linux-kernel-latest v6.0.2-1-latest (r4.1) - Update of linux-kernel-latest to v6.0.2-1-latest for Qubes r4.1, see comments below for details. Built from: https://github.com/QubesOS/qubes-linux-kernel/commit/fb45fcbcc3adfe1fcb40de48aeb7a48e265d16c2 [Changes since previous version](https://github.com/QubesOS/qubes-linux-kernel/compare/v5.18.16-1-latest...v6.0.2-1-latest): QubesOS/qubes-linux-kernel@fb45fcb version 6.0.2-1 QubesOS/qubes-linux-kernel@073ed83 Refresh packaging for -rc kernels QubesOS/qubes-linux-kernel@4752f09 Fix modules.img relabeling for .0 mainline release QubesOS/qubes-linux-kernel@dcbf75e Disable building macbook12-spi-driver QubesOS/qubes-linux-kernel@5ead150 Switch download urls to 6.x QubesOS/qubes-linux-kernel@1e94b79 Merge branch 'update-v5.19.12' QubesOS/qubes-linux-kernel@758b538 version 5.19.14-1 QubesOS/qubes-linux-kernel@f694058 Disable CONFIG_XEN_VIRTIO QubesOS/qubes-linux-kernel@8c14396 Revert ""Include patch fixing Ryzen 6000 Keyboard"" QubesOS/qubes-linux-kernel@8b21eb7 Update to kernel-5.19.12 QubesOS/qubes-linux-kernel@b8477b1 version 5.19.9-1 QubesOS/qubes-linux-kernel@64d6d29 Revert ""Backport fix for persistent grants negotiation in blkfront/back"" QubesOS/qubes-linux-kernel@f081722 version 5.19.6-1 QubesOS/qubes-linux-kernel@c538914 Revert ""Include patch fixing Xorg crash"" QubesOS/qubes-linux-kernel@5fcfe0f Workaround iwlwifi load issue for AX200-series when MSI-X is unavailable QubesOS/qubes-linux-kernel@b1bc9ec Backport fix for persistent grants negotiation in blkfront/back QubesOS/qubes-linux-kernel@04cc7d6 Drop disabling CONFIG_LEGACY_VSYSCALL_EMULATE QubesOS/qubes-linux-kernel@43b9b30 ci: drop R4.0 build QubesOS/qubes-linux-kernel@3ba0d17 Update to kernel-5.19.2 Referenced issues: QubesOS/qubes-issues#5615 If you're release manager, you can issue GPG-inline signed command: * `Upload linux-kernel-latest fb45fcbcc3adfe1fcb40de48aeb7a48e265d16c2 r4.1 current repo` (available 7 days from now) * `Upload linux-kernel-latest fb45fcbcc3adfe1fcb40de48aeb7a48e265d16c2 r4.1 current (dists) repo`, you can choose subset of distributions, like `vm-fc24 vm-fc25` (available 7 days from now) * `Upload linux-kernel-latest fb45fcbcc3adfe1fcb40de48aeb7a48e265d16c2 r4.1 security-testing repo` Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it). 
",0,linux kernel latest latest update of linux kernel latest to latest for qubes see comments below for details built from qubesos qubes linux kernel version qubesos qubes linux kernel refresh packaging for rc kernels qubesos qubes linux kernel fix modules img relabeling for mainline release qubesos qubes linux kernel disable building spi driver qubesos qubes linux kernel switch download urls to x qubesos qubes linux kernel merge branch update qubesos qubes linux kernel version qubesos qubes linux kernel disable config xen virtio qubesos qubes linux kernel revert include patch fixing ryzen keyboard qubesos qubes linux kernel update to kernel qubesos qubes linux kernel version qubesos qubes linux kernel revert backport fix for persistent grants negotiation in blkfront back qubesos qubes linux kernel version qubesos qubes linux kernel revert include patch fixing xorg crash qubesos qubes linux kernel workaround iwlwifi load issue for series when msi x is unavailable qubesos qubes linux kernel backport fix for persistent grants negotiation in blkfront back qubesos qubes linux kernel drop disabling config legacy vsyscall emulate qubesos qubes linux kernel ci drop build qubesos qubes linux kernel update to kernel referenced issues qubesos qubes issues if you re release manager you can issue gpg inline signed command upload linux kernel latest current repo available days from now upload linux kernel latest current dists repo you can choose subset of distributions like vm vm available days from now upload linux kernel latest security testing repo above commands will work only if packages in current testing repository were built from given commit i e no new version superseded it ,0 4613,17010495189.0,IssuesEvent,2021-07-02 03:13:49,JacobLinCool/BA,https://api.github.com/repos/JacobLinCool/BA,closed,[Finished] Automation (2021/07/01 16:24:27),automation,"**Finished.** (2021/07/01 16:25:06) ## 登入: 完成 ``` [2021/07/01 16:24:28] 開始執行帳號登入程序 [2021/07/01 16:24:28] 正在檢測登入狀態 [2021/07/01 16:24:36] 登入狀態: 已登入 [2021/07/01 16:24:36] 帳號登入程序已完成 ``` ## 簽到: 完成 ``` [2021/07/01 16:24:36] 開始執行自動簽到程序 [2021/07/01 16:24:36] 正在檢測簽到狀態 [2021/07/01 16:24:40] 簽到狀態: 已簽到 [2021/07/01 16:24:41] 自動簽到程序已完成 [2021/07/01 16:24:41] 開始執行自動觀看雙倍簽到獎勵廣告程序 [2021/07/01 16:24:41] 正在檢測雙倍簽到獎勵狀態 [2021/07/01 16:24:46] 雙倍簽到獎勵狀態: 已獲得雙倍簽到獎勵 [2021/07/01 16:24:46] 自動觀看雙倍簽到獎勵廣告程序已完成 ``` ## 答題: 完成 ``` [2021/07/01 16:24:46] 開始執行動畫瘋自動答題程序 [2021/07/01 16:24:46] 正在檢測答題狀態 [2021/07/01 16:24:48] 今日已經答過題目了 [2021/07/01 16:24:49] 動畫瘋自動答題程序已完成 ``` ## 抽獎: 完成 ``` [2021/07/01 16:24:49] 開始執行福利社自動抽抽樂程序 [2021/07/01 16:24:49] 正在尋找抽抽樂 [2021/07/01 16:24:51] 找到 7 個抽抽樂 [2021/07/01 16:24:51] 1: WFH必備,Innowatt 組合禮包抽抽樂! [2021/07/01 16:24:51] 2: 網路爆紅好穿到得獎的 Cody 防水咖啡鞋, 愛地球從腳下開始! [2021/07/01 16:24:51] 3: 『迎廣309陪你防疫打遊戲』 InWin迎廣限時抽抽樂! [2021/07/01 16:24:51] 4: 【PurePlan|頸椎支撐器 2.0】每天 15 分鐘,矯正不良姿勢,減緩頸椎退化! [2021/07/01 16:24:51] 5: 站起來迎接勝利!樂歌 Loctek 電動升降電競桌! [2021/07/01 16:24:51] 6: GoKids玩樂小子|暑假要幹嘛?在家玩桌遊啊! [2021/07/01 16:24:51] 7: CBD漢麻毯|失眠OUT! [2021/07/01 16:24:51] 正在嘗試執行第 1 個抽抽樂: WFH必備,Innowatt 組合禮包抽抽樂! [2021/07/01 16:24:53] 第 1 個抽抽樂(WFH必備,Innowatt 組合禮包抽抽樂!)的廣告免費次數已用完 [2021/07/01 16:24:53] 正在嘗試執行第 2 個抽抽樂: 網路爆紅好穿到得獎的 Cody 防水咖啡鞋, 愛地球從腳下開始! [2021/07/01 16:24:55] 第 2 個抽抽樂(網路爆紅好穿到得獎的 Cody 防水咖啡鞋, 愛地球從腳下開始!)的廣告免費次數已用完 [2021/07/01 16:24:55] 正在嘗試執行第 3 個抽抽樂: 『迎廣309陪你防疫打遊戲』 InWin迎廣限時抽抽樂! [2021/07/01 16:24:57] 第 3 個抽抽樂(『迎廣309陪你防疫打遊戲』 InWin迎廣限時抽抽樂!)的廣告免費次數已用完 [2021/07/01 16:24:57] 正在嘗試執行第 4 個抽抽樂: 【PurePlan|頸椎支撐器 2.0】每天 15 分鐘,矯正不良姿勢,減緩頸椎退化! 
[2021/07/01 16:24:59] 第 4 個抽抽樂(【PurePlan|頸椎支撐器 2.0】每天 15 分鐘,矯正不良姿勢,減緩頸椎退化!)的廣告免費次數已用完 [2021/07/01 16:24:59] 正在嘗試執行第 5 個抽抽樂: 站起來迎接勝利!樂歌 Loctek 電動升降電競桌! [2021/07/01 16:25:01] 第 5 個抽抽樂(站起來迎接勝利!樂歌 Loctek 電動升降電競桌!)的廣告免費次數已用完 [2021/07/01 16:25:01] 正在嘗試執行第 6 個抽抽樂: GoKids玩樂小子|暑假要幹嘛?在家玩桌遊啊! [2021/07/01 16:25:02] 第 6 個抽抽樂(GoKids玩樂小子|暑假要幹嘛?在家玩桌遊啊!)的廣告免費次數已用完 [2021/07/01 16:25:02] 正在嘗試執行第 7 個抽抽樂: CBD漢麻毯|失眠OUT! [2021/07/01 16:25:04] 第 7 個抽抽樂(CBD漢麻毯|失眠OUT!)的廣告免費次數已用完 [2021/07/01 16:25:06] 福利社自動抽抽樂程序已完成 ``` ",1.0,"[Finished] Automation (2021/07/01 16:24:27) - **Finished.** (2021/07/01 16:25:06) ## 登入: 完成 ``` [2021/07/01 16:24:28] 開始執行帳號登入程序 [2021/07/01 16:24:28] 正在檢測登入狀態 [2021/07/01 16:24:36] 登入狀態: 已登入 [2021/07/01 16:24:36] 帳號登入程序已完成 ``` ## 簽到: 完成 ``` [2021/07/01 16:24:36] 開始執行自動簽到程序 [2021/07/01 16:24:36] 正在檢測簽到狀態 [2021/07/01 16:24:40] 簽到狀態: 已簽到 [2021/07/01 16:24:41] 自動簽到程序已完成 [2021/07/01 16:24:41] 開始執行自動觀看雙倍簽到獎勵廣告程序 [2021/07/01 16:24:41] 正在檢測雙倍簽到獎勵狀態 [2021/07/01 16:24:46] 雙倍簽到獎勵狀態: 已獲得雙倍簽到獎勵 [2021/07/01 16:24:46] 自動觀看雙倍簽到獎勵廣告程序已完成 ``` ## 答題: 完成 ``` [2021/07/01 16:24:46] 開始執行動畫瘋自動答題程序 [2021/07/01 16:24:46] 正在檢測答題狀態 [2021/07/01 16:24:48] 今日已經答過題目了 [2021/07/01 16:24:49] 動畫瘋自動答題程序已完成 ``` ## 抽獎: 完成 ``` [2021/07/01 16:24:49] 開始執行福利社自動抽抽樂程序 [2021/07/01 16:24:49] 正在尋找抽抽樂 [2021/07/01 16:24:51] 找到 7 個抽抽樂 [2021/07/01 16:24:51] 1: WFH必備,Innowatt 組合禮包抽抽樂! [2021/07/01 16:24:51] 2: 網路爆紅好穿到得獎的 Cody 防水咖啡鞋, 愛地球從腳下開始! [2021/07/01 16:24:51] 3: 『迎廣309陪你防疫打遊戲』 InWin迎廣限時抽抽樂! [2021/07/01 16:24:51] 4: 【PurePlan|頸椎支撐器 2.0】每天 15 分鐘,矯正不良姿勢,減緩頸椎退化! [2021/07/01 16:24:51] 5: 站起來迎接勝利!樂歌 Loctek 電動升降電競桌! [2021/07/01 16:24:51] 6: GoKids玩樂小子|暑假要幹嘛?在家玩桌遊啊! [2021/07/01 16:24:51] 7: CBD漢麻毯|失眠OUT! [2021/07/01 16:24:51] 正在嘗試執行第 1 個抽抽樂: WFH必備,Innowatt 組合禮包抽抽樂! [2021/07/01 16:24:53] 第 1 個抽抽樂(WFH必備,Innowatt 組合禮包抽抽樂!)的廣告免費次數已用完 [2021/07/01 16:24:53] 正在嘗試執行第 2 個抽抽樂: 網路爆紅好穿到得獎的 Cody 防水咖啡鞋, 愛地球從腳下開始! [2021/07/01 16:24:55] 第 2 個抽抽樂(網路爆紅好穿到得獎的 Cody 防水咖啡鞋, 愛地球從腳下開始!)的廣告免費次數已用完 [2021/07/01 16:24:55] 正在嘗試執行第 3 個抽抽樂: 『迎廣309陪你防疫打遊戲』 InWin迎廣限時抽抽樂! [2021/07/01 16:24:57] 第 3 個抽抽樂(『迎廣309陪你防疫打遊戲』 InWin迎廣限時抽抽樂!)的廣告免費次數已用完 [2021/07/01 16:24:57] 正在嘗試執行第 4 個抽抽樂: 【PurePlan|頸椎支撐器 2.0】每天 15 分鐘,矯正不良姿勢,減緩頸椎退化! [2021/07/01 16:24:59] 第 4 個抽抽樂(【PurePlan|頸椎支撐器 2.0】每天 15 分鐘,矯正不良姿勢,減緩頸椎退化!)的廣告免費次數已用完 [2021/07/01 16:24:59] 正在嘗試執行第 5 個抽抽樂: 站起來迎接勝利!樂歌 Loctek 電動升降電競桌! [2021/07/01 16:25:01] 第 5 個抽抽樂(站起來迎接勝利!樂歌 Loctek 電動升降電競桌!)的廣告免費次數已用完 [2021/07/01 16:25:01] 正在嘗試執行第 6 個抽抽樂: GoKids玩樂小子|暑假要幹嘛?在家玩桌遊啊! [2021/07/01 16:25:02] 第 6 個抽抽樂(GoKids玩樂小子|暑假要幹嘛?在家玩桌遊啊!)的廣告免費次數已用完 [2021/07/01 16:25:02] 正在嘗試執行第 7 個抽抽樂: CBD漢麻毯|失眠OUT! [2021/07/01 16:25:04] 第 7 個抽抽樂(CBD漢麻毯|失眠OUT!)的廣告免費次數已用完 [2021/07/01 16:25:06] 福利社自動抽抽樂程序已完成 ``` ",1, automation finished 登入 完成 開始執行帳號登入程序 正在檢測登入狀態 登入狀態 已登入 帳號登入程序已完成 簽到 完成 開始執行自動簽到程序 正在檢測簽到狀態 簽到狀態 已簽到 自動簽到程序已完成 開始執行自動觀看雙倍簽到獎勵廣告程序 正在檢測雙倍簽到獎勵狀態 雙倍簽到獎勵狀態 已獲得雙倍簽到獎勵 自動觀看雙倍簽到獎勵廣告程序已完成 答題 完成 開始執行動畫瘋自動答題程序 正在檢測答題狀態 今日已經答過題目了 動畫瘋自動答題程序已完成 抽獎 完成 開始執行福利社自動抽抽樂程序 正在尋找抽抽樂 找到 個抽抽樂 wfh必備,innowatt 組合禮包抽抽樂! 網路爆紅好穿到得獎的 cody 防水咖啡鞋, 愛地球從腳下開始! 『 』 inwin迎廣限時抽抽樂! 【pureplan|頸椎支撐器 】每天 分鐘,矯正不良姿勢,減緩頸椎退化! 站起來迎接勝利!樂歌 loctek 電動升降電競桌! gokids玩樂小子|暑假要幹嘛?在家玩桌遊啊! cbd漢麻毯|失眠out! 正在嘗試執行第 個抽抽樂: wfh必備,innowatt 組合禮包抽抽樂! 第 個抽抽樂(wfh必備,innowatt 組合禮包抽抽樂!)的廣告免費次數已用完 正在嘗試執行第 個抽抽樂: 網路爆紅好穿到得獎的 cody 防水咖啡鞋, 愛地球從腳下開始! 第 個抽抽樂(網路爆紅好穿到得獎的 cody 防水咖啡鞋, 愛地球從腳下開始!)的廣告免費次數已用完 正在嘗試執行第 個抽抽樂: 『 』 inwin迎廣限時抽抽樂! 第 個抽抽樂(『 』 inwin迎廣限時抽抽樂!)的廣告免費次數已用完 正在嘗試執行第 個抽抽樂: 【pureplan|頸椎支撐器 】每天 分鐘,矯正不良姿勢,減緩頸椎退化! 第 個抽抽樂(【pureplan|頸椎支撐器 】每天 分鐘,矯正不良姿勢,減緩頸椎退化!)的廣告免費次數已用完 正在嘗試執行第 個抽抽樂: 站起來迎接勝利!樂歌 loctek 電動升降電競桌! 
第 個抽抽樂(站起來迎接勝利!樂歌 loctek 電動升降電競桌!)的廣告免費次數已用完 正在嘗試執行第 個抽抽樂: gokids玩樂小子|暑假要幹嘛?在家玩桌遊啊! 第 個抽抽樂(gokids玩樂小子|暑假要幹嘛?在家玩桌遊啊!)的廣告免費次數已用完 正在嘗試執行第 個抽抽樂: cbd漢麻毯|失眠out! 第 個抽抽樂(cbd漢麻毯|失眠out!)的廣告免費次數已用完 福利社自動抽抽樂程序已完成 ,1 210412,16099600682.0,IssuesEvent,2021-04-27 07:36:57,Altinity/clickhouse-operator,https://api.github.com/repos/Altinity/clickhouse-operator,closed,inquiry regarding RBAC permission on service account: clickhouse-operator,in testing work in progress,"Dear team, we noticed the rbac for clickhouse-operator is binding to highest cluster role user: cluster-admin, this is not allowed for our production environment since we need to specify detail cluster resources and give out as minimum permission as possible to prevent security issues, would you kindly please advice on those specific resouces permission for the sa? e.g. pvc, pod, ... Thanks. ",1.0,"inquiry regarding RBAC permission on service account: clickhouse-operator - Dear team, we noticed the rbac for clickhouse-operator is binding to highest cluster role user: cluster-admin, this is not allowed for our production environment since we need to specify detail cluster resources and give out as minimum permission as possible to prevent security issues, would you kindly please advice on those specific resouces permission for the sa? e.g. pvc, pod, ... Thanks. ",0,inquiry regarding rbac permission on service account clickhouse operator dear team we noticed the rbac for clickhouse operator is binding to highest cluster role user cluster admin this is not allowed for our production environment since we need to specify detail cluster resources and give out as minimum permission as possible to prevent security issues would you kindly please advice on those specific resouces permission for the sa e g pvc pod thanks ,0 392957,26965903987.0,IssuesEvent,2023-02-08 22:20:33,provenance-io/provenance-abci-listener,https://api.github.com/repos/provenance-io/provenance-abci-listener,closed,Add initial implementation of the gRPC State Listening plugin,documentation enhancement," ## Summary Add initial implementation of the gRPC plugin [ADR-038: State Listening plugin](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-038-state-listening.md) ____ #### For Admin Use - [x] Not duplicate issue - [x] Appropriate labels applied - [x] Appropriate contributors tagged - [x] Contributor assigned/self-assigned ",1.0,"Add initial implementation of the gRPC State Listening plugin - ## Summary Add initial implementation of the gRPC plugin [ADR-038: State Listening plugin](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-038-state-listening.md) ____ #### For Admin Use - [x] Not duplicate issue - [x] Appropriate labels applied - [x] Appropriate contributors tagged - [x] Contributor assigned/self-assigned ",0,add initial implementation of the grpc state listening plugin thank you for opening an issue before submitting this request please review this template summary add initial implementation of the grpc plugin for admin use not duplicate issue appropriate labels applied appropriate contributors tagged contributor assigned self assigned ,0 82973,7857986852.0,IssuesEvent,2018-06-21 12:40:06,kubernetes/kubeadm,https://api.github.com/repos/kubernetes/kubeadm,closed,Add e2e regression tests for the kubelet being secure,active area/security area/testing area/upgrades priority/critical-urgent sig/auth sig/cluster-lifecycle sig/node,"As part of https://github.com/kubernetes/kubeadm/issues/732, https://github.com/kubernetes/kubeadm/issues/650 
and https://github.com/kubernetes/kubernetes/pull/63881 we should make sure the following things in our end-to-end testing: 1. The kubelet cAdvisor port (4194) can't be reached, neither via the API server proxy nor directly on the public IP address 2. The kubelet read-only port (10255) can't be reached, neither via the API server proxy nor directly on the public IP address 3. The kubelet can delegate ServiceAccount tokens to the API server 4. The kubelet's main port (10250) has both authentication (should fail with no credentials) and authorization (should fail with insufficient permissions) set-up These e2e tests, which I preliminarily propose to host under `[sig-cluster-lifecycle] [Feature:KubeletSecurity]`, would be run by any kubeadm e2e suite running against v1.11+ clusters. These test are super important to make sure no kubeadm version regresses security-wise by accident. @dixudx is working on creating these tests, thank you a lot! FYI @kubernetes/sig-node-proposals @kubernetes/sig-auth-proposals ",1.0,"Add e2e regression tests for the kubelet being secure - As part of https://github.com/kubernetes/kubeadm/issues/732, https://github.com/kubernetes/kubeadm/issues/650 and https://github.com/kubernetes/kubernetes/pull/63881 we should make sure the following things in our end-to-end testing: 1. The kubelet cAdvisor port (4194) can't be reached, neither via the API server proxy nor directly on the public IP address 2. The kubelet read-only port (10255) can't be reached, neither via the API server proxy nor directly on the public IP address 3. The kubelet can delegate ServiceAccount tokens to the API server 4. The kubelet's main port (10250) has both authentication (should fail with no credentials) and authorization (should fail with insufficient permissions) set-up These e2e tests, which I preliminarily propose to host under `[sig-cluster-lifecycle] [Feature:KubeletSecurity]`, would be run by any kubeadm e2e suite running against v1.11+ clusters. These test are super important to make sure no kubeadm version regresses security-wise by accident. @dixudx is working on creating these tests, thank you a lot! 
FYI @kubernetes/sig-node-proposals @kubernetes/sig-auth-proposals ",0,add regression tests for the kubelet being secure as part of and we should make sure the following things in our end to end testing the kubelet cadvisor port can t be reached neither via the api server proxy nor directly on the public ip address the kubelet read only port can t be reached neither via the api server proxy nor directly on the public ip address the kubelet can delegate serviceaccount tokens to the api server the kubelet s main port has both authentication should fail with no credentials and authorization should fail with insufficient permissions set up these tests which i preliminarily propose to host under would be run by any kubeadm suite running against clusters these test are super important to make sure no kubeadm version regresses security wise by accident dixudx is working on creating these tests thank you a lot fyi kubernetes sig node proposals kubernetes sig auth proposals ,0 7677,25443568393.0,IssuesEvent,2022-11-24 02:24:56,pingcap/tidb,https://api.github.com/repos/pingcap/tidb,closed,"Lightning tidb backend import fails due to ""Got a packet bigger than 'max_allowed_packet' bytes""",type/bug severity/moderate component/lightning found/automation may-affects-4.0 may-affects-5.1 may-affects-5.2 may-affects-5.3 may-affects-5.4 may-affects-5.0 may-affects-6.0,"## Bug Report Please answer these questions before submitting your issue. Thanks! ### 1. Minimal reproduce step (Required) This issue is seen in daily CI http://172.16.4.180:31714/artifacts/testground/plan-exec-751647/plan-exec-751647-3014636176/main-logs, though I can't reproduce it manually locally. Lightning fails with import with below command: ``` /tmp # cat tidb-lightning.toml [mydumper.csv] header=false /tidb-lightning ""-backend"" ""tidb"" ""-sorted-kv-dir"" ""/tmp/sorted-kv-dir"" ""-d"" ""s3://tpcc/10-warehouses-csv?access-key=minioadmin&secret-access-key=minioadmin&endpoint=http%3a%2f%2fminio.pingcap.net%3a9000&force-path-style=true"" ""-pd-urls"" ""src-tidb-pd.fb-debug-brd8q:2379"" ""-tidb-host"" ""src-tidb-tidb.fb-debug-brd8q"" ""-tidb-port"" ""4000"" ""-tidb-user"" ""root"" ""-tidb-password"" """" ""-c"" ""/tmp/tidb-lightning.toml"" ``` ### 2. What did you expect to see? (Required) Lightning import should succeed ### 3. 
What did you see instead (Required) Lightning import fails with below errors: ``` [2022/04/18 17:58:37.488 +00:00] [ERROR] [tidb.go:557] [""execute statement failed""] [rows=""数据太长,请参考附件日志""] [error=""Error 1153: Got a packet bigger than 'max_allowed_packet' bytes""] [2022/04/18 17:58:37.490 +00:00] [ERROR] [restore.go:2199] [""write to data engine failed""] [table=`test`.`stock`] [engineNumber=0] [fileIndex=8] [path=test.stock.8.csv:0] [task=deliver] [error=""Error 1153: Got a packet bigger than 'max_allowed_packet' bytes""] [2022/04/18 17:58:37.490 +00:00] [ERROR] [restore.go:2512] [""restore file failed""] [table=`test`.`stock`] [engineNumber=0] [fileIndex=8] [path=test.stock.8.csv:0] [takeTime=13.717181767s] [error=""Error 1153: Got a packet bigger than 'max_allowed_packet' bytes""] [2022/04/18 17:58:37.490 +00:00] [ERROR] [table_restore.go:573] [""encode kv data and write failed""] [table=`test`.`stock`] [engineNumber=0] [takeTime=1m16.146008765s] [error=""Error 1153: Got a packet bigger than 'max_allowed_packet' bytes""] [2022/04/18 17:58:37.490 +00:00] [ERROR] [table_restore.go:309] [""restore engine failed""] [table=`test`.`stock`] [engineNumber=0] [takeTime=1m16.146108421s] [error=""Error 1153: Got a packet bigger than 'max_allowed_packet' bytes""] [2022/04/18 17:58:37.490 +00:00] [ERROR] [table_restore.go:335] [""import whole table failed""] [table=`test`.`stock`] [takeTime=1m16.146146526s] [error=""Error 1153: Got a packet bigger than 'max_allowed_packet' bytes""] [2022/04/18 17:58:37.490 +00:00] [ERROR] [restore.go:1478] [""restore table failed""] [table=`test`.`stock`] [takeTime=1m16.14800138s] [error=""[Lightning:Restore:ErrRestoreTable]restore table `test`.`stock` failed: Error 1153: Got a packet bigger than 'max_allowed_packet' bytes""] ``` [lightning.log.2022-04-18T17.55.53Z.txt](https://github.com/pingcap/tidb/files/8528282/lightning.log.2022-04-18T17.55.53Z.txt) ### 4. What is your TiDB version? (Required) [release-version=v6.1.0-nightly] [git-hash=e10ad280572a5d28fd8fa4b698812789931f5f6d] [git-branch=heads/refs/tags/v6.1.0-nightly] ",1.0,"Lightning tidb backend import fails due to ""Got a packet bigger than 'max_allowed_packet' bytes"" - ## Bug Report Please answer these questions before submitting your issue. Thanks! ### 1. Minimal reproduce step (Required) This issue is seen in daily CI http://172.16.4.180:31714/artifacts/testground/plan-exec-751647/plan-exec-751647-3014636176/main-logs, though I can't reproduce it manually locally. Lightning fails with import with below command: ``` /tmp # cat tidb-lightning.toml [mydumper.csv] header=false /tidb-lightning ""-backend"" ""tidb"" ""-sorted-kv-dir"" ""/tmp/sorted-kv-dir"" ""-d"" ""s3://tpcc/10-warehouses-csv?access-key=minioadmin&secret-access-key=minioadmin&endpoint=http%3a%2f%2fminio.pingcap.net%3a9000&force-path-style=true"" ""-pd-urls"" ""src-tidb-pd.fb-debug-brd8q:2379"" ""-tidb-host"" ""src-tidb-tidb.fb-debug-brd8q"" ""-tidb-port"" ""4000"" ""-tidb-user"" ""root"" ""-tidb-password"" """" ""-c"" ""/tmp/tidb-lightning.toml"" ``` ### 2. What did you expect to see? (Required) Lightning import should succeed ### 3. 
What did you see instead (Required) Lightning import fails with below errors: ``` [2022/04/18 17:58:37.488 +00:00] [ERROR] [tidb.go:557] [""execute statement failed""] [rows=""数据太长,请参考附件日志""] [error=""Error 1153: Got a packet bigger than 'max_allowed_packet' bytes""] [2022/04/18 17:58:37.490 +00:00] [ERROR] [restore.go:2199] [""write to data engine failed""] [table=`test`.`stock`] [engineNumber=0] [fileIndex=8] [path=test.stock.8.csv:0] [task=deliver] [error=""Error 1153: Got a packet bigger than 'max_allowed_packet' bytes""] [2022/04/18 17:58:37.490 +00:00] [ERROR] [restore.go:2512] [""restore file failed""] [table=`test`.`stock`] [engineNumber=0] [fileIndex=8] [path=test.stock.8.csv:0] [takeTime=13.717181767s] [error=""Error 1153: Got a packet bigger than 'max_allowed_packet' bytes""] [2022/04/18 17:58:37.490 +00:00] [ERROR] [table_restore.go:573] [""encode kv data and write failed""] [table=`test`.`stock`] [engineNumber=0] [takeTime=1m16.146008765s] [error=""Error 1153: Got a packet bigger than 'max_allowed_packet' bytes""] [2022/04/18 17:58:37.490 +00:00] [ERROR] [table_restore.go:309] [""restore engine failed""] [table=`test`.`stock`] [engineNumber=0] [takeTime=1m16.146108421s] [error=""Error 1153: Got a packet bigger than 'max_allowed_packet' bytes""] [2022/04/18 17:58:37.490 +00:00] [ERROR] [table_restore.go:335] [""import whole table failed""] [table=`test`.`stock`] [takeTime=1m16.146146526s] [error=""Error 1153: Got a packet bigger than 'max_allowed_packet' bytes""] [2022/04/18 17:58:37.490 +00:00] [ERROR] [restore.go:1478] [""restore table failed""] [table=`test`.`stock`] [takeTime=1m16.14800138s] [error=""[Lightning:Restore:ErrRestoreTable]restore table `test`.`stock` failed: Error 1153: Got a packet bigger than 'max_allowed_packet' bytes""] ``` [lightning.log.2022-04-18T17.55.53Z.txt](https://github.com/pingcap/tidb/files/8528282/lightning.log.2022-04-18T17.55.53Z.txt) ### 4. What is your TiDB version? 
(Required) [release-version=v6.1.0-nightly] [git-hash=e10ad280572a5d28fd8fa4b698812789931f5f6d] [git-branch=heads/refs/tags/v6.1.0-nightly] ",1,lightning tidb backend import fails due to got a packet bigger than max allowed packet bytes bug report please answer these questions before submitting your issue thanks minimal reproduce step required this issue is seen in daily ci though i can t reproduce it manually locally lightning fails with import with below command tmp cat tidb lightning toml header false tidb lightning backend tidb sorted kv dir tmp sorted kv dir d tpcc warehouses csv access key minioadmin secret access key minioadmin endpoint http pingcap net force path style true pd urls src tidb pd fb debug tidb host src tidb tidb fb debug tidb port tidb user root tidb password c tmp tidb lightning toml what did you expect to see required lightning import should succeed what did you see instead required lightning import fails with below errors restore table test stock failed error got a packet bigger than max allowed packet bytes what is your tidb version required ,1 645,7697019129.0,IssuesEvent,2018-05-18 17:13:58,mozilla-mobile/focus-android,https://api.github.com/repos/mozilla-mobile/focus-android,closed,Update taskcluster script to submit builds to nimbledroid,automation,"The test needs to be run on two variant: focusWebviewDebug klarGeckoviewDebug The instruction is here: http://docs.nimbledroid.com/androidUserGuide.html#simple-example-curl Will provide more details on json key and other parameter values shortly",1.0,"Update taskcluster script to submit builds to nimbledroid - The test needs to be run on two variant: focusWebviewDebug klarGeckoviewDebug The instruction is here: http://docs.nimbledroid.com/androidUserGuide.html#simple-example-curl Will provide more details on json key and other parameter values shortly",1,update taskcluster script to submit builds to nimbledroid the test needs to be run on two variant focuswebviewdebug klargeckoviewdebug the instruction is here will provide more details on json key and other parameter values shortly,1 346512,30923715115.0,IssuesEvent,2023-08-06 08:10:26,Sralker731/Database_Application,https://api.github.com/repos/Sralker731/Database_Application,closed,"Test ""Save Query"" functionality",bug enhancement database Testing,"""Save Query"" functionality is used, when user wants to save his query in the text file Logic: If file does not exist -> Create file QUERIES_0.txt If exist -> Create file QUERIES_1.txt The file name will be automatically incremented.",1.0,"Test ""Save Query"" functionality - ""Save Query"" functionality is used, when user wants to save his query in the text file Logic: If file does not exist -> Create file QUERIES_0.txt If exist -> Create file QUERIES_1.txt The file name will be automatically incremented.",0,test save query functionality save query functionality is used when user wants to save his query in the text file logic if file does not exist create file queries txt if exist create file queries txt the file name will be automatically incremented ,0 56214,8057536484.0,IssuesEvent,2018-08-02 15:38:59,blockstack/blockstack-browser,https://api.github.com/repos/blockstack/blockstack-browser,reopened,Add Windows Installation Instructions.,documentation good first issue good-first-task help wanted,Add installation instructions for windows. The windows install instructions are over in the [`packaging`](https://github.com/blockstack/packaging/tree/master/browser-core-docker) repository. 
Add a section in the README with this information.,1.0,Add Windows Installation Instructions. - Add installation instructions for windows. The windows install instructions are over in the [`packaging`](https://github.com/blockstack/packaging/tree/master/browser-core-docker) repository. Add a section in the README with this information.,0,add windows installation instructions add installation instructions for windows the windows install instructions are over in the repository add a section in the readme with this information ,0 162778,25593458960.0,IssuesEvent,2022-12-01 14:36:45,Kwenta/kwenta,https://api.github.com/repos/Kwenta/kwenta,closed,Staking UI Polish,design core dev staking,"There's been some feedback that there is room for improvement in the staking UI / UX, let's review the designs and frontend, check they are in sync and frontend implemented fully and then see if there some areas for improvement. ",1.0,"Staking UI Polish - There's been some feedback that there is room for improvement in the staking UI / UX, let's review the designs and frontend, check they are in sync and frontend implemented fully and then see if there some areas for improvement. ",0,staking ui polish there s been some feedback that there is room for improvement in the staking ui ux let s review the designs and frontend check they are in sync and frontend implemented fully and then see if there some areas for improvement ,0 3012,12974202850.0,IssuesEvent,2020-07-21 15:05:57,GoodDollar/GoodDAPP,https://api.github.com/repos/GoodDollar/GoodDAPP,closed,"[BUG] When creating a new account, phone and email are not added to the profile",automation bug mvp,"steps: - go to https://gooddev.netlify.app/ - created new account with valid data - go to Edit Profile page - pay attention to the phone and email fields the phone and email fields are empty. ![bug_created_new_acc_1_2020-07-06.jpg](https://images.zenhubusercontent.com/5eb529c8c90bb26b8aaf9d9d/6d3a9a78-050b-463e-8960-8b82ab071e5b) ![bug_created_new_acc_2_2020-07-06.jpg](https://images.zenhubusercontent.com/5eb529c8c90bb26b8aaf9d9d/01bad7d0-608a-4c54-9496-73187e1b08e8) video: https://www.screencast.com/t/qEBnzGBlGuzM",1.0,"[BUG] When creating a new account, phone and email are not added to the profile - steps: - go to https://gooddev.netlify.app/ - created new account with valid data - go to Edit Profile page - pay attention to the phone and email fields the phone and email fields are empty. ![bug_created_new_acc_1_2020-07-06.jpg](https://images.zenhubusercontent.com/5eb529c8c90bb26b8aaf9d9d/6d3a9a78-050b-463e-8960-8b82ab071e5b) ![bug_created_new_acc_2_2020-07-06.jpg](https://images.zenhubusercontent.com/5eb529c8c90bb26b8aaf9d9d/01bad7d0-608a-4c54-9496-73187e1b08e8) video: https://www.screencast.com/t/qEBnzGBlGuzM",1, when creating a new account phone and email are not added to the profile steps go to created new account with valid data go to edit profile page pay attention to the phone and email fields the phone and email fields are empty video ,1 3986,15097802925.0,IssuesEvent,2021-02-07 20:06:30,pulumi/docs,https://api.github.com/repos/pulumi/docs,closed,Cloudtrail -> Cloudwatch example is broken,automation/tfgen-provider-docs,"[Link](https://www.pulumi.com/docs/reference/pkg/aws/cloudtrail/trail/#sending-events-to-cloudwatch-logs) When I try to set this up, I get `InvalidCloudWatchLogsRoleArnException: You must specify a role ARN as well as a log group.` It's very not clear where the role ARN should come from. 
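A minimal sketch of where that role ARN can come from, assuming the usual wiring for this feature rather than quoting the docs page: CloudTrail assumes an IAM role that is allowed to write into the log group, and that role's ARN is what `cloudWatchLogsRoleArn` expects (the S3 bucket policy a trail also needs is omitted for brevity):
```
import * as aws from "@pulumi/aws";

const logGroup = new aws.cloudwatch.LogGroup("trail-logs");

// Role that CloudTrail itself assumes to write into CloudWatch Logs.
const role = new aws.iam.Role("cloudtrail-to-cloudwatch", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Principal: { Service: "cloudtrail.amazonaws.com" },
            Action: "sts:AssumeRole",
        }],
    }),
});

new aws.iam.RolePolicy("cloudtrail-to-cloudwatch-policy", {
    role: role.id,
    policy: logGroup.arn.apply(arn => JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Action: ["logs:CreateLogStream", "logs:PutLogEvents"],
            Resource: `${arn}:*`,
        }],
    })),
});

const bucket = new aws.s3.Bucket("trail-bucket"); // bucket policy omitted here

new aws.cloudtrail.Trail("example-trail", {
    s3BucketName: bucket.id,
    cloudWatchLogsGroupArn: logGroup.arn.apply(arn => `${arn}:*`), // the :* suffix is required
    cloudWatchLogsRoleArn: role.arn, // <- the ARN the error message is asking for
});
```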
",1.0,"Cloudtrail -> Cloudwatch example is broken - [Link](https://www.pulumi.com/docs/reference/pkg/aws/cloudtrail/trail/#sending-events-to-cloudwatch-logs) When I try to set this up, I get `InvalidCloudWatchLogsRoleArnException: You must specify a role ARN as well as a log group.` It's very not clear where the role ARN should come from. ",1,cloudtrail cloudwatch example is broken when i try to set this up i get invalidcloudwatchlogsrolearnexception you must specify a role arn as well as a log group it s very not clear where the role arn should come from ,1 6392,23073559791.0,IssuesEvent,2022-07-25 20:35:57,StoneCypher/fsl,https://api.github.com/repos/StoneCypher/fsl,closed,WAIT YOU CAN ACTUALLY UNIFY ISSUE TRACKERS SORT OF,Ease of use Issue needs work Automation List issues,"Wow. It works. We'll need to unify the trackers for: * [x] #1086 * [x] #1087 * [x] #1088 * [x] #1089 * [x] #1091 * [x] #1093 * [x] #1094 * [x] #1095 * [x] #1096 * [x] #1097 * [x] #1098 ",1.0,"WAIT YOU CAN ACTUALLY UNIFY ISSUE TRACKERS SORT OF - Wow. It works. We'll need to unify the trackers for: * [x] #1086 * [x] #1087 * [x] #1088 * [x] #1089 * [x] #1091 * [x] #1093 * [x] #1094 * [x] #1095 * [x] #1096 * [x] #1097 * [x] #1098 ",1,wait you can actually unify issue trackers sort of wow it works we ll need to unify the trackers for ,1 733811,25323510909.0,IssuesEvent,2022-11-18 07:05:44,Waurum-Studio/waurum-issues,https://api.github.com/repos/Waurum-Studio/waurum-issues,opened,Boosters System,- Feature priority: 1 type: ui type: prog [FUN],"### Description A system where players can buy a serverwide booster than enables for all players during a limited timeframe. This would for example increase luck in case openings, increase earnings, some limited effects, etc.. Would be buyable through gems to make them a bit more worth and important. this was planned for 4.5, lack of time, but definitely implemented in 4.6. ### Concerned Platform(s) [WEB] Garry's Mod Store ### Additional Information _No response_",1.0,"Boosters System - ### Description A system where players can buy a serverwide booster than enables for all players during a limited timeframe. This would for example increase luck in case openings, increase earnings, some limited effects, etc.. Would be buyable through gems to make them a bit more worth and important. this was planned for 4.5, lack of time, but definitely implemented in 4.6. ### Concerned Platform(s) [WEB] Garry's Mod Store ### Additional Information _No response_",0,boosters system description a system where players can buy a serverwide booster than enables for all players during a limited timeframe this would for example increase luck in case openings increase earnings some limited effects etc would be buyable through gems to make them a bit more worth and important this was planned for lack of time but definitely implemented in concerned platform s garry s mod store additional information no response ,0 719,7882331710.0,IssuesEvent,2018-06-26 22:14:15,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Azure Automation GitHub link,automation/svc cxp doc-bug in-progress triaged,"I think the Azure Automation GitHub link should point to this: https://github.com/azureautomation?language=python (Now it points to the same as the line above, maybe copy+paste error?) --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: fac644d7-9d8c-9f4e-7540-81378707c9d5 * Version Independent ID: ed36731c-e9b4-f127-abb7-6f23db15b5fc * Content: [My first Python runbook in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-first-runbook-textual-python2) * Content Source: [articles/automation/automation-first-runbook-textual-python2.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-first-runbook-textual-python2.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1.0,"Azure Automation GitHub link - I think the Azure Automation GitHub link should point to this: https://github.com/azureautomation?language=python (Now it points to the same as the line above, maybe copy+paste error?) --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: fac644d7-9d8c-9f4e-7540-81378707c9d5 * Version Independent ID: ed36731c-e9b4-f127-abb7-6f23db15b5fc * Content: [My first Python runbook in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-first-runbook-textual-python2) * Content Source: [articles/automation/automation-first-runbook-textual-python2.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-first-runbook-textual-python2.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1,azure automation github link i think the azure automation github link should point to this now it points to the same as the line above maybe copy paste error document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login georgewallace microsoft alias gwallace ,1 2792,12561023314.0,IssuesEvent,2020-06-08 00:10:58,hbirchtree/coffeecutie,https://api.github.com/repos/hbirchtree/coffeecutie,closed,Add support for generating Nupkg on Windows,Automation Windows no-issue-activity,Would make it easier to download self-contained utilities,1.0,Add support for generating Nupkg on Windows - Would make it easier to download self-contained utilities,1,add support for generating nupkg on windows would make it easier to download self contained utilities,1 50810,10560058711.0,IssuesEvent,2019-10-04 13:06:36,zonemaster/zonemaster-engine,https://api.github.com/repos/zonemaster/zonemaster-engine,opened,Update implementation of Syntax06,test code,"PR zonemaster/zonemaster#788 has updated specification of test case [Syntax06] (in develop branch). Now the implementation must be updated: Updated mail logic and messages: * Explicit message for localhost as recipient. * Explicit message for illegal CNAME. * Explicit test all MX for RNAME and all must pass. * Raised default level of messages. * Affects implementation in several ways. [Syntax06]: https://github.com/zonemaster/zonemaster/blob/develop/docs/specifications/tests/Syntax-TP/syntax06.md ",1.0,"Update implementation of Syntax06 - PR zonemaster/zonemaster#788 has updated specification of test case [Syntax06] (in develop branch). Now the implementation must be updated: Updated mail logic and messages: * Explicit message for localhost as recipient. * Explicit message for illegal CNAME. * Explicit test all MX for RNAME and all must pass. * Raised default level of messages. * Affects implementation in several ways. 
[Syntax06]: https://github.com/zonemaster/zonemaster/blob/develop/docs/specifications/tests/Syntax-TP/syntax06.md ",0,update implementation of pr zonemaster zonemaster has updated specification of test case in develop branch now the implementation must be updated updated mail logic and messages explicit message for localhost as recipient explicit message for illegal cname explicit test all mx for rname and all must pass raised default level of messages affects implementation in several ways ,0 128297,12369393389.0,IssuesEvent,2020-05-18 15:11:40,reapit/foundations,https://api.github.com/repos/reapit/foundations,opened,Update wording on the Webhooks page in the Developers Portal ,cloud-team documentation marketplace,"**Task:** Update the wording on the Ping modal, add and edit modals and heading sub text **Location:** https://dev.marketplace.reapit.cloud/developer/webhooks ### Heading Sub Text **Current text:** Lorem Ipsum **Replace with:** Our webhooks system allows your application to directly subscribe to events happening in our customers data. Rather than needing to make API calls to poll for new information, a webhook subscription can be created to allow Reapit Foundations to send a HTTP request directly to your endpoints that you configure here. This system is designed to flexibly work with how your application is built and deployed. If you wish, you can set up a single endpoint to catch all topics for all customers. Alternatively, you may wish to set up a different webhook subscription per topic or per customer. For more information about Webhooks, please see our [webhooks documentation](https://foundations-documentation.reapit.cloud/api/api-documentation#webhooks) ### Ping Modal **Current Text:** To test your Webhook subscription, please select a ‘Subscription Topic’ below: **Replace with:** To test your Webhook subscription, please select a subscription topic and an example payload for that topic will be sent to the configured URL. For more information, see 'testing webhook link TBC' ### Webhook subscription modal (Add & Edit) **Current Text:** You can create a Webhook to receive notifications from the topics that you choose to subscribe it to. You can receive notifications for any customer that has installed your application. For more information about Webhooks, please see our webhooks documentation **Replace with:** Webhooks are configured here to allow your application to receive real-time notifications about the topics you choose to subscribe it to. A single webhook subscription can receive notifications for multiple topics so long as your application has been granted the required permissions. Webhooks subscriptions can be set up for any customer who has installed your application. Additionally, you can choose ‘SBOX’ to listen for sandbox environment notifications. For more information about Webhooks, please see our [webhooks documentation](https://foundations-documentation.reapit.cloud/api/api-documentation#webhooks) ",1.0,"Update wording on the Webhooks page in the Developers Portal - **Task:** Update the wording on the Ping modal, add and edit modals and heading sub text **Location:** https://dev.marketplace.reapit.cloud/developer/webhooks ### Heading Sub Text **Current text:** Lorem Ipsum **Replace with:** Our webhooks system allows your application to directly subscribe to events happening in our customers data. 
Rather than needing to make API calls to poll for new information, a webhook subscription can be created to allow Reapit Foundations to send a HTTP request directly to your endpoints that you configure here. This system is designed to flexibly work with how your application is built and deployed. If you wish, you can set up a single endpoint to catch all topics for all customers. Alternatively, you may wish to set up a different webhook subscription per topic or per customer. For more information about Webhooks, please see our [webhooks documentation](https://foundations-documentation.reapit.cloud/api/api-documentation#webhooks) ### Ping Modal **Current Text:** To test your Webhook subscription, please select a ‘Subscription Topic’ below: **Replace with:** To test your Webhook subscription, please select a subscription topic and an example payload for that topic will be sent to the configured URL. For more information, see 'testing webhook link TBC' ### Webhook subscription modal (Add & Edit) **Current Text:** You can create a Webhook to receive notifications from the topics that you choose to subscribe it to. You can receive notifications for any customer that has installed your application. For more information about Webhooks, please see our webhooks documentation **Replace with:** Webhooks are configured here to allow your application to receive real-time notifications about the topics you choose to subscribe it to. A single webhook subscription can receive notifications for multiple topics so long as your application has been granted the required permissions. Webhooks subscriptions can be set up for any customer who has installed your application. Additionally, you can choose ‘SBOX’ to listen for sandbox environment notifications. For more information about Webhooks, please see our [webhooks documentation](https://foundations-documentation.reapit.cloud/api/api-documentation#webhooks) ",0,update wording on the webhooks page in the developers portal task update the wording on the ping modal add and edit modals and heading sub text location heading sub text current text lorem ipsum replace with our webhooks system allows your application to directly subscribe to events happening in our customers data rather than needing to make api calls to poll for new information a webhook subscription can be created to allow reapit foundations to send a http request directly to your endpoints that you configure here this system is designed to flexibly work with how your application is built and deployed if you wish you can set up a single endpoint to catch all topics for all customers alternatively you may wish to set up a different webhook subscription per topic or per customer for more information about webhooks please see our ping modal current text to test your webhook subscription please select a ‘subscription topic’ below replace with to test your webhook subscription please select a subscription topic and an example payload for that topic will be sent to the configured url for more information see testing webhook link tbc webhook subscription modal add edit current text you can create a webhook to receive notifications from the topics that you choose to subscribe it to you can receive notifications for any customer that has installed your application for more information about webhooks please see our webhooks documentation replace with webhooks are configured here to allow your application to receive real time notifications about the topics you choose to subscribe it to a single webhook 
subscription can receive notifications for multiple topics so long as your application has been granted the required permissions webhooks subscriptions can be set up for any customer who has installed your application additionally you can choose ‘sbox’ to listen for sandbox environment notifications for more information about webhooks please see our ,0 271447,29502679873.0,IssuesEvent,2023-06-03 00:51:40,opensearch-project/data-prepper,https://api.github.com/repos/opensearch-project/data-prepper,opened,WS-2016-7057 (Medium) detected in plexus-utils-2.0.6.jar,Mend: dependency security vulnerability,"## WS-2016-7057 - Medium Severity Vulnerability
Vulnerable Library - plexus-utils-2.0.6.jar

A collection of various utility classes to ease working with strings, files, command lines, XML and more.

Path to dependency file: /data-prepper-plugins/opensearch-source/build.gradle

Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.codehaus.plexus/plexus-utils/2.0.6/3a20c424a712a7c02b02af61dcad5f001b29a9fd/plexus-utils-2.0.6.jar

Dependency Hierarchy: - data-prepper-main-2.3.0-SNAPSHOT (Root Library) - data-prepper-plugins-2.3.0-SNAPSHOT - opensearch-source-2.3.0-SNAPSHOT - maven-artifact-3.0.3.jar - :x: **plexus-utils-2.0.6.jar** (Vulnerable Library)

Found in HEAD commit: 90bdaa7e7833bdd504c817e49d4434b4d8880f56

Found in base branch: main

Vulnerability Details

Plexus-utils versions before 3.0.24 are vulnerable to Directory Traversal.

Publish Date: 2016-05-07

URL: WS-2016-7057

CVSS 3 Score Details (5.9)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Release Date: 2016-05-07

Fix Resolution: 3.0.24
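
For context, ""directory traversal"" in a utility library of this kind typically means an attacker-controlled relative path (for example an archive entry name containing `..`) escaping an intended base directory. A minimal Java sketch of the standard guard (illustrative only, not the actual plexus-utils code; all names are hypothetical):

```
import java.io.File;
import java.io.IOException;

public final class SafeExtract {
    // Resolve an entry name against the target directory and reject any
    // entry whose canonical path escapes it (the classic traversal bug).
    static File resolveEntry(File targetDir, String entryName) throws IOException {
        File dest = new File(targetDir, entryName);
        String rootPath = targetDir.getCanonicalPath() + File.separator;
        if (!dest.getCanonicalPath().startsWith(rootPath)) {
            throw new IOException(""blocked traversal attempt: "" + entryName);
        }
        return dest;
    }
}
```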

",True,"WS-2016-7057 (Medium) detected in plexus-utils-2.0.6.jar - ## WS-2016-7057 - Medium Severity Vulnerability
Vulnerable Library - plexus-utils-2.0.6.jar

A collection of various utility classes to ease working with strings, files, command lines, XML and more.

Path to dependency file: /data-prepper-plugins/opensearch-source/build.gradle

Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.codehaus.plexus/plexus-utils/2.0.6/3a20c424a712a7c02b02af61dcad5f001b29a9fd/plexus-utils-2.0.6.jar

Dependency Hierarchy: - data-prepper-main-2.3.0-SNAPSHOT (Root Library) - data-prepper-plugins-2.3.0-SNAPSHOT - opensearch-source-2.3.0-SNAPSHOT - maven-artifact-3.0.3.jar - :x: **plexus-utils-2.0.6.jar** (Vulnerable Library)

Found in HEAD commit: 90bdaa7e7833bdd504c817e49d4434b4d8880f56

Found in base branch: main

Vulnerability Details

Plexus-utils versions before 3.0.24 are vulnerable to Directory Traversal.

Publish Date: 2016-05-07

URL: WS-2016-7057

CVSS 3 Score Details (5.9)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Release Date: 2016-05-07

Fix Resolution: 3.0.24

",0,ws medium detected in plexus utils jar ws medium severity vulnerability vulnerable library plexus utils jar a collection of various utility classes to ease working with strings files command lines xml and more path to dependency file data prepper plugins opensearch source build gradle path to vulnerable library home wss scanner gradle caches modules files org codehaus plexus plexus utils plexus utils jar home wss scanner gradle caches modules files org codehaus plexus plexus utils plexus utils jar home wss scanner gradle caches modules files org codehaus plexus plexus utils plexus utils jar home wss scanner gradle caches modules files org codehaus plexus plexus utils plexus utils jar home wss scanner gradle caches modules files org codehaus plexus plexus utils plexus utils jar home wss scanner gradle caches modules files org codehaus plexus plexus utils plexus utils jar home wss scanner gradle caches modules files org codehaus plexus plexus utils plexus utils jar home wss scanner gradle caches modules files org codehaus plexus plexus utils plexus utils jar dependency hierarchy data prepper main snapshot root library data prepper plugins snapshot opensearch source snapshot maven artifact jar x plexus utils jar vulnerable library found in head commit a href found in base branch main vulnerability details plexus utils before are vulnerable to directory traversal publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version release date fix resolution ,0 7272,24554511371.0,IssuesEvent,2022-10-12 14:54:21,betagouv/preuve-covoiturage,https://api.github.com/repos/betagouv/preuve-covoiturage,closed,Problème sur les données du mois de juin publiées sur Data.gouv,BUG Open Data Automation Needs Triage Stale,"##Problème sur les données du mois de juin publiées sur Data.gouv : https://www.data.gouv.fr/fr/datasets/trajets-realises-en-covoiturage-registre-de-preuve-de-covoiturage/. Il manque les trajets frontaliers contrairement à ce qui est mentionné dans la description du fichier (""Les données concernent également les trajets dont le point de départ OU d’arrivée est situé en dehors du territoire français."") et contrairement aux mois précédents. Peut-on les rajouter ? ",1.0,"Problème sur les données du mois de juin publiées sur Data.gouv - ##Problème sur les données du mois de juin publiées sur Data.gouv : https://www.data.gouv.fr/fr/datasets/trajets-realises-en-covoiturage-registre-de-preuve-de-covoiturage/. Il manque les trajets frontaliers contrairement à ce qui est mentionné dans la description du fichier (""Les données concernent également les trajets dont le point de départ OU d’arrivée est situé en dehors du territoire français."") et contrairement aux mois précédents. Peut-on les rajouter ? 
",1,problème sur les données du mois de juin publiées sur data gouv problème sur les données du mois de juin publiées sur data gouv il manque les trajets frontaliers contrairement à ce qui est mentionné dans la description du fichier les données concernent également les trajets dont le point de départ ou d’arrivée est situé en dehors du territoire français et contrairement aux mois précédents peut on les rajouter ,1 3713,14403464338.0,IssuesEvent,2020-12-03 16:05:20,nf-core/tools,https://api.github.com/repos/nf-core/tools,closed,Problems with pipeline logo after template sync,automation low-priority,"With the template update to 1.11, some mysterious changes occurred on some pipeline logos in the `TEMPLATE` branch. For the `mag` pipeline the logo turned black: https://github.com/nf-core/mag/blob/TEMPLATE/docs/images/nf-core-mag_logo.png And for the `bcellmagic` pipeline the PNG file contains HTML code: https://github.com/nf-core/bcellmagic/blob/TEMPLATE/docs/images/nf-core-bcellmagic_logo.png The other pipeline logos seem fine though.",1.0,"Problems with pipeline logo after template sync - With the template update to 1.11, some mysterious changes occurred on some pipeline logos in the `TEMPLATE` branch. For the `mag` pipeline the logo turned black: https://github.com/nf-core/mag/blob/TEMPLATE/docs/images/nf-core-mag_logo.png And for the `bcellmagic` pipeline the PNG file contains HTML code: https://github.com/nf-core/bcellmagic/blob/TEMPLATE/docs/images/nf-core-bcellmagic_logo.png The other pipeline logos seem fine though.",1,problems with pipeline logo after template sync with the template update to some mysterious changes occurred on some pipeline logos in the template branch for the mag pipeline the logo turned black and for the bcellmagic pipeline the png file contains html code the other pipeline logos seem fine though ,1 75754,26029620609.0,IssuesEvent,2022-12-21 19:45:04,ontop/ontop,https://api.github.com/repos/ontop/ontop,closed,Rdb2RdfTest dg0016 Failure: VARBINARY is not properly converted to xsd:hexBinary,type: defect topic: r2rml compatibility status: fixed w: db support w: datatype," ### Description Using the same Rdb2RdfTest dg0016 from #558 Using MySQL 5. There seems to be a problem converting the `VARBINARY` column to `hexBinary`. Instead of base64-encoding the binary data as a hex string, it looks like the binary data is returned unencoded. 
https://www.w3.org/TR/xmlschema-2/#hexBinary #### SQL CREATE/INSERT ``` CREATE TABLE ""Patient"" ( ""ID"" INTEGER, ""FirstName"" VARCHAR(50), ""LastName"" VARCHAR(50), ""Sex"" VARCHAR(6), ""Weight"" REAL, ""Height"" FLOAT, ""BirthDate"" DATE, ""EntranceDate"" TIMESTAMP, ""PaidInAdvance"" BOOLEAN, ""Photo"" VARBINARY(200), PRIMARY KEY (""ID"") ); INSERT INTO ""Patient"" (""ID"", ""FirstName"",""LastName"",""Sex"",""Weight"",""Height"",""BirthDate"",""EntranceDate"",""PaidInAdvance"",""Photo"") VALUES (10,'Monica','Geller','female',80.25,1.65,'1981-10-10','2009-10-10 12:12:22',FALSE, X'89504E470D0A1A0A0000000D49484452000000050000000508060000008D6F26E50000001C4944415408D763F9FFFEBFC37F062005C3201284D031F18258CD04000EF535CBD18E0E1F0000000049454E44AE426082'); ``` #### SQLPPMapping ``` [PrefixDeclaration] rdf: http://www.w3.org/1999/02/22-rdf-syntax-ns# rdfs: http://www.w3.org/2000/01/rdf-schema# owl: http://www.w3.org/2002/07/owl# xsd: http://www.w3.org/2001/XMLSchema# obda: https://w3id.org/obda/vocabulary# [MappingDeclaration] @collection [[ mappingId MAPPING-ID1 target a ; {ID}^^xsd:integer ; {FirstName}^^xsd:string ; {LastName}^^xsd:string ; {Sex}^^xsd:string ; {Weight}^^xsd:double ; {Height}^^xsd:double ; {BirthDate}^^xsd:date ; {EntranceDate}^^xsd:dateTime ; {PaidInAdvance}^^xsd:boolean ; {Photo}^^xsd:hexBinary . source SELECT * FROM `Patient` ]] ``` #### SQL Translated Query ##### New query after the dialect-specific extra normalization: During query translation, the VARBINARY column results in this statement: ``` CONSTRUCT [ID1m20, v30, v49, v9] [v9/""7""^^BIGINT, v30/""http://example.com/base/Patient#Photo""^^TEXT, v49/VARBINARYToTEXT(Photo1m9)] FILTER IS_NOT_NULL(Photo1m9) EXTENSIONAL `Patient`(0:ID1m20,9:Photo1m9) ``` ##### SQL Query (other columns omitted for clarity) ``` SELECT v23.`ID1m20` AS `ID1m20`, v23.`v30` AS `v30`, v23.`v49` AS `v49`, v23.`v9` AS `v9` FROM (SELECT v1.`ID` AS `ID1m20`, 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AS `v30`, 'http://example.com/base/Patient' AS `v49`, 0 AS `v9` FROM `Patient` v1 UNION ALL SELECT v19.`ID` AS `ID1m20`, 'http://example.com/base/Patient#Photo' AS `v30`, CAST(v19.`Photo` AS CHAR CHARACTER SET utf8) AS `v49`, 7 AS `v9` FROM `Patient` v19 WHERE v19.`Photo` IS NOT NULL ) v23 ``` **Expected:** ``` ""89504E470D0A1A0A0000000D49484452000000050000000508060000008D6F26E50000001C4944415408D763F9FFFEBFC37F062005C3201284D031F18258CD04000EF535CBD18E0E1F0000000049454E44AE426082""^^ . ``` **Actual:** ``` ""?PNG\r\n\n ***BINARY DATA*** IEND?B`?""^^ . ``` ### Possible Fix: (only tested for MySQL) Wrap `VARBINARY` columns with `HEX( ... )` **Original:** ``` SELECT v19.`ID` AS `ID1m20`, 'http://example.com/base/Patient#Photo' AS `v30`, CAST(v19.`Photo` AS CHAR CHARACTER SET utf8) AS `v49`, 7 AS `v9` ``` **Fixed:** ``` SELECT v19.`ID` AS `ID1m20`, 'http://example.com/base/Patient#Photo' AS `v30`, CAST(HEX(v19.`Photo`) AS CHAR CHARACTER SET utf8) AS `v49`, 7 AS `v9` ``` ### Versions Ontop: 4.2.1 (Maven Central) Mysql: mysql:5 (Docker Hub) Driver: mysql-connector-java:8.0.30 ### Additional Information I believe this test passes with H2. It appears to be in the `VARBINARYToTEXT` substitution, but I have been unable to track down its implementation in the code.",1.0,"Rdb2RdfTest dg0016 Failure: VARBINARY is not properly converted to xsd:hexBinary - ### Description Using the same Rdb2RdfTest dg0016 from #558 Using MySQL 5. There seems to be a problem converting the `VARBINARY` column to `hexBinary`. 
Instead of base64-encoding the binary data as a hex string, it looks like the binary data is returned unencoded. https://www.w3.org/TR/xmlschema-2/#hexBinary #### SQL CREATE/INSERT ``` CREATE TABLE ""Patient"" ( ""ID"" INTEGER, ""FirstName"" VARCHAR(50), ""LastName"" VARCHAR(50), ""Sex"" VARCHAR(6), ""Weight"" REAL, ""Height"" FLOAT, ""BirthDate"" DATE, ""EntranceDate"" TIMESTAMP, ""PaidInAdvance"" BOOLEAN, ""Photo"" VARBINARY(200), PRIMARY KEY (""ID"") ); INSERT INTO ""Patient"" (""ID"", ""FirstName"",""LastName"",""Sex"",""Weight"",""Height"",""BirthDate"",""EntranceDate"",""PaidInAdvance"",""Photo"") VALUES (10,'Monica','Geller','female',80.25,1.65,'1981-10-10','2009-10-10 12:12:22',FALSE, X'89504E470D0A1A0A0000000D49484452000000050000000508060000008D6F26E50000001C4944415408D763F9FFFEBFC37F062005C3201284D031F18258CD04000EF535CBD18E0E1F0000000049454E44AE426082'); ``` #### SQLPPMapping ``` [PrefixDeclaration] rdf: http://www.w3.org/1999/02/22-rdf-syntax-ns# rdfs: http://www.w3.org/2000/01/rdf-schema# owl: http://www.w3.org/2002/07/owl# xsd: http://www.w3.org/2001/XMLSchema# obda: https://w3id.org/obda/vocabulary# [MappingDeclaration] @collection [[ mappingId MAPPING-ID1 target a ; {ID}^^xsd:integer ; {FirstName}^^xsd:string ; {LastName}^^xsd:string ; {Sex}^^xsd:string ; {Weight}^^xsd:double ; {Height}^^xsd:double ; {BirthDate}^^xsd:date ; {EntranceDate}^^xsd:dateTime ; {PaidInAdvance}^^xsd:boolean ; {Photo}^^xsd:hexBinary . source SELECT * FROM `Patient` ]] ``` #### SQL Translated Query ##### New query after the dialect-specific extra normalization: During query translation, the VARBINARY column results in this statement: ``` CONSTRUCT [ID1m20, v30, v49, v9] [v9/""7""^^BIGINT, v30/""http://example.com/base/Patient#Photo""^^TEXT, v49/VARBINARYToTEXT(Photo1m9)] FILTER IS_NOT_NULL(Photo1m9) EXTENSIONAL `Patient`(0:ID1m20,9:Photo1m9) ``` ##### SQL Query (other columns omitted for clarity) ``` SELECT v23.`ID1m20` AS `ID1m20`, v23.`v30` AS `v30`, v23.`v49` AS `v49`, v23.`v9` AS `v9` FROM (SELECT v1.`ID` AS `ID1m20`, 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AS `v30`, 'http://example.com/base/Patient' AS `v49`, 0 AS `v9` FROM `Patient` v1 UNION ALL SELECT v19.`ID` AS `ID1m20`, 'http://example.com/base/Patient#Photo' AS `v30`, CAST(v19.`Photo` AS CHAR CHARACTER SET utf8) AS `v49`, 7 AS `v9` FROM `Patient` v19 WHERE v19.`Photo` IS NOT NULL ) v23 ``` **Expected:** ``` ""89504E470D0A1A0A0000000D49484452000000050000000508060000008D6F26E50000001C4944415408D763F9FFFEBFC37F062005C3201284D031F18258CD04000EF535CBD18E0E1F0000000049454E44AE426082""^^ . ``` **Actual:** ``` ""?PNG\r\n\n ***BINARY DATA*** IEND?B`?""^^ . ``` ### Possible Fix: (only tested for MySQL) Wrap `VARBINARY` columns with `HEX( ... )` **Original:** ``` SELECT v19.`ID` AS `ID1m20`, 'http://example.com/base/Patient#Photo' AS `v30`, CAST(v19.`Photo` AS CHAR CHARACTER SET utf8) AS `v49`, 7 AS `v9` ``` **Fixed:** ``` SELECT v19.`ID` AS `ID1m20`, 'http://example.com/base/Patient#Photo' AS `v30`, CAST(HEX(v19.`Photo`) AS CHAR CHARACTER SET utf8) AS `v49`, 7 AS `v9` ``` ### Versions Ontop: 4.2.1 (Maven Central) Mysql: mysql:5 (Docker Hub) Driver: mysql-connector-java:8.0.30 ### Additional Information I believe this test passes with H2. 
It appears to be in the `VARBINARYToTEXT` substitution, but I have been unable to track down its implementation in the code.",0, failure varbinary is not properly converted to xsd hexbinary do you want to ask a question are you looking for support we have also a mailing list have a look at our guidelines on how to submit a bug report description using the same from using mysql there seems to be a problem converting the varbinary column to hexbinary instead of encoding the binary data as a hex string it looks like the binary data is returned unencoded sql create insert create table patient id integer firstname varchar lastname varchar sex varchar weight real height float birthdate date entrancedate timestamp paidinadvance boolean photo varbinary primary key id insert into patient id firstname lastname sex weight height birthdate entrancedate paidinadvance photo values monica geller female false x sqlppmapping rdf rdfs owl xsd obda collection mappingid mapping target a id xsd integer firstname xsd string lastname xsd string sex xsd string weight xsd double height xsd double birthdate xsd date entrancedate xsd datetime paidinadvance xsd boolean photo xsd hexbinary source select from patient sql translated query new query after the dialect specific extra normalization during query translation the varbinary column results in this statement construct filter is not null extensional patient sql query other columns omitted for clarity select as as as as from select id as as as as from patient union all select id as as cast photo as char character set as as from patient where photo is not null expected actual png r n n binary data iend b possible fix only tested for mysql wrap varbinary columns with hex original select id as as cast photo as char character set as as fixed select id as as cast hex photo as char character set as as versions ontop maven central mysql mysql docker hub driver mysql connector java additional information i believe this test passes with it appears to be in the varbinarytotext substitution but i have been unable to track down its implementation in the code ,0 316157,27141842634.0,IssuesEvent,2023-02-16 16:52:52,wazuh/wazuh,https://api.github.com/repos/wazuh/wazuh,closed,Release 4.4.0 - Release Candidate 1 - Specific systems,team/cicd type/release tracking release test/4.4.0,"### Packages tests metrics information ||| | :-- | :-- | | **Main release candidate issue** | #16132 | | **Main packages metrics issue** | #16142 | | **Version** | 4.4.0 | | **Release candidate** | RC1 | | **Tag** | https://github.com/wazuh/wazuh/tree/v4.4.0-rc1 | --- ## Build packages | System | Status | Build | | :-- | :--: | :-- | | AIX | :green_circle: | https://ci.wazuh.info/view/Packages/job/Packages_builder_special/651/ | | HPUX | :green_circle: | https://ci.wazuh.info/view/Packages/job/Packages_builder_special/656/ | | S10 SPARC | :green_circle: | https://ci.wazuh.info/view/Packages/job/Packages_builder_special/653/ | | S11 SPARC | :green_circle: | https://ci.wazuh.info/view/Packages/job/Packages_builder_special/654/ | | OVA | :green_circle: | https://ci.wazuh.info/view/Packages/job/Packages_Builder_OVA/190/ | | AMI | :green_circle: | https://ci.wazuh.info/view/Packages/job/Packages_Builder_AMI/111/ | --- ### Test packages | System | Build | Install | Deployment install | Upgrade | Remove | TCP | UDP | Errors found | Warnings found | Alerts found | Check users | | :-- | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | | AIX | :green_circle: | :green_circle: | 
:green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :red_circle: | :green_circle: | :green_circle: | :green_circle: | | HPUX | :green_circle: | :green_circle: | --- | --- | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | | S10 SPARC | :green_circle: | :green_circle: | --- | :yellow_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | | S11 SPARC | :green_circle: | :green_circle: | --- | :yellow_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | | OVA | :green_circle: | :green_circle: | --- | --- | --- | :green_circle: | :green_circle: | :red_circle: | :green_circle: | :green_circle: | :green_circle: | | AMI | :green_circle: | :green_circle: | --- | --- | --- | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | --- ##### PPC64EL packages ##### | System | Build | Install | Deployment install | Upgrade | Uninstall | Alerts | TCP | UDP | Errors | Warnings | System users | | :-- | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | | CentOS 7 | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | | Debian Stretch | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | --- ##### OVA/AMI specific tests | System | Filebeat test | Cluster green/yellow | Production repositories | UI Access | No SSH root access | SSH user access | Wazuh dashboard/APP version | Dashboard/Indexer VERSION file | | :-- | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | | OVA | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :blackgreen_circle_circle: | | AMI | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | --- Status legend: :black_circle: - Pending/In progress :white_circle: - Skipped :red_circle: - Rejected :yellow_circle: - Ready to review :green_circle: - Approved --- ## Auditor's validation In order to close and proceed with the release or the next candidate version, the following auditors must give the green light to this RC. 
- [ ] @alberpilot - [ ] @okynos --- ",1.0,"Release 4.4.0 - Release Candidate 1 - Specific systems - ### Packages tests metrics information ||| | :-- | :-- | | **Main release candidate issue** | #16132 | | **Main packages metrics issue** | #16142 | | **Version** | 4.4.0 | | **Release candidate** | RC1 | | **Tag** | https://github.com/wazuh/wazuh/tree/v4.4.0-rc1 | --- ## Build packages | System | Status | Build | | :-- | :--: | :-- | | AIX | :green_circle: | https://ci.wazuh.info/view/Packages/job/Packages_builder_special/651/ | | HPUX | :green_circle: | https://ci.wazuh.info/view/Packages/job/Packages_builder_special/656/ | | S10 SPARC | :green_circle: | https://ci.wazuh.info/view/Packages/job/Packages_builder_special/653/ | | S11 SPARC | :green_circle: | https://ci.wazuh.info/view/Packages/job/Packages_builder_special/654/ | | OVA | :green_circle: | https://ci.wazuh.info/view/Packages/job/Packages_Builder_OVA/190/ | | AMI | :green_circle: | https://ci.wazuh.info/view/Packages/job/Packages_Builder_AMI/111/ | --- ### Test packages | System | Build | Install | Deployment install | Upgrade | Remove | TCP | UDP | Errors found | Warnings found | Alerts found | Check users | | :-- | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | | AIX | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :red_circle: | :green_circle: | :green_circle: | :green_circle: | | HPUX | :green_circle: | :green_circle: | --- | --- | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | | S10 SPARC | :green_circle: | :green_circle: | --- | :yellow_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | | S11 SPARC | :green_circle: | :green_circle: | --- | :yellow_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | | OVA | :green_circle: | :green_circle: | --- | --- | --- | :green_circle: | :green_circle: | :red_circle: | :green_circle: | :green_circle: | :green_circle: | | AMI | :green_circle: | :green_circle: | --- | --- | --- | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | --- ##### PPC64EL packages ##### | System | Build | Install | Deployment install | Upgrade | Uninstall | Alerts | TCP | UDP | Errors | Warnings | System users | | :-- | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | | CentOS 7 | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | | Debian Stretch | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | --- ##### OVA/AMI specific tests | System | Filebeat test | Cluster green/yellow | Production repositories | UI Access | No SSH root access | SSH user access | Wazuh dashboard/APP version | Dashboard/Indexer VERSION file | | :-- | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | | OVA | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :blackgreen_circle_circle: | | AMI | :green_circle: | :green_circle: | :green_circle: | :green_circle: | :green_circle: | 
:green_circle: | :green_circle: | :green_circle: | :green_circle: | --- Status legend: :black_circle: - Pending/In progress :white_circle: - Skipped :red_circle: - Rejected :yellow_circle: - Ready to review :green_circle: - Approved --- ## Auditor's validation In order to close and proceed with the release or the next candidate version, the following auditors must give the green light to this RC. - [ ] @alberpilot - [ ] @okynos --- ",0,release release candidate specific systems packages tests metrics information main release candidate issue main packages metrics issue version release candidate tag build packages system status build aix green circle hpux green circle sparc green circle sparc green circle ova green circle ami green circle test packages system build install deployment install upgrade remove tcp udp errors found warnings found alerts found check users aix green circle green circle green circle green circle green circle green circle green circle red circle green circle green circle green circle hpux green circle green circle green circle green circle green circle green circle green circle green circle green circle sparc green circle green circle yellow circle green circle green circle green circle green circle green circle green circle green circle sparc green circle green circle yellow circle green circle green circle green circle green circle green circle green circle green circle ova green circle green circle green circle green circle red circle green circle green circle green circle ami green circle green circle green circle green circle green circle green circle green circle green circle packages system build install deployment install upgrade uninstall alerts tcp udp errors warnings system users centos green circle green circle green circle green circle green circle green circle green circle green circle green circle green circle green circle debian stretch green circle green circle green circle green circle green circle green circle green circle green circle green circle green circle green circle ova ami specific tests system filebeat test cluster green yellow production repositories ui access no ssh root access ssh user access wazuh dashboard app version dashboard indexer version file ova green circle green circle green circle green circle green circle green circle green circle green circle blackgreen circle circle ami green circle green circle green circle green circle green circle green circle green circle green circle green circle status legend black circle pending in progress white circle skipped red circle rejected yellow circle ready to review green circle approved auditor s validation in order to close and proceed with the release or the next candidate version the following auditors must give the green light to this rc alberpilot okynos ,0 8490,27014598222.0,IssuesEvent,2023-02-10 18:09:11,mozilla-mobile/firefox-ios,https://api.github.com/repos/mozilla-mobile/firefox-ios,closed,[XCUITests] Investigate UI tests for the different widgets iOS14,eng:automation iOS14,"Let's add this ticket to try to add some automation for the different widgets: -Top Sites -Quick Search -Quick View (tabs only) ... ┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FXIOS-2549) ",1.0,"[XCUITests] Investigate UI tests for the different widgets iOS14 - Let's add this ticket to try to add some automation for the different widgets: -Top Sites -Quick Search -Quick View (tabs only) ... 
┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FXIOS-2549) ",1, investigate ui tests for the different widgets let s add this ticket to try to add some automation for the different widgets top sites quick search quick view tabs only ┆issue is synchronized with this ,1 2553,12267831325.0,IssuesEvent,2020-05-07 11:25:12,GoodDollar/GoodDAPP,https://api.github.com/repos/GoodDollar/GoodDAPP,opened,Add test cases for card timing (new functionality),automation,"@YuryAnanyev Test cases for feed card timing you can find [here](https://github.com/GoodDollar/GoodDAPP/issues/1792) . Test cases from #96 till 101. Related ticket with information: https://github.com/GoodDollar/GoodDAPP/issues/1664",1.0,"Add test cases for card timing (new functionality) - @YuryAnanyev Test cases for feed card timing you can find [here](https://github.com/GoodDollar/GoodDAPP/issues/1792) . Test cases from #96 till 101. Related ticket with information: https://github.com/GoodDollar/GoodDAPP/issues/1664",1,add test cases for card timing new functionality yuryananyev test cases for feed card timing you can find test cases from till related ticket with information ,1 3461,13784204280.0,IssuesEvent,2020-10-08 20:29:08,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,opened,[Automation API] Add remote program support to nodejs ,area/automation-api language/javascript,"Currently, we only support `local` and `inline` programs in nodejs. We should add support for remote programs (clone and setup from git) to support gitops style workflows. ",1.0,"[Automation API] Add remote program support to nodejs - Currently, we only support `local` and `inline` programs in nodejs. We should add support for remote programs (clone and setup from git) to support gitops style workflows. ",1, add remote program support to nodejs currently we only support local and inline programs in nodejs we should add support for remote programs clone and setup from git to support gitops style workflows ,1 9189,27712648726.0,IssuesEvent,2023-03-14 15:06:58,githubcustomers/discovery.co.za,https://api.github.com/repos/githubcustomers/discovery.co.za,opened,Task Seven: Render results of other SARIF-based SAST tools directly within the GitHub UI,ghas-trial automation Important,"# Task Seven: Render results of other SARIF-based SAST tools directly within the GitHub UI You may have other security/scanning/linting tools that support SARIF output. For example you may have Prisma running your container scanning tool, Acunetix running DAST and ESLint running linting for your JavaScript applications. Code Scanning isn't a tool that does everything and by default we would like to be extensible and easy to integrate with. Check out information about the different ways to integrate with GitHub Advanced Security: - [About integration with code scanning](https://docs.github.com/en/code-security/secure-coding/about-integration-with-code-scanning) - [Uploading a SARIF file to GitHub](https://docs.github.com/en/code-security/secure-coding/uploading-a-sarif-file-to-github) - [SARIF support for code scanning](https://docs.github.com/en/code-security/secure-coding/sarif-support-for-code-scanning) - [CodeQL CLI Upload](https://codeql.github.com/docs/codeql-cli/manual/github-upload-results/) Create a task that integrates a third party SARIF file (even if it is a linter) into a repository security dashboard. 
*Hint Hint*: Take a look at [@microsoft/eslint-formatter-sarif](https://www.npmjs.com/package/@microsoft/eslint-formatter-sarif) and [Uploading a SARIF file to GitHub](https://docs.github.com/en/code-security/secure-coding/uploading-a-sarif-file-to-github). ",1.0,"Task Seven: Render results of other SARIF-based SAST tools directly within the GitHub UI - # Task Seven: Render results of other SARIF-based SAST tools directly within the GitHub UI You may have other security/scanning/linting tools that support SARIF output. For example you may have Prisma running your container scanning tool, Acunetix running DAST and ESLint running linting for your JavaScript applications. Code Scanning isn't a tool that does everything and by default we would like to be extensible and easy to integrate with. Check out information about the different ways to integrate with GitHub Advanced Security: - [About integration with code scanning](https://docs.github.com/en/code-security/secure-coding/about-integration-with-code-scanning) - [Uploading a SARIF file to GitHub](https://docs.github.com/en/code-security/secure-coding/uploading-a-sarif-file-to-github) - [SARIF support for code scanning](https://docs.github.com/en/code-security/secure-coding/sarif-support-for-code-scanning) - [CodeQL CLI Upload](https://codeql.github.com/docs/codeql-cli/manual/github-upload-results/) Create a task that integrates a third party SARIF file (even if it is a linter) into a repository security dashboard. *Hint Hint*: Take a look at [@microsoft/eslint-formatter-sarif](https://www.npmjs.com/package/@microsoft/eslint-formatter-sarif) and [Uploading a SARIF file to GitHub](https://docs.github.com/en/code-security/secure-coding/uploading-a-sarif-file-to-github). ",1,task seven render results of other sarif based sast tools directly within the github ui task seven render results of other sarif based sast tools directly within the github ui you may have other security scanning linting tools that support sarif output for example you may have prisma running your container scanning tool acunetix running dast and eslint running linting for your javascript applications code scanning isn t a tool that does everything and by default we would like to be extensible and easy to integrate with check out information about the different ways to integrate with github advanced security create a task that integrates a third party sarif file even if it is a linter into a repository security dashboard hint hint take a look at and ,1 764017,26781699166.0,IssuesEvent,2023-01-31 21:48:15,root-project/root,https://api.github.com/repos/root-project/root,closed,Strange behaviour in interpreter in master/6.28 when initialising vectors,bug affects:master priority:critical in:Cling affects:6.28,"This simple code, gives a strange behaviour when running it within the ROOT prompt in 6.28 and master. In ROOT master and 6,28 it seems the vector is initialised with the values used the first time and not with the new ones. It works fine in 6.26, where the second time the vector passed to the DummyBroadCast function is initialised correctly. 
``` .L testBroadCast.hxx Test1::Session s1; Test2::Session s2; ``` Here is the code: `testBroadCast.hxx` ``` #include <iostream> #include <vector> template <typename T> void DummyBroadCast(const T* data, std::vector<size_t> shape, std::vector<size_t> targetShape) { std::cout << ""target shape ""; for (size_t i = 0; i < targetShape.size(); i++) std::cout << targetShape[i] << "" ""; std::cout << std::endl; } namespace Test1 { struct Session { std::vector<float> fTensor_conv0bias = std::vector<float>(4); float * tensor_conv0bias = fTensor_conv0bias.data(); std::vector<float> fTensor_conv0biasbcast = std::vector<float>(64); float * tensor_conv0biasbcast = fTensor_conv0biasbcast.data(); Session() { std::vector<size_t> shape = { 4 , 1 , 1 }; std::vector<size_t> targetShape = { 2 , 4 , 4, 4 }; DummyBroadCast(tensor_conv0bias, shape, targetShape); } }; } namespace Test2 { struct Session { std::vector<float> fTensor_conv0bias = std::vector<float>(4); float * tensor_conv0bias = fTensor_conv0bias.data(); std::vector<float> fTensor_conv0biasbcast = std::vector<float>(100); float * tensor_conv0biasbcast = fTensor_conv0biasbcast.data(); Session() { std::vector<size_t> shape = { 4 , 1 , 1 }; std::vector<size_t> targetShape = { 2 , 4 , 5, 5 }; DummyBroadCast(tensor_conv0bias, shape, targetShape); // DummyBroadCast(tensor_conv0bias, { 4 , 1 , 1 }, { 2 , 4 , 5, 5 }); } }; } ``` This affects some failures seen in SOFIE test like https://lcgapp-services.cern.ch/root-jenkins/view/ROOT%20Nightly/job/root-nightly-master/LABEL=ROOT-ubuntu2204,SPEC=soversion,V=master/lastCompletedBuild/testReport/projectroot.tmva.sofie/test/gtest_tmva_sofie_test_TestSofieModels/ ",1.0,"Strange behaviour in interpreter in master/6.28 when initialising vectors - This simple code gives a strange behaviour when running it within the ROOT prompt in 6.28 and master. In ROOT master and 6.28 it seems the vector is initialised with the values used the first time and not with the new ones. It works fine in 6.26, where the second time the vector passed to the DummyBroadCast function is initialised correctly. 
``` .L testBroadCast.hxx Test1::Session s1; Test2::Session s2; ``` Here is the code: `testBroadCast.hxx` ``` #include <iostream> #include <vector> template <typename T> void DummyBroadCast(const T* data, std::vector<size_t> shape, std::vector<size_t> targetShape) { std::cout << ""target shape ""; for (size_t i = 0; i < targetShape.size(); i++) std::cout << targetShape[i] << "" ""; std::cout << std::endl; } namespace Test1 { struct Session { std::vector<float> fTensor_conv0bias = std::vector<float>(4); float * tensor_conv0bias = fTensor_conv0bias.data(); std::vector<float> fTensor_conv0biasbcast = std::vector<float>(64); float * tensor_conv0biasbcast = fTensor_conv0biasbcast.data(); Session() { std::vector<size_t> shape = { 4 , 1 , 1 }; std::vector<size_t> targetShape = { 2 , 4 , 4, 4 }; DummyBroadCast(tensor_conv0bias, shape, targetShape); } }; } namespace Test2 { struct Session { std::vector<float> fTensor_conv0bias = std::vector<float>(4); float * tensor_conv0bias = fTensor_conv0bias.data(); std::vector<float> fTensor_conv0biasbcast = std::vector<float>(100); float * tensor_conv0biasbcast = fTensor_conv0biasbcast.data(); Session() { std::vector<size_t> shape = { 4 , 1 , 1 }; std::vector<size_t> targetShape = { 2 , 4 , 5, 5 }; DummyBroadCast(tensor_conv0bias, shape, targetShape); // DummyBroadCast(tensor_conv0bias, { 4 , 1 , 1 }, { 2 , 4 , 5, 5 }); } }; } ``` This affects some failures seen in SOFIE test like https://lcgapp-services.cern.ch/root-jenkins/view/ROOT%20Nightly/job/root-nightly-master/LABEL=ROOT-ubuntu2204,SPEC=soversion,V=master/lastCompletedBuild/testReport/projectroot.tmva.sofie/test/gtest_tmva_sofie_test_TestSofieModels/ ",0,strange behaviour in interpreter in master when initialising vectors this simple code gives a strange behaviour when running it within the root prompt in and master in root master and it seems the vector is initialised with the values used the first time and not with the new ones it works fine in where the second time the vector passed to the dummybroadcast function is initialised correctly l testbroadcast hxx session session here is the code testbroadcast hxx include include template void dummybroadcast const t data std vector shape std vector targetshape std cout target shape for size t i i targetshape size i std cout targetshape std cout std endl namespace struct session std vector ftensor std vector float tensor ftensor data std vector ftensor std vector float tensor ftensor data session std vector shape std vector targetshape dummybroadcast tensor shape targetshape namespace struct session std vector ftensor std vector float tensor ftensor data std vector ftensor std vector float tensor ftensor data session std vector shape std vector targetshape dummybroadcast tensor shape targetshape dummybroadcast tensor this affects some failures seen in sofie test like ,0 244864,7880674119.0,IssuesEvent,2018-06-26 16:35:52,aowen87/FOO,https://api.github.com/repos/aowen87/FOO,closed,Python filters failing on Windows,Likelihood: 3 - Occasional OS: Windows 8 Priority: Normal Severity: 4 - Crash / Wrong Results Support Group: Any Target Version: 2.13.1 bug version: 2.8.2,"Reported by Allen Harvey. avtPythonExpression::Execute Error - Error unwraping vtkDataSet result when attempting to use a custom vector expression. He reports it still works on Linux, but started failing on Windows 'several versions ago'. ",1.0,"Python filters failing on Windows - Reported by Allen Harvey. avtPythonExpression::Execute Error - Error unwraping vtkDataSet result when attempting to use a custom vector expression. He reports it still works on Linux, but started failing on Windows 'several versions ago'. 
",0,python filters failing on windows reported by allen harvey avtpythonexpression execute error error unwraping vtkdataset result when attempting to use a custom vector expression he reports it still works on linux but started failing on windows several versions ago ,0 2285,11713480044.0,IssuesEvent,2020-03-09 10:24:14,apache/druid,https://api.github.com/repos/apache/druid,opened,An ExecutorService must not be created for each KerberosHttpClient; Don't assign ExecutorService into variable of Executor type,Area - Automation/Static Analysis Bug Contributions Welcome Performance Starter,"Similarly to #9286, we should catch assignments of `ExecutorService` (within type hierarchy) into variables of `Executor` type. Two Structural search patterns are needed: - ""Java"" pattern: `$x$ = $y$;`, where the Type of `$x$` is `Executor` and the Type of `$y$` is `ExecutorService`. - ""Java - Class Member"" pattern: `$Type$ $x$ = $y$;`, where the Text of `$Type$` is `Executor` and the Type of `$y$` is `ExecutorService`. (BTW, there is only ""Java"" pattern suggested in #9286 and added in #9325; a similar ""Java - Class Member"" pattern should be added for `ExecutorService` - `ScheduledExecutorService` pair, too.) This is needed because [`ExecutorService` instances must be shut down explicitly](https://github.com/code-review-checklists/java-concurrency#explicit-shutdown) and `Executor` doesn't allow for this. Incidentally, the sole violation of this rule in Druid code, in `KerberosHttpClient`: https://github.com/apache/druid/blob/a6776648112917b72c077ba3ac0cb7f61993a2d0/extensions-core/druid-kerberos/src/main/java/org/apache/druid/security/kerberos/KerberosHttpClient.java#L44-L54 Is actually an instance of another Java concurrency problem: [an `ExecutorService` must not be created for each instance of a short-lived object](https://github.com/code-review-checklists/java-concurrency#reuse-threads) like `KerberosHttpClient`. So the `exec` in `KerberosHttpClient` should properly made static, or cached on the DI level as a `@Named` `ExecutorService`.",1.0,"An ExecutorService must not be created for each KerberosHttpClient; Don't assign ExecutorService into variable of Executor type - Similarly to #9286, we should catch assignments of `ExecutorService` (within type hierarchy) into variables of `Executor` type. Two Structural search patterns are needed: - ""Java"" pattern: `$x$ = $y$;`, where the Type of `$x$` is `Executor` and the Type of `$y$` is `ExecutorService`. - ""Java - Class Member"" pattern: `$Type$ $x$ = $y$;`, where the Text of `$Type$` is `Executor` and the Type of `$y$` is `ExecutorService`. (BTW, there is only ""Java"" pattern suggested in #9286 and added in #9325; a similar ""Java - Class Member"" pattern should be added for `ExecutorService` - `ScheduledExecutorService` pair, too.) This is needed because [`ExecutorService` instances must be shut down explicitly](https://github.com/code-review-checklists/java-concurrency#explicit-shutdown) and `Executor` doesn't allow for this. Incidentally, the sole violation of this rule in Druid code, in `KerberosHttpClient`: https://github.com/apache/druid/blob/a6776648112917b72c077ba3ac0cb7f61993a2d0/extensions-core/druid-kerberos/src/main/java/org/apache/druid/security/kerberos/KerberosHttpClient.java#L44-L54 Is actually an instance of another Java concurrency problem: [an `ExecutorService` must not be created for each instance of a short-lived object](https://github.com/code-review-checklists/java-concurrency#reuse-threads) like `KerberosHttpClient`. 
So the `exec` in `KerberosHttpClient` should properly be made static, or cached on the DI level as a `@Named` `ExecutorService`.",1.0,"An ExecutorService must not be created for each KerberosHttpClient; Don't assign ExecutorService into variable of Executor type - Similarly to #9286, we should catch assignments of `ExecutorService` (within type hierarchy) into variables of `Executor` type. Two Structural search patterns are needed: - ""Java"" pattern: `$x$ = $y$;`, where the Type of `$x$` is `Executor` and the Type of `$y$` is `ExecutorService`. - ""Java - Class Member"" pattern: `$Type$ $x$ = $y$;`, where the Text of `$Type$` is `Executor` and the Type of `$y$` is `ExecutorService`. (BTW, there is only ""Java"" pattern suggested in #9286 and added in #9325; a similar ""Java - Class Member"" pattern should be added for `ExecutorService` - `ScheduledExecutorService` pair, too.) This is needed because [`ExecutorService` instances must be shut down explicitly](https://github.com/code-review-checklists/java-concurrency#explicit-shutdown) and `Executor` doesn't allow for this. Incidentally, the sole violation of this rule in Druid code, in `KerberosHttpClient`: https://github.com/apache/druid/blob/a6776648112917b72c077ba3ac0cb7f61993a2d0/extensions-core/druid-kerberos/src/main/java/org/apache/druid/security/kerberos/KerberosHttpClient.java#L44-L54 Is actually an instance of another Java concurrency problem: [an `ExecutorService` must not be created for each instance of a short-lived object](https://github.com/code-review-checklists/java-concurrency#reuse-threads) like `KerberosHttpClient`. 
* Make sure we could turn that off, if we want to keep failed VMs for analysis --- Mirrors: [story 119565681](https://www.pivotaltracker.com/story/show/119565681) submitted on May 13, 2016 UTC - **Requester**: Felix Riegger - **Owners**: Mauro Morales, Felix Riegger - **Estimate**: 0.0",1,use the terraform resource for pipeline environment setup story id there is a terraform resource available we use it to setup aws vpcs and expose them as a environment pool this was introduced we don t need a pool just kill the project at the end and create from scratch on the next run make sure we could turn that off if we want to keep failed vms for analysis mirrors submitted on may utc requester felix riegger owners mauro morales felix riegger estimate ,1 117415,9934842286.0,IssuesEvent,2019-07-02 15:15:28,knative/serving,https://api.github.com/repos/knative/serving,closed,where can i know which go version should i use to build serving,area/test-and-release kind/doc kind/question," ## Expected Behavior i should be aware of the go version that use to build *serving* ## Actual Behavior https://github.com/knative/serving/blob/master/DEVELOPMENT.md#requirements doesn't mention the version of go, but actually i can successfully build *serving* with **go1.10.4** but without the luck in **go1.8.3**",1.0,"where can i know which go version should i use to build serving - ## Expected Behavior i should be aware of the go version that use to build *serving* ## Actual Behavior https://github.com/knative/serving/blob/master/DEVELOPMENT.md#requirements doesn't mention the version of go, but actually i can successfully build *serving* with **go1.10.4** but without the luck in **go1.8.3**",0,where can i know which go version should i use to build serving pro tip you can leave this block commented and it still works select the appropriate areas for your issue kind question kind doc expected behavior i should be aware of the go version that use to build serving actual behavior doesn t mention the version of go but actually i can successfully build serving with but without the luck in ,0 1954,11169838618.0,IssuesEvent,2019-12-28 09:03:46,apache/druid,https://api.github.com/repos/apache/druid,opened,Make 'Unguarded field access' inspection in IntelliJ an error,Area - Automation/Static Analysis,To take advantage of `@GuardedBy`.,1.0,Make 'Unguarded field access' inspection in IntelliJ an error - To take advantage of `@GuardedBy`.,1,make unguarded field access inspection in intellij an error to take advantage of guardedby ,1 228927,7569574564.0,IssuesEvent,2018-04-23 05:28:45,openshift/origin,https://api.github.com/repos/openshift/origin,closed,openshift-sdn network restart terminates run once pods immediately,component/networking kind/bug lifecycle/rotten priority/P1 sig/networking,"The CRI net namespace restart function when openshift-sdn restarts is terminating run once pods that may not need networking, leading to failures. It's not clear to me that completely terminating all run-once pods on a node when the sdn process is disrupted is correct. 
``` I1001 22:40:49.619765 123473 pod.go:250] Processed pod network request &{UPDATE openshift-node imagetest acda4ba2cdc58950364307639a38e0724a2b57bd519a0a576fe6f766d1617467 0xc42097d680}, result err failed to find pod details from OVS flows I1001 22:40:49.619819 123473 pod.go:215] Returning pod network request &{UPDATE openshift-node imagetest acda4ba2cdc58950364307639a38e0724a2b57bd519a0a576fe6f766d1617467 0xc42097d680}, result err failed to find pod details from OVS flows W1001 22:40:49.619830 123473 node.go:368] will restart pod 'openshift-node/imagetest' due to update failure on restart: failed to find pod details from OVS flows I1001 22:40:49.622187 123473 node.go:290] Killing pod 'openshift-node/debug' sandbox due to failed restart I1001 22:40:49.647180 123473 cniserver.go:231] Waiting for DEL result for pod openshift-node/debug I1001 22:40:49.647208 123473 pod.go:212] Dispatching pod network request &{DEL openshift-node debug cd5d493cf280f661a176f7449e1b4946e04bbf274e75954df755d9e959323e53 /proc/121859/ns/net 0xc42097de00} I1001 22:40:49.653653 123473 pod.go:248] Processing pod network request &{DEL openshift-node debug cd5d493cf280f661a176f7449e1b4946e04bbf274e75954df755d9e959323e53 /proc/121859/ns/net 0xc42097de00} ``` ``` oc get pods NAME READY STATUS RESTARTS AGE debug 1/1 Running 0 5m imagetest 0/1 Error 0 5m ``` @openshift/sig-networking ",1.0,"openshift-sdn network restart terminates run once pods immediately - The CRI net namespace restart function when openshift-sdn restarts is terminating run once pods that may not need networking, leading to failures. It's not clear to me that completely terminating all run-once pods on a node when the sdn process is disrupted is correct. ``` I1001 22:40:49.619765 123473 pod.go:250] Processed pod network request &{UPDATE openshift-node imagetest acda4ba2cdc58950364307639a38e0724a2b57bd519a0a576fe6f766d1617467 0xc42097d680}, result err failed to find pod details from OVS flows I1001 22:40:49.619819 123473 pod.go:215] Returning pod network request &{UPDATE openshift-node imagetest acda4ba2cdc58950364307639a38e0724a2b57bd519a0a576fe6f766d1617467 0xc42097d680}, result err failed to find pod details from OVS flows W1001 22:40:49.619830 123473 node.go:368] will restart pod 'openshift-node/imagetest' due to update failure on restart: failed to find pod details from OVS flows I1001 22:40:49.622187 123473 node.go:290] Killing pod 'openshift-node/debug' sandbox due to failed restart I1001 22:40:49.647180 123473 cniserver.go:231] Waiting for DEL result for pod openshift-node/debug I1001 22:40:49.647208 123473 pod.go:212] Dispatching pod network request &{DEL openshift-node debug cd5d493cf280f661a176f7449e1b4946e04bbf274e75954df755d9e959323e53 /proc/121859/ns/net 0xc42097de00} I1001 22:40:49.653653 123473 pod.go:248] Processing pod network request &{DEL openshift-node debug cd5d493cf280f661a176f7449e1b4946e04bbf274e75954df755d9e959323e53 /proc/121859/ns/net 0xc42097de00} ``` ``` oc get pods NAME READY STATUS RESTARTS AGE debug 1/1 Running 0 5m imagetest 0/1 Error 0 5m ``` @openshift/sig-networking ",0,openshift sdn network restart terminates run once pods immediately the cri net namespace restart function when openshift sdn restarts is terminating run once pods that may not need networking leading to failures it s not clear to me that completely terminating all run once pods on a node when the sdn process is disrupted is correct pod go processed pod network request update openshift node imagetest result err failed to find pod details from ovs flows 
pod go returning pod network request update openshift node imagetest result err failed to find pod details from ovs flows node go will restart pod openshift node imagetest due to update failure on restart failed to find pod details from ovs flows node go killing pod openshift node debug sandbox due to failed restart cniserver go waiting for del result for pod openshift node debug pod go dispatching pod network request del openshift node debug proc ns net pod go processing pod network request del openshift node debug proc ns net oc get pods name ready status restarts age debug running imagetest error openshift sig networking ,0 1009,12179383253.0,IssuesEvent,2020-04-28 10:34:49,rook/rook,https://api.github.com/repos/rook/rook,closed,Convert the Ceph Cluster controller to the controller-runtime,ceph - feature reliability,"**Is this a bug report or feature request?** * Feature Request **What should the feature do:** Convert the [CephCluster controller](https://github.com/rook/rook/blob/master/pkg/operator/ceph/cluster/controller.go) to be managed with the controller-runtime. Currently Rook only has a simple watch in an informer as seen [here](https://github.com/rook/rook/blob/master/pkg/operator/k8sutil/customresource.go#L54). **What is use case behind this feature:** The controller runtime will improve reliability of the operator in several areas: - Events can be re-queued if failed or the operator is not able to complete the operation - Exponential backoff is provided automatically for re-queued events - Waiting for the next event does not need to block on the current event if it is taking a long time and the event can be re-queued. Several controllers in Rook are using the controller runtime. For examples, see the [pool controller](https://github.com/rook/rook/blob/master/pkg/operator/ceph/pool/controller.go) or [disruption budget](https://github.com/rook/rook/blob/master/pkg/operator/ceph/disruption/clusterdisruption/reconcile.go) controller. ",True,"Convert the Ceph Cluster controller to the controller-runtime - **Is this a bug report or feature request?** * Feature Request **What should the feature do:** Convert the [CephCluster controller](https://github.com/rook/rook/blob/master/pkg/operator/ceph/cluster/controller.go) to be managed with the controller-runtime. Currently Rook only has a simple watch in an informer as seen [here](https://github.com/rook/rook/blob/master/pkg/operator/k8sutil/customresource.go#L54). **What is use case behind this feature:** The controller runtime will improve reliability of the operator in several areas: - Events can be re-queued if failed or the operator is not able to complete the operation - Exponential backoff is provided automatically for re-queued events - Waiting for the next event does not need to block on the current event if it is taking a long time and the event can be re-queued. Several controllers in Rook are using the controller runtime. For examples, see the [pool controller](https://github.com/rook/rook/blob/master/pkg/operator/ceph/pool/controller.go) or [disruption budget](https://github.com/rook/rook/blob/master/pkg/operator/ceph/disruption/clusterdisruption/reconcile.go) controller. 
",0,convert the ceph cluster controller to the controller runtime is this a bug report or feature request feature request what should the feature do convert the to be managed with the controller runtime currently rook only has a simple watch in an informer as seen what is use case behind this feature the controller runtime will improve reliability of the operator in several areas events can be re queued if failed or the operator is not able to complete the operation exponential backoff is provided automatically for re queued events waiting for the next event does not need to block on the current event if it is taking a long time and the event can be re queued several controllers in rook are using the controller runtime for examples see the or controller ,0 317386,27234369568.0,IssuesEvent,2023-02-21 15:20:19,openhab/openhab-addons,https://api.github.com/repos/openhab/openhab-addons,opened,[network] PresenceDetectionTest unstable,test,"This test failed in a GHA Windows build: https://github.com/wborn/openhab-addons/actions/runs/4233027883/jobs/7353466566 ``` [ERROR] Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.863 s <<< FAILURE! - in org.openhab.binding.network.internal.PresenceDetectionTest [ERROR] org.openhab.binding.network.internal.PresenceDetectionTest.partialAndFinalCallbackTests Time elapsed: 0.57 s <<< FAILURE! org.mockito.exceptions.verification.TooFewActualInvocations: listener.partialDetectionResult(); Wanted 3 times: -> at org.openhab.binding.network.internal.PresenceDetectionTest.partialAndFinalCallbackTests(PresenceDetectionTest.java:130) But was 2 times: -> at org.openhab.binding.network.internal.PresenceDetection.lambda$6(PresenceDetection.java:460) -> at org.openhab.binding.network.internal.PresenceDetection.lambda$9(PresenceDetection.java:540) at org.openhab.binding.network.internal.PresenceDetectionTest.partialAndFinalCallbackTests(PresenceDetectionTest.java:130) ```",1.0,"[network] PresenceDetectionTest unstable - This test failed in a GHA Windows build: https://github.com/wborn/openhab-addons/actions/runs/4233027883/jobs/7353466566 ``` [ERROR] Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.863 s <<< FAILURE! - in org.openhab.binding.network.internal.PresenceDetectionTest [ERROR] org.openhab.binding.network.internal.PresenceDetectionTest.partialAndFinalCallbackTests Time elapsed: 0.57 s <<< FAILURE! 
org.mockito.exceptions.verification.TooFewActualInvocations: listener.partialDetectionResult(); Wanted 3 times: -> at org.openhab.binding.network.internal.PresenceDetectionTest.partialAndFinalCallbackTests(PresenceDetectionTest.java:130) But was 2 times: -> at org.openhab.binding.network.internal.PresenceDetection.lambda$6(PresenceDetection.java:460) -> at org.openhab.binding.network.internal.PresenceDetection.lambda$9(PresenceDetection.java:540) at org.openhab.binding.network.internal.PresenceDetectionTest.partialAndFinalCallbackTests(PresenceDetectionTest.java:130) ```",0, presencedetectiontest unstable this test failed in a gha windows build tests run failures errors skipped time elapsed s failure in org openhab binding network internal presencedetectiontest org openhab binding network internal presencedetectiontest partialandfinalcallbacktests time elapsed s failure org mockito exceptions verification toofewactualinvocations listener partialdetectionresult wanted times at org openhab binding network internal presencedetectiontest partialandfinalcallbacktests presencedetectiontest java but was times at org openhab binding network internal presencedetection lambda presencedetection java at org openhab binding network internal presencedetection lambda presencedetection java at org openhab binding network internal presencedetectiontest partialandfinalcallbacktests presencedetectiontest java ,0 1476,10172913204.0,IssuesEvent,2019-08-08 11:55:05,elastic/apm-agent-nodejs,https://api.github.com/repos/elastic/apm-agent-nodejs,closed,Run commit message linting in parallel with regular tests on Jenkins,[zube]: In Review automation ci,"Linting of commit messages was moved to a separate build in #1172, but it's running before any of the other tests, which I guess means it will not run the regular tests until linter finishes successfully (please correct me if I'm wrong). There's no reason why we want to hold up the regular tests waiting for the linter to finish, so we should just run it all in parallel. ping @elastic/observablt-robots ",1.0,"Run commit message linting in parallel with regular tests on Jenkins - Linting of commit messages was moved to a separate build in #1172, but it's running before any of the other tests, which I guess means it will not run the regular tests until linter finishes successfully (please correct me if I'm wrong). There's no reason why we want to hold up the regular tests waiting for the linter to finish, so we should just run it all in parallel. ping @elastic/observablt-robots ",1,run commit message linting in parallel with regular tests on jenkins linting of commit messages was moved to a separate build in but it s running before any of the other tests which i guess means it will not run the regular tests until linter finishes successfully please correct me if i m wrong there s no reason why we want to hold up the regular tests waiting for the linter to finish so we should just run it all in parallel ping elastic observablt robots ,1 8519,27048925520.0,IssuesEvent,2023-02-13 11:51:00,Budibase/budibase,https://api.github.com/repos/Budibase/budibase,opened,Custom template for automation is not showing {{ currentYear }} {{ company }}.,bug binding automations env - production,"### Discussed in https://github.com/Budibase/budibase/discussions/9472
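A hedged aside on the flaky PresenceDetectionTest failure above: because the partial results arrive from asynchronous callbacks, Mockito's timeout() verification mode is a common deflake — it polls until the expected invocation count is reached instead of asserting immediately. The listener interface and the 1-second bound below are illustrative assumptions, not the binding's real types:

```java
import static org.mockito.Mockito.timeout;
import static org.mockito.Mockito.verify;

class PresenceDetectionDeflakeExample {
    interface Listener { void partialDetectionResult(); }

    void assertThreePartialResults(Listener listener) {
        // Waits up to 1s for the third async callback instead of asserting
        // immediately, which is what makes the original assertion racy.
        verify(listener, timeout(1000).times(3)).partialDetectionResult();
    }
}
```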
Originally posted by **zielono77** January 30, 2023 Hey, Recently, I have created an app for business travels for company. At the moment, I encountered one problem with templates. The {{ currentYear }} {{ company }} is not showing in the emails once they have been sent. However, other emails such as password recovery, user welcome etc. works perfectly fine. ![Screenshot 2023-01-30 at 10 57 49](https://user-images.githubusercontent.com/24386954/215445696-bc7571bd-78a2-4ff5-a368-e712d0a2cff8.png) Do you guys know where I'm making a mistake? :)
Custom template: ![Screenshot 2023-02-13 at 11 49 48](https://user-images.githubusercontent.com/101575380/218450334-a29fbdf4-fa20-49de-a8ca-a829794174c2.png) Email received from automation: ![Screenshot 2023-02-13 at 11 50 26](https://user-images.githubusercontent.com/101575380/218450464-4c5a33e9-1552-4d24-83e7-6e26cac34304.png) ![Screenshot 2023-02-13 at 11 50 53](https://user-images.githubusercontent.com/101575380/218450555-c42eef60-917e-4f83-b18c-5e024b56e4c5.png) ",1.0,"Custom template for automation is not showing {{ currentYear }} {{ company }}. - ### Discussed in https://github.com/Budibase/budibase/discussions/9472
Originally posted by **zielono77** January 30, 2023 Hey, Recently, I have created an app for business travels for company. At the moment, I encountered one problem with templates. The {{ currentYear }} {{ company }} is not showing in the emails once they have been sent. However, other emails such as password recovery, user welcome etc. works perfectly fine. ![Screenshot 2023-01-30 at 10 57 49](https://user-images.githubusercontent.com/24386954/215445696-bc7571bd-78a2-4ff5-a368-e712d0a2cff8.png) Do you guys know where I'm making a mistake? :)
Custom template: ![Screenshot 2023-02-13 at 11 49 48](https://user-images.githubusercontent.com/101575380/218450334-a29fbdf4-fa20-49de-a8ca-a829794174c2.png) Email received from automation: ![Screenshot 2023-02-13 at 11 50 26](https://user-images.githubusercontent.com/101575380/218450464-4c5a33e9-1552-4d24-83e7-6e26cac34304.png) ![Screenshot 2023-02-13 at 11 50 53](https://user-images.githubusercontent.com/101575380/218450555-c42eef60-917e-4f83-b18c-5e024b56e4c5.png) ",1,custom template for automation is not showing currentyear company discussed in originally posted by january hey recently i have created an app for business travels for company at the moment i encountered one problem with templates the currentyear company is not showing in the emails once they have been sent however other emails such as password recovery user welcome etc works perfectly fine do you guys know where i m making a mistake custom template email received from automation ,1 208218,15880479075.0,IssuesEvent,2021-04-09 13:45:03,ValveSoftware/steam-for-linux,https://api.github.com/repos/ValveSoftware/steam-for-linux,closed,Steamlink frame buffer freezes on Overwatch Hero Selection screen,3rd party game Need Retest Streaming,"#### Your system information * Steam client version (build number or date): 1536436120 * Distribution (e.g. Ubuntu): Gentoo * Opted into Steam client beta?: No * Opted into Steamlink firmware beta?: Yes * Have you checked for system updates?: Yes #### Please describe your issue in as much detail as possible: I have a shortcut in steam for starting overwatch from the lutris install via a script that I wrote. It starts fine. I can do the tutorial fine. The controller works fine. The steam overlay works fine. When the hero selection screen appears, the screen buffer freezes. Sound continues playing and from what I can tell the game is running fine. Pressing the trigger on the controller to fire the character's gun will cause audible sounds of gun fire. Using the control stick to move will make movement sounds. However, the steamlink frame buffer is frozen and I need to forcibly power cycle it. Going to the host machine's display to see what happens shows no problems. #### Steps for reproducing this issue: 1. Install Lutris 2. Install Overwatch through Lutris (the DXVK version) 3. Make a script to start overwatch (see below) 4. Install Steam 5. Make a shortcut that uses that script. 6. Verify that it starts overwatch. 7. Exit overwatch. 8. Setup Steamlink. 9. Try to play overwatch over Steamlink 10. Observe screen buffer freeze on hero selection screen when trying to play again bots. ``` #!/bin/sh # Battle.net won't actually start Overwatch if it isn't already started, but we try anyway. env SDL_VIDEO_FULLSCREEN_DISPLAY=""off"" PBA_ENABLE=""1"" WINEESYNC=""1"" __GL_SHADER_DISK_CACHE=""1"" __GL_SHADER_DISK_CACHE_PATH=""${HOME}/Games/overwatch"" __GL_SHADER_DISK_CACHE_SKIP_CLEANUP=""1"" __GL_THREADED_OPTIMIZATIONS=""1"" __PBA_CB_HEAP=""128"" __PBA_GEO_HEAP=""512"" mesa_glthread=""true"" DRI_PRIME=""0"" WINEDEBUG=""-all"" WINEARCH=""win64"" WINE=""/usr/bin/wine"" WINEPREFIX=""${HOME}/Games/overwatch"" WINEDLLOVERRIDES=""d3d10,d3d10_1,d3d10core,d3d11,dxgi=n"" ""/usr/bin/wine"" ""/home/richard/Games/overwatch/drive_c/Program Files (x86)/Battle.net/Battle.net Launcher.exe"" --exec=""launch Pro""; # Give battle.net time to start. This is a hack. sleep 7; # Start Overwatch by running the same exact command. 
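# Editor note (hedged): the fixed "sleep 7" above is racy; a bounded poll for
# the Battle.net process is more robust. The process name pattern here is an
# assumption and may differ between Wine prefixes.
for i in $(seq 1 30); do
  pgrep -f 'Battle.net' >/dev/null 2>&1 && break
  sleep 1
done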
env SDL_VIDEO_FULLSCREEN_DISPLAY=""off"" PBA_ENABLE=""1"" WINEESYNC=""1"" __GL_SHADER_DISK_CACHE=""1"" __GL_SHADER_DISK_CACHE_PATH=""${HOME}/Games/overwatch"" __GL_SHADER_DISK_CACHE_SKIP_CLEANUP=""1"" __GL_THREADED_OPTIMIZATIONS=""1"" __PBA_CB_HEAP=""128"" __PBA_GEO_HEAP=""512"" mesa_glthread=""true"" DRI_PRIME=""0"" WINEDEBUG=""-all"" WINEARCH=""win64"" WINE=""/usr/bin/wine"" WINEPREFIX=""${HOME}/Games/overwatch"" WINEDLLOVERRIDES=""d3d10,d3d10_1,d3d10core,d3d11,dxgi=n"" ""/usr/bin/wine"" ""/home/richard/Games/overwatch/drive_c/Program Files (x86)/Battle.net/Battle.net Launcher.exe"" --exec=""launch Pro"" ``` Note that I am using a special verson of Gentoo's app-emulation/wine-staging-3.17 that has PBA and ESYNC builtin that I made to match what Lutris' own prebuilt binary should provide. It should be noted that PBA should be no effect because DXVK is used. You can run lutris with `lutris -d` and start overwatch to get the environment variables used. If you want to build the same exact patched wine-staging install that I have, run the following in a root shell when wine-staging is already installed and gentoolkit is installed: ``` EBUILD=""$(equery which wine-staging:3.17)"" ebuild ""${EBUILD}"" clean unpack sed -i -e 's/test ""$enable_msvfw32_ICGetDisplayFormat"" -eq 1/false/' /var/tmp/portage/app-emulation/wine-staging-3.17/work/wine-staging-3.17/patches/patchinstall.sh ebuild ""${EBUILD}"" prepare cd /var/tmp/portage/app-emulation/wine-staging-3.17/work/wine-3.17 wget https://github.com/zfigura/wine/releases/download/esyncb4478b7/esync.tgz tar -xf esync.tgz cd esync/ curl https://raw.githubusercontent.com/Tk-Glitch/PKGBUILDS/69abc5dcf32446a76006ea83f144ee4a208b270e/wine-tkg-git/esync-staging-fixes-r2.patch | patch -p1 curl https://raw.githubusercontent.com/Tk-Glitch/PKGBUILDS/69abc5dcf32446a76006ea83f144ee4a208b270e/wine-tkg-git/esync-compat-fixes-r2.patch | patch -p1 cd .. rm esync/*orig for i in esync/*; do patch -F5 -p1 < $i;done; curl https://raw.githubusercontent.com/Tk-Glitch/PKGBUILDS/69abc5dcf32446a76006ea83f144ee4a208b270e/wine-tkg-git/PBA317%2B.patch | patch -p1 curl https://raw.githubusercontent.com/Tk-Glitch/PKGBUILDS/69abc5dcf32446a76006ea83f144ee4a208b270e/wine-tkg-git/esync-no_alloc_handle.patch | patch -p1 cd ebuild ""${EBUILD}"" merge eselect wine set wine-staging-3.17 ``` Configure lutris to use the system wine, to disable the lutris runtime, to disable the process monitor and to enable WINEESYNC=1 (it gets set to 0 when changing to the system wine). Then it should be safe to get lutris to dump the environment variables. That should match what the above script was doing environment variable wise.",1.0,"Steamlink frame buffer freezes on Overwatch Hero Selection screen - #### Your system information * Steam client version (build number or date): 1536436120 * Distribution (e.g. Ubuntu): Gentoo * Opted into Steam client beta?: No * Opted into Steamlink firmware beta?: Yes * Have you checked for system updates?: Yes #### Please describe your issue in as much detail as possible: I have a shortcut in steam for starting overwatch from the lutris install via a script that I wrote. It starts fine. I can do the tutorial fine. The controller works fine. The steam overlay works fine. When the hero selection screen appears, the screen buffer freezes. Sound continues playing and from what I can tell the game is running fine. Pressing the trigger on the controller to fire the character's gun will cause audible sounds of gun fire. 
Using the control stick to move will make movement sounds. However, the steamlink frame buffer is frozen and I need to forcibly power cycle it. Going to the host machine's display to see what happens shows no problems. #### Steps for reproducing this issue: 1. Install Lutris 2. Install Overwatch through Lutris (the DXVK version) 3. Make a script to start overwatch (see below) 4. Install Steam 5. Make a shortcut that uses that script. 6. Verify that it starts overwatch. 7. Exit overwatch. 8. Setup Steamlink. 9. Try to play overwatch over Steamlink 10. Observe screen buffer freeze on hero selection screen when trying to play again bots. ``` #!/bin/sh # Battle.net won't actually start Overwatch if it isn't already started, but we try anyway. env SDL_VIDEO_FULLSCREEN_DISPLAY=""off"" PBA_ENABLE=""1"" WINEESYNC=""1"" __GL_SHADER_DISK_CACHE=""1"" __GL_SHADER_DISK_CACHE_PATH=""${HOME}/Games/overwatch"" __GL_SHADER_DISK_CACHE_SKIP_CLEANUP=""1"" __GL_THREADED_OPTIMIZATIONS=""1"" __PBA_CB_HEAP=""128"" __PBA_GEO_HEAP=""512"" mesa_glthread=""true"" DRI_PRIME=""0"" WINEDEBUG=""-all"" WINEARCH=""win64"" WINE=""/usr/bin/wine"" WINEPREFIX=""${HOME}/Games/overwatch"" WINEDLLOVERRIDES=""d3d10,d3d10_1,d3d10core,d3d11,dxgi=n"" ""/usr/bin/wine"" ""/home/richard/Games/overwatch/drive_c/Program Files (x86)/Battle.net/Battle.net Launcher.exe"" --exec=""launch Pro""; # Give battle.net time to start. This is a hack. sleep 7; # Start Overwatch by running the same exact command. env SDL_VIDEO_FULLSCREEN_DISPLAY=""off"" PBA_ENABLE=""1"" WINEESYNC=""1"" __GL_SHADER_DISK_CACHE=""1"" __GL_SHADER_DISK_CACHE_PATH=""${HOME}/Games/overwatch"" __GL_SHADER_DISK_CACHE_SKIP_CLEANUP=""1"" __GL_THREADED_OPTIMIZATIONS=""1"" __PBA_CB_HEAP=""128"" __PBA_GEO_HEAP=""512"" mesa_glthread=""true"" DRI_PRIME=""0"" WINEDEBUG=""-all"" WINEARCH=""win64"" WINE=""/usr/bin/wine"" WINEPREFIX=""${HOME}/Games/overwatch"" WINEDLLOVERRIDES=""d3d10,d3d10_1,d3d10core,d3d11,dxgi=n"" ""/usr/bin/wine"" ""/home/richard/Games/overwatch/drive_c/Program Files (x86)/Battle.net/Battle.net Launcher.exe"" --exec=""launch Pro"" ``` Note that I am using a special verson of Gentoo's app-emulation/wine-staging-3.17 that has PBA and ESYNC builtin that I made to match what Lutris' own prebuilt binary should provide. It should be noted that PBA should be no effect because DXVK is used. You can run lutris with `lutris -d` and start overwatch to get the environment variables used. If you want to build the same exact patched wine-staging install that I have, run the following in a root shell when wine-staging is already installed and gentoolkit is installed: ``` EBUILD=""$(equery which wine-staging:3.17)"" ebuild ""${EBUILD}"" clean unpack sed -i -e 's/test ""$enable_msvfw32_ICGetDisplayFormat"" -eq 1/false/' /var/tmp/portage/app-emulation/wine-staging-3.17/work/wine-staging-3.17/patches/patchinstall.sh ebuild ""${EBUILD}"" prepare cd /var/tmp/portage/app-emulation/wine-staging-3.17/work/wine-3.17 wget https://github.com/zfigura/wine/releases/download/esyncb4478b7/esync.tgz tar -xf esync.tgz cd esync/ curl https://raw.githubusercontent.com/Tk-Glitch/PKGBUILDS/69abc5dcf32446a76006ea83f144ee4a208b270e/wine-tkg-git/esync-staging-fixes-r2.patch | patch -p1 curl https://raw.githubusercontent.com/Tk-Glitch/PKGBUILDS/69abc5dcf32446a76006ea83f144ee4a208b270e/wine-tkg-git/esync-compat-fixes-r2.patch | patch -p1 cd .. 
rm esync/*orig for i in esync/*; do patch -F5 -p1 < $i;done; curl https://raw.githubusercontent.com/Tk-Glitch/PKGBUILDS/69abc5dcf32446a76006ea83f144ee4a208b270e/wine-tkg-git/PBA317%2B.patch | patch -p1 curl https://raw.githubusercontent.com/Tk-Glitch/PKGBUILDS/69abc5dcf32446a76006ea83f144ee4a208b270e/wine-tkg-git/esync-no_alloc_handle.patch | patch -p1 cd ebuild ""${EBUILD}"" merge eselect wine set wine-staging-3.17 ``` Configure lutris to use the system wine, to disable the lutris runtime, to disable the process monitor and to enable WINEESYNC=1 (it gets set to 0 when changing to the system wine). Then it should be safe to get lutris to dump the environment variables. That should match what the above script was doing environment variable wise.",0,steamlink frame buffer freezes on overwatch hero selection screen your system information steam client version build number or date distribution e g ubuntu gentoo opted into steam client beta no opted into steamlink firmware beta yes have you checked for system updates yes please describe your issue in as much detail as possible i have a shortcut in steam for starting overwatch from the lutris install via a script that i wrote it starts fine i can do the tutorial fine the controller works fine the steam overlay works fine when the hero selection screen appears the screen buffer freezes sound continues playing and from what i can tell the game is running fine pressing the trigger on the controller to fire the character s gun will cause audible sounds of gun fire using the control stick to move will make movement sounds however the steamlink frame buffer is frozen and i need to forcibly power cycle it going to the host machine s display to see what happens shows no problems steps for reproducing this issue install lutris install overwatch through lutris the dxvk version make a script to start overwatch see below install steam make a shortcut that uses that script verify that it starts overwatch exit overwatch setup steamlink try to play overwatch over steamlink observe screen buffer freeze on hero selection screen when trying to play again bots bin sh battle net won t actually start overwatch if it isn t already started but we try anyway env sdl video fullscreen display off pba enable wineesync gl shader disk cache gl shader disk cache path home games overwatch gl shader disk cache skip cleanup gl threaded optimizations pba cb heap pba geo heap mesa glthread true dri prime winedebug all winearch wine usr bin wine wineprefix home games overwatch winedlloverrides dxgi n usr bin wine home richard games overwatch drive c program files battle net battle net launcher exe exec launch pro give battle net time to start this is a hack sleep start overwatch by running the same exact command env sdl video fullscreen display off pba enable wineesync gl shader disk cache gl shader disk cache path home games overwatch gl shader disk cache skip cleanup gl threaded optimizations pba cb heap pba geo heap mesa glthread true dri prime winedebug all winearch wine usr bin wine wineprefix home games overwatch winedlloverrides dxgi n usr bin wine home richard games overwatch drive c program files battle net battle net launcher exe exec launch pro note that i am using a special verson of gentoo s app emulation wine staging that has pba and esync builtin that i made to match what lutris own prebuilt binary should provide it should be noted that pba should be no effect because dxvk is used you can run lutris with lutris d and start overwatch to get the environment variables 
used if you want to build the same exact patched wine staging install that i have run the following in a root shell when wine staging is already installed and gentoolkit is installed ebuild equery which wine staging ebuild ebuild clean unpack sed i e s test enable icgetdisplayformat eq false var tmp portage app emulation wine staging work wine staging patches patchinstall sh ebuild ebuild prepare cd var tmp portage app emulation wine staging work wine wget tar xf esync tgz cd esync curl patch curl patch cd rm esync orig for i in esync do patch i done curl patch curl patch cd ebuild ebuild merge eselect wine set wine staging configure lutris to use the system wine to disable the lutris runtime to disable the process monitor and to enable wineesync it gets set to when changing to the system wine then it should be safe to get lutris to dump the environment variables that should match what the above script was doing environment variable wise ,0 9531,29389792788.0,IssuesEvent,2023-05-30 00:07:58,Azure/ALZ-Bicep,https://api.github.com/repos/Azure/ALZ-Bicep,closed,🪲 Bug Report - InvalidAuthenticationToken Error when running mgDiagSettingsAll.bicep,bug Needs: Attention :wave: Area: Management Groups Area: Logging & Automation,"## Describe the bug I'm setting up a new ALZ based on the bicep development flow using Powershell. I've set the parPlatformMgAlzDefaultsEnable = false for my scenario. All was working well until step 4.1 Management Groups Diagnostic Settings when I encountered the following 3 errors: Status Message: (Code:InvalidAuthenticationToken) ## To Reproduce Steps to reproduce the behaviour: I run this: New-AzManagementGroupDeployment ` -TemplateFile infra-as-code/bicep/orchestration/mgDiagSettingsAll/mgDiagSettingsAll.bicep ` -TemplateParameterFile infra-as-code/bicep/orchestration/mgDiagSettingsAll/parameters/mgDiagSettingsAll.parameters.all.json ` -Location australiaeast ` -ManagementGroupId alz ## Expected behaviour Complete with no errors. ## Correlation ID 938b8666-a410-428b-ba99-d3598090d3ca ## Additional context Obviously I have modified the mgDiagSettingsAll.parameters.all.json file. And I believe I have this param correct ""parLogAnalyticsWorkspaceResourceId"": { ""value"": ""/subscriptions//resourceGroups/rg-alz-logging-001/providers/Microsoft.OperationalInsights/workspaces/alz-log-analytics"" }, Where is the subscription id of my platform management groups LAW. I found this in the JSON view of the LAW. Any help appreciated. ",1.0,"🪲 Bug Report - InvalidAuthenticationToken Error when running mgDiagSettingsAll.bicep - ## Describe the bug I'm setting up a new ALZ based on the bicep development flow using Powershell. I've set the parPlatformMgAlzDefaultsEnable = false for my scenario. All was working well until step 4.1 Management Groups Diagnostic Settings when I encountered the following 3 errors: Status Message: (Code:InvalidAuthenticationToken) ## To Reproduce Steps to reproduce the behaviour: I run this: New-AzManagementGroupDeployment ` -TemplateFile infra-as-code/bicep/orchestration/mgDiagSettingsAll/mgDiagSettingsAll.bicep ` -TemplateParameterFile infra-as-code/bicep/orchestration/mgDiagSettingsAll/parameters/mgDiagSettingsAll.parameters.all.json ` -Location australiaeast ` -ManagementGroupId alz ## Expected behaviour Complete with no errors. ## Correlation ID 938b8666-a410-428b-ba99-d3598090d3ca ## Additional context Obviously I have modified the mgDiagSettingsAll.parameters.all.json file. 
And I believe I have this param correct ""parLogAnalyticsWorkspaceResourceId"": { ""value"": ""/subscriptions//resourceGroups/rg-alz-logging-001/providers/Microsoft.OperationalInsights/workspaces/alz-log-analytics"" }, Where is the subscription id of my platform management groups LAW. I found this in the JSON view of the LAW. Any help appreciated. ",1,🪲 bug report invalidauthenticationtoken error when running mgdiagsettingsall bicep describe the bug i m setting up a new alz based on the bicep development flow using powershell i ve set the parplatformmgalzdefaultsenable false for my scenario all was working well until step management groups diagnostic settings when i encountered the following errors status message code invalidauthenticationtoken to reproduce steps to reproduce the behaviour i run this new azmanagementgroupdeployment templatefile infra as code bicep orchestration mgdiagsettingsall mgdiagsettingsall bicep templateparameterfile infra as code bicep orchestration mgdiagsettingsall parameters mgdiagsettingsall parameters all json location australiaeast managementgroupid alz expected behaviour complete with no errors correlation id additional context obviously i have modified the mgdiagsettingsall parameters all json file and i believe i have this param correct parloganalyticsworkspaceresourceid value subscriptions resourcegroups rg alz logging providers microsoft operationalinsights workspaces alz log analytics where is the subscription id of my platform management groups law i found this in the json view of the law any help appreciated ,1 5416,19538438470.0,IssuesEvent,2021-12-31 13:31:22,ccodwg/Covid19CanadaBot,https://api.github.com/repos/ccodwg/Covid19CanadaBot,closed,Automate IP-blocked Montreal datasets,automation,"The five Montreal datasets in `datasets.json` fail, as their site is known to block most non-residential addresses (see discussion [here](https://github.com/ccodwg/Covid19CanadaArchive/issues/61); this could be solved by running this part elsewhere or by changing the IP to an acceptable one ``` python archiver.py datasets.json -m prod --uuid 0635e027-1825-4441-aba7-6f4d1157d080 8d0aa5a5-6397-4f4c-a9ca-26ba81f26bc7 6edc20de-c174-4c75-9d6c-c463ef43512b 7ae6a613-aa31-4f63-aaeb-f8fdf3d68ee0 45b626ef-d41d-48e9-b007-97edac5ac838 ```",1.0,"Automate IP-blocked Montreal datasets - The five Montreal datasets in `datasets.json` fail, as their site is known to block most non-residential addresses (see discussion [here](https://github.com/ccodwg/Covid19CanadaArchive/issues/61); this could be solved by running this part elsewhere or by changing the IP to an acceptable one ``` python archiver.py datasets.json -m prod --uuid 0635e027-1825-4441-aba7-6f4d1157d080 8d0aa5a5-6397-4f4c-a9ca-26ba81f26bc7 6edc20de-c174-4c75-9d6c-c463ef43512b 7ae6a613-aa31-4f63-aaeb-f8fdf3d68ee0 45b626ef-d41d-48e9-b007-97edac5ac838 ```",1,automate ip blocked montreal datasets the five montreal datasets in datasets json fail as their site is known to block most non residential addresses see discussion this could be solved by running this part elsewhere or by changing the ip to an acceptable one python archiver py datasets json m prod uuid aaeb ,1 165715,12879868177.0,IssuesEvent,2020-07-12 01:25:30,osquery/osquery,https://api.github.com/repos/osquery/osquery,closed,Create in-memory numeric monitoring plugin,feature table test,"## In-memory numeric monitoring plugin Create simple in-memory numeric monitoring plugin to test that certain keys are being bumped in the integration tests. 
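A hedged aside on the errors above: Code:InvalidAuthenticationToken frequently indicates a stale or wrong-tenant token rather than a template or parameter problem, so re-authenticating before retrying the management-group deployment is a cheap first check. The placeholders below are hypothetical:

```powershell
# Hedged: clear any cached context, then sign in against the intended tenant.
Clear-AzContext -Force
Connect-AzAccount -Tenant '<tenant-id>'
Select-AzSubscription -SubscriptionId '<platform-subscription-id>'
```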
Filesystem plugin for example: `osquery/numeric_monitoring/plugins/filesystem.h` Need to implement another plugin with global access interface to the records by key.",1.0,"Create in-memory numeric monitoring plugin - ## In-memory numeric monitoring plugin Create simple in-memory numeric monitoring plugin to test that certain keys are being bumped in the integration tests. Filesystem plugin for example: `osquery/numeric_monitoring/plugins/filesystem.h` Need to implement another plugin with global access interface to the records by key.",0,create in memory numeric monitoring plugin in memory numeric monitoring plugin create simple in memory numeric monitoring plugin to test that certain keys are being bumped in the integration tests filesystem plugin for example osquery numeric monitoring plugins filesystem h need to implement another plugin with global access interface to the records by key ,0 635,7652902900.0,IssuesEvent,2018-05-10 00:35:57,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,"Step 5: The term 'Connect-AzureRmAccount' is not recognized as the name of a cmdlet, function, script",automation cxp doc-enhancement duplicate in-progress triaged,"I'm getting an error at Step 5. The Get-AutomationConnection return the tenant ID and the ApplicationID. But when I run Connect-AzureRmAccount, Connect-AzureRmAccount : The term 'Connect-AzureRmAccount' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:2 char:1 + Connect-AzureRmAccount -ServicePrincipal -Tenant $Conn.TenantID ` + ~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : ObjectNotFound: (Connect-AzureRmAccount:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException What did I miss? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 038d927f-2bcc-c62d-b3c3-f194513bced6 * Version Independent ID: 41adf2c5-3ab7-7387-e541-89e34aa6a6b1 * Content: [My first PowerShell runbook in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-first-runbook-textual-powershell) * Content Source: [articles/automation/automation-first-runbook-textual-powershell.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-first-runbook-textual-powershell.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1.0,"Step 5: The term 'Connect-AzureRmAccount' is not recognized as the name of a cmdlet, function, script - I'm getting an error at Step 5. The Get-AutomationConnection return the tenant ID and the ApplicationID. But when I run Connect-AzureRmAccount, Connect-AzureRmAccount : The term 'Connect-AzureRmAccount' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:2 char:1 + Connect-AzureRmAccount -ServicePrincipal -Tenant $Conn.TenantID ` + ~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : ObjectNotFound: (Connect-AzureRmAccount:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException What did I miss? --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 038d927f-2bcc-c62d-b3c3-f194513bced6 * Version Independent ID: 41adf2c5-3ab7-7387-e541-89e34aa6a6b1 * Content: [My first PowerShell runbook in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-first-runbook-textual-powershell) * Content Source: [articles/automation/automation-first-runbook-textual-powershell.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-first-runbook-textual-powershell.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1,step the term connect azurermaccount is not recognized as the name of a cmdlet function script i m getting an error at step the get automationconnection return the tenant id and the applicationid but when i run connect azurermaccount connect azurermaccount the term connect azurermaccount is not recognized as the name of a cmdlet function script file or operable program check the spelling of the name or if a path was included verify that the path is correct and try again at line char connect azurermaccount serviceprincipal tenant conn tenantid categoryinfo objectnotfound connect azurermaccount string commandnotfoundexception fullyqualifiederrorid commandnotfoundexception what did i miss document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login georgewallace microsoft alias gwallace ,1 42520,11013086695.0,IssuesEvent,2019-12-04 19:41:54,spack/spack,https://api.github.com/repos/spack/spack,opened,No way to build ParaView with python/3.4.x?,build-error,"@chuckatkins @danlipsa ### Steps to reproduce the issue ```console $ spack install spack install paraview@5.7.0%gcc@7.4.0 build_type=Release +mpi~opengl2+osmesa +plugins+python3 ^cmake@3.15.3 ^openssl@1.0.2o ^libbsd@0.9.1 ^hdf5@1.10.2 ^m4@1.4.18 ^netcdf-c@4.7.1 ^python@3.4 ==> Error: An unsatisfiable version constraint has been detected for spec: python@3.4 ^gettext ^pkgconfig@0.9.0: while trying to concretize the partial spec: py-matplotlib@3: py-matplotlib requires python version 3.5:, but spec asked for 3.4 ``` ### Platform and user environment Please report your OS here: ```commandline $ uname -a Linux sn364.localdomain 3.10.0-1062.1.1.1chaos.ch6.x86_64 #1 SMP Wed Sep 4 16:09:20 PDT 2019 x86_64 x86_64 x86_64 GNU/Linux $ lsb_release -d Description: Red Hat Enterprise Linux Server release 7.7 (Maipo) ``` Can the package.py file be changed to allow building paraview with python 3.4? I believe the latest matplotlib version supported with python 3.4 is 2.2.4...",1.0,"No way to build ParaView with python/3.4.x? 
- @chuckatkins @danlipsa ### Steps to reproduce the issue ```console $ spack install spack install paraview@5.7.0%gcc@7.4.0 build_type=Release +mpi~opengl2+osmesa +plugins+python3 ^cmake@3.15.3 ^openssl@1.0.2o ^libbsd@0.9.1 ^hdf5@1.10.2 ^m4@1.4.18 ^netcdf-c@4.7.1 ^python@3.4 ==> Error: An unsatisfiable version constraint has been detected for spec: python@3.4 ^gettext ^pkgconfig@0.9.0: while trying to concretize the partial spec: py-matplotlib@3: py-matplotlib requires python version 3.5:, but spec asked for 3.4 ``` ### Platform and user environment Please report your OS here: ```commandline $ uname -a Linux sn364.localdomain 3.10.0-1062.1.1.1chaos.ch6.x86_64 #1 SMP Wed Sep 4 16:09:20 PDT 2019 x86_64 x86_64 x86_64 GNU/Linux $ lsb_release -d Description: Red Hat Enterprise Linux Server release 7.7 (Maipo) ``` Can the package.py file be changed to allow building paraview with python 3.4? I believe the latest matplotlib version supported with python 3.4 is 2.2.4...",0,no way to build paraview with python x chuckatkins danlipsa steps to reproduce the issue console spack install spack install paraview gcc build type release mpi osmesa plugins cmake openssl libbsd netcdf c python error an unsatisfiable version constraint has been detected for spec python gettext pkgconfig while trying to concretize the partial spec py matplotlib py matplotlib requires python version but spec asked for platform and user environment please report your os here commandline uname a linux localdomain smp wed sep pdt gnu linux lsb release d description red hat enterprise linux server release maipo can the package py file be changed to allow building paraview with python i believe the latest matplotlib version supported with python is ,0 6117,22207060372.0,IssuesEvent,2022-06-07 15:41:10,DoESLiverpool/somebody-should,https://api.github.com/repos/DoESLiverpool/somebody-should,closed,Need to backup InfluxDB server database,2 - Should DoES System: Automation,"Spinning this out of #1210 because it's not what caused the problem with mqtt.local/energy usage messages, but does need to be fixed.",1.0,"Need to backup InfluxDB server database - Spinning this out of #1210 because it's not what caused the problem with mqtt.local/energy usage messages, but does need to be fixed.",1,need to backup influxdb server database spinning this out of because it s not what caused the problem with mqtt local energy usage messages but does need to be fixed ,1 225475,7481950994.0,IssuesEvent,2018-04-04 22:37:21,Planteome/plant-experimental-conditions-ontology,https://api.github.com/repos/Planteome/plant-experimental-conditions-ontology,closed,mineral salt treatment (new EO:0001007),Plant Treatment Request: New ontology term high priority sourceforge,"proposed new term: mineral salt treatment (new [EO:0001007](http://purl.obolibrary.org/obo/EO_0001007)): A mineral treatment ([EO:0007044](http://purl.obolibrary.org/obo/EO_0007044)) involving exposure of the plant(s) to mineral salts ([CHEBI:24866](http://purl.obolibrary.org/obo/CHEBI_24866)). Comment: A salt ([CHEBI:24866](http://purl.obolibrary.org/obo/CHEBI_24866)) is defined as an assembly of cations and anions. This is a general term to include the salts of various minerals. 
Definition can be improved once hierarchy is approved Reported by: cooperl09 Original Ticket: [obo/plant-environment-ontology-eo/53](https://sourceforge.net/p/obo/plant-environment-ontology-eo/53) ",1.0,"mineral salt treatment (new EO:0001007) - proposed new term: mineral salt treatment (new [EO:0001007](http://purl.obolibrary.org/obo/EO_0001007)): A mineral treatment ([EO:0007044](http://purl.obolibrary.org/obo/EO_0007044)) involving exposure of the plant(s) to mineral salts ([CHEBI:24866](http://purl.obolibrary.org/obo/CHEBI_24866)). Comment: A salt ([CHEBI:24866](http://purl.obolibrary.org/obo/CHEBI_24866)) is defined as an assembly of cations and anions. This is a general term to include the salts of various minerals. Definition can be improved once hierarchy is approved Reported by: cooperl09 Original Ticket: [obo/plant-environment-ontology-eo/53](https://sourceforge.net/p/obo/plant-environment-ontology-eo/53) ",0,mineral salt treatment new eo proposed new term mineral salt treatment new a mineral treatment involving exposure of the plant s to mineral salts comment a salt is defined as an assembly of cations and anions this is a general term to include the salts of various minerals definition can be improved once hierarchy is approved reported by original ticket ,0 106751,11496957681.0,IssuesEvent,2020-02-12 09:08:09,cilium/cilium,https://api.github.com/repos/cilium/cilium,closed,k8s-install-restart-pods.rst: WARNING: document isn't included in any toctree,area/documentation kind/bug,"`make render-docs` returns the following error: Documentation/gettingstarted/k8s-install-restart-pods.rst: WARNING: document isn't included in any toctree ",1.0,"k8s-install-restart-pods.rst: WARNING: document isn't included in any toctree - `make render-docs` returns the following error: Documentation/gettingstarted/k8s-install-restart-pods.rst: WARNING: document isn't included in any toctree ",0, install restart pods rst warning document isn t included in any toctree make render docs returns the following error documentation gettingstarted install restart pods rst warning document isn t included in any toctree ,0 10007,31147391806.0,IssuesEvent,2023-08-16 07:34:41,treasuryguild/Catalyst-Training-and-Automation,https://api.github.com/repos/treasuryguild/Catalyst-Training-and-Automation,closed,180.36 ADA Outgoing,Catalyst-Training-and-Automation-Treasury-Wallet Outgoing,"{ ""id"" : ""1692171202721"", ""date"": ""Wed, 16 Aug 2023 07:33:22 GMT"", ""fund"": ""TreasuryWallet"", ""project"": ""Catalyst-Training-and-Automation"", ""proposal"": ""Catalyst-Training-and-Automation-Treasury-Wallet"", ""ideascale"": """", ""budget"": ""Rewards-Withdrawal"", ""name"": ""QADAO pool"", ""exchangeRate"": ""0.279 USD per ADA"", ""ada"" : ""180.36"", ""walletBalance"": [""2874.76 ADA""], ""txid"": ""7b7240280f90d32ac6cc0a782ad3fece27a9c384f4d89d9b99772ed679a08082"", ""description"": ""Withdrawal rewards from the pool"" } ",1.0,"180.36 ADA Outgoing - { ""id"" : ""1692171202721"", ""date"": ""Wed, 16 Aug 2023 07:33:22 GMT"", ""fund"": ""TreasuryWallet"", ""project"": ""Catalyst-Training-and-Automation"", ""proposal"": ""Catalyst-Training-and-Automation-Treasury-Wallet"", ""ideascale"": """", ""budget"": ""Rewards-Withdrawal"", ""name"": ""QADAO pool"", ""exchangeRate"": ""0.279 USD per ADA"", ""ada"" : ""180.36"", ""walletBalance"": [""2874.76 ADA""], ""txid"": ""7b7240280f90d32ac6cc0a782ad3fece27a9c384f4d89d9b99772ed679a08082"", ""description"": ""Withdrawal rewards from the pool"" } ",1, ada outgoing id date wed aug gmt 
fund treasurywallet project catalyst training and automation proposal catalyst training and automation treasury wallet ideascale budget rewards withdrawal name qadao pool exchangerate usd per ada ada walletbalance txid description withdrawal rewards from the pool ,1 54915,13942791907.0,IssuesEvent,2020-10-22 21:40:00,Whizkevina/uchi-sidebar-clone,https://api.github.com/repos/Whizkevina/uchi-sidebar-clone,opened,"CVE-2019-6284 (Medium) detected in node-sass-4.14.1.tgz, node-sassv4.13.1",security vulnerability,"## CVE-2019-6284 - Medium Severity Vulnerability
Vulnerable Libraries - node-sass-4.14.1.tgz, node-sassv4.13.1

node-sass-4.14.1.tgz
Wrapper around libsass
Library home page: https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz
Path to dependency file: uchi-sidebar-clone/package.json
Path to vulnerable library: uchi-sidebar-clone/node_modules/node-sass/package.json
Dependency Hierarchy: - :x: **node-sass-4.14.1.tgz** (Vulnerable Library)

Found in HEAD commit: 5405eeecb088ab7acf45ef51e052988d72c3fe7f
Found in base branch: main

Vulnerability Details
In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::alternatives in prelexer.hpp.
Publish Date: 2019-01-14
URL: CVE-2019-6284

CVSS 3 Score Details (6.5)
Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High
For more information on CVSS3 Scores, click here.

Suggested Fix
Type: Upgrade version
Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284
Release Date: 2019-08-06
Fix Resolution: LibSass - 3.6.0

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2019-6284 (Medium) detected in node-sass-4.14.1.tgz, node-sassv4.13.1 - ## CVE-2019-6284 - Medium Severity Vulnerability
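A hedged aside on the suggested fix above: the advisory's remediation is LibSass >= 3.6.0; for an npm project, one route is replacing node-sass with Dart Sass, which removes the LibSass dependency entirely (a migration sketch, not the advisory's literal instruction):

```console
$ npm uninstall node-sass
$ npm install --save-dev sass
```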
Vulnerable Libraries - node-sass-4.14.1.tgz, node-sassv4.13.1

node-sass-4.14.1.tgz
Wrapper around libsass
Library home page: https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz
Path to dependency file: uchi-sidebar-clone/package.json
Path to vulnerable library: uchi-sidebar-clone/node_modules/node-sass/package.json
Dependency Hierarchy: - :x: **node-sass-4.14.1.tgz** (Vulnerable Library)

Found in HEAD commit: 5405eeecb088ab7acf45ef51e052988d72c3fe7f
Found in base branch: main

Vulnerability Details
In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::alternatives in prelexer.hpp.
Publish Date: 2019-01-14
URL: CVE-2019-6284

CVSS 3 Score Details (6.5)
Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High
For more information on CVSS3 Scores, click here.

Suggested Fix
Type: Upgrade version
Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284
Release Date: 2019-08-06
Fix Resolution: LibSass - 3.6.0
*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in node sass tgz node cve medium severity vulnerability vulnerable libraries node sass tgz node node sass tgz wrapper around libsass library home page path to dependency file uchi sidebar clone package json path to vulnerable library uchi sidebar clone node modules node sass package json dependency hierarchy x node sass tgz vulnerable library found in head commit found in base branch main vulnerability details in libsass a heap based buffer over read exists in sass prelexer alternatives in prelexer hpp publish date url cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click suggested fix type upgrade version origin release date fix resolution libsass step up your open source security game with whitesource ,0 5607,5814742973.0,IssuesEvent,2017-05-05 05:37:42,archco/cosmos-css,https://api.github.com/repos/archco/cosmos-css,closed,Repackaging of library files.,infrastructure js,"There are currently functions inside Util that should belong to ElementUtil. Let's reorganize the Util, Color, Helper, and ElementUtil libraries. - [x] There are two ways to access a library: first, import it as a member; second, load it through `Cosmos.lib`. - `Cosmos.lib.Util` - base utilities. - `Cosmos.lib.Color` - color library. - `Cosmos.lib.ElementUtil` - DOM element utilities. - ~~`Cosmos.lib.Helper`~~ moved into Util and removed. - [x] Reclassify the internal static functions. 
- [x] Refactoring and documentation based on the new library layout.",0,"Repackaging of library files. - There are currently functions inside Util that should belong to ElementUtil. Let's reorganize the Util, Color, Helper, and ElementUtil libraries. - [x] There are two ways to access a library: first, import it as a member; second, load it through `Cosmos.lib`. - `Cosmos.lib.Util` - base utilities. - `Cosmos.lib.Color` - color library. - `Cosmos.lib.ElementUtil` - DOM element utilities. - ~~`Cosmos.lib.Helper`~~ moved into Util and removed. - [x] Reclassify the internal static functions. 
64728,18848205134.0,IssuesEvent,2021-11-11 17:15:32,vector-im/element-web,https://api.github.com/repos/vector-im/element-web,opened,The sidebar icon isn't roundish,T-Defect S-Minor A-User-Settings A-Spaces A-Appearance Z-IA,"![Screenshot_20211111_181225](https://user-images.githubusercontent.com/25768714/141340111-d00cb183-e04b-4c79-96cb-630c4780de10.png) All the other icons are round or roundish and the sidebar icon feels a little out of place, IMO",1.0,"The sidebar icon isn't roundish - ![Screenshot_20211111_181225](https://user-images.githubusercontent.com/25768714/141340111-d00cb183-e04b-4c79-96cb-630c4780de10.png) All the other icons are round or roundish and the sidebar icon feels a little out of place, IMO",0,the sidebar icon isn t roundish all the other icons are round or roundish and the sidebar icon feels a little out of place imo,0 5707,20816545175.0,IssuesEvent,2022-03-18 10:55:48,Music-Bot-for-Jitsi/Jimmi,https://api.github.com/repos/Music-Bot-for-Jitsi/Jimmi,closed,Setup Codecov,automation," > **As a** developer > **I want** my code to be checked automatically for test coverage on each push on main > **so that** I have clear metrics showing my code quality. ## Description: Setup Codecov for this repo using GitHub Actions. ### 🟢 In scope: ### 🔴 Not in scope: ## What should be the result? ",1.0,"Setup Codecov - > **As a** developer > **I want** my code to be checked automatically for test coverage on each push on main > **so that** I have clear metrics showing my code quality. ## Description: Setup Codecov for this repo using GitHub Actions. ### 🟢 In scope: ### 🔴 Not in scope: ## What should be the result? ",1,setup codecov as a developer i want my code to be checked automatically for test coverage on each push on main so that i have clear metrics showing my code quality description setup codecov for this repo using github actions 🟢 in scope 🔴 not in scope what should be the result ,1 64307,8723040809.0,IssuesEvent,2018-12-09 18:13:14,wprig/wprig,https://api.github.com/repos/wprig/wprig,opened,Fix npm run dev reference in README.md,documentation,early on in README.md it says `npm run build` where it should say `npm run dev`.,1.0,Fix npm run dev reference in README.md - early on in README.md it says `npm run build` where it should say `npm run dev`.,0,fix npm run dev reference in readme md early on in readme md it says npm run build where it should say npm run dev ,0 9971,30853429812.0,IssuesEvent,2023-08-02 18:33:32,jondavid-black/AaC,https://api.github.com/repos/jondavid-black/AaC,opened,Automate release notifications,enhancement automation,"As an AaC Developer, I want to automate the notifications of new releases, so that I can perform appropriate responsive actions. ## AC: - [ ] Generate notification from workflows of new version available ## Resources https://docs.github.com/en/actions/monitoring-and-troubleshooting-workflows/notifications-for-workflow-runs https://gitdailies.com/articles/github-actions-notifications/",1.0,"Automate release notifications - As an AaC Developer, I want to automate the notifications of new releases, so that I can perform appropriate responsive actions. 
## AC: - [ ] Generate notification from workflows of new version available ## Resources https://docs.github.com/en/actions/monitoring-and-troubleshooting-workflows/notifications-for-workflow-runs https://gitdailies.com/articles/github-actions-notifications/",1,automate release notifications as an aac developer i want to automate the notifications of new releases so that i can perform appropriate responsive actions ac generate notification from workflows of new version available resources ,1 1812,10851294893.0,IssuesEvent,2019-11-13 10:29:55,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,opened,"Automation Backend: Tech Debt, Developer UX and Ideas for Architecture & Design",automation,"**This is a braindump of all the things we ran into while working on #6085. The items on this list range from ""nice to have"" to ""we should do it"" to ""we _need_ to do this, sooner rather than later""** ### `a8n.Store` * Can/should we combine `repos.Store` and `a8n.Store`? * What about enterprise/OSS divide here? * The tests for `a8n.Store` are in one large file and have global state * The tests are slow because a database needs to be dropped and created for every package in which a `dbtest` database is used * In `a8n.Store`: the column names are duplicated in every query string. We could probably substrings and include them in the other queries. * Probably missing index on `campaign_job.error` (see `Store.GetCampaignPlanStatus` for how this is queried, maybe we don't need it, since we query by `campaign_plan_id` and iterate over all rows anyway) * `UpdateCampaignJob` needs `nulltimeColumn` for `StartedAt` and `StartedAt` ### Architecture and Design of a8n code * We have `a8n.Service` and `run.Runner` — they are really similar * Should we combine them? Do we keep them separate and have two separate services? * Do we need the `run` package? ### Performance & Scalability * No persistent queue * `Runner` executes `CampaignJobs` in goroutines * `a8n.Service` executes `ChangesetJobs` in goroutines * Why we'd want a persistent queue: if frontend crashes all progress is lost and we'd be in a state we can't get out of (at least for `previewCampaignPlan`) * It would also make it easier for us to offload work from frontend to another process * GitHub rate limit and abuse detection * Right now we don't launch `ChangesetJobs` in parallel because we don't want to trigger GitHub's abuse detection and run into the rate limit. * But since we use `github-proxy`, which acts as a semaphore of size 1 for all requests to Github, we could probably parallelize these requests. ### Enterprise and OSS * We have two `a8n` packages, one in in `internal` and one in `enterprise/pkg` * That always requires a named import * We can't just have only one enterprise package because we need to access the type definitions in `internal/a8n` from `cmd/repo-updater/repos` because `repo-updater` has the `Sources` that can load and create `a8n.Changesets` ### Executing CampaignPlans * Is it better to use `zoekt.Searcher.List` in the `a8n.Runner` than `graphqlbackend.RepoSearch`? [See the code here](https://github.com/sourcegraph/sourcegraph/blob/5ade8c1edc52688387673eebe0fcd2db033720bc/enterprise/pkg/a8n/resolvers/resolver.go#L429) ### Naming of and in `repos` package * Naming of and in `repos` package * By now, the `repos` package should be called `externalservice` (or `codehost`), since it does a lot more than just `repos`. 
It could also probably be extracted from `repo-updater` * `repos.Source` needs to be called `Client`, because it's more than a ""source of repositories"" by now ### GraphQL layer and a8n * In order to create a new GraphQL query/mutation for `a8n` you need to: * Define it in `schema.graphql` * Define an interface in `./cmd/frontend/graphqlbackend/a8n.go` that will be implemented by the (enterprise-only) `a8n` package * Implement the interface in `./enterprise/pkg/a8n/resolvers/` * Write a test in `./enterprise/pkg/a8n/resolvers` which _again_ defines structs that reflect the GraphQL schema to which we can unmarshal the resolver response * The GraphQL tests in `./enterprise/pkg/a8n/resolvers/resolver_test.go` are _really_ verbose and heavy * They all share the same setup * They require a lot of setup (admin user, ext service, repos just to have a `RepoID` somewhere) * They are slow due to the heavy setup * They require inline type definitions to unmarshal GraphQL responses, duplicated across tests ### Inconsistencies in type definitions * Repo IDs are sometimes `uint32`, sometimes `int32`, sometimes `int64` * Inconsistent usage of `api.RepoName` and `api.CommitID` — some use it, some don't ### Developer UX when dealing with external services, repos and talking to code hosts * No predefined set of common `httpcli` middlewares in `httpcli` * If you have a repo, it's cumbersome to get to the right external service client. * [See this piece of code](https://github.com/sourcegraph/sourcegraph/blob/5ade8c1edc52688387673eebe0fcd2db033720bc/enterprise/pkg/a8n/service.go#L175-L200) * [See this](https://github.com/sourcegraph/sourcegraph/blob/5ade8c1edc52688387673eebe0fcd2db033720bc/enterprise/pkg/a8n/syncer.go#L71-L105) ### Fix multi-file diffs without extended header in `go-diff` `go-diff` has a bug where it doesn't parse multi-file diffs correctly that have no headers between diffs. [See this piece of code](https://github.com/sourcegraph/sourcegraph/blob/5ade8c1edc52688387673eebe0fcd2db033720bc/enterprise/pkg/a8n/run/campaign_type.go#L130-L146) ",1.0,"Automation Backend: Tech Debt, Developer UX and Ideas for Architecture & Design - **This is a braindump of all the things we ran into while working on #6085. The items on this list range from ""nice to have"" to ""we should do it"" to ""we _need_ to do this, sooner rather than later""** ### `a8n.Store` * Can/should we combine `repos.Store` and `a8n.Store`? * What about enterprise/OSS divide here? * The tests for `a8n.Store` are in one large file and have global state * The tests are slow because a database needs to be dropped and created for every package in which a `dbtest` database is used * In `a8n.Store`: the column names are duplicated in every query string. We could probably extract them into shared substrings and include them in the other queries. * Probably missing index on `campaign_job.error` (see `Store.GetCampaignPlanStatus` for how this is queried, maybe we don't need it, since we query by `campaign_plan_id` and iterate over all rows anyway) * `UpdateCampaignJob` needs `nulltimeColumn` for `StartedAt` ### Architecture and Design of a8n code * We have `a8n.Service` and `run.Runner` — they are really similar * Should we combine them? Do we keep them separate and have two separate services? * Do we need the `run` package?
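On the idea above of extracting the duplicated column names into shared substrings, a sketch of the pattern in Python terms. This is an illustration, not the store's actual Go code, and several column names below (id, repo_id, diff) are invented fillers rather than names taken from the issue:

```python
# Column list defined once and interpolated into every query, so the
# store's queries can't drift out of sync with each other.
CAMPAIGN_JOB_COLUMNS = ", ".join([
    "id",                # invented filler
    "campaign_plan_id",  # named in the issue
    "repo_id",           # invented filler
    "diff",              # invented filler
    "error",             # named in the issue
    "started_at",        # named in the issue
    "finished_at",       # named in the issue
])

GET_BY_ID = f"SELECT {CAMPAIGN_JOB_COLUMNS} FROM campaign_jobs WHERE id = %s"
LIST_BY_PLAN = f"SELECT {CAMPAIGN_JOB_COLUMNS} FROM campaign_jobs WHERE campaign_plan_id = %s"
```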
### Performance & Scalability * No persistent queue * `Runner` executes `CampaignJobs` in goroutines * `a8n.Service` executes `ChangesetJobs` in goroutines * Why we'd want a persistent queue: if frontend crashes all progress is lost and we'd be in a state we can't get out of (at least for `previewCampaignPlan`) * It would also make it easier for us to offload work from frontend to another process * GitHub rate limit and abuse detection * Right now we don't launch `ChangesetJobs` in parallel because we don't want to trigger GitHub's abuse detection and run into the rate limit. * But since we use `github-proxy`, which acts as a semaphore of size 1 for all requests to GitHub, we could probably parallelize these requests. ### Enterprise and OSS * We have two `a8n` packages, one in `internal` and one in `enterprise/pkg` * That always requires a named import * We can't just have only one enterprise package because we need to access the type definitions in `internal/a8n` from `cmd/repo-updater/repos` because `repo-updater` has the `Sources` that can load and create `a8n.Changesets` ### Executing CampaignPlans * Is it better to use `zoekt.Searcher.List` in the `a8n.Runner` than `graphqlbackend.RepoSearch`? [See the code here](https://github.com/sourcegraph/sourcegraph/blob/5ade8c1edc52688387673eebe0fcd2db033720bc/enterprise/pkg/a8n/resolvers/resolver.go#L429) ### Naming of and in `repos` package * Naming of and in `repos` package * By now, the `repos` package should be called `externalservice` (or `codehost`), since it does a lot more than just `repos`. It could also probably be extracted from `repo-updater` * `repos.Source` needs to be called `Client`, because it's more than a ""source of repositories"" by now ### GraphQL layer and a8n * In order to create a new GraphQL query/mutation for `a8n` you need to: * Define it in `schema.graphql` * Define an interface in `./cmd/frontend/graphqlbackend/a8n.go` that will be implemented by the (enterprise-only) `a8n` package * Implement the interface in `./enterprise/pkg/a8n/resolvers/` * Write a test in `./enterprise/pkg/a8n/resolvers` which _again_ defines structs that reflect the GraphQL schema to which we can unmarshal the resolver response * The GraphQL tests in `./enterprise/pkg/a8n/resolvers/resolver_test.go` are _really_ verbose and heavy * They all share the same setup * They require a lot of setup (admin user, ext service, repos just to have a `RepoID` somewhere) * They are slow due to the heavy setup * They require inline type definitions to unmarshal GraphQL responses, duplicated across tests ### Inconsistencies in type definitions * Repo IDs are sometimes `uint32`, sometimes `int32`, sometimes `int64` * Inconsistent usage of `api.RepoName` and `api.CommitID` — some use it, some don't ### Developer UX when dealing with external services, repos and talking to code hosts * No predefined set of common `httpcli` middlewares in `httpcli` * If you have a repo, it's cumbersome to get to the right external service client. * [See this piece of code](https://github.com/sourcegraph/sourcegraph/blob/5ade8c1edc52688387673eebe0fcd2db033720bc/enterprise/pkg/a8n/service.go#L175-L200) * [See this](https://github.com/sourcegraph/sourcegraph/blob/5ade8c1edc52688387673eebe0fcd2db033720bc/enterprise/pkg/a8n/syncer.go#L71-L105) ### Fix multi-file diffs without extended header in `go-diff` `go-diff` has a bug where it doesn't parse multi-file diffs correctly that have no headers between diffs.
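To make the `go-diff` point concrete (the repo's linked workaround follows right below): a multi-file unified diff normally separates files with a `diff --git` extended header, and a parser keyed only on that header folds several files into one when the header is absent. A rough sketch of the needed fallback, in Python; this illustrates the parsing problem and is not the actual go-diff code:

```python
def split_file_diffs(diff_text: str) -> list[str]:
    """Split a unified diff into per-file chunks.

    A new file normally begins at a `diff --git` extended header. When that
    header is missing, the only boundary is a `--- ` old-file line directly
    followed by a `+++ ` new-file line; it only counts once the current
    chunk already holds a hunk (`@@`), because a deleted line inside a hunk
    can also start with `---`.
    """
    lines = diff_text.splitlines()
    chunks: list[list[str]] = [[]]
    for i, line in enumerate(lines):
        extended_header = line.startswith("diff --git")
        bare_header = (
            line.startswith("--- ")
            and i + 1 < len(lines)
            and lines[i + 1].startswith("+++ ")
            and any(l.startswith("@@") for l in chunks[-1])
        )
        if (extended_header or bare_header) and chunks[-1]:
            chunks.append([])
        chunks[-1].append(line)
    return ["\n".join(chunk) for chunk in chunks]
```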
[See this piece of code](https://github.com/sourcegraph/sourcegraph/blob/5ade8c1edc52688387673eebe0fcd2db033720bc/enterprise/pkg/a8n/run/campaign_type.go#L130-L146) ",1,automation backend tech debt developer ux and ideas for architecture design this is a braindump of all the things we ran into while working on the items on this list range from nice to have to we should do it to we need to do this sooner rather than later store can should we combine repos store and store what about enterprise oss divide here the tests for store are in one large file and have global state the tests are slow because a database needs to be dropped and created for every package in which a dbtest database is used in store the column names are duplicated in every query string we could probably substrings and include them in the other queries probably missing index on campaign job error see store getcampaignplanstatus for how this is queried maybe we don t need it since we query by campaign plan id and iterate over all rows anyway updatecampaignjob needs nulltimecolumn for startedat and startedat architecture and design of code we have service and run runner — they are really similar should we combine them do we keep them separate and have two separate services do we need the run package performance scalability no persistent queue runner executes campaignjobs in goroutines service executes changesetjobs in goroutines why we d want a persistent queue if frontend crashes all progress is lost and we d be in a state we can t get out of at least for previewcampaignplan it would also make it easier for us to offload work from frontend to another process github rate limit and abuse detection right now we don t launch changesetjobs in parallel because we don t want to trigger github s abuse detection and run into the rate limit but since we use github proxy which acts as a semaphore of size for all requests to github we could probably parallelize these requests enterprise and oss we have two packages one in in internal and one in enterprise pkg that always requires a named import we can t just have only one enterprise package because we need to access the type definitions in internal from cmd repo updater repos because repo updater has the sources that can load and create changesets executing campaignplans is it better to use zoekt searcher list in the runner than graphqlbackend reposearch naming of and in repos package naming of and in repos package by now the repos package should be called externalservice or codehost since it does a lot more than just repos it could also probably be extracted from repo updater repos source needs to be called client because it s more than a source of repositories by now graphql layer and in order to create a new graphql query mutation for you need to define it in schema graphql define an interface in cmd frontend graphqlbackend go that will be implemented by the enterprise only package implement the interface in enterprise pkg resolvers write a test in enterprise pkg resolvers which again defines structs that reflect the graphql schema to which we can unmarshall the resolver response the graphql tests in enterprise pkg resolvers resolver test go are really verbose and heavy they all share the same setup they require a lot of setup admin user ext service repos just to have a repoid somewhere they are slow due to the heavy setup they require inline type definitions to unmarshal graphql responses duplicated across tests inconsistencies in type definitions repo ids are sometimes sometimes 
sometimes inconsistent usage of api reponame and api commitid — some use it some don t developer ux when dealing with external services repos and talking to code hosts no predefined set of common httpcli middlewares in httpcli if you have a repo it s cumbersome to get to the right external service client fix multi file diffs without extended header in go diff go diff has a bug where it doesn t parse multi file diffs correctly that have no headers between diffs ,1 4277,15931764656.0,IssuesEvent,2021-04-14 04:11:51,MinaProtocol/mina,https://api.github.com/repos/MinaProtocol/mina,closed,Change structured event parser generator to use maps instead of alists,acceptance-automation,"in support of the new changes made in https://github.com/MinaProtocol/mina/issues/7687 change the parser generator in `src/lib/ppx_register_event/register_event.ml` to use a regular map, so that it's less sensitive to ordering and extra metadata",1.0,"Change structured event parser generator to use maps instead of alists - in support of the new changes made in https://github.com/MinaProtocol/mina/issues/7687 change the parser generator in `src/lib/ppx_register_event/register_event.ml` to use a regular map, so that it's less sensitive to ordering and extra metadata",1,change structured event parser generator to use maps instead of alists in support of the new changes made in change the parser generator in src lib ppx register event register event ml to use a regular map so that it s less sensitive to ordering and extra metadata,1 403364,11839788994.0,IssuesEvent,2020-03-23 17:43:27,cityofaustin/census2020,https://api.github.com/repos/cityofaustin/census2020,closed,Taking The Census Is Easy Text to Take Place of Map on Homepage of Census Website,Priority: ★★★,"@mateoclarke - Can we put the following text in ENG and SPA on the homepage of our website where the Neighborhood Organizing Map once was? Let me know if you need anything else. We do not have a graphic to use at this time, but hoping soon. English: The 2020 Census is easy. 9 Questions in 9 Minutes. You will answer a simple questionnaire about yourself and everyone who is or will be living with you on April 1, 2020. Your home can fill out the Census in three ways: Online, By Phone, and By Mail. Take the Census NOW! (_Links to My2020Census.Gov_) Spanish: Completar el Censo del 2020 es fácil. 9 preguntas en 9 minutos. Usted responderá a un formulario sencillo que hace preguntas sobre usted y todas las personas que viven en su hogar el 1 de abril del 2020. Su hogar puede llenar el Censo de tres maneras: En línea, Por teléfono, y Por correo. Llena el Censo! (_Links to My2020Census.Gov_) ",1.0,"Taking The Census Is Easy Text to Take Place of Map on Homepage of Census Website - @mateoclarke - Can we put the following text in ENG and SPA on the homepage of our website where the Neighborhood Organizing Map once was? Let me know if you need anything else. We do not have a graphic to use at this time, but hoping soon. English: The 2020 Census is easy. 9 Questions in 9 Minutes. You will answer a simple questionnaire about yourself and everyone who is or will be living with you on April 1, 2020. Your home can fill out the Census in three ways: Online, By Phone, and By Mail. Take the Census NOW! (_Links to My2020Census.Gov_) Spanish: Completar el Censo del 2020 es fácil. 9 preguntas en 9 minutos. Usted responderá a un formulario sencillo que hace preguntas sobre usted y todas las personas que viven en su hogar el 1 de abril del 2020. 
Su hogar puede llenar el Censo de tres maneras: En línea, Por teléfono, y Por correo. Llena el Censo! (_Links to My2020Census.Gov_) ",0,taking the census is easy text to take place of map on homepage of census website mateoclarke can we put the following text in eng and spa on the homepage of our website where the neighborhood organizing map once was let me know if you need anything else we do not have a graphic to use at this time but hoping soon english the census is easy questions in minutes you will answer a simple questionnaire about yourself and everyone who is or will be living with you on april your home can fill out the census in three ways online by phone and by mail take the census now links to gov spanish completar el censo del es fácil preguntas en minutos usted responderá a un formulario sencillo que hace preguntas sobre usted y todas las personas que viven en su hogar el de abril del su hogar puede llenar el censo de tres maneras en línea por teléfono y por correo llena el censo links to gov ,0 2948,12856807956.0,IssuesEvent,2020-07-09 08:16:26,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,opened,Check the chrome driver download the URL seems wrong in some cases,automation bug team:automation,"The URL to download the chrome driver seems wrong, the command `$(curl -o- https://chromedriver.storage.googleapis.com/LATEST_RELEASE)/chromedriver_linux64.zip` does something weird with the `LATEST_RELEASE` env var and the `curl` command is `$(curl -o- https://chromedriver.storage.googleapis.com/LATEST_RELEASE)` ``` [2020-07-09T03:17:17.275Z] Step 3/9 : RUN curl -SLO
https://chromedriver.storage.googleapis.com/$(curl -o- https://chromedriver.storage.googleapis.com/LATEST_RELEASE)/chromedriver_linux64.zip && apt-get -yqq update && apt install -yqq --no-install-recommends unzip && unzip -d /usr/local/bin/ chromedriver_linux64.zip chromedriver && rm -rf /var/lib/apt/lists/* [2020-07-09T03:17:17.275Z] ---> Running in 80ad6a6143dc [2020-07-09T03:17:17.275Z] % Total % Received % Xferd Average Speed Time Time Time Current [2020-07-09T03:17:17.275Z] Dload Upload Total Spent Left Speed [2020-07-09T03:17:17.275Z] 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 12 100 12 0 0 444 0 --:--:-- --:--:-- --:--:-- 444 [2020-07-09T03:17:17.275Z] % Total % Received % Xferd Average Speed Time Time Time Current [2020-07-09T03:17:17.275Z] Dload Upload Total Spent Left Speed [2020-07-09T03:17:17.275Z] 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 93 5099k 93 4792k 0 0 63.2M 0 --:--:-- --:--:-- --:--:-- 63.2M [2020-07-09T03:17:17.275Z] curl: (56) OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 104 [2020-07-09T03:17:17.275Z] The command '/bin/sh -c curl -SLO https://chromedriver.storage.googleapis.com/$(curl -o- https://chromedriver.storage.googleapis.com/LATEST_RELEASE)/chromedriver_linux64.zip && apt-get -yqq update && apt install -yqq --no-install-recommends unzip && unzip -d /usr/local/bin/ chromedriver_linux64.zip chromedriver && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 56 ```",1,check the chrome driver download the url seems wrong in some cases the url to download the chrome driver seems wrong the command curl o do something weird with the latest release env var and the curl command is curl o step run curl slo o apt get yqq update apt install yqq no install recommends unzip unzip d usr local bin chromedriver zip chromedriver rm rf var lib apt lists running in total received xferd average speed time time time current dload upload total spent left speed total received xferd average speed time time time current dload upload total spent left speed curl openssl ssl read ssl error syscall errno the command bin sh c curl slo o apt get yqq update apt install yqq no install recommends unzip unzip d usr local bin chromedriver zip chromedriver rm rf var lib apt lists returned a non zero code ,1 28162,8101634032.0,IssuesEvent,2018-08-12 15:52:06,SoftEtherVPN/SoftEtherVPN,https://api.github.com/repos/SoftEtherVPN/SoftEtherVPN,closed,clang static analyzer (scan-build) with cmake,build & release,"please, someone have a look how scan-build can be called from within cmake (I'm going to have a look by myself if I will have spare time)",1.0,"clang static analyzer (scan-build) with cmake - please, someone have a look how scan-build can be called from within cmake (I'm going to have a look by myself if I will have spare time)",0,clang static analyzer scan build with cmake please someone have a look how scan build can be called from within cmake i m going to have a look by myself if i will have spare time ,0 60608,6711524358.0,IssuesEvent,2017-10-13 04:32:34,agonzalez0515/test-test,https://api.github.com/repos/agonzalez0515/test-test,closed,dfgdfgfd,test-1 test-2,"# This is the custom template!!!!!!! ### 🆕🐥☝ First Timers Only. This issue is reserved for people who never contributed to Open Source before. We know that the process of creating a pull request is the biggest barrier for new contributors. This issue is for you 💝 [About First Timers Only](http://www.firsttimersonly.com/). ### 🤔 What you will need to know. Nothing. 
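Returning briefly to the chromedriver issue a few rows above: the `curl: (56)` failure is a connection dropped mid-download, and the `$(curl -o- .../LATEST_RELEASE)` substitution splices whatever that first request returned straight into the URL. A small Python sketch of a more defensive fetch; this is an illustration, not whatever fix the project actually shipped:

```python
import time
import urllib.request

BASE = "https://chromedriver.storage.googleapis.com"


def fetch(url: str, attempts: int = 3) -> bytes:
    """GET a URL, retrying on transient network errors."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except OSError:
            if attempt == attempts:
                raise
            time.sleep(2 ** attempt)  # simple backoff before retrying


version = fetch(f"{BASE}/LATEST_RELEASE").decode().strip()
# Validate the interpolated value instead of splicing it in blindly.
assert version and "/" not in version, f"unexpected LATEST_RELEASE body: {version!r}"
with open("chromedriver_linux64.zip", "wb") as f:
    f.write(fetch(f"{BASE}/{version}/chromedriver_linux64.zip"))
```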
This issue is meant to welcome you to Open Source :) We are happy to walk you through the process. ### 📋 Step by Step - [ ] 🙋 **Claim this issue**: Comment below. Once claimed we add you as contributor to this repository. - [ ] 👌 **Accept our invitation** to this repository. Once accepted, assign yourself to this repository - [ ] 📝 **Update** the file [README.md](https://github.com/agonzalez0515/first-timers-test/blob/master/README.md) in the `first-timers-test` repository (press the little pen Icon) and edit the line as shown below. ```diff @@ -1 +1,2 @@ # First Timers Bot Test +gjkhgijdhg;dhg;dfhg;df ``` - [ ] 💾 **Commit** your changes - [ ] 🔀 **Start a Pull Request**. There are two ways how you can start a pull request: 1. If you are familiar with the terminal or would like to learn it, [here is a great tutorial](https://egghead.io/series/how-to-contribute-to-an-open-source-project-on-github) on how to send a pull request using the terminal. 2. You can [edit files directly in your browser](https://help.github.com/articles/editing-files-in-your-repository/) - [ ] 🏁 **Done** Ask in comments for a review :) ### 🤔❓ Questions Leave a comment below! This issue was created by [First-Timers-Bot](https://github.com/hoodiehq/first-timers-bot). ",2.0,"dfgdfgfd - # This is the custom template!!!!!!! ### 🆕🐥☝ First Timers Only. This issue is reserved for people who never contributed to Open Source before. We know that the process of creating a pull request is the biggest barrier for new contributors. This issue is for you 💝 [About First Timers Only](http://www.firsttimersonly.com/). ### 🤔 What you will need to know. Nothing. This issue is meant to welcome you to Open Source :) We are happy to walk you through the process. ### 📋 Step by Step - [ ] 🙋 **Claim this issue**: Comment below. Once claimed we add you as contributor to this repository. - [ ] 👌 **Accept our invitation** to this repository. Once accepted, assign yourself to this repository - [ ] 📝 **Update** the file [README.md](https://github.com/agonzalez0515/first-timers-test/blob/master/README.md) in the `first-timers-test` repository (press the little pen Icon) and edit the line as shown below. ```diff @@ -1 +1,2 @@ # First Timers Bot Test +gjkhgijdhg;dhg;dfhg;df ``` - [ ] 💾 **Commit** your changes - [ ] 🔀 **Start a Pull Request**. There are two ways how you can start a pull request: 1. If you are familiar with the terminal or would like to learn it, [here is a great tutorial](https://egghead.io/series/how-to-contribute-to-an-open-source-project-on-github) on how to send a pull request using the terminal. 2. You can [edit files directly in your browser](https://help.github.com/articles/editing-files-in-your-repository/) - [ ] 🏁 **Done** Ask in comments for a review :) ### 🤔❓ Questions Leave a comment below! This issue was created by [First-Timers-Bot](https://github.com/hoodiehq/first-timers-bot). 
",0,dfgdfgfd this is the custom template 🆕🐥☝ first timers only this issue is reserved for people who never contributed to open source before we know that the process of creating a pull request is the biggest barrier for new contributors this issue is for you 💝 🤔 what you will need to know nothing this issue is meant to welcome you to open source we are happy to walk you through the process 📋 step by step 🙋 claim this issue comment below once claimed we add you as contributor to this repository 👌 accept our invitation to this repository once accepted assign yourself to this repository 📝 update the file in the first timers test repository press the little pen icon and edit the line as shown below diff first timers bot test gjkhgijdhg dhg dfhg df 💾 commit your changes 🔀 start a pull request there are two ways how you can start a pull request if you are familiar with the terminal or would like to learn it on how to send a pull request using the terminal you can 🏁 done ask in comments for a review 🤔❓ questions leave a comment below this issue was created by ,0 89651,18018130430.0,IssuesEvent,2021-09-16 15:57:42,googleapis/java-video-transcoder,https://api.github.com/repos/googleapis/java-video-transcoder,closed,Trim a video,type: question api: transcoder,"I didn't find a way to trim down a video given start-time, end-time. Let's say the input video is of duration 60secs and I want to cut out the duration between start-time=20secs and end-time=30secs from the input video. Is there any way to do it with current feature set ? I only found cropping and adding overlays. ",1.0,"Trim a video - I didn't find a way to trim down a video given start-time, end-time. Let's say the input video is of duration 60secs and I want to cut out the duration between start-time=20secs and end-time=30secs from the input video. Is there any way to do it with current feature set ? I only found cropping and adding overlays. ",0,trim a video i didn t find a way to trim down a video given start time end time let s say the input video is of duration and i want to cut out the duration between start time and end time from the input video is there any way to do it with current feature set i only found cropping and adding overlays ,0 5811,21259660994.0,IssuesEvent,2022-04-13 01:47:12,theglus/Home-Assistant-Config,https://api.github.com/repos/theglus/Home-Assistant-Config,closed,Troubleshoot vacuum speed for automation.vacuum_schedule,automation Winston,"# Requirements - [x] Ensure `automation.vacuum_schedule` sets speed to `Silent`. - [x] Ensure `automation.leave_vacuum` sets speed to `Turbo`. # Acceptance Testing **Test `automation.vacuum_schedule` action using the UI** - [x] Change vacuum speed in automation to Turbo. - [x] Test action and confirm speed is Turbo. - [x] Change vacuum speed in automation to Silent. - [x] Test action and confirm speed is Silent. **Test in combination with `automation.leave_vacuum`** - [x] Trigger `automation.leave_vacuum`. - [x] Confirm speed is Turbo. - [x] Trigger `automation.vacuum_schedule`. - [x] Confirm speed is Silent.",1.0,"Troubleshoot vacuum speed for automation.vacuum_schedule - # Requirements - [x] Ensure `automation.vacuum_schedule` sets speed to `Silent`. - [x] Ensure `automation.leave_vacuum` sets speed to `Turbo`. # Acceptance Testing **Test `automation.vacuum_schedule` action using the UI** - [x] Change vacuum speed in automation to Turbo. - [x] Test action and confirm speed is Turbo. - [x] Change vacuum speed in automation to Silent. 
- [x] Test action and confirm speed is Silent. **Test in combination with `automation.leave_vacuum`** - [x] Trigger `automation.leave_vacuum`. - [x] Confirm speed is Turbo. - [x] Trigger `automation.vacuum_schedule`. - [x] Confirm speed is Silent.",1,troubleshoot vacuum speed for automation vacuum schedule requirements ensure automation vacuum schedule sets speed to silent ensure automation leave vacuum sets speed to turbo acceptance testing test automation vacuum schedule action using the ui change vacuum speed in automation to turbo test action and confirm speed is turbo change vacuum speed in automation to silent test action and confirm speed is silent test in combination with automation leave vacuum trigger automation leave vacuum confirm speed is turbo trigger automation vacuum schedule confirm speed is silent ,1 88315,15800767179.0,IssuesEvent,2021-04-03 01:11:50,rammatzkvosky/jdb,https://api.github.com/repos/rammatzkvosky/jdb,opened,CVE-2020-36179 (High) detected in jackson-databind-2.8.8.jar,security vulnerability,"## CVE-2020-36179 - High Severity Vulnerability
Vulnerable Library - jackson-databind-2.8.8.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: jdb/pom.xml

Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.8/jackson-databind-2.8.8.jar

Dependency Hierarchy: - :x: **jackson-databind-2.8.8.jar** (Vulnerable Library)

Vulnerability Details

FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to oadd.org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS.
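In plain terms (my gloss, not advisory text): the bug class is that with Jackson's polymorphic default typing enabled, a JSON document can name a class for Jackson to instantiate during deserialization, and `DriverAdapterCPDS` is one such gadget class. The payload shape, sketched in Python with the dangerous property values deliberately elided:

```python
import json

# Illustrative shape of a polymorphic-typing payload only; the property
# values an attacker would supply are deliberately left as "...".
payload = json.dumps(
    ["oadd.org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS", {"...": "..."}]
)
print(payload)
```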

Publish Date: 2021-01-07

URL: CVE-2020-36179

CVSS 3 Score Details (8.1)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High
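Read as a standard CVSS 3.x vector string, the flattened metrics above come out to `CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H` (a reconstruction from the listed values, not text taken from the advisory), which matches the stated 8.1 base score.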

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/FasterXML/jackson-databind/issues/3004

Release Date: 2021-01-07

Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8
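A quick way to triage whether a given jackson-databind version falls in the vulnerable range stated above (2.x before 2.9.10.8), sketched in Python with plain dotted-numeric comparison; full Maven version ordering has more cases, but it is enough for these versions:

```python
def parse(version: str) -> tuple[int, ...]:
    """Turn a dotted version like '2.9.10.8' into a comparable int tuple."""
    return tuple(int(part) for part in version.split("."))


def is_vulnerable(version: str) -> bool:
    """CVE-2020-36179 affects jackson-databind 2.x before 2.9.10.8."""
    v = parse(version)
    return (2,) <= v < (2, 9, 10, 8)


assert is_vulnerable("2.8.8")         # the version flagged in this report
assert not is_vulnerable("2.9.10.8")  # the fixed release
```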

*** :rescue_worker_helmet: Automatic Remediation is available for this issue ",True,"CVE-2020-36179 (High) detected in jackson-databind-2.8.8.jar - ## CVE-2020-36179 - High Severity Vulnerability
Vulnerable Library - jackson-databind-2.8.8.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: jdb/pom.xml

Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.8/jackson-databind-2.8.8.jar

Dependency Hierarchy: - :x: **jackson-databind-2.8.8.jar** (Vulnerable Library)

Vulnerability Details

FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to oadd.org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS.

Publish Date: 2021-01-07

URL: CVE-2020-36179

CVSS 3 Score Details (8.1)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/FasterXML/jackson-databind/issues/3004

Release Date: 2021-01-07

Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8

*** :rescue_worker_helmet: Automatic Remediation is available for this issue ",0,cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file jdb pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to oadd org apache commons dbcp cpdsadapter driveradaptercpds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind basebranches vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to oadd org apache commons dbcp cpdsadapter driveradaptercpds vulnerabilityurl ,0 95862,3961338962.0,IssuesEvent,2016-05-02 12:33:08,AmpersandTarski/Ampersand,https://api.github.com/repos/AmpersandTarski/Ampersand,opened,New Excel Import does not create specialized atoms,bug component:front-end priority:high,"Consider the following: ~~~ CLASSIFY Concern ISA Variable varAgreement :: Variable * Agreement [UNI,TOT] INTERFACE ""#Concerns"" FOR ExcelImporter: I[Concern] CRud BOX [ ""Agreement"" : varAgreement CRUd ] INTERFACE ""Concerns"" FOR User: V[SESSION*Concern] cRud COLS [ ""Concern"" : I cRud , ""Agreement"" : varAgreement cRud ] ~~~ Consider the following Excel sheet ![picture1](https://cloud.githubusercontent.com/assets/8522589/14954384/3a16eb0e-1072-11e6-973b-554df9249ff9.png) After importing this population, they should show up in the interface `Concerns`, but they do not. It appears as if they are created as a `Variable` rather than a `Concern` ",1.0,"New Excel Import does not create specialized atoms - Consider the following: ~~~ CLASSIFY Concern ISA Variable varAgreement :: Variable * Agreement [UNI,TOT] INTERFACE ""#Concerns"" FOR ExcelImporter: I[Concern] CRud BOX [ ""Agreement"" : varAgreement CRUd ] INTERFACE ""Concerns"" FOR User: V[SESSION*Concern] cRud COLS [ ""Concern"" : I cRud , ""Agreement"" : varAgreement cRud ] ~~~ Consider the following Excel sheet ![picture1](https://cloud.githubusercontent.com/assets/8522589/14954384/3a16eb0e-1072-11e6-973b-554df9249ff9.png) After importing this population, they should show up in the interface `Concerns`, but they do not. 
It appears as if they are created as a `Variable` rather than a `Concern` ",0,new excel import does not create specialized atoms consider the following classify concern isa variable varagreement variable agreement interface concerns for excelimporter i crud box agreement varagreement crud interface concerns for user v crud cols concern i crud agreement varagreement crud consider the following excel sheet after importing this population they should show up in the interface concerns but they do not it appears as if they are created as a variable rather than a concern ,0 1629,10471810260.0,IssuesEvent,2019-09-23 08:46:36,mozilla-mobile/android-components,https://api.github.com/repos/mozilla-mobile/android-components,closed,"Add an automated check for license headers on top of new files (lint rule, detekt check, ..)",Hacktoberfest help wanted 🤖 automation,We should have an automated check that verifies that new files have the license headers on top - if not the PRs fail automatically.,1.0,"Add an automated check for license headers on top of new files (lint rule, detekt check, ..) - We should have an automated check that verifies that new files have the license headers on top - if not the PRs fail automatically.",1,add an automated check for license headers on top of new files lint rule detekt check we should have an automated check that verifies that new files have the license headers on top if not the prs fail automatically ,1 1684,10582609973.0,IssuesEvent,2019-10-08 11:58:52,IBM/FHIR,https://api.github.com/repos/IBM/FHIR,closed,Replace xmp tag to create valid javadoc ,automation,"Replace xmp tag to create valid javadoc Replace `xmp` tag with invalid javadoc ``` fhir-client/src/main/java/com/ibm/fhir/client/FHIRRequestHeader.java:58: * fhir-client/src/main/java/com/ibm/fhir/client/FHIRRequestHeader.java:64: * fhir-config/src/main/java/com/ibm/fhir/config/PropertyGroup.java:36: * fhir-config/src/main/java/com/ibm/fhir/config/PropertyGroup.java:38: * fhir-config/src/main/java/com/ibm/fhir/config/PropertyGroup.java:344: * fhir-config/src/main/java/com/ibm/fhir/config/PropertyGroup.java:352: * ```",1.0,"Replace xmp tag to create valid javadoc - Replace xmp tag to create valid javadoc Replace `xmp` tag with invalid javadoc ``` fhir-client/src/main/java/com/ibm/fhir/client/FHIRRequestHeader.java:58: * fhir-client/src/main/java/com/ibm/fhir/client/FHIRRequestHeader.java:64: * fhir-config/src/main/java/com/ibm/fhir/config/PropertyGroup.java:36: * fhir-config/src/main/java/com/ibm/fhir/config/PropertyGroup.java:38: * fhir-config/src/main/java/com/ibm/fhir/config/PropertyGroup.java:344: * fhir-config/src/main/java/com/ibm/fhir/config/PropertyGroup.java:352: * ```",1,replace xmp tag to create valid javadoc replace xmp tag to create valid javadoc replace xmp tag with invalid javadoc fhir client src main java com ibm fhir client fhirrequestheader java fhir client src main java com ibm fhir client fhirrequestheader java fhir config src main java com ibm fhir config propertygroup java fhir config src main java com ibm fhir config propertygroup java fhir config src main java com ibm fhir config propertygroup java fhir config src main java com ibm fhir config propertygroup java ,1 3851,14723239529.0,IssuesEvent,2021-01-06 00:02:37,jfournierphoto/Home-Assistant-Config,https://api.github.com/repos/jfournierphoto/Home-Assistant-Config,closed,Add device monitoring automation,Automation enhancement,"Add a generic automation that monitors a group of devices and is triggered when one goes down, it will then notify me
so I can follow up on the issue. Group will be named: monitored devices",1.0,"Add device monitoring automation - Add a generic automation that monitors a group of devices and is triggered when one goes down, it will then notify me so I can follow up on the issue. Group will be named: monitored devices",1,add device monitoring automation add a generic automation that monitors a group of devices and is triggered when one goes down it will then notify me so i can follow up on the issue group will be named monitored devices,1 9560,29810516749.0,IssuesEvent,2023-06-16 14:43:59,pharmaverse/falcon,https://api.github.com/repos/pharmaverse/falcon,opened,Evaluate requirements to publish `falcon` on CRAN,automation,"We would like to publish the `falcon` package to CRAN, which would allow for more widespread usage of the package. To do so, we will need to evaluate the requirements for publishing to CRAN and check with the automation team to ensure automation checks account for this. The `tern` package will be published to CRAN in the coming weeks, after which all of `falcon`'s package dependencies will be available on CRAN. Tentative date for publishing to CRAN would be in Fall 2023.",1.0,"Evaluate requirements to publish `falcon` on CRAN - We would like to publish the `falcon` package to CRAN, which would allow for more widespread usage of the package. To do so, we will need to evaluate the requirements for publishing to CRAN and check with the automation team to ensure automation checks account for this. The `tern` package will be published to CRAN in the coming weeks, after which all of `falcon`'s package dependencies will be available on CRAN. Tentative date for publishing to CRAN would be in Fall 2023.",1,evaluate requirements to publish falcon on cran we would like to publish the falcon package to cran which would allow for more widespread usage of the package to do so we will need to evaluate the requirements for publishing to cran and check with the automation team to ensure automation checks account for this the tern package will be published to cran in the coming weeks after which all of falcon s package dependencies will be available on cran tentative date for publishing to cran would be in fall ,1 364591,10766114609.0,IssuesEvent,2019-11-01 12:58:45,decred/pi-ui,https://api.github.com/repos/decred/pi-ui,closed,Update RadioButton style according to design specs,component-improvement good first issue priority: medium,"The RadioButton needs to be modified according to the design specs: https://xd.adobe.com/spec/4e7eb732-a445-46b0-4f7e-4152097a10c8-384f/screen/c878836b-546e-40f1-89c2-57911c2c3161/Proposals-admin-search-1",1.0,"Update RadioButton style according to design specs - The RadioButton needs to be modified according to the design specs: https://xd.adobe.com/spec/4e7eb732-a445-46b0-4f7e-4152097a10c8-384f/screen/c878836b-546e-40f1-89c2-57911c2c3161/Proposals-admin-search-1",0,update radiobutton style according to design specs the radiobutton needs to be modified according to the design specs ,0 6972,24074995333.0,IssuesEvent,2022-09-18 17:27:35,home-assistant/core,https://api.github.com/repos/home-assistant/core,closed,Automation evaluates disabled condition in OR to true,integration: automation,"### The problem In this automation trace, my disabled trigger condition was evaluated to true rather than false. Specifically, `conditions/0/conditions/1` is shown as true in the trace despite it having `enabled: false`.
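An aside before the rest of this report: the semantics the reporter expects are that a condition carrying `enabled: false` simply drops out of the OR before evaluation. A toy Python sketch of that expectation, not Home Assistant's actual condition engine:

```python
def evaluate_or(conditions: list[dict], test) -> bool:
    """OR over conditions, skipping any marked `enabled: false`."""
    active = [c for c in conditions if c.get("enabled", True)]
    return any(test(c) for c in active)


conditions = [
    {"id": "almost_warmer_outside"},
    {"id": "temp_rising_quickly", "enabled": False},  # must not count
]
# With a tester that matches only the disabled condition, the OR is False:
assert evaluate_or(conditions, lambda c: c["id"] == "temp_rising_quickly") is False
```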
The automation was triggered by id `temp_not_changing` but executed a choose branch gated by trigger id `almost_warmer_outside`. ### What version of Home Assistant Core has the issue? 2022.9.4 ### What was the last working version of Home Assistant Core? _No response_ ### What type of installation are you running? Home Assistant OS ### Integration causing the issue Automation ### Link to integration documentation on our website _No response_ ### Diagnostics information [Home Assistant 2022-09-18 08_02_54.txt](https://github.com/home-assistant/core/files/9594333/Home.Assistant.2022-09-18.08_02_54.txt) ### Example YAML snippet ```yaml choose: - conditions: - condition: or conditions: - condition: trigger id: almost_warmer_outside - condition: trigger id: temp_rising_quickly enabled: false - condition: template value_template: >- {{ as_local(as_datetime(states('input_datetime.last_window_temperature_notification'))).day != now().day }} sequence: ``` ### Anything in the logs that might be useful for us? _No response_ ### Additional information _No response_",1.0,"Automation evaluates disabled condition in OR to true - ### The problem In this automation trace, my disabled trigger condition was evaluated to true rather than false. Specifically, `conditions/0/conditions/1` is shown as true in the trace despite it having `enabled: false`. The automation was triggered by id `temp_not_changing` but executed a choose branch gated by trigger id `almost_warmer_outside`. ### What version of Home Assistant Core has the issue? 2022.9.4 ### What was the last working version of Home Assistant Core? _No response_ ### What type of installation are you running? Home Assistant OS ### Integration causing the issue Automation ### Link to integration documentation on our website _No response_ ### Diagnostics information [Home Assistant 2022-09-18 08_02_54.txt](https://github.com/home-assistant/core/files/9594333/Home.Assistant.2022-09-18.08_02_54.txt) ### Example YAML snippet ```yaml choose: - conditions: - condition: or conditions: - condition: trigger id: almost_warmer_outside - condition: trigger id: temp_rising_quickly enabled: false - condition: template value_template: >- {{ as_local(as_datetime(states('input_datetime.last_window_temperature_notification'))).day != now().day }} sequence: ``` ### Anything in the logs that might be useful for us? 
_No response_ ### Additional information _No response_",1,automation evaluates disabled condition in or to true the problem in this automation trace my disabled trigger condition was evaluated to true rather than false specifically conditions conditions is shown as true in the trace despite it having enabled false the automation was triggered by id temp not changing but executed a choose branch gated by trigger id almost warmer outside what version of home assistant core has the issue what was the last working version of home assistant core no response what type of installation are you running home assistant os integration causing the issue automation link to integration documentation on our website no response diagnostics information example yaml snippet yaml choose conditions condition or conditions condition trigger id almost warmer outside condition trigger id temp rising quickly enabled false condition template value template as local as datetime states input datetime last window temperature notification day now day sequence anything in the logs that might be useful for us no response additional information no response ,1 1882,11029919609.0,IssuesEvent,2019-12-06 14:48:43,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,closed,a8n: Implement retryCampaign mutation,automation,"This is a follow-up to [RFC 42](https://docs.google.com/document/d/1j85PoL6NOzLX_PHFzBQogZcnttYK0BXj9XnrxF3DYmA/edit). Right now, when the `createCampaign` mutation with a given `CampaignPlan` ID fails due to various reasons (GitHub not reachable, token invalid, gitserver down, ...) the conversion of `ChangesetJobs` into `Changesets` in the `(&a8n.Service).runChangesetJob` ends with the `ChangesetJobs` having its `Error` field populated. [See the code here](https://sourcegraph.com/github.com/sourcegraph/sourcegraph@034cee47c83a7734cf04a4b5720717665b5a69db/-/blob/enterprise/pkg/a8n/service.go#L114) What we want is a `retryCampaign` mutation that * takes in a `Campaign` ID * loads all the failed `ChangesetJobs` (definition: `finished_at` is null, or `error` is non-blank, or `changeset_id` is null) * uses `(&a8n.Service).runChangesetJob` to try again to create a commit from the given diff in the connected CampaignJob, push the commit, open a pull request on the codehost, save the pull request as an external service **Important**: for that to work, the `runChangesetJob` method must be idempotent! That means: if it runs twice with the same `ChangesetJob` is **cannot create duplicate pull requests!**. That means it needs to check that new commits are not added to same branch, check for `ErrAlreadyExists` response from code hosts, early-exit if a `Changset` with the given `changeset_job_id` exists, etc. ",1.0,"a8n: Implement retryCampaign mutation - This is a follow-up to [RFC 42](https://docs.google.com/document/d/1j85PoL6NOzLX_PHFzBQogZcnttYK0BXj9XnrxF3DYmA/edit). Right now, when the `createCampaign` mutation with a given `CampaignPlan` ID fails due to various reasons (GitHub not reachable, token invalid, gitserver down, ...) the conversion of `ChangesetJobs` into `Changesets` in the `(&a8n.Service).runChangesetJob` ends with the `ChangesetJobs` having its `Error` field populated. 
[See the code here](https://sourcegraph.com/github.com/sourcegraph/sourcegraph@034cee47c83a7734cf04a4b5720717665b5a69db/-/blob/enterprise/pkg/a8n/service.go#L114) What we want is a `retryCampaign` mutation that * takes in a `Campaign` ID * loads all the failed `ChangesetJobs` (definition: `finished_at` is null, or `error` is non-blank, or `changeset_id` is null) * uses `(&a8n.Service).runChangesetJob` to try again to create a commit from the given diff in the connected CampaignJob, push the commit, open a pull request on the codehost, save the pull request as an external service **Important**: for that to work, the `runChangesetJob` method must be idempotent! That means: if it runs twice with the same `ChangesetJob` it **cannot create duplicate pull requests!** That means it needs to check that new commits are not added to the same branch, check for `ErrAlreadyExists` response from code hosts, early-exit if a `Changeset` with the given `changeset_job_id` exists, etc. ",1.0,"a8n: Implement retryCampaign mutation - This is a follow-up to [RFC 42](https://docs.google.com/document/d/1j85PoL6NOzLX_PHFzBQogZcnttYK0BXj9XnrxF3DYmA/edit). Right now, when the `createCampaign` mutation with a given `CampaignPlan` ID fails due to various reasons (GitHub not reachable, token invalid, gitserver down, ...) the conversion of `ChangesetJobs` into `Changesets` in the `(&a8n.Service).runChangesetJob` ends with the `ChangesetJobs` having its `Error` field populated.
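A compact sketch of the idempotency guard this issue asks for, written in Python over a hypothetical `store` interface (the real implementation would live in the Go `a8n.Service`):

```python
def retry_campaign(store, campaign_id, run_changeset_job):
    """Re-run only the failed ChangesetJobs of a campaign, idempotently."""
    for job in store.list_changeset_jobs(campaign_id):  # hypothetical accessor
        failed = job.finished_at is None or job.error or job.changeset_id is None
        if not failed:
            continue
        # Early-exit guard: if a Changeset already exists for this job,
        # a pull request was opened before; never create a duplicate.
        if store.get_changeset_by_job_id(job.id) is not None:  # hypothetical accessor
            continue
        run_changeset_job(job)  # create commit, push, open PR, save changeset
```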
``` GET file:///android_asset/www/%7B%7B(profile.avatar_src_prefix%7C%7C'')%20+%20profile.avatar_file_name%7D%7D net::ERR_FILE_NOT_FOUND jquery.js:5221(anonymous function) jquery.js:5221n.fn.extend.domManip jquery.js:5414n.fn.extend.append jquery.js:5218I.appendViewElement ionic.bundle.min.js:444H.render ionic.bundle.min.js:443H.init ionic.bundle.min.js:443I.render ionic.bundle.min.js:444I.register ionic.bundle.min.js:444a ionic.bundle.min.js:446(anonymous function) ionic.bundle.min.js:446n.$broadcast ionic.bundle.min.js:168x.transition.I.then.x.transition.x.transition ionic.bundle.min.js:420(anonymous function) ionic.bundle.min.js:151n.$eval ionic.bundle.min.js:165n.$digest ionic.bundle.min.js:163n.$apply ionic.bundle.min.js:166l ionic.bundle.min.js:118F ionic.bundle.min.js:122K.onload ```",1.0,"My Profile: file:///android_asset/www/%7B%7B(profile.avatar_src_prefix%7C%7C'')%20+%20profile.avatar_file_name%7D%7D net::ERR_FILE_NOT_FOUND - Happens on android 5.0.1 when entering My profile (for user which has not profile photo). does not seem to harm UX. ``` GET file:///android_asset/www/%7B%7B(profile.avatar_src_prefix%7C%7C'')%20+%20profile.avatar_file_name%7D%7D net::ERR_FILE_NOT_FOUND jquery.js:5221(anonymous function) jquery.js:5221n.fn.extend.domManip jquery.js:5414n.fn.extend.append jquery.js:5218I.appendViewElement ionic.bundle.min.js:444H.render ionic.bundle.min.js:443H.init ionic.bundle.min.js:443I.render ionic.bundle.min.js:444I.register ionic.bundle.min.js:444a ionic.bundle.min.js:446(anonymous function) ionic.bundle.min.js:446n.$broadcast ionic.bundle.min.js:168x.transition.I.then.x.transition.x.transition ionic.bundle.min.js:420(anonymous function) ionic.bundle.min.js:151n.$eval ionic.bundle.min.js:165n.$digest ionic.bundle.min.js:163n.$apply ionic.bundle.min.js:166l ionic.bundle.min.js:118F ionic.bundle.min.js:122K.onload ```",0,my profile file android asset www profile avatar src prefix avatar file name net err file not found happens on android when entering my profile for user which has not profile photo does not seem to harm ux get file android asset www profile avatar src prefix avatar file name net err file not found jquery js anonymous function jquery js fn extend dommanip jquery js fn extend append jquery js appendviewelement ionic bundle min js render ionic bundle min js init ionic bundle min js render ionic bundle min js register ionic bundle min js ionic bundle min js anonymous function ionic bundle min js broadcast ionic bundle min js transition i then x transition x transition ionic bundle min js anonymous function ionic bundle min js eval ionic bundle min js digest ionic bundle min js apply ionic bundle min js ionic bundle min js ionic bundle min js onload ,0 2116,11425170776.0,IssuesEvent,2020-02-03 19:17:15,rancher/rancher,https://api.github.com/repos/rancher/rancher,closed,Automation - add rbac test cases for etcd backups,setup/automation,"**What kind of request is this (question/bug/enhancement/feature request):** Enhancement We need to add RBAC test cases in the automation framework for the etcd backups functionality.",1.0,"Automation - add rbac test cases for etcd backups - **What kind of request is this (question/bug/enhancement/feature request):** Enhancement We need to add RBAC test cases in the automation framework for the etcd backups functionality.",1,automation add rbac test cases for etcd backups what kind of request is this question bug enhancement feature request enhancement we need to add rbac test cases in the automation framework for the etcd 
backups functionality ,1 106393,4271591864.0,IssuesEvent,2016-07-13 11:45:29,PowerlineApp/powerline-mobile,https://api.github.com/repos/PowerlineApp/powerline-mobile,closed,My Profile: file:///android_asset/www/%7B%7B(profile.avatar_src_prefix%7C%7C'')%20+%20profile.avatar_file_name%7D%7D net::ERR_FILE_NOT_FOUND,bug low-priority,"Happens on android 5.0.1 when entering My profile (for a user which has no profile photo). Does not seem to harm UX.
- [X] I have [read the FAQ](https://typescript-eslint.io/docs/linting/troubleshooting) and my problem is not listed. ### Suggested Changes We don't support TypeScript nightlies, beta releases, or release candidates (RCs) - but do generally use the RCs as the sign to start working on support for the upcoming new version. Let's document this. A few references: * #5227 * #5914 * #5915 * https://github.com/typescript-eslint/typescript-eslint/pull/5915#discussion_r1012292363 ### Affected URL(s) https://typescript-eslint.io/docs/maintenance/typescript-versions, maybe?",1.0,"Docs: Maintenance > new TypeScript versions - ### Before You File a Documentation Request Please Confirm You Have Done The Following... - [X] I have looked for existing [open or closed documentation requests](https://github.com/typescript-eslint/typescript-eslint/issues?q=is%3Aissue+label%3Adocumentation) that match my proposal. - [X] I have [read the FAQ](https://typescript-eslint.io/docs/linting/troubleshooting) and my problem is not listed. ### Suggested Changes We don't support TypeScript nightlies, beta releases, or release candidates (RCs) - but do generally use the RCs as the sign to start working on support for the upcoming new version. Let's document this. A few references: * #5227 * #5914 * #5915 * https://github.com/typescript-eslint/typescript-eslint/pull/5915#discussion_r1012292363 ### Affected URL(s) https://typescript-eslint.io/docs/maintenance/typescript-versions, maybe?",0,docs maintenance new typescript versions before you file a documentation request please confirm you have done the following i have looked for existing that match my proposal i have and my problem is not listed suggested changes we don t support typescript nightlies beta releases or release candidates rcs but do generally use the rcs as the sign to start working on support for the upcoming new version let s document this a few references affected url s maybe ,0 6851,23974705570.0,IssuesEvent,2022-09-13 10:35:22,mlcommons/ck,https://api.github.com/repos/mlcommons/ck,opened,[CK2/CM] Handle multiple python dependencies,question cm-script-automation,"Suppose a script is having dependency on onnxruntime and pytorch and system default python is 3.10 - onnxruntime requires python <=3.9 - pytorch works for python3.10 How can we ensure that python version remains the same for all the python dependencies of the script?",1.0,"[CK2/CM] Handle multiple python dependencies - Suppose a script is having dependency on onnxruntime and pytorch and system default python is 3.10 - onnxruntime requires python <=3.9 - pytorch works for python3.10 How can we ensure that python version remains the same for all the python dependencies of the script?",1, handle multiple python dependencies suppose a script is having dependency on onnxruntime and pytorch and system default python is onnxruntime requires python pytorch works for how can we ensure that python version remains the same for all the python dependencies of the script ,1 238348,18239131392.0,IssuesEvent,2021-10-01 10:40:44,org-roam/org-roam-bibtex,https://api.github.com/repos/org-roam/org-roam-bibtex,closed,orb-insert-interface customization issue,1. bug 1. documentation,"**Describe the bug** A clear and concise description of what the bug is. I am new to ORB, and am trying to have a non-generic `orb-insert-interface`. I use Doom, with `ivy`, so `ivy-bibtex` seems like a good choice. I can confirm that `ivy-bibtex` is a valid and working command in my installation. I just can't use it for ORB. 
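For reference, the approach debated in the rest of this report (the full configuration appears below) can be written so the value is set through the customize machinery rather than a bare `setq`. A minimal sketch, assuming Doom's `use-package!` passes the standard `:custom` keyword through to `use-package` and that `ivy-bibtex` is installed; both are assumptions, not facts confirmed by the report:

```elisp
;; Hypothetical variant of the configuration discussed in this report.
;; :custom applies the value via the customize machinery, which runs any
;; setter attached to the variable and avoids a bare setq firing before
;; the package has defined (or after it re-initializes) the variable.
(use-package! org-roam-bibtex
  :after org-roam
  :hook (org-roam-mode . org-roam-bibtex-mode)
  :custom
  (orb-insert-interface 'ivy-bibtex) ; assumption: ivy-bibtex is available
  :config
  (require 'org-ref))
```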
Following [the docs](https://github.com/org-roam/org-roam-bibtex/blob/master/doc/orb-manual.org#orb-insert-configuration), they say that the variable `orb-insert-interface` is to be set to the preferred interface. In my init.el, I add the line `(setq orb-insert-interface 'ivy-bibtex)`, however I find that I am still getting the generic interface. Is the `setq` approach the correct one? It is not clear in the docs. If there is another correct way of doing it, can it be shared and added to the docs? **To Reproduce** Steps to reproduce the behavior: 1. Run command `orb-insert` 2. Observe generic interface **Expected behavior** A clear and concise description of what you expected to happen. **ORB configuration** #### init.el ```elisp (use-package! org-roam-bibtex :after org-roam :hook (org-roam-mode . org-roam-bibtex-mode) :config (require 'org-ref) (setq orb-insert-interface 'ivy-bibtex) ) ``` #### packages.el ```elisp (package! org-roam-bibtex :recipe (:host github :repo ""org-roam/org-roam-bibtex"")) ;; When using org-roam via the `+roam` flag (unpin! org-roam) ;; When using bibtex-completion via the `biblio` module (unpin! bibtex-completion helm-bibtex ivy-bibtex) ``` **Environment (please complete the following information):** - ORB commit or MELPA package version: 0.5.1 - Org Roam commit or MELPA package version: 1.2.3 - Org Mode version: 1.2.3 - Emacs framework: Doom v2.0.9 - Emacs version 27.1 - OS: Linux - ",1.0,"orb-insert-interface customization issue - **Describe the bug** A clear and concise description of what the bug is. I am new to ORB, and am trying to have a non-generic `orb-insert-interface`. I use Doom, with `ivy`, so `ivy-bibtex` seems like a good choice. I can confirm that `ivy-bibtex` is a valid and working command in my installation. I just can't use it for ORB. Following [the docs](https://github.com/org-roam/org-roam-bibtex/blob/master/doc/orb-manual.org#orb-insert-configuration), they say that the variable `orb-insert-interface` is to be set to the preferred interface. In my init.el, I add the line `(setq orb-insert-interface 'ivy-bibtex)`, however I find that I am still getting the generic interface. Is the `setq` approach the correct one? It is not clear in the docs. If there is another correct way of doing it, can it be shared and added to the docs? **To Reproduce** Steps to reproduce the behavior: 1. Run command `orb-insert` 2. Observe generic interface **Expected behavior** A clear and concise description of what you expected to happen. **ORB configuration** #### init.el ```elisp (use-package! org-roam-bibtex :after org-roam :hook (org-roam-mode . org-roam-bibtex-mode) :config (require 'org-ref) (setq orb-insert-interface 'ivy-bibtex) ) ``` #### packages.el ```elisp (package! org-roam-bibtex :recipe (:host github :repo ""org-roam/org-roam-bibtex"")) ;; When using org-roam via the `+roam` flag (unpin! org-roam) ;; When using bibtex-completion via the `biblio` module (unpin! 
bibtex-completion helm-bibtex ivy-bibtex) ``` **Environment (please complete the following information):** - ORB commit or MELPA package version: 0.5.1 - Org Roam commit or MELPA package version: 1.2.3 - Org Mode version: 1.2.3 - Emacs framework: Doom v2.0.9 - Emacs version 27.1 - OS: Linux - ",0,orb insert interface customization issue describe the bug a clear and concise description of what the bug is i am new to orb and am trying to have a non generic orb insert interface i use doom with ivy so ivy bibtex seems like a good choice i can confirm that ivy bibtex is a valid and working command in my installation i just can t use it for orb following they say that the variable orb insert interface is to be set to the preferred interface in my init el i add the line setq orb insert interface ivy bibtex however i find that i am still getting the generic interface is the setq approach the correct one it is not clear in the docs if there is another correct way of doing it can it be shared and added to the docs to reproduce steps to reproduce the behavior run command orb insert observe generic interface expected behavior a clear and concise description of what you expected to happen orb configuration init el elisp use package org roam bibtex after org roam hook org roam mode org roam bibtex mode config require org ref setq orb insert interface ivy bibtex packages el elisp package org roam bibtex recipe host github repo org roam org roam bibtex when using org roam via the roam flag unpin org roam when using bibtex completion via the biblio module unpin bibtex completion helm bibtex ivy bibtex environment please complete the following information orb commit or melpa package version org roam commit or melpa package version org mode version emacs framework doom emacs version os linux ,0 5719,5953663063.0,IssuesEvent,2017-05-27 09:59:22,frappe/erpnext,https://api.github.com/repos/frappe/erpnext,closed, Login attempts Administrator Warning!,security,"Hi, How can we Administrators of ERPNext know that a user has tried more than 3/5 times to login and did not succeeded? By checking the Authentication Log but why does not the system informs the Admin of the wrong attempts and lock the account for XXXX amount of time. This should be a security concern.",True," Login attempts Administrator Warning! - Hi, How can we Administrators of ERPNext know that a user has tried more than 3/5 times to login and did not succeeded? By checking the Authentication Log but why does not the system informs the Admin of the wrong attempts and lock the account for XXXX amount of time. 
This should be a security concern.",0, login attempts administrator warning hi how can we administrators of erpnext know that a user has tried more than times to login and did not succeeded by checking the authentication log but why does not the system informs the admin of the wrong attempts and lock the account for xxxx amount of time this should be a security concern ,0 566830,16831553203.0,IssuesEvent,2021-06-18 06:03:43,netdata/netdata-cloud,https://api.github.com/repos/netdata/netdata-cloud,closed,"[BUG] When I click on an alarm, I usually see an empty chart",bug internal submit priority/high visualizations-team-bugs,"Regardless of an individual agent's retention, users should NEVER see this: ![image](https://user-images.githubusercontent.com/43294513/119701001-c2136980-be08-11eb-949f-9b9a100282e7.png) ",1.0,"[BUG] When I click on an alarm, I usually see an empty chart - Regardless of an individual agent's retention, users should NEVER see this: ![image](https://user-images.githubusercontent.com/43294513/119701001-c2136980-be08-11eb-949f-9b9a100282e7.png) ",0, when i click on an alarm i usually see an empty chart regardless of an individual agent s retention users should never see this ,0 287782,24861031985.0,IssuesEvent,2022-10-27 08:17:56,astropy/astropy,https://api.github.com/repos/astropy/astropy,closed,TST: Transient PytestUnraisableExceptionWarning in cloud.rst,testing Docs io.fits Bug ¯\_(ツ)_/¯,"- [ ] Revert #13909 Occasionally, this will pop up but not always. Usually it is in this job (daily cron): https://github.com/astropy/astropy/blob/e32652cf0e1d9dcfda9ec9ae1193e7ea3ee40788/.github/workflows/ci_cron_daily.yml#L45-L50 ``` _____________________________ [doctest] cloud.rst ______________________________ cls = func = . at 0x7f1c4972beb0> when = 'call' reraise = (, ) @classmethod def from_call( cls, func: ""Callable[[], TResult]"", when: ""Literal['collect', 'setup', 'call', 'teardown']"", reraise: Optional[ Union[Type[BaseException], Tuple[Type[BaseException], ...]] ] = None, ) -> ""CallInfo[TResult]"": """"""Call func, wrapping the result in a CallInfo. :param func: The function to call. Called without arguments. :param when: The phase in which the function is called. :param reraise: Exception or exceptions that shall propagate if raised by the function, instead of being wrapped in the CallInfo. """""" excinfo = None start = timing.time() precise_start = timing.perf_counter() try: > result: Optional[TResult] = func() /home/runner/work/astropy/astropy/.tox/py310-test-alldeps-predeps/lib/python3.10/site-packages/_pytest/runner.py:339: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > lambda: ihook(item=item, **kwds), when=when, reraise=reraise ) /home/runner/work/astropy/astropy/.tox/py310-test-alldeps-predeps/lib/python3.10/site-packages/_pytest/runner.py:260: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <_HookCaller 'pytest_runtest_call'>, args = () kwargs = {'item': }, argname = 'item' firstresult = False def __call__(self, *args, **kwargs): if args: raise TypeError(""hook calling supports only keyword arguments"") assert not self.is_historic() # This is written to avoid expensive operations when not needed. 
if self.spec: for argname in self.spec.argnames: if argname not in kwargs: notincall = tuple(set(self.spec.argnames) - kwargs.keys()) warnings.warn( ""Argument(s) {} which are declared in the hookspec "" ""can not be found in this hook call"".format(notincall), stacklevel=2, ) break firstresult = self.spec.opts.get(""firstresult"") else: firstresult = False > return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) /home/runner/work/astropy/astropy/.tox/py310-test-alldeps-predeps/lib/python3.10/site-packages/pluggy/_hooks.py:265: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <_pytest.config.PytestPluginManager object at 0x7f1c809db790> hook_name = 'pytest_runtest_call' methods = [>, ...] kwargs = {'item': }, firstresult = False def _hookexec(self, hook_name, methods, kwargs, firstresult): # called from all hookcaller instances. # enable_tracing will set its own wrapping function at self._inner_hookexec > return self._inner_hookexec(hook_name, methods, kwargs, firstresult) /home/runner/work/astropy/astropy/.tox/py310-test-alldeps-predeps/lib/python3.10/site-packages/pluggy/_manager.py:80: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ hook_name = 'pytest_runtest_call' hook_impls = [>, ...] caller_kwargs = {'item': }, firstresult = False def _multicall(hook_name, hook_impls, caller_kwargs, firstresult): """"""Execute a call into multiple python functions/methods and return the result(s). ``caller_kwargs`` comes from _HookCaller.__call__(). """""" __tracebackhide__ = True results = [] excinfo = None try: # run impl and wrapper setup functions in a loop teardowns = [] try: for hook_impl in reversed(hook_impls): try: args = [caller_kwargs[argname] for argname in hook_impl.argnames] except KeyError: for argname in hook_impl.argnames: if argname not in caller_kwargs: raise HookCallError( f""hook call must provide argument {argname!r}"" ) if hook_impl.hookwrapper: try: gen = hook_impl.function(*args) next(gen) # first yield teardowns.append(gen) except StopIteration: _raise_wrapfail(gen, ""did not yield"") else: res = hook_impl.function(*args) if res is not None: results.append(res) if firstresult: # halt further impl calls break except BaseException: excinfo = sys.exc_info() finally: if firstresult: # first result hooks return a single value outcome = _Result(results[0] if results else None, excinfo) else: outcome = _Result(results, excinfo) # run all wrapper post-yield blocks for gen in reversed(teardowns): try: > gen.send(outcome) /home/runner/work/astropy/astropy/.tox/py310-test-alldeps-predeps/lib/python3.10/site-packages/pluggy/_callers.py:55: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ @pytest.hookimpl(hookwrapper=True, tryfirst=True) def pytest_runtest_call() -> Generator[None, None, None]: > yield from unraisable_exception_runtest_hook() /home/runner/work/astropy/astropy/.tox/py310-test-alldeps-predeps/lib/python3.10/site-packages/_pytest/unraisableexception.py:88: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ def unraisable_exception_runtest_hook() -> Generator[None, None, None]: with catch_unraisable_exception() as cm: yield if cm.unraisable: if cm.unraisable.err_msg is not None: err_msg = cm.unraisable.err_msg else: err_msg = ""Exception ignored in"" msg = f""{err_msg}: {cm.unraisable.object!r}\n\n"" msg += """".join( traceback.format_exception( cm.unraisable.exc_type, cm.unraisable.exc_value, cm.unraisable.exc_traceback, ) ) 
> warnings.warn(pytest.PytestUnraisableExceptionWarning(msg)) E pytest.PytestUnraisableExceptionWarning: Exception ignored in: E E Traceback (most recent call last): E File ""/opt/hostedtoolcache/Python/3.10.8/x64/lib/python3.10/asyncio/sslproto.py"", line 320, in __del__ E _warn(f""unclosed transport {self!r}"", ResourceWarning, source=self) E ResourceWarning: unclosed transport /home/runner/work/astropy/astropy/.tox/py310-test-alldeps-predeps/lib/python3.10/site-packages/_pytest/unraisableexception.py:78: PytestUnraisableExceptionWarning =========================== short test summary info ============================ FAILED ../../docs/io/fits/usage/cloud.rst::cloud.rst - pytest.PytestUnraisableExceptionWarning: Exception ignored in: Traceback (most recent call last): File ""/opt/hostedtoolcache/Python/3.10.8/x64/lib/python3.10/asyncio/sslproto.py"", line 320, in __del__ _warn(f""unclosed transport {self!r}"", ResourceWarning, source=self) ResourceWarning: unclosed transport ```",1.0,"TST: Transient PytestUnraisableExceptionWarning in cloud.rst - - [ ] Revert #13909 Occasionally, this will pop up but not always. Usually it is in this job (daily cron): https://github.com/astropy/astropy/blob/e32652cf0e1d9dcfda9ec9ae1193e7ea3ee40788/.github/workflows/ci_cron_daily.yml#L45-L50 ``` _____________________________ [doctest] cloud.rst ______________________________ cls = func = . at 0x7f1c4972beb0> when = 'call' reraise = (, ) @classmethod def from_call( cls, func: ""Callable[[], TResult]"", when: ""Literal['collect', 'setup', 'call', 'teardown']"", reraise: Optional[ Union[Type[BaseException], Tuple[Type[BaseException], ...]] ] = None, ) -> ""CallInfo[TResult]"": """"""Call func, wrapping the result in a CallInfo. :param func: The function to call. Called without arguments. :param when: The phase in which the function is called. :param reraise: Exception or exceptions that shall propagate if raised by the function, instead of being wrapped in the CallInfo. """""" excinfo = None start = timing.time() precise_start = timing.perf_counter() try: > result: Optional[TResult] = func() /home/runner/work/astropy/astropy/.tox/py310-test-alldeps-predeps/lib/python3.10/site-packages/_pytest/runner.py:339: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > lambda: ihook(item=item, **kwds), when=when, reraise=reraise ) /home/runner/work/astropy/astropy/.tox/py310-test-alldeps-predeps/lib/python3.10/site-packages/_pytest/runner.py:260: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <_HookCaller 'pytest_runtest_call'>, args = () kwargs = {'item': }, argname = 'item' firstresult = False def __call__(self, *args, **kwargs): if args: raise TypeError(""hook calling supports only keyword arguments"") assert not self.is_historic() # This is written to avoid expensive operations when not needed. 
if self.spec: for argname in self.spec.argnames: if argname not in kwargs: notincall = tuple(set(self.spec.argnames) - kwargs.keys()) warnings.warn( ""Argument(s) {} which are declared in the hookspec "" ""can not be found in this hook call"".format(notincall), stacklevel=2, ) break firstresult = self.spec.opts.get(""firstresult"") else: firstresult = False > return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) /home/runner/work/astropy/astropy/.tox/py310-test-alldeps-predeps/lib/python3.10/site-packages/pluggy/_hooks.py:265: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <_pytest.config.PytestPluginManager object at 0x7f1c809db790> hook_name = 'pytest_runtest_call' methods = [>, ...] kwargs = {'item': }, firstresult = False def _hookexec(self, hook_name, methods, kwargs, firstresult): # called from all hookcaller instances. # enable_tracing will set its own wrapping function at self._inner_hookexec > return self._inner_hookexec(hook_name, methods, kwargs, firstresult) /home/runner/work/astropy/astropy/.tox/py310-test-alldeps-predeps/lib/python3.10/site-packages/pluggy/_manager.py:80: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ hook_name = 'pytest_runtest_call' hook_impls = [>, ...] caller_kwargs = {'item': }, firstresult = False def _multicall(hook_name, hook_impls, caller_kwargs, firstresult): """"""Execute a call into multiple python functions/methods and return the result(s). ``caller_kwargs`` comes from _HookCaller.__call__(). """""" __tracebackhide__ = True results = [] excinfo = None try: # run impl and wrapper setup functions in a loop teardowns = [] try: for hook_impl in reversed(hook_impls): try: args = [caller_kwargs[argname] for argname in hook_impl.argnames] except KeyError: for argname in hook_impl.argnames: if argname not in caller_kwargs: raise HookCallError( f""hook call must provide argument {argname!r}"" ) if hook_impl.hookwrapper: try: gen = hook_impl.function(*args) next(gen) # first yield teardowns.append(gen) except StopIteration: _raise_wrapfail(gen, ""did not yield"") else: res = hook_impl.function(*args) if res is not None: results.append(res) if firstresult: # halt further impl calls break except BaseException: excinfo = sys.exc_info() finally: if firstresult: # first result hooks return a single value outcome = _Result(results[0] if results else None, excinfo) else: outcome = _Result(results, excinfo) # run all wrapper post-yield blocks for gen in reversed(teardowns): try: > gen.send(outcome) /home/runner/work/astropy/astropy/.tox/py310-test-alldeps-predeps/lib/python3.10/site-packages/pluggy/_callers.py:55: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ @pytest.hookimpl(hookwrapper=True, tryfirst=True) def pytest_runtest_call() -> Generator[None, None, None]: > yield from unraisable_exception_runtest_hook() /home/runner/work/astropy/astropy/.tox/py310-test-alldeps-predeps/lib/python3.10/site-packages/_pytest/unraisableexception.py:88: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ def unraisable_exception_runtest_hook() -> Generator[None, None, None]: with catch_unraisable_exception() as cm: yield if cm.unraisable: if cm.unraisable.err_msg is not None: err_msg = cm.unraisable.err_msg else: err_msg = ""Exception ignored in"" msg = f""{err_msg}: {cm.unraisable.object!r}\n\n"" msg += """".join( traceback.format_exception( cm.unraisable.exc_type, cm.unraisable.exc_value, cm.unraisable.exc_traceback, ) ) 
> warnings.warn(pytest.PytestUnraisableExceptionWarning(msg)) E pytest.PytestUnraisableExceptionWarning: Exception ignored in: E E Traceback (most recent call last): E File ""/opt/hostedtoolcache/Python/3.10.8/x64/lib/python3.10/asyncio/sslproto.py"", line 320, in __del__ E _warn(f""unclosed transport {self!r}"", ResourceWarning, source=self) E ResourceWarning: unclosed transport /home/runner/work/astropy/astropy/.tox/py310-test-alldeps-predeps/lib/python3.10/site-packages/_pytest/unraisableexception.py:78: PytestUnraisableExceptionWarning =========================== short test summary info ============================ FAILED ../../docs/io/fits/usage/cloud.rst::cloud.rst - pytest.PytestUnraisableExceptionWarning: Exception ignored in: Traceback (most recent call last): File ""/opt/hostedtoolcache/Python/3.10.8/x64/lib/python3.10/asyncio/sslproto.py"", line 320, in __del__ _warn(f""unclosed transport {self!r}"", ResourceWarning, source=self) ResourceWarning: unclosed transport ```",0,tst transient pytestunraisableexceptionwarning in cloud rst revert occasionally this will pop up but not always usually it is in this job daily cron cloud rst cls func at when call reraise classmethod def from call cls func callable tresult when literal reraise optional union tuple none callinfo call func wrapping the result in a callinfo param func the function to call called without arguments param when the phase in which the function is called param reraise exception or exceptions that shall propagate if raised by the function instead of being wrapped in the callinfo excinfo none start timing time precise start timing perf counter try result optional func home runner work astropy astropy tox test alldeps predeps lib site packages pytest runner py lambda ihook item item kwds when when reraise reraise home runner work astropy astropy tox test alldeps predeps lib site packages pytest runner py self args kwargs item argname item firstresult false def call self args kwargs if args raise typeerror hook calling supports only keyword arguments assert not self is historic this is written to avoid expensive operations when not needed if self spec for argname in self spec argnames if argname not in kwargs notincall tuple set self spec argnames kwargs keys warnings warn argument s which are declared in the hookspec can not be found in this hook call format notincall stacklevel break firstresult self spec opts get firstresult else firstresult false return self hookexec self name self get hookimpls kwargs firstresult home runner work astropy astropy tox test alldeps predeps lib site packages pluggy hooks py self hook name pytest runtest call methods kwargs item firstresult false def hookexec self hook name methods kwargs firstresult called from all hookcaller instances enable tracing will set its own wrapping function at self inner hookexec return self inner hookexec hook name methods kwargs firstresult home runner work astropy astropy tox test alldeps predeps lib site packages pluggy manager py hook name pytest runtest call hook impls caller kwargs item firstresult false def multicall hook name hook impls caller kwargs firstresult execute a call into multiple python functions methods and return the result s caller kwargs comes from hookcaller call tracebackhide true results excinfo none try run impl and wrapper setup functions in a loop teardowns try for hook impl in reversed hook impls try args for argname in hook impl argnames except keyerror for argname in hook impl argnames if argname not in caller kwargs raise 
hookcallerror f hook call must provide argument argname r if hook impl hookwrapper try gen hook impl function args next gen first yield teardowns append gen except stopiteration raise wrapfail gen did not yield else res hook impl function args if res is not none results append res if firstresult halt further impl calls break except baseexception excinfo sys exc info finally if firstresult first result hooks return a single value outcome result results if results else none excinfo else outcome result results excinfo run all wrapper post yield blocks for gen in reversed teardowns try gen send outcome home runner work astropy astropy tox test alldeps predeps lib site packages pluggy callers py pytest hookimpl hookwrapper true tryfirst true def pytest runtest call generator yield from unraisable exception runtest hook home runner work astropy astropy tox test alldeps predeps lib site packages pytest unraisableexception py def unraisable exception runtest hook generator with catch unraisable exception as cm yield if cm unraisable if cm unraisable err msg is not none err msg cm unraisable err msg else err msg exception ignored in msg f err msg cm unraisable object r n n msg join traceback format exception cm unraisable exc type cm unraisable exc value cm unraisable exc traceback warnings warn pytest pytestunraisableexceptionwarning msg e pytest pytestunraisableexceptionwarning exception ignored in e e traceback most recent call last e file opt hostedtoolcache python lib asyncio sslproto py line in del e warn f unclosed transport self r resourcewarning source self e resourcewarning unclosed transport home runner work astropy astropy tox test alldeps predeps lib site packages pytest unraisableexception py pytestunraisableexceptionwarning short test summary info failed docs io fits usage cloud rst cloud rst pytest pytestunraisableexceptionwarning exception ignored in traceback most recent call last file opt hostedtoolcache python lib asyncio sslproto py line in del warn f unclosed transport self r resourcewarning source self resourcewarning unclosed transport ,0 123227,10257807677.0,IssuesEvent,2019-08-21 21:01:30,rancher/rancher,https://api.github.com/repos/rancher/rancher,closed,[UI] Cluster Templates index page still shows image,[zube]: To Test kind/bug-qa team/ui,"Version: master-head (v2.3) (8/13/19) **What kind of request is this (question/bug/enhancement/feature request):** Bug **Steps to reproduce (least amount of steps as possible):** There was a recent change to remove most images in the Rancher UI. If you view the Cluster Templates page (without having any cluster templates) there is still an image shown in the UI here. **Result:** ![image](https://user-images.githubusercontent.com/45179589/62984786-56a20880-bde9-11e9-82e5-bbbe93dac960.png) The image should be removed and we should just have a text string that states something like ""There are no Cluster Templates defined."" or similar.",1.0,"[UI] Cluster Templates index page still shows image - Version: master-head (v2.3) (8/13/19) **What kind of request is this (question/bug/enhancement/feature request):** Bug **Steps to reproduce (least amount of steps as possible):** There was a recent change to remove most images in the Rancher UI. If you view the Cluster Templates page (without having any cluster templates) there is still an image shown in the UI here. 
**Result:** ![image](https://user-images.githubusercontent.com/45179589/62984786-56a20880-bde9-11e9-82e5-bbbe93dac960.png) The image should be removed and we should just have a text string that states something like ""There are no Cluster Templates defined."" or similar.",0, cluster templates index page still shows image version master head what kind of request is this question bug enhancement feature request bug steps to reproduce least amount of steps as possible there was a recent change to remove most images in the rancher ui if you view the cluster templates page without having any cluster templates there is still an image shown in the ui here result the image should be removed and we should just have a text string that states something like there are no cluster templates defined or similar ,0 96552,20027915596.0,IssuesEvent,2022-02-02 00:01:31,belav/csharpier,https://api.github.com/repos/belav/csharpier,closed,VSCode - clean up code that kills csharpier when terminal focus is available.,area:vscode status:blocked,"The VSCode extension automatically kills csharpier to ensure a user can update it. This is done because we cannot detect when the terminal in vscode is focused. See https://github.com/microsoft/vscode/issues/117980 When VSCode has the ability to detect terminal focus, then the extension can be smarter about when it kill csharpier.",1.0,"VSCode - clean up code that kills csharpier when terminal focus is available. - The VSCode extension automatically kills csharpier to ensure a user can update it. This is done because we cannot detect when the terminal in vscode is focused. See https://github.com/microsoft/vscode/issues/117980 When VSCode has the ability to detect terminal focus, then the extension can be smarter about when it kill csharpier.",0,vscode clean up code that kills csharpier when terminal focus is available the vscode extension automatically kills csharpier to ensure a user can update it this is done because we cannot detect when the terminal in vscode is focused see when vscode has the ability to detect terminal focus then the extension can be smarter about when it kill csharpier ,0 129370,12405715846.0,IssuesEvent,2020-05-21 17:47:21,BHoM/CarbonQueryDatabase_Toolkit,https://api.github.com/repos/BHoM/CarbonQueryDatabase_Toolkit,closed,Add wiki documentation describing toolkit and functionality with examples,type:documentation," #### What is missing/incorrect? Wiki needed with description of toolkit's purpose and quick start guide with examples.",1.0,"Add wiki documentation describing toolkit and functionality with examples - #### What is missing/incorrect? Wiki needed with description of toolkit's purpose and quick start guide with examples.",0,add wiki documentation describing toolkit and functionality with examples what is missing incorrect wiki needed with description of toolkit s purpose and quick start guide with examples ,0 201,2505394140.0,IssuesEvent,2015-01-11 13:22:01,Secretchronicles/TSC,https://api.github.com/repos/Secretchronicles/TSC,closed,Ruby Timer documentation is inconsistent,Documentation,"Specifically, Timer.after claims its value is in seconds, when it gets interpreted as milliseconds (same as Timer.new). Not sure if the intention was milliseconds and the docs are wrong, or the intention was seconds and the implementation's wrong. @Quintus can decide ;)",1.0,"Ruby Timer documentation is inconsistent - Specifically, Timer.after claims its value is in seconds, when it gets interpreted as milliseconds (same as Timer.new). 
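To make the documented-versus-actual units concrete, a hypothetical snippet; the callback-block form is an assumption about the scripting API, only the delay argument is confirmed by this report:

```ruby
# If the docs were right, this would fire after 5 seconds; since the value
# is actually interpreted as milliseconds (like Timer.new), it fires after
# a nearly imperceptible 5 ms.
Timer.after(5) do
  puts "fired after 5 ms, not 5 s"
end

# Practical workaround until docs and implementation agree: pass milliseconds.
Timer.after(5_000) do
  puts "fired after 5 seconds"
end
```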
Not sure if the intention was milliseconds and the docs are wrong, or the intention was seconds and the implementation's wrong. @Quintus can decide ;)",0,ruby timer documentation is inconsistent specifically timer after claims its value is in seconds when it gets interpreted as milliseconds same as timer new not sure if the intention was milliseconds and the docs are wrong or the intention was seconds and the implementation s wrong quintus can decide ,0 9725,30344952751.0,IssuesEvent,2023-07-11 14:52:32,elastic/cloudbeat,https://api.github.com/repos/elastic/cloudbeat,closed,[CNVM] Full sanity test,Team:Cloud Security automation Vulnerability Management 8.10 candidate,"**Motivation** As decided on a separate ticket (see below), Cloudbeat repository will be in charge of managing vulnerability management CloudFormation template. Before we publish the templates to S3 we must assure that the expected resources are provisioned on AWS and that the new vulnerabilities are indexed to elasticsearch. **Assumptions** - Cloudbeat CI has some elastic-stack that we can use (elastic-package / elastic cloud) **Definition of done** - [x] Create a new agent policy - [x] Install vulnerability management integration - [x] Provision the vulnerability management template with the relevant parameters (fleet URL, enrollment token) - [ ] Assert that elasticsearch has vulnerabilities **Out of scope** - https://github.com/elastic/cloudbeat/issues/698 **Related tasks/epics** - https://github.com/elastic/security-team/issues/5700 ",1.0,"[CNVM] Full sanity test - **Motivation** As decided on a separate ticket (see below), Cloudbeat repository will be in charge of managing vulnerability management CloudFormation template. Before we publish the templates to S3 we must assure that the expected resources are provisioned on AWS and that the new vulnerabilities are indexed to elasticsearch. **Assumptions** - Cloudbeat CI has some elastic-stack that we can use (elastic-package / elastic cloud) **Definition of done** - [x] Create a new agent policy - [x] Install vulnerability management integration - [x] Provision the vulnerability management template with the relevant parameters (fleet URL, enrollment token) - [ ] Assert that elasticsearch has vulnerabilities **Out of scope** - https://github.com/elastic/cloudbeat/issues/698 **Related tasks/epics** - https://github.com/elastic/security-team/issues/5700 ",1, full sanity test motivation as decided on a separate ticket see below cloudbeat repository will be in charge of managing vulnerability management cloudformation template before we publish the templates to we must assure that the expected resources are provisioned on aws and that the new vulnerabilities are indexed to elasticsearch assumptions cloudbeat ci has some elastic stack that we can use elastic package elastic cloud definition of done create a new agent policy install vulnerability management integration provision the vulnerability management template with the relevant parameters fleet url enrollment token assert that elasticsearch has vulnerabilities out of scope related tasks epics ,1 8446,26966930943.0,IssuesEvent,2023-02-08 23:25:56,influxdata/ui,https://api.github.com/repos/influxdata/ui,closed,Kodiak Tool - UI/Deployments Sync Team,team/ui team/automation,"UI/Deployments team has reached consensus, along with the leads of 2 (out of 3) UI teams, to add kodiak to the UI repo. QX team needs to review and add their perspective. 
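For context on what the autoupdate/automerge behaviour sketched in the goals below typically looks like in practice, a hypothetical `.kodiak.toml`; the label name and merge method are illustrative assumptions, not decisions recorded in this issue:

```toml
# Hypothetical Kodiak configuration for the workflow described here:
# keep opted-in PRs rebased on the base branch and merge them only
# after CI passes on the updated head.
version = 1

[merge]
method = "squash"             # assumption: squash-merge into master
automerge_label = "automerge" # PRs opt in by carrying this label

[update]
always = true                 # keep labelled PRs up to date with base
```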
**Goal:** handle the density of PRs being merged in the UI, and have a formalized process which ensures only rebased-then-CI-passing commits get added to master branch. **Proposed solution**: using autoupdate and automerge flags on pull requests (as is currently working in k8s-idpe). **Spike research**: by qx team **Putting kodiak in**: would be done by deployments team ",1.0,"Kodiak Tool - UI/Deployments Sync Team - UI/Deployments team has reached consensus, along with the leads of 2 (out of 3) UI teams, to add kodiak to the UI repo. QX team needs to review and add their perspective. **Goal:** handle the density of PRs being merged in the UI, and have a formalized process which ensures only rebased-then-CI-passing commits get added to master branch. **Proposed solution**: using autoupdate and automerge flags on pull requests (as is currently working in k8s-idpe). **Spike research**: by qx team **Putting kodiak in**: would be done by deployments team ",1,kodiak tool ui deployments sync team ui deployments team has reached consensus along with the leads of out of ui teams to add kodiak to the ui repo qx team needs to review and add their perspective goal handle the density of prs being merged in the ui and have a formalized process which ensures only rebased then ci passing commits get added to master branch proposed solution using autoupdate and automerge flags on pull requests as is currently working in idpe spike research by qx team putting kodiak in would be done by deployments team ,1 723261,24890927511.0,IssuesEvent,2022-10-28 11:57:50,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,www.google.com - site is not usable,priority-critical browser-fenix engine-gecko," **URL**: https://www.google.com/webhp?client=firefox-b-m&channel=ts **Browser / Version**: Firefox Mobile 108.0 **Operating System**: Android 11 **Tested Another Browser**: Yes Chrome **Problem type**: Site is not usable **Description**: Page not loading correctly **Steps to Reproduce**: Sometimes the page goes black and I can't do anything, if I open other tabs they are also black and I have to close Firefox and reopen.
View the screenshot
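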
Browser Configuration
- gfx.webrender.all: false
- gfx.webrender.blob-images: true
- gfx.webrender.enabled: false
- image.mem.shared: true
- buildID: 20221026224258
- channel: nightly
- hasTouchScreen: true
- mixed active content blocked: false
- mixed passive content blocked: false
- tracking content blocked: false
[View console log messages](https://webcompat.com/console_logs/2022/10/cc856348-337b-4a8f-92af-f7d73c591547) _From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"www.google.com - site is not usable - **URL**: https://www.google.com/webhp?client=firefox-b-m&channel=ts **Browser / Version**: Firefox Mobile 108.0 **Operating System**: Android 11 **Tested Another Browser**: Yes Chrome **Problem type**: Site is not usable **Description**: Page not loading correctly **Steps to Reproduce**: Sometimes the page goes black and I can't do anything, if I open other tabs they are also black and I have to close Firefox and reopen.
View the screenshot
Browser Configuration
- gfx.webrender.all: false
- gfx.webrender.blob-images: true
- gfx.webrender.enabled: false
- image.mem.shared: true
- buildID: 20221026224258
- channel: nightly
- hasTouchScreen: true
- mixed active content blocked: false
- mixed passive content blocked: false
- tracking content blocked: false
[View console log messages](https://webcompat.com/console_logs/2022/10/cc856348-337b-4a8f-92af-f7d73c591547) _From [webcompat.com](https://webcompat.com/) with ❤️_",0, site is not usable url browser version firefox mobile operating system android tested another browser yes chrome problem type site is not usable description page not loading correctly steps to reproduce sometimes the page goes black and i can t do anything if i open other tabs they are also black and i have to close firefox and reopen view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ ,0 788475,27754206501.0,IssuesEvent,2023-03-16 00:09:48,wisp-forest/owo-lib,https://api.github.com/repos/wisp-forest/owo-lib,closed,"owo-ui: Data-driven text box trims preset text to the default limit, even when a max-length is set",type: bug status: scheduled priority: normal status: confirmed,"For example the text of ```xml 300 This is an example text longer than 32 chars, which is the default limit ``` will get trimmed to `This is an example text longer t` This happens because at https://github.com/wisp-forest/owo-lib/blob/30aa4c11a6fad97ca76afbd274d525fd29b09802/src/main/java/io/wispforest/owo/ui/component/TextBoxComponent.java#L68-L71 the text is set before the maximum length and the text setter trims the text if it is to long. ",1.0,"owo-ui: Data-driven text box trims preset text to the default limit, even when a max-length is set - For example the text of ```xml 300 This is an example text longer than 32 chars, which is the default limit ``` will get trimmed to `This is an example text longer t` This happens because at https://github.com/wisp-forest/owo-lib/blob/30aa4c11a6fad97ca76afbd274d525fd29b09802/src/main/java/io/wispforest/owo/ui/component/TextBoxComponent.java#L68-L71 the text is set before the maximum length and the text setter trims the text if it is to long. ",0,owo ui data driven text box trims preset text to the default limit even when a max length is set for example the text of xml this is an example text longer than chars which is the default limit will get trimmed to this is an example text longer t this happens because at the text is set before the maximum length and the text setter trims the text if it is to long ,0 2772,3995811826.0,IssuesEvent,2016-05-10 16:40:00,docker/docker,https://api.github.com/repos/docker/docker,closed,Seccomp profile only applying if user defined?,area/security/seccomp,"I'm only seeing a seccomp profile apply if I define a user for the container to run as. Is this by design or my bug? 
Given this profile: ``` cat b.json { ""defaultAction"": ""SCMP_ACT_ALLOW"", ""architectures"": [ ""SCMP_ARCH_X86_64"", ""SCMP_ARCH_X86"", ""SCMP_ARCH_X32"" ], ""syscalls"": [ { ""name"": ""chmod"", ""action"": ""SCMP_ACT_ERRNO"", ""args"": [] }, { ""name"": ""chown"", ""action"": ""SCMP_ACT_ERRNO"", ""args"": [] }, { ""name"": ""chown32"", ""action"": ""SCMP_ACT_ERRNO"", ""args"": [] } ] } ``` I'm seeing this behaviour: ``` > sudo docker run --rm -it -u 1000 --security-opt seccomp:b.json ubuntu chmod 400 /etc/hostname chmod: changing permissions of '/etc/hostname': Operation not permitted > sudo docker run --rm -it -u 1 --security-opt seccomp:b.json ubuntu chmod 400 /etc/hostname chmod: changing permissions of '/etc/hostname': Operation not permitted > sudo docker run --rm -it --security-opt seccomp:b.json ubuntu chmod 400 /etc/hostname > echo $? 0 ``` If I don't set a user, the seccomp profile seems to be ignored? Am I missing something? ``` uname -ar Linux ubuntu-2gb-lon1-01 3.13.0-85-generic #129-Ubuntu SMP Thu Mar 17 20:50:15 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux docker version Client: Version: 1.11.1 API version: 1.23 Go version: go1.5.4 Git commit: 5604cbe Built: Wed Apr 27 00:34:20 2016 OS/Arch: linux/amd64 Server: Version: 1.11.1 API version: 1.23 Go version: go1.5.4 Git commit: 5604cbe Built: Wed Apr 27 00:34:20 2016 OS/Arch: linux/amd64 docker info Containers: 17 Running: 0 Paused: 0 Stopped: 17 Images: 3 Server Version: 1.11.1 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 42 Dirperm1 Supported: false Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: null host bridge Kernel Version: 3.13.0-85-generic Operating System: Ubuntu 14.04.4 LTS OSType: linux Architecture: x86_64 CPUs: 2 Total Memory: 1.955 GiB Name: ubuntu-2gb-lon1-01 ID: FSWW:DWBL:TRMB:HFDG:VNH7:IX3H:7QCY:EQIX:UY3N:2Q7T:ISK7:3SOL Docker Root Dir: /var/lib/docker Debug mode (client): false Debug mode (server): false Registry: https://index.docker.io/v1/ WARNING: No swap limit support ``` Because the OS is Ubuntu 14.04 I downloaded the static binaries. ``` lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04.4 LTS Release: 14.04 Codename: trusty ``` Update: Also seeing the behaviour on a different machine: ``` uname -ra Linux ubuntu 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:33:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 16.04 LTS Release: 16.04 Codename: xenial ```",True,"Seccomp profile only applying if user defined? - I'm only seeing a seccomp profile apply if I define a user for the container to run as. Is this by design or my bug? 
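A quick way to exercise a profile like the one above (image, profile path, and commands are taken from the report itself; note that newer Docker releases spell the flag `seccomp=` rather than `seccomp:`):

```
# As a non-root user: chmod is denied, so the profile is clearly applied.
$ docker run --rm -u 1000 --security-opt seccomp=b.json ubuntu chmod 400 /etc/hostname
chmod: changing permissions of '/etc/hostname': Operation not permitted

# Without -u: per this report the same command exits 0, i.e. the profile
# appears to be silently ignored when the container runs as root.
$ docker run --rm --security-opt seccomp=b.json ubuntu chmod 400 /etc/hostname; echo $?
0
```

The report's point is that the last command should also fail with "Operation not permitted" if the profile were honoured regardless of user.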
Given this profile: ``` cat b.json { ""defaultAction"": ""SCMP_ACT_ALLOW"", ""architectures"": [ ""SCMP_ARCH_X86_64"", ""SCMP_ARCH_X86"", ""SCMP_ARCH_X32"" ], ""syscalls"": [ { ""name"": ""chmod"", ""action"": ""SCMP_ACT_ERRNO"", ""args"": [] }, { ""name"": ""chown"", ""action"": ""SCMP_ACT_ERRNO"", ""args"": [] }, { ""name"": ""chown32"", ""action"": ""SCMP_ACT_ERRNO"", ""args"": [] } ] } ``` I'm seeing this behaviour: ``` > sudo docker run --rm -it -u 1000 --security-opt seccomp:b.json ubuntu chmod 400 /etc/hostname chmod: changing permissions of '/etc/hostname': Operation not permitted > sudo docker run --rm -it -u 1 --security-opt seccomp:b.json ubuntu chmod 400 /etc/hostname chmod: changing permissions of '/etc/hostname': Operation not permitted > sudo docker run --rm -it --security-opt seccomp:b.json ubuntu chmod 400 /etc/hostname > echo $? 0 ``` If I don't set a user, the seccomp profile seems to be ignored? Am I missing something? ``` uname -ar Linux ubuntu-2gb-lon1-01 3.13.0-85-generic #129-Ubuntu SMP Thu Mar 17 20:50:15 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux docker version Client: Version: 1.11.1 API version: 1.23 Go version: go1.5.4 Git commit: 5604cbe Built: Wed Apr 27 00:34:20 2016 OS/Arch: linux/amd64 Server: Version: 1.11.1 API version: 1.23 Go version: go1.5.4 Git commit: 5604cbe Built: Wed Apr 27 00:34:20 2016 OS/Arch: linux/amd64 docker info Containers: 17 Running: 0 Paused: 0 Stopped: 17 Images: 3 Server Version: 1.11.1 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 42 Dirperm1 Supported: false Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: null host bridge Kernel Version: 3.13.0-85-generic Operating System: Ubuntu 14.04.4 LTS OSType: linux Architecture: x86_64 CPUs: 2 Total Memory: 1.955 GiB Name: ubuntu-2gb-lon1-01 ID: FSWW:DWBL:TRMB:HFDG:VNH7:IX3H:7QCY:EQIX:UY3N:2Q7T:ISK7:3SOL Docker Root Dir: /var/lib/docker Debug mode (client): false Debug mode (server): false Registry: https://index.docker.io/v1/ WARNING: No swap limit support ``` Because the OS is Ubuntu 14.04 I downloaded the static binaries. ``` lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04.4 LTS Release: 14.04 Codename: trusty ``` Update: Also seeing the behaviour on a different machine: ``` uname -ra Linux ubuntu 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:33:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux lsb_release -a No LSB modules are available. 
Distributor ID: Ubuntu Description: Ubuntu 16.04 LTS Release: 16.04 Codename: xenial ```",0,seccomp profile only applying if user defined i m only seeing a seccomp profile apply if i define a user for the container to run as is this by design or my bug given this profile cat b json defaultaction scmp act allow architectures scmp arch scmp arch scmp arch syscalls name chmod action scmp act errno args name chown action scmp act errno args name action scmp act errno args i m seeing this behaviour sudo docker run rm it u security opt seccomp b json ubuntu chmod etc hostname chmod changing permissions of etc hostname operation not permitted sudo docker run rm it u security opt seccomp b json ubuntu chmod etc hostname chmod changing permissions of etc hostname operation not permitted sudo docker run rm it security opt seccomp b json ubuntu chmod etc hostname echo if i don t set a user the seccomp profile seems to be ignored am i missing something uname ar linux ubuntu generic ubuntu smp thu mar utc gnu linux docker version client version api version go version git commit built wed apr os arch linux server version api version go version git commit built wed apr os arch linux docker info containers running paused stopped images server version storage driver aufs root dir var lib docker aufs backing filesystem extfs dirs supported false logging driver json file cgroup driver cgroupfs plugins volume local network null host bridge kernel version generic operating system ubuntu lts ostype linux architecture cpus total memory gib name ubuntu id fsww dwbl trmb hfdg eqix docker root dir var lib docker debug mode client false debug mode server false registry warning no swap limit support because the os is ubuntu i downloaded the static binaries lsb release a no lsb modules are available distributor id ubuntu description ubuntu lts release codename trusty update also seeing the behaviour on a different machine uname ra linux ubuntu generic ubuntu smp mon apr utc gnu linux lsb release a no lsb modules are available distributor id ubuntu description ubuntu lts release codename xenial ,0 99943,8718634494.0,IssuesEvent,2018-12-07 21:07:50,rancher/rancher,https://api.github.com/repos/rancher/rancher,closed,Azure cluster with cloud provider - worker nodes fails to start because of kubelet restarting constantly with cloud provider option set.,kind/bug-qa priority/0 releases/alpha1 status/resolved status/to-test version/2.0,"Rancher server - Build from master - Dec 5 Steps to reproduce the problem: Provision a Azure cluster with cloud provider with following node configuration with Azure cloud provider option set: 1 control node 1 etcd node 3 worker nodes. 
Azure cluster gets to ""Active"" sate but worker nodes continue to be stuck in ""Unavailable"" state with error message ```Kubelet stopped posting node status.``` or ``` Container runtime is down,PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s,runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized ``` Rancher server logs: ``` 018/12/05 20:46:51 [INFO] Provisioned cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioning cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Updating cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioned cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioning cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Updating cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioned cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioning cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Updating cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioned cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioning cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Updating cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioned cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioning cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Updating cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioned cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioning cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Updating cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioned cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioning cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Updating cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioned cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioning cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Updating cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioned cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioning cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Updating cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioned cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioning cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Updating cluster [c-bcgdr] 2018/12/05 20:46:51 [INFO] Provisioned cluster [c-bcgdr] ``` Agent logs: ``` time=""2018-12-05T20:48:44Z"" level=info msg=""For process kubelet, Env has changed from [RKE_CLOUD_CONFIG_CHECKSUM=da94b08b5b80f8b5d5440cfd61760890] to [RKE_CLOUD_CONFIG_CHECKSUM=a2a7f88d0f324000661600c46fbe024a PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin]"" ``` ",1.0,"Azure cluster with cloud provider - worker nodes fails to start because of kubelet restarting constantly with cloud provider option set. - Rancher server - Build from master - Dec 5 Steps to reproduce the problem: Provision a Azure cluster with cloud provider with following node configuration with Azure cloud provider option set: 1 control node 1 etcd node 3 worker nodes. 
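The agent log above points at the trigger: the kubelet container's `RKE_CLOUD_CONFIG_CHECKSUM` environment value changes between reconcile runs, so the container keeps being recreated. One way to confirm this on an affected worker; these are standard Docker CLI calls, and `kubelet` is the container name RKE uses:

```
# Compare the env the kubelet container was created with across restarts
$ docker inspect kubelet \
    --format '{{range .Config.Env}}{{println .}}{{end}}' | grep RKE_CLOUD_CONFIG_CHECKSUM

# Recreation (rather than in-place restart) shows up as a fresh container
# ID and creation timestamp each time
$ docker inspect kubelet --format '{{.Id}} created {{.Created}}'
```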
9705,30305902687.0,IssuesEvent,2023-07-10 09:27:19,litentry/litentry-parachain,https://api.github.com/repos/litentry/litentry-parachain,closed,Create a script/GHA to tell if sidechain on staging works,I3-high D6-automation,"### Context It's possible that we get error notifications from the staging-sidechain but it still functions. Before we restart it, it's better to test whether ""it still works"" in the first place. We need a script/GHA for that, similar to ts-test but more lightweight and accurate. --- :heavy_check_mark: Please set appropriate **labels** and **assignees** if applicable.",1.0,1
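A minimal sketch of the requested check, assuming the staging sidechain answers Substrate-style JSON-RPC over HTTP; the URL and the choice of `system_health` are assumptions, not the actual litentry setup:

```python
import sys
import requests

# Hypothetical endpoint; the real staging-sidechain address would go here.
SIDECHAIN_RPC = "https://staging-sidechain.example.com:9933"

def sidechain_works(url: str = SIDECHAIN_RPC) -> bool:
    """True if the node still answers a basic JSON-RPC health query."""
    payload = {"jsonrpc": "2.0", "id": 1, "method": "system_health", "params": []}
    try:
        resp = requests.post(url, json=payload, timeout=10)
        resp.raise_for_status()
        return "result" in resp.json()
    except requests.RequestException:
        return False

if __name__ == "__main__":
    sys.exit(0 if sidechain_works() else 1)  # the exit code drives the GHA step
```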
2234,11622409452.0,IssuesEvent,2020-02-27 06:24:33,amahesh98/Chefmate,https://api.github.com/repos/amahesh98/Chefmate,closed,Automate Client Side UI Deployment,Automation frontend,Automate deployment of the client so that it only takes one command to deploy everything. Minimize ways it can be broken.,1.0,1 7218,24459516059.0,IssuesEvent,2022-10-07 09:49:43,o3de/o3de,https://api.github.com/repos/o3de/o3de,opened,Nightly build Bug Report: iOS profile job red due to linking error,kind/bug needs-triage sig/graphics-audio kind/automation,"**Failed Jenkins Job Information:** https://jenkins-pipeline.agscollab.com/blue/organizations/jenkins/O3DE-LY-Fork-development_periodic-incremental-daily-internal/detail/O3DE-LY-Fork-development_periodic-incremental-daily-internal/504/pipeline/818 ``` [2022-10-07T05:45:13.490Z] Undefined symbols for architecture arm64: [2022-10-07T05:45:13.499Z] ""_png_do_expand_palette_rgba8_neon"", referenced from: [2022-10-07T05:45:13.515Z] _png_do_read_transformations in libpng16.a(pngrtran.o) [2022-10-07T05:45:13.531Z] ""_png_do_expand_palette_rgb8_neon"", referenced from: [2022-10-07T05:45:13.546Z] _png_do_read_transformations in libpng16.a(pngrtran.o) [2022-10-07T05:45:13.563Z] ""_png_riffle_palette_neon"", referenced from: [2022-10-07T05:45:13.578Z] _png_do_read_transformations in libpng16.a(pngrtran.o) [2022-10-07T05:45:13.594Z] ""_png_init_filter_functions_neon"", referenced from: [2022-10-07T05:45:13.611Z] _png_read_filter_row in libpng16.a(pngrutil.o) [2022-10-07T05:45:13.887Z] ld: symbol(s) not found for architecture arm64 ``` **Attachments** [log.txt](https://github.com/o3de/o3de/files/9732825/log.txt)",1.0,1 250203,18875987982.0,IssuesEvent,2021-11-14 01:54:49,gvenzl/oci-oracle-xe,https://api.github.com/repos/gvenzl/oci-oracle-xe,closed,Docker's engine (network) doesn't support Oracle's 19+ Out Of Band breaks,documentation,"As pointed out by https://franckpachot.medium.com/19c-instant-client-and-docker-1566630ab20e and especially https://github.com/oracle/docker-images/blob/main/OracleDatabase/SingleInstance/FAQ.md#ora-12637-packet-receive-failed there can be an issue with Docker 19+ and Oracle 19+. I encountered this with Debian 11 as a host, Docker version 20.10.10 and gvenzl/oracle-xe:21-full. Symptoms: connecting to the database **inside** the container via sqlplus works, but from outside (from the host) you only get timeouts and ORA-12637 or similar errors in the server log. Solution: disable the Out Of Band feature by executing (inside the container) `echo DISABLE_OOB=ON >> $ORACLE_BASE_HOME/network/admin/sqlnet.ora` or similar. This can be done on the client side too, according to the above sources.",1.0,0 34988,14565154261.0,IssuesEvent,2020-12-17 06:43:09,codeoverflow-org/nodecg-io,https://api.github.com/repos/codeoverflow-org/nodecg-io,closed,Add Twitch PubSub service,enhancement minor service,"### Description We should add a Twitch PubSub service to be able to react to e.g. channel points. Full list of supported events: https://dev.twitch.tv/docs/pubsub#topics. As a package we should use [twitch-pubsub-client](https://www.npmjs.com/package/twitch-pubsub-client). I think the currently existing nodecg-io-twitch service should be renamed with this change to nodecg-io-twitch-chat in order to make it clear that it only handles the chat and not other parts of Twitch.",1.0,0
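The Oracle record above boils down to one sqlnet.ora line. A sketch of applying it from the host with the Docker CLI; the container name `oracle-xe` is hypothetical, and whether a restart is strictly required is an assumption:

```python
import subprocess

def disable_oob(container: str = "oracle-xe") -> None:
    # Append DISABLE_OOB=ON to the server-side sqlnet.ora, as the linked
    # FAQ suggests, then restart the container so the listener rereads it.
    cmd = 'echo DISABLE_OOB=ON >> "$ORACLE_BASE_HOME"/network/admin/sqlnet.ora'
    subprocess.run(["docker", "exec", container, "bash", "-c", cmd], check=True)
    subprocess.run(["docker", "restart", container], check=True)

if __name__ == "__main__":
    disable_oob()
```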
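For the Twitch PubSub record above: PubSub (per the dev.twitch.tv docs linked there) is a WebSocket protocol in which the client sends a LISTEN frame carrying topics and an OAuth token. A raw-frame sketch in Python rather than the twitch-pubsub-client npm package; the channel id and token are placeholders:

```python
import asyncio
import json
import uuid

import websockets  # pip install websockets

PUBSUB_URL = "wss://pubsub-edge.twitch.tv"

async def listen_channel_points(channel_id: str, oauth_token: str) -> None:
    async with websockets.connect(PUBSUB_URL) as ws:
        # LISTEN to channel-point redemptions for one channel. A production
        # client must also send {"type": "PING"} at least every five
        # minutes; that is omitted here for brevity.
        await ws.send(json.dumps({
            "type": "LISTEN",
            "nonce": uuid.uuid4().hex,
            "data": {
                "topics": [f"channel-points-channel-v1.{channel_id}"],
                "auth_token": oauth_token,
            },
        }))
        async for raw in ws:
            msg = json.loads(raw)
            if msg.get("type") == "MESSAGE":
                print(msg["data"]["topic"], msg["data"]["message"])

# asyncio.run(listen_channel_points("<channel-id>", "<oauth-token>"))
```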
293095,25269319672.0,IssuesEvent,2022-11-16 08:06:51,apache/accumulo,https://api.github.com/repos/apache/accumulo,closed,Broken or Flaky test: HalfDeadTServerIT.testRecover,bug test,"**Test name(s)** - HalfDeadTServerIT.testRecover **Describe the failure observed** It looks like this test has failed via timeout in the last two builds. It looks like it has failed at least once before as well, but twice in a row seems like more than a coincidence. The failing runs: - [April 7 2022](https://ci-builds.apache.org/job/Accumulo/job/main/311/org.apache.accumulo$accumulo-test/testReport/org.apache.accumulo.test.functional/HalfDeadTServerIT/testRecover/) - [April 6 2022](https://ci-builds.apache.org/job/Accumulo/job/main/org.apache.accumulo$accumulo-test/310/testReport/junit/org.apache.accumulo.test.functional/HalfDeadTServerIT/testRecover/) - [February 5 2022](https://ci-builds.apache.org/job/Accumulo/job/1.x-Hadoop2/org.apache.accumulo$accumulo-test/52/testReport/org.apache.accumulo.test.functional/HalfDeadTServerIT/testRecover/) It looks like they all timed out in the same spot:
Click to expand! ``` Caused by: org.junit.jupiter.api.AssertTimeout$ExecutionTimeoutException: Execution timed out in thread junit-timeout-thread-7 at java.base@11.0.12/java.lang.Object.wait(Native Method) at java.base@11.0.12/java.lang.Object.wait(Object.java:328) at java.base@11.0.12/java.lang.ProcessImpl.waitFor(ProcessImpl.java:495) at app//org.apache.accumulo.test.functional.HalfDeadTServerIT.test(HalfDeadTServerIT.java:218) at app//org.apache.accumulo.test.functional.HalfDeadTServerIT.testRecover(HalfDeadTServerIT.java:143) at java.base@11.0.12/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base@11.0.12/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base@11.0.12/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base@11.0.12/java.lang.reflect.Method.invoke(Method.java:566) at app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725) at app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at app//org.apache.accumulo.harness.Timeout$$Lambda$501/0x0000000800358840.get(Unknown Source) at app//org.junit.jupiter.api.AssertTimeout.lambda$assertTimeoutPreemptively$4(AssertTimeout.java:138) at app//org.junit.jupiter.api.AssertTimeout$$Lambda$442/0x00000008002ad840.call(Unknown Source) at java.base@11.0.12/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base@11.0.12/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base@11.0.12/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base@11.0.12/java.lang.Thread.run(Thread.java:834) ```
Which is this line in the test: https://github.com/apache/accumulo/blob/d454afd39c91ee0a1ab267e0caafc52254d209b0/test/src/main/java/org/apache/accumulo/test/functional/HalfDeadTServerIT.java#L218 **Testing Environment:** - First commit known to fail: bbae5c6da9dbfa25b4dedf2e79e9a969497d217c - Other commits that are failing: d454afd39c91ee0a1ab267e0caafc52254d209b0 and 6d1a9de2f6cba5a3fa5dcd0840cbf0371039eb79 **What have you tried already?** I have tried re-running these tests locally and have been unable to reproduce this failure. This may be caused by resource constraints. **Additional Context** It looks like the logs on all three of these test failures show something similar:
Click to expand! ``` DumpOutput(stdout):2022-04-07T06:08:55,582 [zookeeper.ServiceLock] DEBUG: event null None Expired DumpOutput(stdout):2022-04-07T06:08:55,587 [tserver.TabletServer] ERROR: Lost tablet server lock (reason = SESSION_EXPIRED), exiting. DumpOutput(stdout):2022-04-07T06:08:55,589 [server.GarbageCollectionLogger] DEBUG: gc G1 Young Generation=0.03(+0.00) secs G1 Old Generation=0.04(+0.00) secs freemem=71,002,128(-43,173,408) totalmem=142,606,336 ```
Click to expand! ``` DumpOutput(stdout):2022-04-06T06:19:08,847 [util.Retry] DEBUG: Sleeping for 1550ms before retrying operation DumpOutput(stdout):2022-04-06T06:19:08,847 [util.Retry] DEBUG: Sleeping for 1529ms before retrying operation DumpOutput(stdout):2022-04-06T06:19:10,334 [zookeeper.ZooSession] DEBUG: Session expired, state of current session : Expired DumpOutput(stdout):2022-04-06T06:19:10,335 [zookeeper.DistributedWorkQueue] INFO : Got unexpected zookeeper event: None for /accumulo/2b5b3353-1305-4890-973c-2ec7b7e1de69/recovery DumpOutput(stdout):2022-04-06T06:19:10,335 [zookeeper.ServiceLock] DEBUG: event null None Expired DumpOutput(stdout):2022-04-06T06:19:10,345 [tserver.TabletServer] ERROR: Lost tablet server lock (reason = SESSION_EXPIRED), exiting. DumpOutput(stdout):2022-04-06T06:19:10,347 [zookeeper.ZooSession] DEBUG: Removing closed ZooKeeper session to localhost:38779 DumpOutput(stdout):2022-04-06T06:19:10,347 [zookeeper.ZooSession] DEBUG: Connecting to localhost:38779 with timeout 15000 with auth DumpOutput(stdout):2022-04-06T06:19:10,347 [server.GarbageCollectionLogger] DEBUG: gc G1 Young Generation=0.04(+0.00) secs G1 Old Generation=0.05(+0.00) secs freemem=66,295,976(-48,341,288) totalmem=142,606,336 DumpOutput(stdout):2022-04-06T06:19:10,347 [server.GarbageCollectionLogger] WARN : GC pause checker not called in a timely fashion. Expected every 15.0 seconds but was 18.5 seconds since last check ```
Click to expand! ``` 2022-02-05 00:18:42,997 [util.Retry] DEBUG: Sleeping for 1000ms before retrying operation 2022-02-05 00:18:42,995 [zookeeper.ZooSession] DEBUG: Connecting to localhost:44669 with timeout 15000 with auth 2022-02-05 00:18:43,396 [zookeeper.ZooSession] DEBUG: Removing closed ZooKeeper session to localhost:44669 2022-02-05 00:18:43,396 [zookeeper.ZooSession] DEBUG: Connecting to localhost:44669 with timeout 15000 with auth 2022-02-05 00:18:44,605 [zookeeper.DistributedWorkQueue] INFO : Got unexpected zookeeper event: None for /accumulo/1cd2ca23-e434-4e07-8e80-3c3ff5f53ee2/replication/workqueue 2022-02-05 00:18:44,605 [Audit] INFO : operation: permitted; user: root; client: 67.195.81.162:34616; action: authenticate; 2022-02-05 00:18:44,605 [zookeeper.DistributedWorkQueue] INFO : Got unexpected zookeeper event: None for /accumulo/1cd2ca23-e434-4e07-8e80-3c3ff5f53ee2/recovery 2022-02-05 00:18:44,605 [zookeeper.ZooLock] DEBUG: event null None Disconnected 2022-02-05 00:18:44,608 [Audit] INFO : operation: permitted; user: root; client: 67.195.81.162:34616; action: performSystemAction; principal: root; 2022-02-05 00:18:44,610 [tracer.ZooTraceClient] DEBUG: Processing event for trace server zk watch 2022-02-05 00:18:44,613 [zookeeper.DistributedWorkQueue] INFO : Got unexpected zookeeper event: None for /accumulo/1cd2ca23-e434-4e07-8e80-3c3ff5f53ee2/bulk_failed_copyq 2022-02-05 00:18:44,615 [zookeeper.DistributedWorkQueue] INFO : Got unexpected zookeeper event: None for /accumulo/1cd2ca23-e434-4e07-8e80-3c3ff5f53ee2/replication/workqueue 2022-02-05 00:18:44,615 [zookeeper.DistributedWorkQueue] INFO : Got unexpected zookeeper event: None for /accumulo/1cd2ca23-e434-4e07-8e80-3c3ff5f53ee2/recovery 2022-02-05 00:18:44,615 [zookeeper.ZooLock] DEBUG: event null None Expired 2022-02-05 00:18:44,618 [tserver.TabletServer] ERROR: Lost tablet server lock (reason = SESSION_EXPIRED), exiting. 
2022-02-05 00:18:44,620 [server.GarbageCollectionLogger] DEBUG: gc G1 Young Generation=0.04(+0.00) secs G1 Old Generation=0.03(+0.00) secs freemem=175,325,160(-10,182,592) totalmem=209,715,200 2022-02-05 00:18:44,964 [datanode.DataNode] ERROR: 127.0.0.1:43529:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38814 dst: /127.0.0.1:43529 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:496) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:891) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:758) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:239) at java.base/java.lang.Thread.run(Thread.java:834) 2022-02-05 00:18:44,964 [datanode.DataNode] ERROR: 127.0.0.1:43529:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38818 dst: /127.0.0.1:43529 java.io.IOException: Premature EOF from inputStream (identical stack trace repeats) ```",1.0,0
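The Accumulo record above ends with the tablet server losing its lock on SESSION_EXPIRED, right after a GC-pause warning. A sketch of watching for the same session-loss condition with the kazoo ZooKeeper client; the ensemble address is a placeholder:

```python
from kazoo.client import KazooClient
from kazoo.protocol.states import KazooState

zk = KazooClient(hosts="localhost:2181", timeout=15.0)  # placeholder ensemble

def on_state_change(state):
    if state == KazooState.LOST:
        # Corresponds to SESSION_EXPIRED in the logs above: ephemeral lock
        # nodes are gone, so the holder must stop acting as the lock owner.
        print("session expired - lock lost, shut down cleanly")
    elif state == KazooState.SUSPENDED:
        print("disconnected - pause work until the session is confirmed")

zk.add_listener(on_state_change)
zk.start()
```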
54125,29524241304.0,IssuesEvent,2023-06-05 06:17:38,tailscale/tailscale,https://api.github.com/repos/tailscale/tailscale,closed,drastic reduction in network bandwidth when routing traffic via exit node,L2 Few P2 Aggravating T3 Performance/Debugging exit-node bug,"### What is the issue? I have an Exit node set up in the AWS N. Virginia (us-east-1) region. It is a t2.micro (Amazon Linux), which allows for speeds up to 1 Gbps. Seeing a drastic reduction in network bandwidth when routing traffic via the same. I am located in New York; please guide on some of the possible reasons for the issue. ![image (2)](https://user-images.githubusercontent.com/106259068/187134865-e191797a-e382-47bc-a36d-2ed78008b7ba.png) ![image (3)](https://user-images.githubusercontent.com/106259068/187134871-a81332d0-358c-4213-bcd7-9a41b32d1fbd.png) ### Steps to reproduce _No response_ ### Are there any recent changes that introduced the issue? _No response_ ### OS _No response_ ### OS version Amazon linux 2 ### Tailscale version 1.28.0 ### Bug report _No response_",True,0
3341,13519009042.0,IssuesEvent,2020-09-15 00:45:08,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,closed,[Automation API] relax stack name requirements,area/automation-api,"Since OSS stack identity will take a bit longer to flesh out, we're going to relax the constraints and remove any of the magic around ""fully qualified stack names"". This means removing the validation and making docs clear that the fqsn is optional. There are also some methods like `pulumi stack` and `pulumi stack ls` that make an effort to massage names into the fully qualified form before returning to the user. We'll remove this in its entirety for the time being.",1.0,1
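For the Tailscale exit-node record above, a rough way to quantify the drop is to time the same download with and without the exit node. A sketch; the test URL is a placeholder, and iperf3 against a host you control would be more rigorous:

```python
import time
import urllib.request

TEST_URL = "https://speedtest.example.com/100MB.bin"  # placeholder test file

def measure_mbps(url: str = TEST_URL) -> float:
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=120) as resp:
        size = len(resp.read())
    return size * 8 / (time.monotonic() - start) / 1e6

# Run once with `tailscale up --exit-node=<node>` and once without,
# then compare the two readings.
print(f"{measure_mbps():.1f} Mbit/s")
```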
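For the Pulumi record above, the relaxation means the Automation API should accept a bare stack name, with the fully qualified form optional. A sketch with the Python Automation API under that assumption; the org/project names are placeholders:

```python
import pulumi
from pulumi import automation as auto

def program() -> None:
    pulumi.export("greeting", "hello")  # trivial placeholder program

# After the relaxation, a bare stack name should be enough; the fully
# qualified "my-org/demo/dev" form stays accepted but is no longer forced.
stack = auto.create_or_select_stack(
    stack_name="dev", project_name="demo", program=program
)
stack.up(on_output=print)
```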
146376,19402617004.0,IssuesEvent,2021-12-19 13:06:56,growerp/growerp-chat,https://api.github.com/repos/growerp/growerp-chat,opened,WS-2021-0419 (High) detected in gson-2.8.0.jar,security vulnerability,"## WS-2021-0419 - High Severity Vulnerability
Vulnerable Library - gson-2.8.0.jar

Gson JSON library

Library home page: https://github.com/google/gson

Path to dependency file: growerp-chat/build.gradle

Path to vulnerable library: e/caches/modules-2/files-2.1/com.google.code.gson/gson/2.8.0/c4ba5371a29ac9b2ad6129b1d39ea38750043eff/gson-2.8.0.jar

Dependency Hierarchy: - :x: **gson-2.8.0.jar** (Vulnerable Library)

Found in HEAD commit: 4cb9afca7b4ab356e0863ec7515cb10a779ea02d

Found in base branch: master

Vulnerability Details

Denial of Service vulnerability was discovered in gson before 2.8.9 via the writeReplace() method.

Publish Date: 2021-10-11

URL: WS-2021-0419

CVSS 3 Score Details (7.7)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: High
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: Low
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/google/gson/releases/tag/gson-parent-2.8.9

Release Date: 2021-10-11

Fix Resolution: com.google.code.gson:gson:2.8.9

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,0 8974,27295087239.0,IssuesEvent,2023-02-23 19:37:41,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,Cypress Test - Change Authorization scope from 'Kong API Key with ACL' to 'OAuth2 Client Credential',automation aps-demo," - [ ] Check manually if 'Kong API Key with ACL' to 'OAuth2 Client Credential' works correctly - [ ] Prepare automation test to change authorization scope 1. Change Authorization profile from Kong ACL-API to Client Credential 1.1 Authenticates api owner 1.2 Activates the namespace 1.3 Create an authorization profile 1.4 Deactivate the service for Test environment 1.5 Update the authorization scope from Kong ACL-API to Client Credential 1.6 applies authorization plugin to service published to Kong Gateway 1.7 activate the service for Test environment 2. Developer creates an access request for Client ID/Secret authenticator 2.1 Developer logs in 2.2 Creates an application 2.3 Creates an access request 3. Access manager approves developer access request for Client ID/Secret authenticator 3.1 Access Manager logs in 3.2 Access Manager approves developer access request 3.3 approves an access request 4. Make an API request using Client ID, Secret, and Access Token 4.1 Get access token using client ID and secret; make API request",1.0,1
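Step 4.1 of the Cypress record above is a standard OAuth2 client-credentials exchange. A sketch of that flow; the token URL is a placeholder, not the portal's actual endpoint:

```python
import requests

TOKEN_URL = "https://auth.example.com/realms/demo/protocol/openid-connect/token"

def get_access_token(client_id: str, client_secret: str) -> str:
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()["access_token"]

def call_api(url: str, token: str) -> requests.Response:
    # The gateway now validates the bearer token instead of API key + ACL.
    return requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=10)
```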
187518,22045790240.0,IssuesEvent,2022-05-30 01:26:10,utopikkad/my-Todo-List,https://api.github.com/repos/utopikkad/my-Todo-List,closed,"CVE-2021-27290 (High) detected in ssri-7.1.0.tgz, ssri-6.0.1.tgz - autoclosed",security vulnerability,"## CVE-2021-27290 - High Severity Vulnerability
Vulnerable Libraries - ssri-7.1.0.tgz, ssri-6.0.1.tgz

ssri-7.1.0.tgz

Standard Subresource Integrity library -- parses, serializes, generates, and verifies integrity metadata according to the SRI spec.

Library home page: https://registry.npmjs.org/ssri/-/ssri-7.1.0.tgz

Dependency Hierarchy: - build-angular-0.900.5.tgz (Root Library) - cacache-13.0.1.tgz - :x: **ssri-7.1.0.tgz** (Vulnerable Library)

ssri-6.0.1.tgz

Standard Subresource Integrity library -- parses, serializes, generates, and verifies integrity metadata according to the SRI spec.

Library home page: https://registry.npmjs.org/ssri/-/ssri-6.0.1.tgz

Dependency Hierarchy: - cli-7.3.9.tgz (Root Library) - pacote-9.4.0.tgz - :x: **ssri-6.0.1.tgz** (Vulnerable Library)

Found in HEAD commit: a575471c4d32902c4fe3a01ed7cb42670c976994

Vulnerability Details

ssri 5.2.2-8.0.0, fixed in 8.0.1, processes SRIs using a regular expression which is vulnerable to a denial of service. Malicious SRIs could take an extremely long time to process, leading to denial of service. This issue only affects consumers using the strict option.
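For illustration of the vulnerability class only (this is not ssri's actual expression), a nested quantifier shows how a short crafted input can dominate CPU time through catastrophic backtracking:

```python
import re
import time

pattern = re.compile(r"^(a+)+$")  # nested quantifier: classic backtracking trap

for n in (18, 20, 22):
    s = "a" * n + "!"  # trailing "!" forces the engine to try every split
    t0 = time.perf_counter()
    pattern.match(s)
    print(n, f"{time.perf_counter() - t0:.2f}s")  # grows exponentially with n
```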

Publish Date: 2021-03-12

URL: CVE-2021-27290

CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/advisories/GHSA-vx3p-948g-6vhq

Release Date: 2021-03-12

Fix Resolution: ssri - 6.0.2,7.1.1,8.0.1

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,0 253819,8066345266.0,IssuesEvent,2018-08-04 14:40:40,crslab/cloud-vision-explorer,https://api.github.com/repos/crslab/cloud-vision-explorer,closed,[Low] Lock panning when interpolating,bug low priority,"In slider mode, the panning functionality is still turned on. We want to block this!",1.0,0
We want to block this!",0, lock panning when interpolating in slider mode the panning functionality is still turned on we want to block this ,0 180188,14740758723.0,IssuesEvent,2021-01-07 09:35:09,PKJrod/CPW-212-LawnMowingService,https://api.github.com/repos/PKJrod/CPW-212-LawnMowingService,opened,Create documentation,documentation,Add some documentation to the code when creating algorithm for the calculation to show what is being done ,1.0,Create documentation - Add some documentation to the code when creating algorithm for the calculation to show what is being done ,0,create documentation add some documentation to the code when creating algorithm for the calculation to show what is being done ,0 6611,23515603677.0,IssuesEvent,2022-08-18 20:59:56,o3de/o3de,https://api.github.com/repos/o3de/o3de,opened,PhysX Heightfield Collider Component returns a memory access violation when getting its Component Property Tree with types ,kind/bug needs-triage kind/automation sig/simulation,"**Describe the bug** When attempting to get the **Component Property Tree** from a **PhysX Heightfield Collider Component** a memory access violation is returned **Steps to reproduce** Steps to reproduce the behavior: 1. Create a Python Editor Test that makes a call to get the **Component Property Tree** from the **PhysX Heightfield Collider Component**. ``` test_entity = EditorEntity.create_editor_entity(""Test"") test_component = test_entity.add_component(""PhysX Heightfield Collider"") print(test_component.get_property_type_visibility()) ``` or ``` test_entity = hydra.Entity(""test"") test_entity.create_entity(position, [""PhysX Heightfield Collider""]) component = test_entity.components[0] print(hydra.get_property_tree(component)) ``` 2. Run automation **Expected behavior** A property tree with paths is returned and printed to the stream **Actual behavior** A Read Access Memory exception is returned **Callstack** ``` ``` ",1.0,"PhysX Heightfield Collider Component returns a memory access violation when getting its Component Property Tree with types - **Describe the bug** When attempting to get the **Component Property Tree** from a **PhysX Heightfield Collider Component** a memory access violation is returned **Steps to reproduce** Steps to reproduce the behavior: 1. Create a Python Editor Test that makes a call to get the **Component Property Tree** from the **PhysX Heightfield Collider Component**. ``` test_entity = EditorEntity.create_editor_entity(""Test"") test_component = test_entity.add_component(""PhysX Heightfield Collider"") print(test_component.get_property_type_visibility()) ``` or ``` test_entity = hydra.Entity(""test"") test_entity.create_entity(position, [""PhysX Heightfield Collider""]) component = test_entity.components[0] print(hydra.get_property_tree(component)) ``` 2.
Run automation **Expected behavior** A property tree with paths is returned and printed to the stream **Actual behavior** A Read Access Memory exception is returned **Callstack** ``` ``` ",1,physx heightfield collider component returns a memory access violation when getting its component property tree with types describe the bug when attempting to get the component property tree from a physx heightfield collider component a memory access violation is returned steps to reproduce steps to reproduce the behavior create a python editor test that makes a call to get the component property tree from the physx heightfield collider component test entity editorentity create editor entity test test component test entity add component physx heightfield collider print test component get property type visibility or test entity hydra entity test entity create entity position component test entity components print hydra get property tree component run automation expected behavior a property tree with paths is returned and printed to the stream actual behavior a read access memory exception is returned callstack ,1 2114,11420498024.0,IssuesEvent,2020-02-03 10:13:43,keptn/keptn,https://api.github.com/repos/keptn/keptn,closed,Refactoring: Only install gcloud management cli where it is really necessary,automation,"Right now we install gcloud for every stage of our builds. This always takes up a lot of time and is not needed. The only parts where this is needed is: - cron (we create a GKE cluster to test) - build_cil (we upload the binary/artifact to gcloud) ",1.0,"Refactoring: Only install gcloud management cli where it is really necessary - Right now we install gcloud for every stage of our builds. This always takes up a lot of time and is not needed. The only parts where this is needed is: - cron (we create a GKE cluster to test) - build_cil (we upload the binary/artifact to gcloud) ",1,refactoring only install gcloud management cli where it is really necessary right now we install gcloud for every stage of our builds this always takes up a lot of time and is not needed the only parts where this is needed is cron we create a gke cluster to test build cil we upload the binary artifact to gcloud ,1 8895,27182927035.0,IssuesEvent,2023-02-18 21:30:32,Abstrat-Technologies/rustedanvil,https://api.github.com/repos/Abstrat-Technologies/rustedanvil,closed,[CORE][BUG] Build failure,bug core automation,"**Describe the bug** npm is unable to successfully build from the core due to a error with dependencies **To Reproduce** `npm ci` **Expected behavior** [Return check with pass](https://i.imgur.com/CjPff80.png) **Screenshots** [Screen 1](https://i.imgur.com/56jVosx.png) **Additional context** Build scrips included to stop exactly this of note is the weird behaviour of the modules, took 3 reinstalls before it behaved ",1.0,"[CORE][BUG] Build failure - **Describe the bug** npm is unable to successfully build from the core due to a error with dependencies **To Reproduce** `npm ci` **Expected behavior** [Return check with pass](https://i.imgur.com/CjPff80.png) **Screenshots** [Screen 1](https://i.imgur.com/56jVosx.png) **Additional context** Build scrips included to stop exactly this of note is the weird behaviour of the modules, took 3 reinstalls before it behaved ",1, build failure describe the bug npm is unable to successfully build from the core due to a error with dependencies to reproduce npm ci expected behavior screenshots additional context build scrips included to stop exactly this of note is the weird behaviour 
of the modules took reinstalls before it behaved ,1 123943,4889306823.0,IssuesEvent,2016-11-18 09:46:22,kubernetes/dashboard,https://api.github.com/repos/kubernetes/dashboard,closed,CPU and Memory usage monitoring feature,area/usability kind/feature priority/P1,"I am working on a new cpu and memory monitoring feature for our dashboard. The aim is to add graphs of CPU and memory usage to the details page. Example can be seen in the [design specification ](https://github.com/kubernetes/dashboard/blob/master/docs/design/mockups/23-03-2016-scale-and-navigation/single-resource-page-template.png). My idea is to show not only resource usage versus time but also add extra annotations to the graph, showing events of interest and max/min resource consumtion. I would really appreciate some discussion here as I am not exactly sure how the graph should look like and how the annotations should be displayed so that the the feature is both easy to use and neat. The goal of this discussion is to create more detailed mock designs. cc @romlein @Lukenickerson @floreks @bryk @pwittrock ",1.0,"CPU and Memory usage monitoring feature - I am working on a new cpu and memory monitoring feature for our dashboard. The aim is to add graphs of CPU and memory usage to the details page. Example can be seen in the [design specification ](https://github.com/kubernetes/dashboard/blob/master/docs/design/mockups/23-03-2016-scale-and-navigation/single-resource-page-template.png). My idea is to show not only resource usage versus time but also add extra annotations to the graph, showing events of interest and max/min resource consumtion. I would really appreciate some discussion here as I am not exactly sure how the graph should look like and how the annotations should be displayed so that the the feature is both easy to use and neat. The goal of this discussion is to create more detailed mock designs. cc @romlein @Lukenickerson @floreks @bryk @pwittrock ",0,cpu and memory usage monitoring feature i am working on a new cpu and memory monitoring feature for our dashboard the aim is to add graphs of cpu and memory usage to the details page example can be seen in the design specification my idea is to show not only resource usage versus time but also add extra annotations to the graph showing events of interest and max min resource consumtion i would really appreciate some discussion here as i am not exactly sure how the graph should look like and how the annotations should be displayed so that the the feature is both easy to use and neat the goal of this discussion is to create more detailed mock designs cc romlein lukenickerson floreks bryk pwittrock ,0 8825,27172300587.0,IssuesEvent,2023-02-17 20:39:05,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Invalid quota returned,area:OneDrive for Business area:DriveItem Storage Needs: Investigation automation:Closed,"Hello, I'm having a problem that was marked as solved in another issue regarding invalid quotas. 
reference: https://github.com/OneDrive/onedrive-api-docs/issues/1245 GET `/https://graph.microsoft.com/v1.0/me/drives` ``` ""quota"": { ""deleted"": 0, ""remaining"": 0, ""total"": 0, ""used"": 0 } ``` ``` { ""cache-control"": ""private"", ""client-request-id"": ""693b258a-c980-5224-94c6-91e09dc58ffc"", ""content-length"": ""583"", ""content-type"": ""application/json;odata.metadata=minimal;odata.streaming=true;IEEE754Compatible=false;charset=utf-8"", ""request-id"": ""f903c832-43e8-4bef-8f43-26140337859f"" } ```",1.0,"Invalid quota returned - Hello, I'm having a problem that was marked as solved in another issue regarding invalid quotas. reference: https://github.com/OneDrive/onedrive-api-docs/issues/1245 GET `/https://graph.microsoft.com/v1.0/me/drives` ``` ""quota"": { ""deleted"": 0, ""remaining"": 0, ""total"": 0, ""used"": 0 } ``` ``` { ""cache-control"": ""private"", ""client-request-id"": ""693b258a-c980-5224-94c6-91e09dc58ffc"", ""content-length"": ""583"", ""content-type"": ""application/json;odata.metadata=minimal;odata.streaming=true;IEEE754Compatible=false;charset=utf-8"", ""request-id"": ""f903c832-43e8-4bef-8f43-26140337859f"" } ```",1,invalid quota returned hello i m having a problem that was marked as solved in another issue regarding invalid quotas reference get quota deleted remaining total used cache control private client request id content length content type application json odata metadata minimal odata streaming true false charset utf request id ,1 6536,23368609467.0,IssuesEvent,2022-08-10 17:37:09,webanno/webanno,https://api.github.com/repos/webanno/webanno,closed,Remote API for Automation,🆕Enhancement Module: Remote API Module: Automation ➔ INCEpTION,"As we have remote API for annotation, it is better also to have remote API for automation to automatically tag new documents remotely inside WebAnno",1.0,"Remote API for Automation - As we have remote API for annotation, it is better also to have remote API for automation to automatically tag new documents remotely inside WebAnno",1,remote api for automation as we have remote api for annotation it is better also to have remote api for automation to automatically tag new documents remotely inside webanno,1 55182,23409033654.0,IssuesEvent,2022-08-12 15:30:05,elastic/kibana,https://api.github.com/repos/elastic/kibana,closed,[i18n] Check usage of FormattedNumber instead of Numeral JS,Team:Core Project:i18n enhancement loe:hours Team:AppServicesSv Feature:FieldFormatters impact:low,"Check whether we can use `FormattedNumber` instead of `Numeral JS` and whether we should set the locale for `Numeral JS` according to `i18n.locale` preference. Note: There is a field `Formatting locale` in [Advanced settings](https://github.com/LeanidShutau/kibana/blob/master/src/legacy/core_plugins/kibana/ui_setting_defaults.js#L696) that is related to locale for `Numeral JS` /cc @azasypkin ",1.0,"[i18n] Check usage of FormattedNumber instead of Numeral JS - Check whether we can use `FormattedNumber` instead of `Numeral JS` and whether we should set the locale for `Numeral JS` according to `i18n.locale` preference. 
Note: There is a field `Formatting locale` in [Advanced settings](https://github.com/LeanidShutau/kibana/blob/master/src/legacy/core_plugins/kibana/ui_setting_defaults.js#L696) that is related to locale for `Numeral JS` /cc @azasypkin ",0, check usage of formattednumber instead of numeral js check whether we can use formattednumber instead of numeral js and whether we should set the locale for numeral js according to locale preference note there is a field formatting locale in that is related to locale for numeral js cc azasypkin ,0 3096,13081035463.0,IssuesEvent,2020-08-01 09:35:47,home-assistant/core,https://api.github.com/repos/home-assistant/core,closed,"Error when open ""add automation dialog""",integration: device_automation stale," ## The problem When you add automation for a device that have entries missing in entry register HA throw an error: This causes add automation dialog to be empty even if there are possible automation to add. This can be fixed by adding a check in _async_get_device_automations ```py for entry_id in device.config_entries: config_entry = hass.config_entries.async_get_entry(entry_id) if config_entry: #Add this check domains.add(config_entry.domain) ``` I don't know what cause the device pointing on missing entities, maybe there is a bug there also. ## Environment Home Assistant: 0.108.5 - Home Assistant Core release with the issue: 0.108.5 - Last working Home Assistant Core release (if known): Never - Operating environment (Home Assistant/Supervised/Docker/venv): Ubunto with HassIO ## Problem-relevant `configuration.yaml` ```yaml ``` ## Traceback/Error logs ```txt 2020-04-16 21:53:06 ERROR (MainThread) [homeassistant.components.websocket_api.http.connection.140597786213904] Error handling message: Unknown error Traceback (most recent call last): File ""/home/hakan/github/home-assistant/homeassistant/components/websocket_api/decorators.py"", line 20, in _handle_async_response await func(hass, connection, msg) File ""/home/hakan/github/home-assistant/homeassistant/components/device_automation/__init__.py"", line 189, in with_error_handling await func(hass, connection, msg) File ""/home/hakan/github/home-assistant/homeassistant/components/device_automation/__init__.py"", line 224, in websocket_device_automation_list_conditions conditions = await _async_get_device_automations(hass, ""condition"", device_id) File ""/home/hakan/github/home-assistant/homeassistant/components/device_automation/__init__.py"", line 129, in _async_get_device_automations domains.add(config_entry.domain) AttributeError: 'NoneType' object has no attribute 'domain' ``` ## Additional information ",1.0,"Error when open ""add automation dialog"" - ## The problem When you add automation for a device that have entries missing in entry register HA throw an error: This causes add automation dialog to be empty even if there are possible automation to add. This can be fixed by adding a check in _async_get_device_automations ```py for entry_id in device.config_entries: config_entry = hass.config_entries.async_get_entry(entry_id) if config_entry: #Add this check domains.add(config_entry.domain) ``` I don't know what cause the device pointing on missing entities, maybe there is a bug there also. 
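As a sanity check on that guard clause, here is a self-contained sketch that mimics the registry lookup with plain Python objects; `FakeEntry`, `FakeDevice`, and the dict registry are hypothetical stand-ins for illustration, not Home Assistant's real API:

```python
from dataclasses import dataclass, field

@dataclass
class FakeEntry:                      # hypothetical stand-in for a config entry
    domain: str

@dataclass
class FakeDevice:                     # hypothetical stand-in for a device record
    config_entries: list = field(default_factory=list)

def collect_domains(registry, device):
    '''Mirrors the proposed fix: skip entry ids that no longer resolve.'''
    domains = set()
    for entry_id in device.config_entries:
        config_entry = registry.get(entry_id)  # None for stale/missing ids
        if config_entry:                       # the added guard from the issue
            domains.add(config_entry.domain)
    return domains

registry = {'ok-id': FakeEntry('light')}
device = FakeDevice(config_entries=['ok-id', 'stale-id'])
print(collect_domains(registry, device))       # {'light'}, and no AttributeError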
## Environment Home Assistant: 0.108.5 - Home Assistant Core release with the issue: 0.108.5 - Last working Home Assistant Core release (if known): Never - Operating environment (Home Assistant/Supervised/Docker/venv): Ubunto with HassIO ## Problem-relevant `configuration.yaml` ```yaml ``` ## Traceback/Error logs ```txt 2020-04-16 21:53:06 ERROR (MainThread) [homeassistant.components.websocket_api.http.connection.140597786213904] Error handling message: Unknown error Traceback (most recent call last): File ""/home/hakan/github/home-assistant/homeassistant/components/websocket_api/decorators.py"", line 20, in _handle_async_response await func(hass, connection, msg) File ""/home/hakan/github/home-assistant/homeassistant/components/device_automation/__init__.py"", line 189, in with_error_handling await func(hass, connection, msg) File ""/home/hakan/github/home-assistant/homeassistant/components/device_automation/__init__.py"", line 224, in websocket_device_automation_list_conditions conditions = await _async_get_device_automations(hass, ""condition"", device_id) File ""/home/hakan/github/home-assistant/homeassistant/components/device_automation/__init__.py"", line 129, in _async_get_device_automations domains.add(config_entry.domain) AttributeError: 'NoneType' object has no attribute 'domain' ``` ## Additional information ",1,error when open add automation dialog read this first if you need additional help with this template please refer to make sure you are running the latest version of home assistant before reporting an issue do not report issues for integrations if you are using custom components or integrations provide as many details as possible paste logs configuration samples and code into the backticks do not delete any text from this template otherwise your issue may be closed without comment the problem describe the issue you are experiencing here to communicate to the maintainers tell us what you were trying to do and what happened when you add automation for a device that have entries missing in entry register ha throw an error this causes add automation dialog to be empty even if there are possible automation to add this can be fixed by adding a check in async get device automations py for entry id in device config entries config entry hass config entries async get entry entry id if config entry add this check domains add config entry domain i don t know what cause the device pointing on missing entities maybe there is a bug there also environment provide details about the versions you are using which helps us to reproduce and find the issue quicker version information is found in the home assistant frontend developer tools info home assistant home assistant core release with the issue last working home assistant core release if known never operating environment home assistant supervised docker venv ubunto with hassio problem relevant configuration yaml an example configuration that caused the problem for you fill this out even if it seems unimportant to you please be sure to remove personal information like passwords private urls and other credentials yaml traceback error logs if you come across any trace or error logs please provide them txt error mainthread error handling message unknown error traceback most recent call last file home hakan github home assistant homeassistant components websocket api decorators py line in handle async response await func hass connection msg file home hakan github home assistant homeassistant components device automation init py line in with 
error handling await func hass connection msg file home hakan github home assistant homeassistant components device automation init py line in websocket device automation list conditions conditions await async get device automations hass condition device id file home hakan github home assistant homeassistant components device automation init py line in async get device automations domains add config entry domain attributeerror nonetype object has no attribute domain additional information ,1 15729,27802989799.0,IssuesEvent,2023-03-17 17:16:40,renovatebot/renovate,https://api.github.com/repos/renovatebot/renovate,opened,Stability Days documentation used ation and update stabilityDays builds to link to this documentation,type:feature status:requirements priority-5-triage,"### What would you like Renovate to be able to do? Create a dedicated documentation page to explain how **Stability Days** works. Then use this new page as the hyperlink for the `renovate/stability-days` build, like those below ![image](https://user-images.githubusercontent.com/386277/225973577-2e109f04-6d7d-4736-a884-e09ac70765ca.png) ### If you have any ideas on how this should be implemented, please tell us here. create new docs page and update builds to reference it ### Is this a feature you are interested in implementing yourself? No",1.0,"Stability Days documentation used ation and update stabilityDays builds to link to this documentation - ### What would you like Renovate to be able to do? Create a dedicated documentation page to explain how **Stability Days** works. Then use this new page as the hyperlink for the `renovate/stability-days` build, like those below ![image](https://user-images.githubusercontent.com/386277/225973577-2e109f04-6d7d-4736-a884-e09ac70765ca.png) ### If you have any ideas on how this should be implemented, please tell us here. create new docs page and update builds to reference it ### Is this a feature you are interested in implementing yourself? 
No",0,stability days documentation used ation and update stabilitydays builds to link to this documentation what would you like renovate to be able to do create a dedicated documentation page to explain how stability days works then use this new page as the hyperlink for the renovate stability days build like those below if you have any ideas on how this should be implemented please tell us here create new docs page and update builds to reference it is this a feature you are interested in implementing yourself no,0 1698,10586173875.0,IssuesEvent,2019-10-08 19:08:38,perfsonar/project,https://api.github.com/repos/perfsonar/project,opened,Ansible Deployment Troubleshooting for PWA,Automation enhancement,"Make sure testpoints, toolkits, and dashboards can contact PWA after deployment",1.0,"Ansible Deployment Troubleshooting for PWA - Make sure testpoints, toolkits, and dashboards can contact PWA after deployment",1,ansible deployment troubleshooting for pwa make sure testpoints toolkits and dashboards can contact pwa after deployment,1 133929,12557887127.0,IssuesEvent,2020-06-07 14:19:25,aaesalamanca/d-eventer,https://api.github.com/repos/aaesalamanca/d-eventer,closed,Diseñar modelos,documentation,"- [x] Diseñar árbol JSON - [x] Diseñar MVVM - [x] Diseñar arquitectura/infraestructura _cloud_ ",1.0,"Diseñar modelos - - [x] Diseñar árbol JSON - [x] Diseñar MVVM - [x] Diseñar arquitectura/infraestructura _cloud_ ",0,diseñar modelos diseñar árbol json diseñar mvvm diseñar arquitectura infraestructura cloud ,0 82950,23929254473.0,IssuesEvent,2022-09-10 09:24:44,bitcoin/bitcoin,https://api.github.com/repos/bitcoin/bitcoin,closed,./configure error with --experimental-kernel-lib,Build system,"The new, experimental `bitcoin-chainstate` is default configured off (https://github.com/bitcoin/bitcoin/pull/24304). Explicitly turning it off works properly on master: `./configure --disable-experimental-util-chainstate` > checking whether to build experimental bitcoin-chainstate... no However, when I add `--without-experimental-kernel-lib` (which is superfluous, since default is to build if we're building libraries and the experimental bitcoin-chainstate executable - it is default to build libraries, but default not to build bitcoin-chainstate, which we've redundantly stated explicitly). `./configure --disable-experimental-util-chainstate --without-experimental-kernel-lib` > checking whether to build experimental bitcoin-chainstate... configure: error: experimental bitcoin-chainstate cannot be built without the experimental bitcoinkernel library. Use --with-experimental-kernel-lib It seems like it is trying to build the bitcoin-chainstate executable for some reason? I would expect the same output as before: > checking whether to build experimental bitcoin-chainstate... no This is easy to reproduce on master. ",1.0,"./configure error with --experimental-kernel-lib - The new, experimental `bitcoin-chainstate` is default configured off (https://github.com/bitcoin/bitcoin/pull/24304). Explicitly turning it off works properly on master: `./configure --disable-experimental-util-chainstate` > checking whether to build experimental bitcoin-chainstate... no However, when I add `--without-experimental-kernel-lib` (which is superfluous, since default is to build if we're building libraries and the experimental bitcoin-chainstate executable - it is default to build libraries, but default not to build bitcoin-chainstate, which we've redundantly stated explicitly). 
`./configure --disable-experimental-util-chainstate --without-experimental-kernel-lib` > checking whether to build experimental bitcoin-chainstate... configure: error: experimental bitcoin-chainstate cannot be built without the experimental bitcoinkernel library. Use --with-experimental-kernel-lib It seems like it is trying to build the bitcoin-chainstate executable for some reason? I would expect the same output as before: > checking whether to build experimental bitcoin-chainstate... no This is easy to reproduce on master. ",0, configure error with experimental kernel lib the new experimental bitcoin chainstate is default configured off explicitly turning it off works properly on master configure disable experimental util chainstate checking whether to build experimental bitcoin chainstate no however when i add without experimental kernel lib which is superfluous since default is to build if we re building libraries and the experimental bitcoin chainstate executable it is default to build libraries but default not to build bitcoin chainstate which we ve redundantly stated explicitly configure disable experimental util chainstate without experimental kernel lib checking whether to build experimental bitcoin chainstate configure error experimental bitcoin chainstate cannot be built without the experimental bitcoinkernel library use with experimental kernel lib it seems like it is trying to build the bitcoin chainstate executable for some reason i would expect the same output as before checking whether to build experimental bitcoin chainstate no this is easy to reproduce on master ,0 9583,4559322659.0,IssuesEvent,2016-09-14 01:32:18,rust-lang/rust,https://api.github.com/repos/rust-lang/rust,closed,Move compiler-rt build into a crate dependency of libcore,A-build A-rustbuild E-help-wanted,"One of the major blockers of our dream to ""lazily compile std"" is to ensure that we have the ability to compile compiler-rt on-demand. This is a repository maintained by LLVM which contains a large set of intrinsics which LLVM lowers function calls down to on some platforms. Unfortunately the build system of compiler-rt is a bit of a nightmare. We, at the time of the writing, have a large pile of hacks on its makefile-based build system to get things working, and it appears that LLVM has deprecated this build system anyway. We're [trying to move to cmake](https://github.com/rust-lang/rust/pull/34055) but it's still unfortunately a nightmare compiling compiler-rt. To solve both these problems in one fell swoop, @brson and I were chatting this morning and had the idea of moving the build entirely to a build script of libcore, and basically just using gcc-rs to compile compiler-rt instead of using compiler-rt's build system. This means we don't have to have LLVM installed (why does compiler-rt need llvm-config?) and cross-compiling should be *much* more robust/easy as we're driving the compiles, not working around an opaque build system. To make matters worse in compiler-rt as well it contains code for a massive number of intrinsics we'll probably never use. And *even worse* these bits and pieces of code often cause compile failures which don't end up mattering in the end. To solve this problem we should just whitelist a set of intrinsics to build and ignore all others. This may be a bit of a rocky road as we discover some we should have compiled but forgot, but in theory we should be able to select a subset to compile and be done with it. 
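To make the whitelist idea concrete, here is a rough sketch of what such a build step could look like, written in Python purely for illustration (the actual proposal uses a gcc-rs build script in Rust; the source file names and compiler flags below are assumptions, not the real rust-lang build code):

```python
import subprocess
from pathlib import Path

# Hypothetical whitelist of compiler-rt intrinsic sources; the real set
# would be grown by fixing link errors as they surface, as described above.
WHITELIST = ['muldi3.c', 'udivmoddi4.c', 'floatundisf.c']

def build_intrinsics(src_dir, out_dir, cc='cc'):
    '''Compile only the whitelisted compiler-rt sources into object files.'''
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    objects = []
    for name in WHITELIST:
        src = Path(src_dir) / name
        obj = out / (src.stem + '.o')
        # Flags are placeholders; a real build would mirror gcc-rs defaults.
        subprocess.run([cc, '-O2', '-c', str(src), '-o', str(obj)], check=True)
        objects.append(obj)
    return objects
```

The point of the sketch is that the driver, not compiler-rt's own makefiles, decides exactly which intrinsics get compiled, which is what makes cross-compiling tractable.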
This may make updating compiler-rt difficult, but we've already only done it once in like the past year or two years, so we don't seem to need to do this too urgently. This is a worry to keep in mind, however. Basically here's what I think we should do: * Add a build script to libcore, link gcc-rs into it * Compile select portions of compiler-rt as part of this build script, using gcc-rs * Disable injection of compiler-rt in the compiler Staging this is still a bit up in the air, but I'm curious what others think about this as well. cc @rust-lang/tools cc @brson cc @japaric ",2.0,"Move compiler-rt build into a crate dependency of libcore - One of the major blockers of our dream to ""lazily compile std"" is to ensure that we have the ability to compile compiler-rt on-demand. This is a repository maintained by LLVM which contains a large set of intrinsics which LLVM lowers function calls down to on some platforms. Unfortunately the build system of compiler-rt is a bit of a nightmare. We, at the time of the writing, have a large pile of hacks on its makefile-based build system to get things working, and it appears that LLVM has deprecated this build system anyway. We're [trying to move to cmake](https://github.com/rust-lang/rust/pull/34055) but it's still unfortunately a nightmare compiling compiler-rt. To solve both these problems in one fell swoop, @brson and I were chatting this morning and had the idea of moving the build entirely to a build script of libcore, and basically just using gcc-rs to compile compiler-rt instead of using compiler-rt's build system. This means we don't have to have LLVM installed (why does compiler-rt need llvm-config?) and cross-compiling should be *much* more robust/easy as we're driving the compiles, not working around an opaque build system. To make matters worse in compiler-rt as well it contains code for a massive number of intrinsics we'll probably never use. And *even worse* these bits and pieces of code often cause compile failures which don't end up mattering in the end. To solve this problem we should just whitelist a set of intrinsics to build and ignore all others. This may be a bit of a rocky road as we discover some we should have compiled but forgot, but in theory we should be able to select a subset to compile and be done with it. This may make updating compiler-rt difficult, but we've already only done it once in like the past year or two years, so we don't seem to need to do this too urgently. This is a worry to keep in mind, however. Basically here's what I think we should do: * Add a build script to libcore, link gcc-rs into it * Compile select portions of compiler-rt as part of this build script, using gcc-rs * Disable injection of compiler-rt in the compiler Staging this is still a bit up in the air, but I'm curious what others think about this as well. 
cc @rust-lang/tools cc @brson cc @japaric ",0,move compiler rt build into a crate dependency of libcore one of the major blockers of our dream to lazily compile std is to ensure that we have the ability to compile compiler rt on demand this is a repository maintained by llvm which contains a large set of intrinsics which llvm lowers function calls down to on some platforms unfortunately the build system of compiler rt is a bit of a nightmare we at the time of the writing have a large pile of hacks on its makefile based build system to get things working and it appears that llvm has deprecated this build system anyway we re but it s still unfortunately a nightmare compiling compiler rt to solve both these problems in one fell swoop brson and i were chatting this morning and had the idea of moving the build entirely to a build script of libcore and basically just using gcc rs to compile compiler rt instead of using compiler rt s build system this means we don t have to have llvm installed why does compiler rt need llvm config and cross compiling should be much more robust easy as we re driving the compiles not working around an opaque build system to make matters worse in compiler rt as well it contains code for a massive number of intrinsics we ll probably never use and even worse these bits and pieces of code often cause compile failures which don t end up mattering in the end to solve this problem we should just whitelist a set of intrinsics to build and ignore all others this may be a bit of a rocky road as we discover some we should have compiled but forgot but in theory we should be able to select a subset to compile and be done with it this may make updating compiler rt difficult but we ve already only done it once in like the past year or two years so we don t seem to need to do this too urgently this is a worry to keep in mind however basically here s what i think we should do add a build script to libcore link gcc rs into it compile select portions of compiler rt as part of this build script using gcc rs disable injection of compiler rt in the compiler staging this is still a bit up in the air but i m curious what others think about this as well cc rust lang tools cc brson cc japaric ,0 3465,13787103250.0,IssuesEvent,2020-10-09 03:51:08,bandprotocol/bandchain,https://api.github.com/repos/bandprotocol/bandchain,closed,Delegate UI test case,automation scan,"Let's implement cypress script for testing on delegation flow. Things should be tested. - Check the submit button are disabled before input the value on delegation modal. - The input bar on delegation modal can be inputed. - Delegate transaction should be send successfully. - The user amount and the validator's bonded amount should be updated to correct one. (Need to ask @pzshine if you don't know how to do it). - If you have another test case, feel free to add it more.",1.0,"Delegate UI test case - Let's implement cypress script for testing on delegation flow. Things should be tested. - Check the submit button are disabled before input the value on delegation modal. - The input bar on delegation modal can be inputed. - Delegate transaction should be send successfully. - The user amount and the validator's bonded amount should be updated to correct one. (Need to ask @pzshine if you don't know how to do it). 
- If you have another test case, feel free to add it more.",1,delegate ui test case let s implement cypress script for testing on delegation flow things should be tested check the submit button are disabled before input the value on delegation modal the input bar on delegation modal can be inputed delegate transaction should be send successfully the user amount and the validator s bonded amount should be updated to correct one need to ask pzshine if you don t know how to do it if you have another test case feel free to add it more ,1 20497,3814948358.0,IssuesEvent,2016-03-28 15:49:03,mozilla/pdf.js,https://api.github.com/repos/mozilla/pdf.js,closed,"The ""read with streaming"" unit-test (in network_spec.js) fails on the bots when run using the `unittest` command",1-test,"As testing in PR #7116 shows, the [""read with streaming"" unit-test](https://github.com/mozilla/pdf.js/blob/master/test/unit/network_spec.js#L67) fails on the bots when run using the `unittest` command. *However*, the unit-test pass when run using the `test` command. This issue thus seems to be identical to https://github.com/mozilla/pdf.js/pull/6209#issuecomment-159606071, which means that we either need to use a locally available PDF file for that test, or change the unit-test framework to be able to deal with linked files. /cc @brendandahl, @yurydelendik ",1.0,"The ""read with streaming"" unit-test (in network_spec.js) fails on the bots when run using the `unittest` command - As testing in PR #7116 shows, the [""read with streaming"" unit-test](https://github.com/mozilla/pdf.js/blob/master/test/unit/network_spec.js#L67) fails on the bots when run using the `unittest` command. *However*, the unit-test pass when run using the `test` command. This issue thus seems to be identical to https://github.com/mozilla/pdf.js/pull/6209#issuecomment-159606071, which means that we either need to use a locally available PDF file for that test, or change the unit-test framework to be able to deal with linked files. 
/cc @brendandahl, @yurydelendik ",0,the read with streaming unit test in network spec js fails on the bots when run using the unittest command as testing in pr shows the fails on the bots when run using the unittest command however the unit test pass when run using the test command this issue thus seems to be identical to which means that we either need to use a locally available pdf file for that test or change the unit test framework to be able to deal with linked files cc brendandahl yurydelendik ,0 5172,18796933518.0,IssuesEvent,2021-11-08 23:54:20,EthanThatOneKid/acmcsuf.com,https://api.github.com/repos/EthanThatOneKid/acmcsuf.com,closed,[TEST] See #186,automation:officer,"### >>Officer Name<< Mike Ploythai ### >>Overwrite Officer Position<< S22 ### >>Overwrite Officer Image<< ![mike-ploythai](https://user-images.githubusercontent.com/31261035/140824039-e6d7be00-dc0c-46ee-b852-2936a3e8ca7d.png) TITLE=Create Director",1.0,"[TEST] See #186 - ### >>Officer Name<< Mike Ploythai ### >>Overwrite Officer Position<< S22 ### >>Overwrite Officer Image<< ![mike-ploythai](https://user-images.githubusercontent.com/31261035/140824039-e6d7be00-dc0c-46ee-b852-2936a3e8ca7d.png) TITLE=Create Director",1, see officer name mike ploythai overwrite officer position overwrite officer image title create director,1 816380,30597527443.0,IssuesEvent,2023-07-22 01:19:50,pudding-tech/mikane,https://api.github.com/repos/pudding-tech/mikane,closed,Names not correctly displaying in categories/expenses,bug priority backend,"Users' names in categories/expenses should show last name (or added username) depending of matching names in the event, not in the category/expense lists themselves.",1.0,"Names not correctly displaying in categories/expenses - Users' names in categories/expenses should show last name (or added username) depending of matching names in the event, not in the category/expense lists themselves.",0,names not correctly displaying in categories expenses users names in categories expenses should show last name or added username depending of matching names in the event not in the category expense lists themselves ,0 16883,3573332270.0,IssuesEvent,2016-01-27 05:32:16,Microsoft/vscode,https://api.github.com/repos/Microsoft/vscode,closed,Configurable console support for Mono debug,testplan-item,"- [x] mac @SofianHn - [x] linux @dbaeumer Mono-debug now supports the `externalConsole` attribute in the same way as node-debug and consequently the default has been changed to `externalConsole= false`. Verify that: - output to stdout or stderr appears either in the VSCode debug console or in the external console. - VSCode creates launch configs with the `externalConsole` attribute. Compile a C# program into a debuggable executable with this command: `mcs -debug Program.cs` A sample: ```cs using System; using System.Diagnostics; namespace Simple { class Simple { public static void Main(string[] args) { Console.Error.WriteLine(""Hello stderr!""); Console.WriteLine(""Hello stdout!""); } } } ``` ",1.0,"Configurable console support for Mono debug - - [x] mac @SofianHn - [x] linux @dbaeumer Mono-debug now supports the `externalConsole` attribute in the same way as node-debug and consequently the default has been changed to `externalConsole= false`. Verify that: - output to stdout or stderr appears either in the VSCode debug console or in the external console. - VSCode creates launch configs with the `externalConsole` attribute. 
Compile a C# program into a debuggable executable with this command: `mcs -debug Program.cs` A sample: ```cs using System; using System.Diagnostics; namespace Simple { class Simple { public static void Main(string[] args) { Console.Error.WriteLine(""Hello stderr!""); Console.WriteLine(""Hello stdout!""); } } } ``` ",0,configurable console support for mono debug mac sofianhn linux dbaeumer mono debug now supports the externalconsole attribute in the same way as node debug and consequently the default has been changed to externalconsole false verify that output to stdout or stderr appears either in the vscode debug console or in the external console vscode creates launch configs with the externalconsole attribute compile a c program into a debuggable executable with this command mcs debug program cs a sample cs using system using system diagnostics namespace simple class simple public static void main string args console error writeline hello stderr console writeline hello stdout ,0 8910,27209084950.0,IssuesEvent,2023-02-20 15:11:56,awslabs/aws-lambda-powertools-typescript,https://api.github.com/repos/awslabs/aws-lambda-powertools-typescript,closed,Maintenance: make `layer-publisher` part of the main npm workspace,area/automation type/internal status/confirmed,"### Summary The `layer-publisher` folder contains the utilities and tests used to publish AWS Lambda Layers for Powertools. When this was developed, it was developed in isolation and with its own dependency tree. Now that the component is somewhat stable, and the dependencies of the main npm workspace has been recently updated, we can bring the folder in the npm workspace so that the dependencies are shared. ### Why is this needed? Because if the two dependency trees are not aligned we risk incurring in the issue described in #1227 every time one of the two trees change. Both trees depend on `aws-cdk-lib` which has a weekly release cadence so, while the issue is now fixed, it's likely it'll happen again. Additionally, by having the `layer-publisher` in the main npm workspace we can decrease the project's footprint on development & CI hosts (because of sharing dependencies) and also simplify some of the workflow that are now disjointed. ### Which area does this relate to? Automation, Other ### Solution _No response_ ### Acknowledgment - [X] This request meets [Lambda Powertools Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets) - [ ] Should this be considered in other Lambda Powertools languages? i.e. [Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/)",1.0,"Maintenance: make `layer-publisher` part of the main npm workspace - ### Summary The `layer-publisher` folder contains the utilities and tests used to publish AWS Lambda Layers for Powertools. When this was developed, it was developed in isolation and with its own dependency tree. Now that the component is somewhat stable, and the dependencies of the main npm workspace has been recently updated, we can bring the folder in the npm workspace so that the dependencies are shared. ### Why is this needed? Because if the two dependency trees are not aligned we risk incurring in the issue described in #1227 every time one of the two trees change. Both trees depend on `aws-cdk-lib` which has a weekly release cadence so, while the issue is now fixed, it's likely it'll happen again. 
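The dependency-tree drift described above can be checked mechanically. A small sketch that diffs the versions declared in two package.json files (the second path is a hypothetical location for the layer-publisher manifest):

```python
import json
from pathlib import Path

def declared_deps(manifest):
    '''Collect dependencies and devDependencies from one package.json.'''
    data = json.loads(Path(manifest).read_text())
    return {**data.get('dependencies', {}), **data.get('devDependencies', {})}

def report_drift(a, b):
    '''Print every package declared in both manifests with mismatched versions.'''
    da, db = declared_deps(a), declared_deps(b)
    for name in sorted(da.keys() & db.keys()):
        if da[name] != db[name]:
            print(name, da[name], 'vs', db[name])

# Hypothetical paths for the two trees this issue wants to merge:
report_drift('package.json', 'layer-publisher/package.json')
```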
Additionally, by having the `layer-publisher` in the main npm workspace we can decrease the project's footprint on development & CI hosts (because of sharing dependencies) and also simplify some of the workflow that are now disjointed. ### Which area does this relate to? Automation, Other ### Solution _No response_ ### Acknowledgment - [X] This request meets [Lambda Powertools Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets) - [ ] Should this be considered in other Lambda Powertools languages? i.e. [Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/)",1,maintenance make layer publisher part of the main npm workspace summary the layer publisher folder contains the utilities and tests used to publish aws lambda layers for powertools when this was developed it was developed in isolation and with its own dependency tree now that the component is somewhat stable and the dependencies of the main npm workspace has been recently updated we can bring the folder in the npm workspace so that the dependencies are shared why is this needed because if the two dependency trees are not aligned we risk incurring in the issue described in every time one of the two trees change both trees depend on aws cdk lib which has a weekly release cadence so while the issue is now fixed it s likely it ll happen again additionally by having the layer publisher in the main npm workspace we can decrease the project s footprint on development ci hosts because of sharing dependencies and also simplify some of the workflow that are now disjointed which area does this relate to automation other solution no response acknowledgment this request meets should this be considered in other lambda powertools languages i e ,1 593361,17971445843.0,IssuesEvent,2021-09-14 02:53:19,ConnerHAnderson/discord-hvz,https://api.github.com/repos/ConnerHAnderson/discord-hvz,closed,Tag logging should give error and success feedback,enhancement critical priority,"After the verification step of tag logging, the server needs to validate the tag, then give feedback to the user.",1.0,"Tag logging should give error and success feedback - After the verification step of tag logging, the server needs to validate the tag, then give feedback to the user.",0,tag logging should give error and success feedback after the verification step of tag logging the server needs to validate the tag then give feedback to the user ,0 8536,27084566223.0,IssuesEvent,2023-02-14 16:07:57,appsmithorg/appsmith,https://api.github.com/repos/appsmithorg/appsmith,closed,"[Bug]: List widget - When using pagination, no way to go back to previous page after reaching last page",Bug App Viewers Pod High List Widget AutomationGap1,"### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior When using server-side pagination on the list widget, if you click on the Next button after the last page with data, it shows ""No data to display"" and then there is no way to go back to the previous page and the list widget stays stuck on the no data page. Uploading Screenrecording_20220222_084207.mp4… . ### Steps To Reproduce 1. Connect data to a list widget and use server-side pagination 2. Go to the last page with data and click Next 3. It shows a page with the text ""No data to display"" 4. 
Now there is no way to go back to the previous page ### Environment Production ### Version Cloud",1.0,"[Bug]: List widget - When using pagination, no way to go back to previous page after reaching last page - ### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior When using server-side pagination on the list widget, if you click on the Next button after the last page with data, it shows ""No data to display"" and then there is no way to go back to the previous page and the list widget stays stuck on the no data page. Uploading Screenrecording_20220222_084207.mp4… . ### Steps To Reproduce 1. Connect data to a list widget and use server-side pagination 2. Go to the last page with data and click Next 3. It shows a page with the text ""No data to display"" 4. Now there is no way to go back to the previous page ### Environment Production ### Version Cloud",1, list widget when using pagination no way to go back to previous page after reaching last page is there an existing issue for this i have searched the existing issues current behavior when using server side pagination on the list widget if you click on the next button after the last page with data it shows no data to display and then there is no way to go back to the previous page and the list widget stays stuck on the no data page uploading screenrecording … steps to reproduce connect data to a list widget and use server side pagination go to the last page with data and click next it shows a page with the text no data to display now there is no way to go back to the previous page environment production version cloud,1 286428,21576014775.0,IssuesEvent,2022-05-02 13:49:36,lets-blade/blade,https://api.github.com/repos/lets-blade/blade,closed,Typo in Render To Browser section,documentation,"Render to Browser part of README has a typo which redirects to https://github.com/lets-blade/blade#render-to-broser instead of https://github.com/lets-blade/blade#render-to-browser I fixed it here https://github.com/lets-blade/blade/pull/413",1.0,"Typo in Render To Browser section - Render to Browser part of README has a typo which redirects to https://github.com/lets-blade/blade#render-to-broser instead of https://github.com/lets-blade/blade#render-to-browser I fixed it here https://github.com/lets-blade/blade/pull/413",0,typo in render to browser section render to browser part of readme has a typo which redirects to instead of i fixed it here ,0 37950,12510902563.0,IssuesEvent,2020-06-02 19:31:50,kenferrara/react-base-table,https://api.github.com/repos/kenferrara/react-base-table,opened,"CVE-2019-6284 (Medium) detected in node-sass-4.14.1.tgz, opennms-opennms-source-22.0.1-1",security vulnerability,"## CVE-2019-6284 - Medium Severity Vulnerability
Vulnerable Libraries - node-sass-4.14.1.tgz

node-sass-4.14.1.tgz

Wrapper around libsass

Library home page: https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz

Path to dependency file: /tmp/ws-scm/react-base-table/package.json

Path to vulnerable library: /react-base-table/node_modules/node-sass/package.json

Dependency Hierarchy: - :x: **node-sass-4.14.1.tgz** (Vulnerable Library)

Found in HEAD commit: 8e278435a954b3faf16104b3f871a7a2a913555a

Vulnerability Details

In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::alternatives in prelexer.hpp.

Publish Date: 2019-01-14

URL: CVE-2019-6284

CVSS 3 Score Details (6.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284

Release Date: 2019-08-06

Fix Resolution: LibSass - 3.6.0
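
Since node-sass bundles LibSass, the practical fix is to move to a node-sass release that ships LibSass 3.6.0 or later. A quick way to see what is installed locally (assuming npm is on PATH and this is run from the project root; check the node-sass release notes to map its version to the bundled LibSass):

```python
import subprocess

# Print the locally installed node-sass version; each node-sass release pins
# a specific bundled LibSass, so this shows whether the fix is in the tree.
result = subprocess.run(['npm', 'ls', 'node-sass'], capture_output=True, text=True)
print(result.stdout or result.stderr)
```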

*** - [ ] Check this box to open an automated fix PR ",True,"CVE-2019-6284 (Medium) detected in node-sass-4.14.1.tgz, opennms-opennms-source-22.0.1-1 - ## CVE-2019-6284 - Medium Severity Vulnerability
*** - [ ] Check this box to open an automated fix PR ",0,cve medium detected in node sass tgz opennms opennms source cve medium severity vulnerability vulnerable libraries node sass tgz node sass tgz wrapper around libsass library home page a href path to dependency file tmp ws scm react base table package json path to vulnerable library react base table node modules node sass package json dependency hierarchy x node sass tgz vulnerable library found in head commit a href vulnerability details in libsass a heap based buffer over read exists in sass prelexer alternatives in prelexer hpp publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails in libsass a heap based buffer over read exists in sass prelexer alternatives in prelexer hpp vulnerabilityurl ,0 39316,19809686177.0,IssuesEvent,2022-01-19 10:48:06,MaterializeInc/materialize,https://api.github.com/repos/MaterializeInc/materialize,opened,Performance im,C-bug D-good first issue T-performance needs-discussion A-optimization,"### What version of Materialize are you using? v0.15.0 (f79f63205) ### How did you install Materialize? Built from source ### What was the issue? ### Environment ```sql -- database CREATE DATABASE reduction_pushdown; -- schema CREATE TABLE R(x INT NOT NULL, y INT NOT NULL, z INT NOT NULL); -- data INSERT INTO R VALUES (1, 1, 1), (1, 2, 1), (1, 3, 1), (2, 4, 1), (2, 5, 1), (3, 6, 1), (3, 7, 0); ``` ### Queries ```sql SELECT DISTINCT R.x, R.y, R.z, R.x / R.z FROM R INNER JOIN (VALUES (1),(2)) AS S(x) ON R.x = S.x ``` ### Plans
[Two collapsed <details> panels, 'Optimized Plan (0.12.0)' and 'Optimized Plan (0.15.0)'; the plan text itself did not survive extraction.]
### Problem Discussion The following is an example of two regressions observed for the given query and environment between versions `0.12.0` and `0.15.0`. 1. The query fails with a `division by zero` error. 2. The query consumes memory proportional to the number of distinct `R.x` values and not the number of `S(x)` values used to restrict `R`. Both issues are caused by the rewrite introduced in #8399 which pushes `Distinct` through the `Join` that implements the list-based filter. Regression (1) is not really a bug, because the SQL standard does not guarantee the order of execution of filters. Rather, this is an example of an issue with the user query which does not really protect itself against ""division by zero"", so I suggest to offload these type of fixes to the use. Regression (2) is more concerning, because the number of distinct values in `R.x` might be much bigger than `S.x`. I therefore suggest to prohibit reduction pushdown in cases where the other side of the join is a literal collection with size up to a certain point. ### Relevant log output _No response_",True,"Performance im - ### What version of Materialize are you using? v0.15.0 (f79f63205) ### How did you install Materialize? Built from source ### What was the issue? ### Environment ```sql -- database CREATE DATABASE reduction_pushdown; -- schema CREATE TABLE R(x INT NOT NULL, y INT NOT NULL, z INT NOT NULL); -- data INSERT INTO R VALUES (1, 1, 1), (1, 2, 1), (1, 3, 1), (2, 4, 1), (2, 5, 1), (3, 6, 1), (3, 7, 0); ``` ### Queries ```sql SELECT DISTINCT R.x, R.y, R.z, R.x / R.z FROM R INNER JOIN (VALUES (1),(2)) AS S(x) ON R.x = S.x ``` ### Plans
[Two collapsed <details> panels, 'Optimized Plan (0.12.0)' and 'Optimized Plan (0.15.0)'; the plan text itself did not survive extraction.]
### Problem Discussion The following is an example of two regressions observed for the given query and environment between versions `0.12.0` and `0.15.0`. 1. The query fails with a `division by zero` error. 2. The query consumes memory proportional to the number of distinct `R.x` values and not the number of `S(x)` values used to restrict `R`. Both issues are caused by the rewrite introduced in #8399 which pushes `Distinct` through the `Join` that implements the list-based filter. Regression (1) is not really a bug, because the SQL standard does not guarantee the order of execution of filters. Rather, this is an example of an issue with the user query which does not really protect itself against ""division by zero"", so I suggest to offload these type of fixes to the use. Regression (2) is more concerning, because the number of distinct values in `R.x` might be much bigger than `S.x`. I therefore suggest to prohibit reduction pushdown in cases where the other side of the join is a literal collection with size up to a certain point. ### Relevant log output _No response_",0,performance im what version of materialize are you using how did you install materialize built from source what was the issue environment sql database create database reduction pushdown schema create table r x int not null y int not null z int not null data insert into r values queries sql select distinct r x r y r z r x r z from r inner join values as s x on r x s x plans optimized plan optimized plan problem discussion the following is an example of two regressions observed for the given query and environment between versions and the query fails with a division by zero error the query consumes memory proportional to the number of distinct r x values and not the number of s x values used to restrict r both issues are caused by the rewrite introduced in which pushes distinct through the join that implements the list based filter regression is not really a bug because the sql standard does not guarantee the order of execution of filters rather this is an example of an issue with the user query which does not really protect itself against division by zero so i suggest to offload these type of fixes to the use regression is more concerning because the number of distinct values in r x might be much bigger than s x i therefore suggest to prohibit reduction pushdown in cases where the other side of the join is a literal collection with size up to a certain point relevant log output no response ,0 207,4698565335.0,IssuesEvent,2016-10-12 13:22:36,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,opened,Add `speed` option for actions,AREA: client AREA: server SYSTEM: API SYSTEM: automations TYPE: enhancement,"It's a value between `0 ... 1`. The option can be set by two methods: 1) as an option in an action 2) as `testController` options (works for all actions in the test) ",1.0,"Add `speed` option for actions - It's a value between `0 ... 1`. 
The option can be set by two methods: 1) as an option in an action 2) as `testController` options (works for all actions in the test) ",1,add speed option for actions it s a value between the option can be set by two methods as an option in an action as testcontroller options works for all actions in the test ,1 399361,27237981583.0,IssuesEvent,2023-02-21 17:46:36,aces/Loris,https://api.github.com/repos/aces/Loris,closed,[data_release] Test plan and module README need updating,Documentation 24.0.0-bugs,"Both the test plan and README of this module do not describe or handle the case of files with no version. The test plan should include steps to test adding Unversioned files, filtering by Unversioned files, adding/managing permissions for Unversioned files. The README should mention that files can be uploaded with no version and that when you add a version permission, the user will also be automatically granted permission to see any Unversioned files. ",1.0,"[data_release] Test plan and module README need updating - Both the test plan and README of this module do not describe or handle the case of files with no version. The test plan should include steps to test adding Unversioned files, filtering by Unversioned files, adding/managing permissions for Unversioned files. The README should mention that files can be uploaded with no version and that when you add a version permission, the user will also be automatically granted permission to see any Unversioned files. ",0, test plan and module readme need updating both the test plan and readme of this module do not describe or handle the case of files with no version the test plan should include steps to test adding unversioned files filtering by unversioned files adding managing permissions for unversioned files the readme should mention that files can be uploaded with no version and that when you add a version permission the user will also be automatically granted permission to see any unversioned files ,0 3046,13033794458.0,IssuesEvent,2020-07-28 07:37:30,elastic/e2e-testing,https://api.github.com/repos/elastic/e2e-testing,closed,Support passing a specific version of the elastic-agent on CI,automation,"It's possible to override the version of the elastic-agent locally, using the ELASTIC_AGENT_DOWNLOAD_URL env var, but it's not possible to do it on CI",1.0,"Support passing a specific version of the elastic-agent on CI - It's possible to override the version of the elastic-agent locally, using the ELASTIC_AGENT_DOWNLOAD_URL env var, but it's not possible to do it on CI",1,support passing a specific version of the elastic agent on ci it s possible to override the version of the elastic agent locally using the elastic agent download url env var but it s not possible to do it on ci,1 10008,31147586696.0,IssuesEvent,2023-08-16 07:43:11,treasuryguild/Catalyst-Training-and-Automation,https://api.github.com/repos/treasuryguild/Catalyst-Training-and-Automation,closed,2874.77 ADA Outgoing,Catalyst-Training-and-Automation-Treasury-Wallet Outgoing,"{ ""id"" : ""1692171730844"", ""date"": ""Wed, 16 Aug 2023 07:42:10 GMT"", ""fund"": ""TreasuryWallet"", ""project"": ""Catalyst-Training-and-Automation"", ""proposal"": ""Catalyst-Training-and-Automation-Treasury-Wallet"", ""ideascale"": """", ""budget"": ""Other"", ""name"": ""3hjnmt"", ""exchangeRate"": ""0.279 USD per ADA"", ""ada"" : ""2874.77"", ""walletBalance"": [""0.00 ADA""], ""txid"": ""6f684daf134af58008e96976a1fa39a277dcc95debcafe03b02c1456da8b1ef3"", ""description"": ""Moving leftovers to the new
wallet"" } ",1.0,"2874.77 ADA Outgoing - { ""id"" : ""1692171730844"", ""date"": ""Wed, 16 Aug 2023 07:42:10 GMT"", ""fund"": ""TreasuryWallet"", ""project"": ""Catalyst-Training-and-Automation"", ""proposal"": ""Catalyst-Training-and-Automation-Treasury-Wallet"", ""ideascale"": """", ""budget"": ""Other"", ""name"": ""3hjnmt"", ""exchangeRate"": ""0.279 USD per ADA"", ""ada"" : ""2874.77"", ""walletBalance"": [""0.00 ADA""], ""txid"": ""6f684daf134af58008e96976a1fa39a277dcc95debcafe03b02c1456da8b1ef3"", ""description"": ""Moving leftovers to the new wallet"" } ",1, ada outgoing id date wed aug gmt fund treasurywallet project catalyst training and automation proposal catalyst training and automation treasury wallet ideascale budget other name exchangerate usd per ada ada walletbalance txid description moving leftovers to the new wallet ,1 49510,13187223463.0,IssuesEvent,2020-08-13 02:44:24,icecube-trac/tix3,https://api.github.com/repos/icecube-trac/tix3,opened,[DOMLauncher] tests gone wild! (Trac #1563),Incomplete Migration Migrated from Trac combo simulation defect,"
Migrated from https://code.icecube.wisc.edu/ticket/1563, reported by nega and owned by cweaver

```json { ""status"": ""closed"", ""changetime"": ""2016-04-28T16:27:59"", ""description"": ""see #1561 and #1562\n\n{{{\n21246 ? Rl 26420:41 python /home/nega/i3/combo/src/DOMLauncher/resources/test/LC-logicTest.py\n}}}\n\n{{{\n(gdb) bt\n#0 0x00007f1f485ba4fd in write () at ../sysdeps/unix/syscall-template.S:81\n#1 0x00007f1f4853cbff in _IO_new_file_write (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=0x3f57140, n=55) at fileops.c:1251\n#2 0x00007f1f4853d39f in new_do_write (to_do=55, data=0x3f57140 \""\\n *** Break *** write on a pipe with no one to read it\\n\"", fp=0x7f1f48888640 <_IO_2_1_stderr_>) at fileops.c:506\n#3 _IO_new_file_xsputn (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=, n=55) at fileops.c:1330\n#4 0x00007f1f48532488 in __GI__IO_fputs (str=0x3f57140 \""\\n *** Break *** write on a pipe with no one to read it\\n\"", fp=0x7f1f48888640 <_IO_2_1_stderr_>) at iofputs.c:40\n#5 0x00007f1f43c3a436 in DebugPrint(char const*, ...) () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#6 0x00007f1f43c3ad04 in DefaultErrorHandler(int, bool, char const*, char const*) () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#7 0x00007f1f43c3a66a in ErrorHandler () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#8 0x00007f1f43c3a97f in Break(char const*, char const*, ...) () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#9 0x00007f1f43cc9e2f in TUnixSystem::DispatchSignals(ESignals) () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#10 \n#11 0x00007f1f485ba4fd in write () at ../sysdeps/unix/syscall-template.S:81\n#12 0x00007f1f4853cbff in _IO_new_file_write (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=0x7f1f48c694dc, n=1) at fileops.c:1251\n#13 0x00007f1f4853d39f in new_do_write (to_do=1, data=0x7f1f48c694dc \"".\"", fp=0x7f1f48888640 <_IO_2_1_stderr_>) at fileops.c:506\n#14 _IO_new_file_xsputn (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=, n=1) at fileops.c:1330\n#15 0x00007f1f48532b69 in __GI__IO_fwrite (buf=0x7f1f48c694dc, size=size@entry=1, count=1, fp=0x7f1f48888640 <_IO_2_1_stderr_>) at iofwrite.c:43\n#16 0x0000000000551c02 in file_write.lto_priv () at ../Objects/fileobject.c:1852\n#17 0x00000000004ccd05 in call_function (oparg=, pp_stack=) at ../Python/ceval.c:4035\n#18 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#19 0x00000000004cd4e2 in fast_function (nk=, na=, n=, pp_stack=, func=) at ../Python/ceval.c:4121\n#20 call_function (oparg=, pp_stack=) at ../Python/ceval.c:4056\n#21 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#22 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=, defcount=, defs=, kwcount=, kws=, argcount=, args=, locals=, \n globals=, co=) at ../Python/ceval.c:3267\n#23 function_call.lto_priv () at ../Objects/funcobject.c:526\n#24 0x00000000004cf239 in PyObject_Call (kw=, arg=, func=) at ../Objects/abstract.c:2529\n#25 ext_do_call (nk=, na=, flags=, pp_stack=, func=) at ../Python/ceval.c:4348\n#26 PyEval_EvalFrameEx () at ../Python/ceval.c:2720\n#27 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=, defcount=, defs=, kwcount=, kws=, argcount=, args=, locals=, \n globals=, co=) at ../Python/ceval.c:3267\n#28 function_call.lto_priv () at ../Objects/funcobject.c:526\n#29 0x000000000050b968 in PyObject_Call (kw=, arg=, func=) at ../Objects/abstract.c:2529\n#30 instancemethod_call.lto_priv () at ../Objects/classobject.c:2602\n#31 0x0000000000573bfd in PyObject_Call (kw=0x0, arg=\n (, dots=True, skipped=[], _mirrorOutput=False, stream=<_WritelnDecorator(stream=) at remote 0x7f1f3954dad0>, testsRun=1, buffer=False, _original_stderr=, 
showAll=False, _stdout_buffer=None, _stderr_buffer=None, _moduleSetUpFailed=False, expectedFailures=[], errors=[], descriptions=True, _previousTestClass=, unexpectedSuccesses=[], failures=[], _testRunEntered=True, shouldStop=False, failfast=False) at remote 0x7f1f3954de90>,), func=) at ../Objects/abstract.c:2529\n#32 slot_tp_call.lto_priv () at ../Objects/typeobject.c:5449\n#33 0x00000000004cd9ab in PyObject_Call (kw=, arg=, func=) at ../Objects/abstract.c:2529\n#34 do_call (nk=, na=, pp_stack=, func=) at ../Python/ceval.c:4253\n#35 call_function (oparg=, pp_stack=) at ../Python/ceval.c:4058\n#36 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#37 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=, defcount=, defs=, kwcount=, kws=, argcount=, args=, locals=, \n globals=, co=) at ../Python/ceval.c:3267\n#38 function_call.lto_priv () at ../Objects/funcobject.c:526\n#39 0x00000000004cf239 in PyObject_Call (kw=, arg=, func=) at ../Objects/abstract.c:2529\n#40 ext_do_call (nk=, na=, flags=, pp_stack=, func=) at ../Python/ceval.c:4348\n#41 PyEval_EvalFrameEx () at ../Python/ceval.c:2720\n#42 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=, defcount=, defs=, kwcount=, kws=, argcount=, args=, locals=, \n globals=, co=) at ../Python/ceval.c:3267\n#43 function_call.lto_priv () at ../Objects/funcobject.c:526\n#44 0x000000000050b968 in PyObject_Call (kw=, arg=, func=) at ../Objects/abstract.c:2529\n#45 instancemethod_call.lto_priv () at ../Objects/classobject.c:2602\n#46 0x0000000000573bfd in PyObject_Call (kw=0x0, \n arg=(, dots=True, skipped=[], _mirrorOutput=False, stream=<_WritelnDecorator(stream=) at remote 0x7f1f3954dad0>, testsRun=1, buffer=False, _original_stderr=, showAll=False, _stdout_buffer=None, _stderr_buffer=None, _moduleSetUpFailed=False, expectedFailures=[], errors=[], descriptions=True, _previousTestClass=, unexpectedSuccesses=[], failures=[], _testRunEntered=True, shouldStop=False, failfast=False) at remote 0x7f1f3954de90>,), func=) at ../Objects/abstract.c:2529\n#47 slot_tp_call.lto_priv () at ../Objects/typeobject.c:5449\n#48 0x00000000004cd9ab in PyObject_Call (kw=, arg=, func=) at ../Objects/abstract.c:2529\n#49 do_call (nk=, na=, pp_stack=, func=) at ../Python/ceval.c:4253\n#50 call_function (oparg=, pp_stack=) at ../Python/ceval.c:4058\n#51 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#52 0x00000000004cd4e2 in fast_function (nk=, na=, n=, pp_stack=, func=) at ../Python/ceval.c:4121\n#53 call_function (oparg=, pp_stack=) at ../Python/ceval.c:4056\n#54 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#55 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=, defcount=, defs=, kwcount=, kws=, argcount=, args=, locals=, \n globals=, co=) at ../Python/ceval.c:3267\n#56 function_call.lto_priv () at ../Objects/funcobject.c:526\n#57 0x000000000050b968 in PyObject_Call (kw=, arg=, func=) at ../Objects/abstract.c:2529\n#58 instancemethod_call.lto_priv () at ../Objects/classobject.c:2602\n#59 0x00000000004d437b in PyObject_Call (kw=, arg=(,), func=) at ../Objects/abstract.c:2529\n#60 PyEval_CallObjectWithKeywords () at ../Python/ceval.c:3904\n#61 0x0000000000495b80 in PyEval_CallFunction (obj=, format=) at ../Python/modsupport.c:557\n#62 0x00007f1f46b9bcd0 in boost::exception_detail::clone_impl >::clone_impl (this=0x6a8711e3b3cd6900, x=..., __in_chrg=, __vtt_parm=)\n at /usr/include/boost/exception/exception.hpp:446\n#63 0x00007f1f46b99877 in std::_Deque_base, std::allocator > >::_M_destroy_nodes (this=0x7ffd571daa60, __nstart=0x1730df0, __nfinish=0x7ffd571daaa0)\n at 
/usr/include/c++/4.9/bits/stl_deque.h:647\n#64 0x00007f1f46c0187b in PythonModule::Physics (this=0x6a8711e3b3cd6900, frame=...) at ../../src/icetray/private/icetray/PythonModule.cxx:249\n#65 0x00007f1f46b8bcdf in boost::python::objects::make_ptr_instance >::get_class_object_impl (p=0x7ffd571daa80)\n at /usr/include/boost/python/object/make_ptr_instance.hpp:51\n#66 0x00007ffd571dab80 in ?? ()\n#67 0x0000000001730de8 in ?? ()\n#68 0x0000000001730f80 in ?? ()\n#69 0x0000000001730da0 in ?? ()\n#70 0x00007ffd571daad0 in ?? ()\n#71 0x6a8711e3b3cd6900 in ?? ()\n#72 0x00007ffd571dab10 in ?? ()\n#73 0x0000000001401000 in ?? ()\n#74 0x00007ffd571dace0 in ?? ()\n#75 0x00007f1f46b8537b in boost::function1, I3Context const&>::function1 (this=0xd3ffd78948c68948, f=...) at /usr/include/boost/function/function_template.hpp:749\nBacktrace stopped: previous frame inner to this frame (corrupt stack?)\n}}}"", ""reporter"": ""nega"", ""cc"": ""sflis"", ""resolution"": ""fixed"", ""_ts"": ""1461860879759677"", ""component"": ""combo simulation"", ""summary"": ""[DOMLauncher] tests gone wild!"", ""priority"": ""normal"", ""keywords"": ""domlauncher, tests, SIGPIPE, signal-handler, root"", ""time"": ""2016-02-23T05:00:46"", ""milestone"": """", ""owner"": ""cweaver"", ""type"": ""defect"" } ```
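The ""*** Break *** write on a pipe with no one to read it"" message in the trace above is ROOT's handler reacting to SIGPIPE. A minimal C sketch of the underlying mechanism (an illustration added here, not taken from the ticket) shows that writing to a pipe whose read end is closed raises SIGPIPE, and that ignoring the signal turns the failure into an ordinary EPIPE error:

```c
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) != 0) { perror("pipe"); return 1; }
    close(fds[0]);            /* close the read end: nobody will ever read */

    signal(SIGPIPE, SIG_IGN); /* without this, the write below kills the process */
    if (write(fds[1], "x", 1) < 0)
        printf("write failed: %s\n", strerror(errno)); /* prints "Broken pipe" (EPIPE) */
    return 0;
}
```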

",1.0,"[DOMLauncher] tests gone wild! (Trac #1563) -
Migrated from https://code.icecube.wisc.edu/ticket/1563, reported by nega and owned by cweaver

```json { ""status"": ""closed"", ""changetime"": ""2016-04-28T16:27:59"", ""description"": ""see #1561 and #1562\n\n{{{\n21246 ? Rl 26420:41 python /home/nega/i3/combo/src/DOMLauncher/resources/test/LC-logicTest.py\n}}}\n\n{{{\n(gdb) bt\n#0 0x00007f1f485ba4fd in write () at ../sysdeps/unix/syscall-template.S:81\n#1 0x00007f1f4853cbff in _IO_new_file_write (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=0x3f57140, n=55) at fileops.c:1251\n#2 0x00007f1f4853d39f in new_do_write (to_do=55, data=0x3f57140 \""\\n *** Break *** write on a pipe with no one to read it\\n\"", fp=0x7f1f48888640 <_IO_2_1_stderr_>) at fileops.c:506\n#3 _IO_new_file_xsputn (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=, n=55) at fileops.c:1330\n#4 0x00007f1f48532488 in __GI__IO_fputs (str=0x3f57140 \""\\n *** Break *** write on a pipe with no one to read it\\n\"", fp=0x7f1f48888640 <_IO_2_1_stderr_>) at iofputs.c:40\n#5 0x00007f1f43c3a436 in DebugPrint(char const*, ...) () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#6 0x00007f1f43c3ad04 in DefaultErrorHandler(int, bool, char const*, char const*) () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#7 0x00007f1f43c3a66a in ErrorHandler () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#8 0x00007f1f43c3a97f in Break(char const*, char const*, ...) () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#9 0x00007f1f43cc9e2f in TUnixSystem::DispatchSignals(ESignals) () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#10 \n#11 0x00007f1f485ba4fd in write () at ../sysdeps/unix/syscall-template.S:81\n#12 0x00007f1f4853cbff in _IO_new_file_write (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=0x7f1f48c694dc, n=1) at fileops.c:1251\n#13 0x00007f1f4853d39f in new_do_write (to_do=1, data=0x7f1f48c694dc \"".\"", fp=0x7f1f48888640 <_IO_2_1_stderr_>) at fileops.c:506\n#14 _IO_new_file_xsputn (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=, n=1) at fileops.c:1330\n#15 0x00007f1f48532b69 in __GI__IO_fwrite (buf=0x7f1f48c694dc, size=size@entry=1, count=1, fp=0x7f1f48888640 <_IO_2_1_stderr_>) at iofwrite.c:43\n#16 0x0000000000551c02 in file_write.lto_priv () at ../Objects/fileobject.c:1852\n#17 0x00000000004ccd05 in call_function (oparg=, pp_stack=) at ../Python/ceval.c:4035\n#18 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#19 0x00000000004cd4e2 in fast_function (nk=, na=, n=, pp_stack=, func=) at ../Python/ceval.c:4121\n#20 call_function (oparg=, pp_stack=) at ../Python/ceval.c:4056\n#21 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#22 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=, defcount=, defs=, kwcount=, kws=, argcount=, args=, locals=, \n globals=, co=) at ../Python/ceval.c:3267\n#23 function_call.lto_priv () at ../Objects/funcobject.c:526\n#24 0x00000000004cf239 in PyObject_Call (kw=, arg=, func=) at ../Objects/abstract.c:2529\n#25 ext_do_call (nk=, na=, flags=, pp_stack=, func=) at ../Python/ceval.c:4348\n#26 PyEval_EvalFrameEx () at ../Python/ceval.c:2720\n#27 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=, defcount=, defs=, kwcount=, kws=, argcount=, args=, locals=, \n globals=, co=) at ../Python/ceval.c:3267\n#28 function_call.lto_priv () at ../Objects/funcobject.c:526\n#29 0x000000000050b968 in PyObject_Call (kw=, arg=, func=) at ../Objects/abstract.c:2529\n#30 instancemethod_call.lto_priv () at ../Objects/classobject.c:2602\n#31 0x0000000000573bfd in PyObject_Call (kw=0x0, arg=\n (, dots=True, skipped=[], _mirrorOutput=False, stream=<_WritelnDecorator(stream=) at remote 0x7f1f3954dad0>, testsRun=1, buffer=False, _original_stderr=, 
showAll=False, _stdout_buffer=None, _stderr_buffer=None, _moduleSetUpFailed=False, expectedFailures=[], errors=[], descriptions=True, _previousTestClass=, unexpectedSuccesses=[], failures=[], _testRunEntered=True, shouldStop=False, failfast=False) at remote 0x7f1f3954de90>,), func=) at ../Objects/abstract.c:2529\n#32 slot_tp_call.lto_priv () at ../Objects/typeobject.c:5449\n#33 0x00000000004cd9ab in PyObject_Call (kw=, arg=, func=) at ../Objects/abstract.c:2529\n#34 do_call (nk=, na=, pp_stack=, func=) at ../Python/ceval.c:4253\n#35 call_function (oparg=, pp_stack=) at ../Python/ceval.c:4058\n#36 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#37 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=, defcount=, defs=, kwcount=, kws=, argcount=, args=, locals=, \n globals=, co=) at ../Python/ceval.c:3267\n#38 function_call.lto_priv () at ../Objects/funcobject.c:526\n#39 0x00000000004cf239 in PyObject_Call (kw=, arg=, func=) at ../Objects/abstract.c:2529\n#40 ext_do_call (nk=, na=, flags=, pp_stack=, func=) at ../Python/ceval.c:4348\n#41 PyEval_EvalFrameEx () at ../Python/ceval.c:2720\n#42 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=, defcount=, defs=, kwcount=, kws=, argcount=, args=, locals=, \n globals=, co=) at ../Python/ceval.c:3267\n#43 function_call.lto_priv () at ../Objects/funcobject.c:526\n#44 0x000000000050b968 in PyObject_Call (kw=, arg=, func=) at ../Objects/abstract.c:2529\n#45 instancemethod_call.lto_priv () at ../Objects/classobject.c:2602\n#46 0x0000000000573bfd in PyObject_Call (kw=0x0, \n arg=(, dots=True, skipped=[], _mirrorOutput=False, stream=<_WritelnDecorator(stream=) at remote 0x7f1f3954dad0>, testsRun=1, buffer=False, _original_stderr=, showAll=False, _stdout_buffer=None, _stderr_buffer=None, _moduleSetUpFailed=False, expectedFailures=[], errors=[], descriptions=True, _previousTestClass=, unexpectedSuccesses=[], failures=[], _testRunEntered=True, shouldStop=False, failfast=False) at remote 0x7f1f3954de90>,), func=) at ../Objects/abstract.c:2529\n#47 slot_tp_call.lto_priv () at ../Objects/typeobject.c:5449\n#48 0x00000000004cd9ab in PyObject_Call (kw=, arg=, func=) at ../Objects/abstract.c:2529\n#49 do_call (nk=, na=, pp_stack=, func=) at ../Python/ceval.c:4253\n#50 call_function (oparg=, pp_stack=) at ../Python/ceval.c:4058\n#51 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#52 0x00000000004cd4e2 in fast_function (nk=, na=, n=, pp_stack=, func=) at ../Python/ceval.c:4121\n#53 call_function (oparg=, pp_stack=) at ../Python/ceval.c:4056\n#54 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#55 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=, defcount=, defs=, kwcount=, kws=, argcount=, args=, locals=, \n globals=, co=) at ../Python/ceval.c:3267\n#56 function_call.lto_priv () at ../Objects/funcobject.c:526\n#57 0x000000000050b968 in PyObject_Call (kw=, arg=, func=) at ../Objects/abstract.c:2529\n#58 instancemethod_call.lto_priv () at ../Objects/classobject.c:2602\n#59 0x00000000004d437b in PyObject_Call (kw=, arg=(,), func=) at ../Objects/abstract.c:2529\n#60 PyEval_CallObjectWithKeywords () at ../Python/ceval.c:3904\n#61 0x0000000000495b80 in PyEval_CallFunction (obj=, format=) at ../Python/modsupport.c:557\n#62 0x00007f1f46b9bcd0 in boost::exception_detail::clone_impl >::clone_impl (this=0x6a8711e3b3cd6900, x=..., __in_chrg=, __vtt_parm=)\n at /usr/include/boost/exception/exception.hpp:446\n#63 0x00007f1f46b99877 in std::_Deque_base, std::allocator > >::_M_destroy_nodes (this=0x7ffd571daa60, __nstart=0x1730df0, __nfinish=0x7ffd571daaa0)\n at 
/usr/include/c++/4.9/bits/stl_deque.h:647\n#64 0x00007f1f46c0187b in PythonModule::Physics (this=0x6a8711e3b3cd6900, frame=...) at ../../src/icetray/private/icetray/PythonModule.cxx:249\n#65 0x00007f1f46b8bcdf in boost::python::objects::make_ptr_instance >::get_class_object_impl (p=0x7ffd571daa80)\n at /usr/include/boost/python/object/make_ptr_instance.hpp:51\n#66 0x00007ffd571dab80 in ?? ()\n#67 0x0000000001730de8 in ?? ()\n#68 0x0000000001730f80 in ?? ()\n#69 0x0000000001730da0 in ?? ()\n#70 0x00007ffd571daad0 in ?? ()\n#71 0x6a8711e3b3cd6900 in ?? ()\n#72 0x00007ffd571dab10 in ?? ()\n#73 0x0000000001401000 in ?? ()\n#74 0x00007ffd571dace0 in ?? ()\n#75 0x00007f1f46b8537b in boost::function1, I3Context const&>::function1 (this=0xd3ffd78948c68948, f=...) at /usr/include/boost/function/function_template.hpp:749\nBacktrace stopped: previous frame inner to this frame (corrupt stack?)\n}}}"", ""reporter"": ""nega"", ""cc"": ""sflis"", ""resolution"": ""fixed"", ""_ts"": ""1461860879759677"", ""component"": ""combo simulation"", ""summary"": ""[DOMLauncher] tests gone wild!"", ""priority"": ""normal"", ""keywords"": ""domlauncher, tests, SIGPIPE, signal-handler, root"", ""time"": ""2016-02-23T05:00:46"", ""milestone"": """", ""owner"": ""cweaver"", ""type"": ""defect"" } ```

",0, tests gone wild trac migrated from json status closed changetime description see and n n rl python home nega combo src domlauncher resources test lc logictest py n n n n gdb bt n in write at sysdeps unix syscall template s n in io new file write f data n at fileops c n in new do write to do data n break write on a pipe with no one to read it n fp at fileops c n io new file xsputn f data n at fileops c n in gi io fputs str n break write on a pipe with no one to read it n fp at iofputs c n in debugprint char const from home nega ports root lib libcore so n in defaulterrorhandler int bool char const char const from home nega ports root lib libcore so n in errorhandler from home nega ports root lib libcore so n in break char const char const from home nega ports root lib libcore so n in tunixsystem dispatchsignals esignals from home nega ports root lib libcore so n n in write at sysdeps unix syscall template s n in io new file write f data n at fileops c n in new do write to do data fp at fileops c n io new file xsputn f data n at fileops c n in gi io fwrite buf size size entry count fp at iofwrite c n in file write lto priv at objects fileobject c n in call function oparg pp stack at python ceval c n pyeval evalframeex at python ceval c n in fast function nk na n pp stack func at python ceval c n call function oparg pp stack at python ceval c n pyeval evalframeex at python ceval c n in pyeval evalcodeex closure defcount defs kwcount kws argcount args locals n globals co at python ceval c n function call lto priv at objects funcobject c n in pyobject call kw arg func at objects abstract c n ext do call nk na flags pp stack func at python ceval c n pyeval evalframeex at python ceval c n in pyeval evalcodeex closure defcount defs kwcount kws argcount args locals n globals co at python ceval c n function call lto priv at objects funcobject c n in pyobject call kw arg func at objects abstract c n instancemethod call lto priv at objects classobject c n in pyobject call kw arg n dots true skipped mirroroutput false stream at remote testsrun buffer false original stderr showall false stdout buffer none stderr buffer none modulesetupfailed false expectedfailures errors descriptions true previoustestclass unexpectedsuccesses failures testrunentered true shouldstop false failfast false at remote func at objects abstract c n slot tp call lto priv at objects typeobject c n in pyobject call kw arg func at objects abstract c n do call nk na pp stack func at python ceval c n call function oparg pp stack at python ceval c n pyeval evalframeex at python ceval c n in pyeval evalcodeex closure defcount defs kwcount kws argcount args locals n globals co at python ceval c n function call lto priv at objects funcobject c n in pyobject call kw arg func at objects abstract c n ext do call nk na flags pp stack func at python ceval c n pyeval evalframeex at python ceval c n in pyeval evalcodeex closure defcount defs kwcount kws argcount args locals n globals co at python ceval c n function call lto priv at objects funcobject c n in pyobject call kw arg func at objects abstract c n instancemethod call lto priv at objects classobject c n in pyobject call kw n arg dots true skipped mirroroutput false stream at remote testsrun buffer false original stderr showall false stdout buffer none stderr buffer none modulesetupfailed false expectedfailures errors descriptions true previoustestclass unexpectedsuccesses failures testrunentered true shouldstop false failfast false at remote func at objects abstract c n slot tp 
call lto priv at objects typeobject c n in pyobject call kw arg func at objects abstract c n do call nk na pp stack func at python ceval c n call function oparg pp stack at python ceval c n pyeval evalframeex at python ceval c n in fast function nk na n pp stack func at python ceval c n call function oparg pp stack at python ceval c n pyeval evalframeex at python ceval c n in pyeval evalcodeex closure defcount defs kwcount kws argcount args locals n globals co at python ceval c n function call lto priv at objects funcobject c n in pyobject call kw arg func at objects abstract c n instancemethod call lto priv at objects classobject c n in pyobject call kw arg func at objects abstract c n pyeval callobjectwithkeywords at python ceval c n in pyeval callfunction obj format at python modsupport c n in boost exception detail clone impl clone impl this x in chrg vtt parm n at usr include boost exception exception hpp n in std deque base std allocator m destroy nodes this nstart nfinish n at usr include c bits stl deque h n in pythonmodule physics this frame at src icetray private icetray pythonmodule cxx n in boost python objects make ptr instance get class object impl p n at usr include boost python object make ptr instance hpp n in n in n in n in n in n in n in n in n in n in boost const this f at usr include boost function function template hpp nbacktrace stopped previous frame inner to this frame corrupt stack n reporter nega cc sflis resolution fixed ts component combo simulation summary tests gone wild priority normal keywords domlauncher tests sigpipe signal handler root time milestone owner cweaver type defect ,0 4590,16963501636.0,IssuesEvent,2021-06-29 08:06:21,keptn/keptn,https://api.github.com/repos/keptn/keptn,closed,Automate setting the default `shkeptnspecversion` in `keptn/go-utils`,automation release-automation,"In our [keptn/go-utils](https://github.com/keptn/go-utils), we have a couple of helper functions for sending CloudEvents. These also make sure that the property `shkeptnspecversion` is set, if it is not already included in the passed CloudEvent. The default value for this property is set here: https://github.com/keptn/go-utils/blob/28ca6b2be5dcb1cfe45509696a7f68e5a5296e0f/pkg/lib/v0_2_0/events.go#L25 However, this is likely to get overlooked when a new version of the `go-utils`, that is based on a new version of the [keptn/spec](https://github.com/keptn/spec) is released. Therefore, it would make sense to have an automated step that updates that property via a PR, once a new version of the spec has been released. **Definition of Done:** Once a new version of the [keptn/spec](https://github.com/keptn/spec) has been released, a PR to the master branch of [ktpn/go-utils](https://github.com/keptn/go-utils) is created. This PR makes sure that the correct `shkeptnspecversion` is added to the CloudEvents that are being sent via this library.",2.0,"Automate setting the default `shkeptnspecversion` in `keptn/go-utils` - In our [keptn/go-utils](https://github.com/keptn/go-utils), we have a couple of helper functions for sending CloudEvents. These also make sure that the property `shkeptnspecversion` is set, if it is not already included in the passed CloudEvent. 
The default value for this property is set here: https://github.com/keptn/go-utils/blob/28ca6b2be5dcb1cfe45509696a7f68e5a5296e0f/pkg/lib/v0_2_0/events.go#L25 However, this is likely to get overlooked when a new version of the `go-utils`, which is based on a new version of the [keptn/spec](https://github.com/keptn/spec), is released. Therefore, it would make sense to have an automated step that updates that property via a PR, once a new version of the spec has been released. **Definition of Done:** Once a new version of the [keptn/spec](https://github.com/keptn/spec) has been released, a PR to the master branch of [keptn/go-utils](https://github.com/keptn/go-utils) is created. This PR makes sure that the correct `shkeptnspecversion` is added to the CloudEvents that are being sent via this library.",1,automate setting the default shkeptnspecversion in keptn go utils in our we have a couple of helper functions for sending cloudevents these also make sure that the property shkeptnspecversion is set if it is not already included in the passed cloudevent the default value for this property is set here however this is likely to get overlooked when a new version of the go utils which is based on a new version of the is released therefore it would make sense to have an automated step that updates that property via a pr once a new version of the spec has been released definition of done once a new version of the has been released a pr to the master branch of is created this pr makes sure that the correct shkeptnspecversion is added to the cloudevents that are being sent via this library ,1 3578,14014537686.0,IssuesEvent,2020-10-29 12:04:21,assistify/Rocket.Chat,https://api.github.com/repos/assistify/Rocket.Chat,closed,Welcome message in new requests,Cmp: Rocket.Chat Core Type: New Feature stat: stale ~bot/automation ~livechat,"If a new request arrives via the live chat, there should be a welcome message to the user. It should be possible to configure the welcome message. Originally by @andytipp https://github.com/mrsimpson/Rocket.Chat/issues/76",1.0,"Welcome message in new requests - If a new request arrives via the live chat, there should be a welcome message to the user. It should be possible to configure the welcome message.
Originally by @andytipp https://github.com/mrsimpson/Rocket.Chat/issues/76",1,welcome message in new requests if a new request arrives via the live chat there should be a welcome message to the user it should be possible to configure the welcome message originally by andytipp ,1 6287,22698994662.0,IssuesEvent,2022-07-05 08:57:40,rancher/elemental-toolkit,https://api.github.com/repos/rancher/elemental-toolkit,closed,Fix CI issues,automation chore,"Currently the CI is failing on two issues: - https://github.com/rancher/elemental-toolkit/runs/6945594390?check_suite_focus=true (arm recovery test failing) - drop uploading Odroid_c2 leap image on release. It fails as it exceeds the maximum image length",1.0,"Fix CI issues - Currently the CI is failing on two issues: - https://github.com/rancher/elemental-toolkit/runs/6945594390?check_suite_focus=true (arm recovery test failing) - drop uploading Odroid_c2 leap image on release. It fails as it exceeds the maximum image length",1,fix ci issues currently the ci is failing on two issues arm recovery test failing drop uploading odroid leap image on release it fails as it exceeds the maximum image length,1 6458,23202684149.0,IssuesEvent,2022-08-01 23:50:42,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Is this information current?,automation/svc triaged cxp product-question Pri2,"The function ""MicrosoftDefaultComputerGroup"" does not appear in Saved Groups for the workspace associated with update management (a number of groups which I created for assigning non-azure computers to deployment schedules are displayed). If I try to run MicrosoftDefaultComputerGroup or Updates__MicrosoftDefaultComputerGroup in the query editor, both result in error ""Failed to resolve table or column or scalar expression"", suggesting even if they are hidden from display for some reason, the functions do not exist in the workspace? --- #### Document Details ⚠ *Do not edit this section.
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 9a94d637-558c-b26e-a1de-c4381aa6783c * Version Independent ID: d8c47851-0ac5-3932-e1e1-e224285e7476 * Content: [Remove machines from Azure Automation Update Management](https://docs.microsoft.com/en-us/azure/automation/update-management/remove-vms?tabs=azure-vm) * Content Source: [articles/automation/update-management/remove-vms.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/update-management/remove-vms.md) * Service: **automation** * GitHub Login: @SGSneha * Microsoft Alias: **v-ssudhir**",1,is this information current the function microsoftdefaultcomputergroup does not appear in saved groups for the workspace associated with update management a number of groups which i created for assigning non azure computers to deployment schedules are displayed if i try to run microsoftdefaultcomputergroup or updates microsoftdefaultcomputergroup in the query editor both result in error failed to resolve table or column or scalar expression suggesting even if they are hidden from display for some reason the functions do not exist in the workspace document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login sgsneha microsoft alias v ssudhir ,1 6287,22698994662.0,IssuesEvent,2022-07-05 08:57:40,rancher/elemental-toolkit,https://api.github.com/repos/rancher/elemental-toolkit,closed,Fix CI issues,automation chore,"Currently the CI is failing on two issues: - https://github.com/rancher/elemental-toolkit/runs/6945594390?check_suite_focus=true (arm recovery test failing) - drop uploading Odroid_c2 leap image on release. it fails as it exceed maximum image length",1.0,"Fix CI issues - Currently the CI is failing on two issues: - https://github.com/rancher/elemental-toolkit/runs/6945594390?check_suite_focus=true (arm recovery test failing) - drop uploading Odroid_c2 leap image on release. it fails as it exceed maximum image length",1,fix ci issues currently the ci is failing on two issues arm recovery test failing drop uploading odroid leap image on release it fails as it exceed maximum image length,1 5855,21470023491.0,IssuesEvent,2022-04-26 08:41:04,rancher-sandbox/cOS-toolkit,https://api.github.com/repos/rancher-sandbox/cOS-toolkit,closed,change elemental-cli bump strategy based on releases,automation chore,"This card is about changing the [elemental-cli](https://github.com/rancher-sandbox/cOS-toolkit/blob/d5d3f8466f1b62dd6a69daedd501619d86945d05/packages/toolchain/elemental-cli/collection.yaml#L12) bump strategy to track back github releases. as we are outside MVP with the golang binary, we can just start to consume tagged versions from now on",1.0,"change elemental-cli bump strategy based on releases - This card is about changing the [elemental-cli](https://github.com/rancher-sandbox/cOS-toolkit/blob/d5d3f8466f1b62dd6a69daedd501619d86945d05/packages/toolchain/elemental-cli/collection.yaml#L12) bump strategy to track back github releases. 
as we are outside MVP with the golang binary, we can just start to consume tagged versions from now on",1,change elemental cli bump strategy based on releases this card is about changing the bump strategy to track back github releases as we are outside mvp with the golang binary we can just start to consume tagged versions from now on ,1 188150,15144498445.0,IssuesEvent,2021-02-11 01:28:47,sniper-fly/minishell,https://api.github.com/repos/sniper-fly/minishell,closed,There is behavior that cannot be reproduced without parallel pipes.,documentation invalid,"https://gist.github.com/sasakiyudai/f7cb5ccc5cd85ae7f23d37748bfb1cfa The pages linked below are syudai's parallel pipes above with the system-call error handling removed. https://github.com/sniper-fly/minishell/blob/16ce8371ac8e7b4c20a132df35bbf639b0894334/study/rnakai/multi_pipe_with_same_parent.c https://github.com/sniper-fly/minishell/blob/16ce8371ac8e7b4c20a132df35bbf639b0894334/study/rnakai/multi_pipe_with_same_parent.c#L58-L59 I had assumed the wait here was pointless, but it turned out to matter a great deal. As a test, removing these two lines makes cat | ls finish instantly, while adding them back makes cat | ls wait for cat to exit. wait apparently waits for every process the parent created, once per created process. Incidentally, it seems a parent process cannot wait() for its grandchild processes. https://stackoverflow.com/questions/12822611/fork-and-wait-how-to-wait-for-all-grandchildren-to-finish https://pubs.opengroup.org/onlinepubs/9699919799/functions/wait.html >A call to the wait() or waitpid() function only returns status on an immediate child process of the calling process; that is, a child that was produced by a single fork() call (perhaps followed by an exec or other function calls) from the parent. If a child produces grandchildren by further use of fork(), none of those grandchildren nor any of their descendants affect the behavior of a wait() from the original parent process.
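The wait()/waitpid() behavior described above can be demonstrated with a small C program; this is an illustration added here, not part of the original issue, and the POSIX excerpt resumes just below it. A grandchild created by a second fork() is never observed by the original parent:

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t child = fork();
    if (child == 0) {            /* child */
        if (fork() == 0) {       /* grandchild: outlives its parent */
            sleep(2);
            _exit(0);
        }
        _exit(0);                /* child exits immediately */
    }
    int status;
    /* Returns as soon as the direct child exits; the still-running
     * grandchild is never reported to this process, per POSIX. */
    pid_t reaped = waitpid(child, &status, 0);
    printf("reaped pid %d (child was %d)\n", (int)reaped, (int)child);
    return 0;
}
```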
Nothing in this volume of POSIX.1-2017 prevents an implementation from providing extensions that permit a process to get status from a grandchild or any other process, but a process that does not use such extensions must be guaranteed to see status from only its direct children.",1.0,"There is behavior that cannot be reproduced without parallel pipes. - https://gist.github.com/sasakiyudai/f7cb5ccc5cd85ae7f23d37748bfb1cfa The pages linked below are syudai's parallel pipes above with the system-call error handling removed. https://github.com/sniper-fly/minishell/blob/16ce8371ac8e7b4c20a132df35bbf639b0894334/study/rnakai/multi_pipe_with_same_parent.c https://github.com/sniper-fly/minishell/blob/16ce8371ac8e7b4c20a132df35bbf639b0894334/study/rnakai/multi_pipe_with_same_parent.c#L58-L59 I had assumed the wait here was pointless, but it turned out to matter a great deal. As a test, removing these two lines makes cat | ls finish instantly, while adding them back makes cat | ls wait for cat to exit. wait apparently waits for every process the parent created, once per created process. Incidentally, it seems a parent process cannot wait() for its grandchild processes. https://stackoverflow.com/questions/12822611/fork-and-wait-how-to-wait-for-all-grandchildren-to-finish https://pubs.opengroup.org/onlinepubs/9699919799/functions/wait.html >A call to the wait() or waitpid() function only returns status on an immediate child process of the calling process; that is, a child that was produced by a single fork() call (perhaps followed by an exec or other function calls) from the parent. If a child produces grandchildren by further use of fork(), none of those grandchildren nor any of their descendants affect the behavior of a wait() from the original parent process. Nothing in this volume of POSIX.1-2017 prevents an implementation from providing extensions that permit a process to get status from a grandchild or any other process, but a process that does not use such extensions must be guaranteed to see status from only its direct children.",0,there is behavior that cannot be reproduced without parallel pipes the pages linked below are syudai s parallel pipes above with the system call error handling removed i had assumed the wait here was pointless but it turned out to matter a great deal as a test removing these two lines makes ls finish instantly while adding them back makes ls wait for cat to exit wait apparently waits for every process the parent created once per created process incidentally it seems a parent process cannot wait for its grandchild processes a call to the wait or waitpid function only returns status on an immediate child process of the calling process that is a child that was produced by a single fork call perhaps followed by an exec or other function calls from the parent if a child produces grandchildren by further use of fork none of those grandchildren nor any of their descendants affect the behavior of a wait from the original parent process nothing in this volume of posix prevents an implementation from providing extensions that permit a process to get status from a grandchild or any other process but a process that does not use such extensions must be guaranteed to see status from only its direct children ,0 7834,25777445118.0,IssuesEvent,2022-12-09 13:12:14,pingcap/tiflow,https://api.github.com/repos/pingcap/tiflow,closed,CDC data inconsistency if pull-based sink is on,type/bug severity/critical found/automation area/ticdc affects-6.5,"### What did you do? Run automation test: tikv_unavailable_sync - start changefeed, downstream mysql - Run tpcc workload - Inject tikv failure chaos for 20 seconds ### What did you expect to see? Data should be consistent ### What did you see instead?
Data is inconsistent. ### Versions of the cluster Upstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client): ```console / # /tidb-server -V Release Version: v6.5.0-alpha Edition: Community Git Commit Hash: f5487e3f2596cdd63ae9840606c7fff3fb539e03 Git Branch: heads/refs/tags/v6.5.0-alpha UTC Build Time: 2022-11-30 11:14:22 GoVersion: go1.19.3 Race Enabled: false TiKV Min Version: 6.2.0-alpha Check Table Before Drop: false Store: unistore ``` Upstream TiKV version (execute `tikv-server --version`): ```console / # /tikv-server -V TiKV Release Version: 6.5.0-alpha Edition: Community Git Commit Hash: fbaaab32100292a54909b69649d15ee0e75fe58e Git Commit Branch: heads/refs/tags/v6.5.0-alpha UTC Build Time: 2022-11-30 11:07:05 Rust Version: rustc 1.67.0-nightly (96ddd32c4 2022-11-14) Enable Features: pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure Profile: dist_release ``` TiCDC version (execute `cdc version`): ```console bash-5.1# /cdc version Release Version: v6.5.0-alpha Git Commit Hash: 519c6f08dc5ca000d8c92101e2987daf071d716e Git Branch: heads/refs/tags/v6.5.0-alpha UTC Build Time: 2022-11-30 11:07:02 Go Version: go version go1.19.3 linux/amd64 Failpoint Build: false ```",1.0,"CDC data inconsistency if pull-based sink is on - ### What did you do? Run automation test: tikv_unavailable_sync - start changefeed, downstream mysql - Run tpcc workload - Inject tikv failure chaos for 20 seconds ### What did you expect to see? Data should be consistent ### What did you see instead?
285329,8757404579.0,IssuesEvent,2018-12-14 21:08:11,terascope/teraslice,https://api.github.com/repos/terascope/teraslice,closed,Clean up logging,enhancement priority:low,"Example [pulled from here](https://github.com/terascope/teraslice/blob/a5e15b0602af2581eba67a342fbbc238d6121ade/lib/cluster/slicer.js#L185): logger.error(`worker: ${workerId} has error on slice: ${JSON.stringify(sliceData)} , slicer: ${exId}`); is better written as something like the following to maintain the structured logging: logger.error(Object.assign({worker: workerId, slicer: exId}, sliceData), 'error on slice'); ",1.0,"Clean up logging - Example [pulled from here](https://github.com/terascope/teraslice/blob/a5e15b0602af2581eba67a342fbbc238d6121ade/lib/cluster/slicer.js#L185): logger.error(`worker: ${workerId} has error on slice: ${JSON.stringify(sliceData)} , slicer: ${exId}`); is better written as something like the following to maintain the structured logging: logger.error(Object.assign({worker: workerId, slicer: exId}, sliceData), 'error on slice'); ",0,clean up logging example logger error worker workerid has error on slice json stringify slicedata slicer exid is better written as something like the following to maintain the structured logging logger error object assign worker workerid slicer exid slicedata error on slice ,0 161265,25312973480.0,IssuesEvent,2022-11-17 19:01:45,Azure/LogicAppsUX,https://api.github.com/repos/Azure/LogicAppsUX,closed,Missing error state for operation data fetching,bug designer bugbash,"### When designer fails to fetch operation data we currently just show loading states forever => ![image](https://user-images.githubusercontent.com/25409734/198397802-32a45eae-7e72-4735-b16d-6d26c1fcde52.png) ### Instead of this => ![image](https://user-images.githubusercontent.com/25409734/198397121-8f2795cf-b3b3-4113-bc34-81705dbbf29e.png) AB#16028084",1.0,"Missing error state for operation data fetching - ### When designer fails to fetch operation data we currently just show loading states forever => ![image](https://user-images.githubusercontent.com/25409734/198397802-32a45eae-7e72-4735-b16d-6d26c1fcde52.png) ### Instead of this => ![image](https://user-images.githubusercontent.com/25409734/198397121-8f2795cf-b3b3-4113-bc34-81705dbbf29e.png) AB#16028084",0,missing error state for operation data fetching when designer fails to fetch operation data we currently just show loading states forever instead of this ab ,0 3987,15097802939.0,IssuesEvent,2021-02-07 20:06:30,pulumi/docs,https://api.github.com/repos/pulumi/docs,closed,Basic cloudwatch -> s3 example seems broken,automation/tfgen-provider-docs,"File: [docs/reference/pkg/aws/cloudtrail/trail.md](https://www.pulumi.com/docs/reference/pkg/aws/cloudtrail/trail/) I don't see how this could work, since the policy has `tf-test-trail` hardcoded as the bucket name, but the bucket name is 1) going to be dynamic, and 2) going to start with `foo`. 
``` const current = aws.getCallerIdentity({}); const foo = new aws.s3.Bucket(""foo"", { forceDestroy: true, policy: current.then(current => `{ ""Version"": ""2012-10-17"", ""Statement"": [ { ""Sid"": ""AWSCloudTrailAclCheck"", ""Effect"": ""Allow"", ""Principal"": { ""Service"": ""cloudtrail.amazonaws.com"" }, ""Action"": ""s3:GetBucketAcl"", ""Resource"": ""arn:aws:s3:::tf-test-trail"" }, ```",1.0,"Basic cloudwatch -> s3 example seems broken - File: [docs/reference/pkg/aws/cloudtrail/trail.md](https://www.pulumi.com/docs/reference/pkg/aws/cloudtrail/trail/) I don't see how this could work, since the policy has `tf-test-trail` hardcoded as the bucket name, but the bucket name is 1) going to be dynamic, and 2) going to start with `foo`. ``` const current = aws.getCallerIdentity({}); const foo = new aws.s3.Bucket(""foo"", { forceDestroy: true, policy: current.then(current => `{ ""Version"": ""2012-10-17"", ""Statement"": [ { ""Sid"": ""AWSCloudTrailAclCheck"", ""Effect"": ""Allow"", ""Principal"": { ""Service"": ""cloudtrail.amazonaws.com"" }, ""Action"": ""s3:GetBucketAcl"", ""Resource"": ""arn:aws:s3:::tf-test-trail"" }, ```",1,basic cloudwatch example seems broken file i don t see how this could work since the policy has tf test trail hardcoded as the bucket name but the bucket name is going to be dynamic and going to start with foo const current aws getcalleridentity const foo new aws bucket foo forcedestroy true policy current then current version statement sid awscloudtrailaclcheck effect allow principal service cloudtrail amazonaws com action getbucketacl resource arn aws tf test trail ,1 34163,28379547580.0,IssuesEvent,2023-04-13 01:06:31,APSIMInitiative/ApsimX,https://api.github.com/repos/APSIMInitiative/ApsimX,closed,Stop building APSIM for .NET 3.1,interface/infrastructure refactor,"- Need to change all the .csproj files. - Can also remove the ApsimX\Setup\netcoreapp3.1 directory. - Can also remove ApsimX\Setup\apsimx.iss - not used. - Can also remove ApsimX\Setup\Linux - not used I think. - Can also remove ApsimX\Setup\osx - not used I think. ",1.0,"Stop building APSIM for .NET 3.1 - - Need to change all the .csproj files. - Can also remove the ApsimX\Setup\netcoreapp3.1 directory. - Can also remove ApsimX\Setup\apsimx.iss - not used. - Can also remove ApsimX\Setup\Linux - not used I think. - Can also remove ApsimX\Setup\osx - not used I think. ",0,stop building apsim for net need to change all the csproj files can also remove the apsimx setup directory can also remove apsimx setup apsimx iss not used can also remove apsimx setup linux not used i think can also remove apsimx setup osx not used i think ,0 9196,24198693084.0,IssuesEvent,2022-09-24 08:30:09,openzfs/zfs,https://api.github.com/repos/openzfs/zfs,closed,Kernel page fault while loading zfs module,Type: Architecture Type: Defect Status: Stale," ### System information Distribution Name | Deepin Distribution Version | 15.5 SP2 Linux Kernel | 3.10.84-23.fc21 Architecture | mips64el ZFS Version | 2.0.0-0 SPL Version | 2.0.0-0 ### Describe the problem you're observing Kernel page fault while loading zfs module. This issue does not exist in zfs 0.8.5. Everything works well from zfs 0.8.3 to 0.8.5. ### Describe how to reproduce the problem ```bash > sudo insmod zfs/zfs/zfs.ko Segmentation fault (core dumped) ``` ### Include any warning/errors/backtraces from the system logs /var/log/kern.log: ``` [ 798.425781] zavl: module license 'CDDL' taints kernel. 
[ 798.425781] Disabling lock debugging due to kernel taint [ 855.035156] ------------[ cut here ]------------ [ 855.035156] WARNING: CPU: 2 PID: 19590 at lib/scatterlist.c:287 __sg_alloc_table+0x174/0x188 [ 855.035156] Modules linked in: zfs(PO+) zcommon(PO) zunicode(PO) znvpair(PO) zlua(O) icp(PO) zavl(PO) zzstd(O) spl(O) zlib zlib_deflate veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xfrm4_tunnel tunnel4 ipcomp xfrm_ipcomp esp4 ah4 af_key bridge stp llc fuse uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_core videodev media arc4 rtl8188ee(O) rtl_pci(O) rtlwifi(O) mac80211 joydev serio_raw sg snd_hda_codec_realtek snd_hda_codec_generic snd_hda_codec_hdmi cfg80211 snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core rfkill snd_pcm snd_timer shpchp sch_fq_codel binfmt_misc raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 multipath linear md_mod pata_atiixp r8168(O) radeon(O) [ 855.035156] CPU: 2 PID: 19590 Comm: insmod Tainted: P O ------------ 3.10.0+ #1 [ 855.035156] Hardware name: Loongson Loongson-3A5-780E-1w-V1.1-demo/Loongson-3A5-780E-1w-V1.1-demo, BIOS Loongson-PMON-V3.3-20170113 01/13/2017 [ 855.035156] Stack : 0000000000000000 0000000000000000 ffffffff817c0000 ffffffff817bccd8 [ 855.035156] ffffffff80279dd8 ffffffff80e7a58b ffffffff80d10488 ffffffff817bc448 [ 855.035156] 0000000000004c86 0000000000000002 0000000000000004 ffffffffc10a0a20 [ 855.035156] 0000000000000000 ffffffff80a9de4c 9800000275803a68 0000000000000001 [ 855.035156] ffffffff80279dd8 ffffffff802767a8 0000000000000000 ffffffff8027a960 [ 855.035156] 9800000275836e00 ffffffff80d10488 ffffffff817bd0c0 0000000000000265 [ 855.035156] 0000000000000000 0000000000000000 0000000000000000 0000000000000000 [ 855.035156] 0000000000000000 98000002758039c0 0000000000000000 ffffffff80276a2c [ 855.035156] 0000000000000000 ffffffff80d4d200 ffffffff80599f54 0000000000000000 [ 855.035156] 0000000000000000 ffffffff8021a0e8 ffffffff80599f54 ffffffff80276a2c [ 855.035156] ... 
[ 855.035156] Call Trace: [ 855.035156] [] show_stack+0x68/0x80 [ 855.035156] [] __warn+0xf4/0x108 [ 855.035156] [] __sg_alloc_table+0x174/0x188 [ 855.035156] [] sg_alloc_table+0x24/0x60 [ 855.035156] [] abd_init+0x1f8/0x340 [zfs] [ 855.035156] [] dmu_init+0x18/0x110 [zfs] [ 855.035156] [] spa_init+0x190/0x2d8 [zfs] [ 855.035156] [] zfs_kmod_init+0x44/0x1090 [zfs] [ 855.035156] [] _init+0x3c/0xc4 [zfs] [ 855.035156] [] do_one_initcall+0x88/0x1b0 [ 855.035156] [] load_module+0x1e68/0x2590 [ 855.035156] [] SyS_finit_module+0x94/0xb0 [ 855.035156] [] syscall_common+0x34/0x58 [ 855.035156] [ 855.035156] ---[ end trace fc863b931c75040c ]--- [ 855.035156] BUG: Bad page state in process insmod pfn:00468 [ 855.035156] page:980000027f709a00 count:0 mapcount:0 mapping: (null) index:0x0 [ 855.035156] page flags: 0xfff000400(reserved) [ 855.035156] page dumped because: PAGE_FLAGS_CHECK_AT_FREE flag(s) set [ 855.035156] bad because of flags: [ 855.035156] page flags: 0x400(reserved) [ 855.035156] Modules linked in: zfs(PO+) zcommon(PO) zunicode(PO) znvpair(PO) zlua(O) icp(PO) zavl(PO) zzstd(O) spl(O) zlib zlib_deflate veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xfrm4_tunnel tunnel4 ipcomp xfrm_ipcomp esp4 ah4 af_key bridge stp llc fuse uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_core videodev media arc4 rtl8188ee(O) rtl_pci(O) rtlwifi(O) mac80211 joydev serio_raw sg snd_hda_codec_realtek snd_hda_codec_generic snd_hda_codec_hdmi cfg80211 snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core rfkill snd_pcm snd_timer shpchp sch_fq_codel binfmt_misc raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 multipath linear md_mod pata_atiixp r8168(O) radeon(O) [ 855.035156] CPU: 2 PID: 19590 Comm: insmod Tainted: P W O ------------ 3.10.0+ #1 [ 855.035156] Hardware name: Loongson Loongson-3A5-780E-1w-V1.1-demo/Loongson-3A5-780E-1w-V1.1-demo, BIOS Loongson-PMON-V3.3-20170113 01/13/2017 [ 855.035156] Stack : 0000000000000000 0000000000000000 ffffffff817c0000 ffffffff817bccd8 [ 855.035156] ffffffff80279dd8 ffffffff80e7a58b ffffffff80d10488 ffffffff817bc448 [ 855.035156] 0000000000004c86 0000000000000002 ffffffff80d20000 ffffffffffffffff [ 855.035156] fffffff000ffffff ffffffff80a9de4c 98000002758039e8 ffffffff817bccd8 [ 855.035156] ffffffff80279dd8 ffffffff802767a8 980000027f709a00 ffffffff8027a960 [ 855.035156] 9800000275836e00 ffffffff80d10488 ffffffff817bd0c0 0000000000000304 [ 855.035156] 0000000000000000 0000000000000000 0000000000000000 0000000000000000 [ 855.035156] 0000000000000000 9800000275803940 0000000000000000 ffffffff8035db18 [ 855.035156] 0000000000000000 ffffffff81810000 ffffffff80e30000 980000027f709a00 [ 855.035156] ffffffff81810000 ffffffff8021a0e8 ffffffff80e30000 ffffffff8035db18 [ 855.035156] ... 
[ 855.039062] Call Trace: [ 855.039062] [] show_stack+0x68/0x80 [ 855.039062] [] bad_page+0xf0/0x140 [ 855.039062] [] free_pages_prepare+0x14c/0x1f8 [ 855.039062] [] free_hot_cold_page+0x3c/0x208 [ 855.039062] [] __sg_free_table+0x88/0xb0 [ 855.039062] [] sg_alloc_table+0x50/0x60 [ 855.039062] [] abd_init+0x1f8/0x340 [zfs] [ 855.039062] [] dmu_init+0x18/0x110 [zfs] [ 855.039062] [] spa_init+0x190/0x2d8 [zfs] [ 855.039062] [] zfs_kmod_init+0x44/0x1090 [zfs] [ 855.039062] [] _init+0x3c/0xc4 [zfs] [ 855.039062] [] do_one_initcall+0x88/0x1b0 [ 855.039062] [] load_module+0x1e68/0x2590 [ 855.039062] [] SyS_finit_module+0x94/0xb0 [ 855.039062] [] syscall_common+0x34/0x58 [ 855.039062] [ 855.039062] CPU 2 Unable to handle kernel paging request at virtual address 0000000000003fe0, epc == ffffffff80599808, ra == ffffffff80599828 [ 855.058593] Oops[#1]: [ 855.082031] CPU: 2 PID: 19590 Comm: insmod Tainted: P B W O ------------ 3.10.0+ #1 [ 855.101562] Hardware name: Loongson Loongson-3A5-780E-1w-V1.1-demo/Loongson-3A5-780E-1w-V1.1-demo, BIOS Loongson-PMON-V3.3-20170113 01/13/2017 [ 855.125000] task: 9800000275836e00 ti: 9800000275800000 task.ti: 9800000275800000 [ 855.148437] $ 0 : 0000000000000000 0000000000000001 0000000000000000 0000000000000001 [ 855.171875] $ 4 : 0000000000000000 fffffffffffffe00 0000000000000000 0000000000000001 [ 855.195312] $ 8 : 0000000000000002 ffffffff8066e5b8 00000000000002f3 0000000000000005 [ 855.214843] $12 : 0000000000000000 ffffffff817e0000 0000000000000000 ffffffff817e0000 [ 855.238281] $16 : 0000000000000000 0000000000000200 ffffffff80599960 9800000275803bd0 [ 855.261718] $20 : 0000000000003fe0 00000000000001ff fffffffffffffffc ffffffffc10a0a20 [ 855.285156] $24 : 0000000000010020 ffffffff817bd560 [ 855.304687] $28 : 9800000275800000 9800000275803b60 0000000000000000 ffffffff80599828 [ 855.328125] Hi : 0000000000000f42 [ 855.351562] Lo : 000000003333703c [ 855.371093] epc : ffffffff80599808 __sg_free_table+0x68/0xb0 [ 855.394531] Tainted: P B W O ------------ [ 855.414062] ra : ffffffff80599828 __sg_free_table+0x88/0xb0 [ 855.437500] Status: d400cce3 KX SX UX KERNEL EXL IE [ 855.457031] Cause : 10000008 [ 855.480468] BadVA : 0000000000003fe0 [ 855.503906] PrId : 00146309 (ICT Loongson-3) [ 855.523437] Modules linked in: zfs(PO+) zcommon(PO) zunicode(PO) znvpair(PO) zlua(O) icp(PO) zavl(PO) zzstd(O) spl(O) zlib zlib_deflate veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xfrm4_tunnel tunnel4 ipcomp xfrm_ipcomp esp4 ah4 af_key bridge stp llc fuse uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_core videodev media arc4 rtl8188ee(O) rtl_pci(O) rtlwifi(O) mac80211 joydev serio_raw sg snd_hda_codec_realtek snd_hda_codec_generic snd_hda_codec_hdmi cfg80211 snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core rfkill snd_pcm snd_timer shpchp sch_fq_codel binfmt_misc raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 multipath linear md_mod pata_atiixp r8168(O) radeon(O) [ 855.628906] Process insmod (pid: 19590, threadinfo=9800000275800000, task=9800000275836e00, tls=000000fff16f7b20) [ 855.656250] Stack : 9800000275803bd0 ffffffffc1220000 ffffffff80599f68 ffffffff80aa86d0 [ 855.656250] ffffffffc109eda8 ffffffffc1220000 0000000000000004 ffffffff80599fb8 [ 855.656250] ffffffffffffffea ffffffffc1220000 980000027d853940 ffffffffc1015898 [ 855.656250] 0000000000000000 ffffffffc1083708 ffffffffc11a0000 fffffe00c11e6f80 [ 855.656250] ffffffffc0d80258 
ffffffff802a4530 ffffffffc11948e8 0000000000000003 [ 855.656250] ffffffffc1080000 0000000000000001 98000000f0141380 ffffffffc0effd68 [ 855.656250] 0000000000000000 ffffffffc0f99470 0000000000000020 ffffffffc1049708 [ 855.656250] 0002701fff000000 ffffffffc11a0000 ffffffff80a9ddc4 ffffffffc1080000 [ 855.656250] ffffffffc1080000 ffffffffc0ff32bc ffffffff80e7a280 ffffffff80e7a280 [ 855.656250] ffffffff80e7a280 ffffffffc129003c ffffffff80e7a280 ffffffff817b0000 [ 855.656250] ... [ 855.953125] Call Trace: [ 855.980468] [] __sg_free_table+0x68/0xb0 [ 856.007812] [] sg_alloc_table+0x50/0x60 [ 856.031250] [] abd_init+0x1f8/0x340 [zfs] [ 856.058593] [] dmu_init+0x18/0x110 [zfs] [ 856.085937] [] spa_init+0x190/0x2d8 [zfs] [ 856.113281] [] zfs_kmod_init+0x44/0x1090 [zfs] [ 856.140625] [] _init+0x3c/0xc4 [zfs] [ 856.167968] [] do_one_initcall+0x88/0x1b0 [ 856.195312] [] load_module+0x1e68/0x2590 [ 856.222656] [] SyS_finit_module+0x94/0xb0 [ 856.246093] [] syscall_common+0x34/0x58 [ 856.273437] [ 856.300781] [ 856.300781] Code: 0000102d 10600005 0000802d 00b51023 0220282d 02168024 14c0fff3 ae62000c [ 856.355468] ---[ end trace fc863b931c75040d ]--- ``` ",1.0,"Kernel page fault while loading zfs module - ### System information Distribution Name | Deepin Distribution Version | 15.5 SP2 Linux Kernel | 3.10.84-23.fc21 Architecture | mips64el ZFS Version | 2.0.0-0 SPL Version | 2.0.0-0 ### Describe the problem you're observing Kernel page fault while loading zfs module. This issue does not exist in zfs 0.8.5. Everything works well from zfs 0.8.3 to 0.8.5. ### Describe how to reproduce the problem ```bash > sudo insmod zfs/zfs/zfs.ko Segmentation fault (core dumped) ``` ### Include any warning/errors/backtraces from the system logs /var/log/kern.log: ``` [ 798.425781] zavl: module license 'CDDL' taints kernel. 
[ 798.425781] Disabling lock debugging due to kernel taint [ 855.035156] ------------[ cut here ]------------ [ 855.035156] WARNING: CPU: 2 PID: 19590 at lib/scatterlist.c:287 __sg_alloc_table+0x174/0x188 [ 855.035156] Modules linked in: zfs(PO+) zcommon(PO) zunicode(PO) znvpair(PO) zlua(O) icp(PO) zavl(PO) zzstd(O) spl(O) zlib zlib_deflate veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xfrm4_tunnel tunnel4 ipcomp xfrm_ipcomp esp4 ah4 af_key bridge stp llc fuse uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_core videodev media arc4 rtl8188ee(O) rtl_pci(O) rtlwifi(O) mac80211 joydev serio_raw sg snd_hda_codec_realtek snd_hda_codec_generic snd_hda_codec_hdmi cfg80211 snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core rfkill snd_pcm snd_timer shpchp sch_fq_codel binfmt_misc raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 multipath linear md_mod pata_atiixp r8168(O) radeon(O) [ 855.035156] CPU: 2 PID: 19590 Comm: insmod Tainted: P O ------------ 3.10.0+ #1 [ 855.035156] Hardware name: Loongson Loongson-3A5-780E-1w-V1.1-demo/Loongson-3A5-780E-1w-V1.1-demo, BIOS Loongson-PMON-V3.3-20170113 01/13/2017 [ 855.035156] Stack : 0000000000000000 0000000000000000 ffffffff817c0000 ffffffff817bccd8 [ 855.035156] ffffffff80279dd8 ffffffff80e7a58b ffffffff80d10488 ffffffff817bc448 [ 855.035156] 0000000000004c86 0000000000000002 0000000000000004 ffffffffc10a0a20 [ 855.035156] 0000000000000000 ffffffff80a9de4c 9800000275803a68 0000000000000001 [ 855.035156] ffffffff80279dd8 ffffffff802767a8 0000000000000000 ffffffff8027a960 [ 855.035156] 9800000275836e00 ffffffff80d10488 ffffffff817bd0c0 0000000000000265 [ 855.035156] 0000000000000000 0000000000000000 0000000000000000 0000000000000000 [ 855.035156] 0000000000000000 98000002758039c0 0000000000000000 ffffffff80276a2c [ 855.035156] 0000000000000000 ffffffff80d4d200 ffffffff80599f54 0000000000000000 [ 855.035156] 0000000000000000 ffffffff8021a0e8 ffffffff80599f54 ffffffff80276a2c [ 855.035156] ... 
[ 855.035156] Call Trace: [ 855.035156] [] show_stack+0x68/0x80 [ 855.035156] [] __warn+0xf4/0x108 [ 855.035156] [] __sg_alloc_table+0x174/0x188 [ 855.035156] [] sg_alloc_table+0x24/0x60 [ 855.035156] [] abd_init+0x1f8/0x340 [zfs] [ 855.035156] [] dmu_init+0x18/0x110 [zfs] [ 855.035156] [] spa_init+0x190/0x2d8 [zfs] [ 855.035156] [] zfs_kmod_init+0x44/0x1090 [zfs] [ 855.035156] [] _init+0x3c/0xc4 [zfs] [ 855.035156] [] do_one_initcall+0x88/0x1b0 [ 855.035156] [] load_module+0x1e68/0x2590 [ 855.035156] [] SyS_finit_module+0x94/0xb0 [ 855.035156] [] syscall_common+0x34/0x58 [ 855.035156] [ 855.035156] ---[ end trace fc863b931c75040c ]--- [ 855.035156] BUG: Bad page state in process insmod pfn:00468 [ 855.035156] page:980000027f709a00 count:0 mapcount:0 mapping: (null) index:0x0 [ 855.035156] page flags: 0xfff000400(reserved) [ 855.035156] page dumped because: PAGE_FLAGS_CHECK_AT_FREE flag(s) set [ 855.035156] bad because of flags: [ 855.035156] page flags: 0x400(reserved) [ 855.035156] Modules linked in: zfs(PO+) zcommon(PO) zunicode(PO) znvpair(PO) zlua(O) icp(PO) zavl(PO) zzstd(O) spl(O) zlib zlib_deflate veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xfrm4_tunnel tunnel4 ipcomp xfrm_ipcomp esp4 ah4 af_key bridge stp llc fuse uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_core videodev media arc4 rtl8188ee(O) rtl_pci(O) rtlwifi(O) mac80211 joydev serio_raw sg snd_hda_codec_realtek snd_hda_codec_generic snd_hda_codec_hdmi cfg80211 snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core rfkill snd_pcm snd_timer shpchp sch_fq_codel binfmt_misc raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 multipath linear md_mod pata_atiixp r8168(O) radeon(O) [ 855.035156] CPU: 2 PID: 19590 Comm: insmod Tainted: P W O ------------ 3.10.0+ #1 [ 855.035156] Hardware name: Loongson Loongson-3A5-780E-1w-V1.1-demo/Loongson-3A5-780E-1w-V1.1-demo, BIOS Loongson-PMON-V3.3-20170113 01/13/2017 [ 855.035156] Stack : 0000000000000000 0000000000000000 ffffffff817c0000 ffffffff817bccd8 [ 855.035156] ffffffff80279dd8 ffffffff80e7a58b ffffffff80d10488 ffffffff817bc448 [ 855.035156] 0000000000004c86 0000000000000002 ffffffff80d20000 ffffffffffffffff [ 855.035156] fffffff000ffffff ffffffff80a9de4c 98000002758039e8 ffffffff817bccd8 [ 855.035156] ffffffff80279dd8 ffffffff802767a8 980000027f709a00 ffffffff8027a960 [ 855.035156] 9800000275836e00 ffffffff80d10488 ffffffff817bd0c0 0000000000000304 [ 855.035156] 0000000000000000 0000000000000000 0000000000000000 0000000000000000 [ 855.035156] 0000000000000000 9800000275803940 0000000000000000 ffffffff8035db18 [ 855.035156] 0000000000000000 ffffffff81810000 ffffffff80e30000 980000027f709a00 [ 855.035156] ffffffff81810000 ffffffff8021a0e8 ffffffff80e30000 ffffffff8035db18 [ 855.035156] ... 
[ 855.039062] Call Trace: [ 855.039062] [] show_stack+0x68/0x80 [ 855.039062] [] bad_page+0xf0/0x140 [ 855.039062] [] free_pages_prepare+0x14c/0x1f8 [ 855.039062] [] free_hot_cold_page+0x3c/0x208 [ 855.039062] [] __sg_free_table+0x88/0xb0 [ 855.039062] [] sg_alloc_table+0x50/0x60 [ 855.039062] [] abd_init+0x1f8/0x340 [zfs] [ 855.039062] [] dmu_init+0x18/0x110 [zfs] [ 855.039062] [] spa_init+0x190/0x2d8 [zfs] [ 855.039062] [] zfs_kmod_init+0x44/0x1090 [zfs] [ 855.039062] [] _init+0x3c/0xc4 [zfs] [ 855.039062] [] do_one_initcall+0x88/0x1b0 [ 855.039062] [] load_module+0x1e68/0x2590 [ 855.039062] [] SyS_finit_module+0x94/0xb0 [ 855.039062] [] syscall_common+0x34/0x58 [ 855.039062] [ 855.039062] CPU 2 Unable to handle kernel paging request at virtual address 0000000000003fe0, epc == ffffffff80599808, ra == ffffffff80599828 [ 855.058593] Oops[#1]: [ 855.082031] CPU: 2 PID: 19590 Comm: insmod Tainted: P B W O ------------ 3.10.0+ #1 [ 855.101562] Hardware name: Loongson Loongson-3A5-780E-1w-V1.1-demo/Loongson-3A5-780E-1w-V1.1-demo, BIOS Loongson-PMON-V3.3-20170113 01/13/2017 [ 855.125000] task: 9800000275836e00 ti: 9800000275800000 task.ti: 9800000275800000 [ 855.148437] $ 0 : 0000000000000000 0000000000000001 0000000000000000 0000000000000001 [ 855.171875] $ 4 : 0000000000000000 fffffffffffffe00 0000000000000000 0000000000000001 [ 855.195312] $ 8 : 0000000000000002 ffffffff8066e5b8 00000000000002f3 0000000000000005 [ 855.214843] $12 : 0000000000000000 ffffffff817e0000 0000000000000000 ffffffff817e0000 [ 855.238281] $16 : 0000000000000000 0000000000000200 ffffffff80599960 9800000275803bd0 [ 855.261718] $20 : 0000000000003fe0 00000000000001ff fffffffffffffffc ffffffffc10a0a20 [ 855.285156] $24 : 0000000000010020 ffffffff817bd560 [ 855.304687] $28 : 9800000275800000 9800000275803b60 0000000000000000 ffffffff80599828 [ 855.328125] Hi : 0000000000000f42 [ 855.351562] Lo : 000000003333703c [ 855.371093] epc : ffffffff80599808 __sg_free_table+0x68/0xb0 [ 855.394531] Tainted: P B W O ------------ [ 855.414062] ra : ffffffff80599828 __sg_free_table+0x88/0xb0 [ 855.437500] Status: d400cce3 KX SX UX KERNEL EXL IE [ 855.457031] Cause : 10000008 [ 855.480468] BadVA : 0000000000003fe0 [ 855.503906] PrId : 00146309 (ICT Loongson-3) [ 855.523437] Modules linked in: zfs(PO+) zcommon(PO) zunicode(PO) znvpair(PO) zlua(O) icp(PO) zavl(PO) zzstd(O) spl(O) zlib zlib_deflate veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xfrm4_tunnel tunnel4 ipcomp xfrm_ipcomp esp4 ah4 af_key bridge stp llc fuse uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_core videodev media arc4 rtl8188ee(O) rtl_pci(O) rtlwifi(O) mac80211 joydev serio_raw sg snd_hda_codec_realtek snd_hda_codec_generic snd_hda_codec_hdmi cfg80211 snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core rfkill snd_pcm snd_timer shpchp sch_fq_codel binfmt_misc raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 multipath linear md_mod pata_atiixp r8168(O) radeon(O) [ 855.628906] Process insmod (pid: 19590, threadinfo=9800000275800000, task=9800000275836e00, tls=000000fff16f7b20) [ 855.656250] Stack : 9800000275803bd0 ffffffffc1220000 ffffffff80599f68 ffffffff80aa86d0 [ 855.656250] ffffffffc109eda8 ffffffffc1220000 0000000000000004 ffffffff80599fb8 [ 855.656250] ffffffffffffffea ffffffffc1220000 980000027d853940 ffffffffc1015898 [ 855.656250] 0000000000000000 ffffffffc1083708 ffffffffc11a0000 fffffe00c11e6f80 [ 855.656250] ffffffffc0d80258 
ffffffff802a4530 ffffffffc11948e8 0000000000000003 [ 855.656250] ffffffffc1080000 0000000000000001 98000000f0141380 ffffffffc0effd68 [ 855.656250] 0000000000000000 ffffffffc0f99470 0000000000000020 ffffffffc1049708 [ 855.656250] 0002701fff000000 ffffffffc11a0000 ffffffff80a9ddc4 ffffffffc1080000 [ 855.656250] ffffffffc1080000 ffffffffc0ff32bc ffffffff80e7a280 ffffffff80e7a280 [ 855.656250] ffffffff80e7a280 ffffffffc129003c ffffffff80e7a280 ffffffff817b0000 [ 855.656250] ... [ 855.953125] Call Trace: [ 855.980468] [] __sg_free_table+0x68/0xb0 [ 856.007812] [] sg_alloc_table+0x50/0x60 [ 856.031250] [] abd_init+0x1f8/0x340 [zfs] [ 856.058593] [] dmu_init+0x18/0x110 [zfs] [ 856.085937] [] spa_init+0x190/0x2d8 [zfs] [ 856.113281] [] zfs_kmod_init+0x44/0x1090 [zfs] [ 856.140625] [] _init+0x3c/0xc4 [zfs] [ 856.167968] [] do_one_initcall+0x88/0x1b0 [ 856.195312] [] load_module+0x1e68/0x2590 [ 856.222656] [] SyS_finit_module+0x94/0xb0 [ 856.246093] [] syscall_common+0x34/0x58 [ 856.273437] [ 856.300781] [ 856.300781] Code: 0000102d 10600005 0000802d 00b51023 0220282d 02168024 14c0fff3 ae62000c [ 856.355468] ---[ end trace fc863b931c75040d ]--- ``` ",0,kernel page fault while loading zfs module system information distribution name deepin distribution version linux kernel architecture zfs version spl version describe the problem you re observing kernel page fault while loading zfs module this issue does not exist in zfs everything works well from zfs to describe how to reproduce the problem bash sudo insmod zfs zfs zfs ko segmentation fault core dumped include any warning errors backtraces from the system logs var log kern log zavl module license cddl taints kernel disabling lock debugging due to kernel taint warning cpu pid at lib scatterlist c sg alloc table modules linked in zfs po zcommon po zunicode po znvpair po zlua o icp po zavl po zzstd o spl o zlib zlib deflate veth ipt masquerade nf nat masquerade iptable nat nf conntrack nf defrag nf nat nf nat nf conntrack tunnel ipcomp xfrm ipcomp af key bridge stp llc fuse uvcvideo vmalloc memops core videodev media o rtl pci o rtlwifi o joydev serio raw sg snd hda codec realtek snd hda codec generic snd hda codec hdmi snd hda intel snd hda codec snd hwdep snd hda core rfkill snd pcm snd timer shpchp sch fq codel binfmt misc async recov async memcpy async pq async xor async tx xor pq multipath linear md mod pata atiixp o radeon o cpu pid comm insmod tainted p o hardware name loongson loongson demo loongson demo bios loongson pmon stack call trace show stack warn sg alloc table sg alloc table abd init dmu init spa init zfs kmod init init do one initcall load module sys finit module syscall common bug bad page state in process insmod pfn page count mapcount mapping null index page flags reserved page dumped because page flags check at free flag s set bad because of flags page flags reserved modules linked in zfs po zcommon po zunicode po znvpair po zlua o icp po zavl po zzstd o spl o zlib zlib deflate veth ipt masquerade nf nat masquerade iptable nat nf conntrack nf defrag nf nat nf nat nf conntrack tunnel ipcomp xfrm ipcomp af key bridge stp llc fuse uvcvideo vmalloc memops core videodev media o rtl pci o rtlwifi o joydev serio raw sg snd hda codec realtek snd hda codec generic snd hda codec hdmi snd hda intel snd hda codec snd hwdep snd hda core rfkill snd pcm snd timer shpchp sch fq codel binfmt misc async recov async memcpy async pq async xor async tx xor pq multipath linear md mod pata atiixp o radeon o cpu pid comm insmod tainted p w o hardware 
name loongson loongson demo loongson demo bios loongson pmon stack ffffffffffffffff call trace show stack bad page free pages prepare free hot cold page sg free table sg alloc table abd init dmu init spa init zfs kmod init init do one initcall load module sys finit module syscall common cpu unable to handle kernel paging request at virtual address epc ra oops cpu pid comm insmod tainted p b w o hardware name loongson loongson demo loongson demo bios loongson pmon task ti task ti fffffffffffffffc hi lo epc sg free table tainted p b w o ra sg free table status kx sx ux kernel exl ie cause badva prid ict loongson modules linked in zfs po zcommon po zunicode po znvpair po zlua o icp po zavl po zzstd o spl o zlib zlib deflate veth ipt masquerade nf nat masquerade iptable nat nf conntrack nf defrag nf nat nf nat nf conntrack tunnel ipcomp xfrm ipcomp af key bridge stp llc fuse uvcvideo vmalloc memops core videodev media o rtl pci o rtlwifi o joydev serio raw sg snd hda codec realtek snd hda codec generic snd hda codec hdmi snd hda intel snd hda codec snd hwdep snd hda core rfkill snd pcm snd timer shpchp sch fq codel binfmt misc async recov async memcpy async pq async xor async tx xor pq multipath linear md mod pata atiixp o radeon o process insmod pid threadinfo task tls stack ffffffffffffffea call trace sg free table sg alloc table abd init dmu init spa init zfs kmod init init do one initcall load module sys finit module syscall common code ,0 6278,22675711640.0,IssuesEvent,2022-07-04 04:12:32,flow-typed/flow-typed,https://api.github.com/repos/flow-typed/flow-typed,closed,[Test CI] `validate-defs` should ensure that each npm package name is actually an npm package name,automation,"Currently all of our definitions are assumed to be valid npm package names, but we don't automatically verify this. (I've been verifying it as I accept PRs). It would be good to automate this in the CI tests. ",1.0,"[Test CI] `validate-defs` should ensure that each npm package name is actually an npm package name - Currently all of our definitions are assumed to be valid npm package names, but we don't automatically verify this. (I've been verifying it as I accept PRs). It would be good to automate this in the CI tests. ",1, validate defs should ensure that each npm package name is actually an npm package name currently all of our definitions are assumed to be valid npm package names but we don t automatically verify this i ve been verifying it as i accept prs it would be good to automate this in the ci tests ,1 5715,20829641534.0,IssuesEvent,2022-03-19 07:49:15,boto/botocore,https://api.github.com/repos/boto/botocore,closed,Support custom SSLContext objects,feature-request automation-exempt,"Basically there is this [boto3/issues/1976](https://github.com/boto/boto3/issues/1976) where I asked for there to be PFS (perfect forward secrecy) in boto3, the response is a feature request to add support for a custom `SSLContext` to the `requests.adapters.HTTPAdapter`. It's been identified the best place to introduce this is in the botocore class `URLLib3Session` because end-users can provide custom botocore session to boto3, and boto3 itself calls `botocore.session.get_session()` when an end-user custom botocore session is not provided. To see if i might contribute a PR i forked botocore and dug through the codebase to look for usage of `requests.adapters.HTTPAdapter` and i found a very very scary pattern... 
there are forked vendored libraries hardcoded in botocore, frozen without patches (including security no patches!!!), particularly i found requests `adapters.py` file, but it has been hacked to pieces and doesn't even represent anything close to what requests actually looks like today... Putting aside my ranting about some of the worst appsec i've seen in a while.. Just don't ever embed deps in your repos.. So, looking at botocore/httpsession.py class `URLLib3Session` i can tell that this is the enabler of `HTTPAdapter` usage, awesome! good start. However there is no ability to provide a HTTPAdapter with a custom end-user provided SSLContext, because there is no existence of the ssl context representation in this HTTPAdapter at all... it's not a HTTPAdapter at all close to what urllib3 or requests uses despite the botocore comments... Moving on. Next i looked for where botocore might call a `create_urllib3_context`, and again i find a method that is called this but is not even close to the _real_ urllib3 method, according to comments, because _reasons_; ```python def create_urllib3_context(ssl_version=None, cert_reqs=None, options=None, ciphers=None): """""" This function is a vendored version of the same function in urllib3 We vendor this function to ensure that the SSL contexts we construct always use the std lib SSLContext instead of pyopenssl. """""" ``` But this isn't true either... Worse, it allows `context.set_ciphers(ciphers or DEFAULT_CIPHERS)` and `DEFAULT_CIPHERS` comes from `urllib3` (Okay i guess... might be better to use the std lib SSL, see below) and `ciphers` is an argument with default `None` as you can see above, but the method `create_urllib3_context` is called by; ```python class URLLib3Session(object): def _get_ssl_context(self): return create_urllib3_context() ``` Without giving it any arguments... Meaning the ssl_context provided by `_get_connection_manager` for the `URLLib3Session.send` method only supports ciphers from `urllib3`, not the std ssl lib or a custom provided set of ciphers (ergo this discussion). Back to looking at the SSLContext: if we follow the imports to understand where exactly we can set the ciphers (assuming python 3.7, you'll see why); 1) `context = SSLContext` from `botocore/httpsession.py` which imports from ```python try: # Always import the original SSLContext, even if it has been patched from urllib3.contrib.pyopenssl import orig_util_SSLContext as SSLContext except ImportError: from urllib3.util.ssl_ import SSLContext ``` 2a) we follow `urllib3.contrib.pyopenssl.orig_util_SSLContext` to `/usr/lib/python3/dist-packages/urllib3/contrib/pyopenssl.py` which has `orig_util_SSLContext = util.ssl_.SSLContext` which is `/usr/lib/python3/dist-packages/urllib3/util/ssl_.py` import; ```python try: from ssl import SSLContext # Modern SSL? except ImportError: import sys class SSLContext(object): # Platform-specific: Python 2 & 3.1 ... ... ... ``` So `Modern SSL?` line leads us to `2b` from our original content, and we will look at `Platform-specific: Python 2 & 3.1` as `2c`. 2b) Leads us to `/usr/lib/python3.7/ssl.py` awesome :+1: but... 2c) this is the urllib3 `ssl_.py` not the standard SSLContext library written by Bill Janssen. Doing a diff there are 171 vs 57 LOC and nothing is shared at all. It does do 1 thing the std lib doesn't do, it TypeError's when SSLContext is used by 2.7, 3.2, or earlier versions of Python which did not support setting a custom cipher suite. 
2a and 2b will instead ImportError for `import _ssl` even when openssl is installed. interestingly this non-standard SSLContext is only imported when it is run from Python 2 (all versions it seems) & 3.1 (botocore will not support) therefore with v2 deprecated and EOL in 3 months i'll continue assuming we are running botocore supported python3.3 or higher only. So, for the task of supporting custom ciphers in a botocore session, we are going to need only to expose the std SSL lib which provides SSLContext, because ciphers are set using the `wrap_socket` method. **Looking good finally** 3) Let's learn where botocore utilises `wrap_socket` with regards to a `URLLib3Session`. Because `create_urllib3_context` is called from `URLLib3Session._get_ssl_context`. I can see `URLLib3Session` is used in `Endpoint` and interestingly represented as an argument `http_session`, it is also in `EndpointCreator.create_endpoint` (where Endpoint is also used). So `ClientArgsCreator.get_client_args` seems to be a good place to provide an end-user configurable, let's name it `enforce_pfs`, which essentially instantiates a version of the botocore custom SSLContext using existing methods without deleting or modifying existing arguments. essentially making no side effects to ensure backwards compatibility. Based on the rules of contributing, i've started the discussion. will push the PR that references this issue. ",1.0,"Support custom SSLContext objects - Basically there is this [boto3/issues/1976](https://github.com/boto/boto3/issues/1976) where I asked for there to be PFS (perfect forward secrecy) in boto3, the response is a feature request to add support for a custom `SSLContext` to the `requests.adapters.HTTPAdapter`. It's been identified the best place to introduce this is in the botocore class `URLLib3Session` because end-users can provide custom botocore session to boto3, and boto3 itself calls `botocore.session.get_session()` when an end-user custom botocore session is not provided. To see if i might contribute a PR i forked botocore and dug through the codebase to look for usage of `requests.adapters.HTTPAdapter` and i found a very very scary pattern... there are forked vendored libraries hardcoded in botocore, frozen without patches (including security no patches!!!), particularly i found requests `adapters.py` file, but it has been hacked to pieces and doesn't even represent anything close to what requests actually looks like today... Putting aside my ranting about some of the worst appsec i've seen in a while.. Just don't ever embed deps in your repos.. So, looking at botocore/httpsession.py class `URLLib3Session` i can tell that this is the enabler of `HTTPAdapter` usage, awesome! good start. However there is no ability to provide a HTTPAdapter with a custom end-user provided SSLContext, because there is no existence of the ssl context representation in this HTTPAdapter at all... it's not a HTTPAdapter at all close to what urllib3 or requests uses despite the botocore comments... Moving on. Next i looked for where botocore might call a `create_urllib3_context`, and again i find a method that is called this but is not even close to the _real_ urllib3 method, according to comments, because _reasons_; ```python def create_urllib3_context(ssl_version=None, cert_reqs=None, options=None, ciphers=None): """""" This function is a vendored version of the same function in urllib3 We vendor this function to ensure that the SSL contexts we construct always use the std lib SSLContext instead of pyopenssl.
"""""" ``` But this isn't true either... Worse, it allows `context.set_ciphers(ciphers or DEFAULT_CIPHERS)` and `DEFAULT_CIPHERS` comes from `urllib3` (Okay i guess... might be better to use the std lib SSL, see below) and `ciphers` is an argument with default `None` as you can see above, but the method `create_urllib3_context` is called by; ```python class URLLib3Session(object): def _get_ssl_context(self): return create_urllib3_context() ``` Without giving it any arguments... Meaning the ssl_context provided by `_get_connection_manager` for the `URLLib3Session.send` method only supports ciphers from `urllib3`, not the std ssl lib or a custom provided set of ciphers (ergo this discussion). Back to looking at the SSLContext: if we follow the imports to understand where exactly we can set the ciphers (assuming python 3.7, you'll see why); 1) `context = SSLContext` from `botocore/httpsession.py` which imports from ```python try: # Always import the original SSLContext, even if it has been patched from urllib3.contrib.pyopenssl import orig_util_SSLContext as SSLContext except ImportError: from urllib3.util.ssl_ import SSLContext ``` 2a) we follow `urllib3.contrib.pyopenssl.orig_util_SSLContext` to `/usr/lib/python3/dist-packages/urllib3/contrib/pyopenssl.py` which has `orig_util_SSLContext = util.ssl_.SSLContext` which is `/usr/lib/python3/dist-packages/urllib3/util/ssl_.py` import; ```python try: from ssl import SSLContext # Modern SSL? except ImportError: import sys class SSLContext(object): # Platform-specific: Python 2 & 3.1 ... ... ... ``` So `Modern SSL?` line leads us to `2b` from our original content, and we will look at `Platform-specific: Python 2 & 3.1` as `2c`. 2b) Leads us to `/usr/lib/python3.7/ssl.py` awesome :+1: but... 2c) this is the urllib3 `ssl_.py` not the standard SSLContext library written by Bill Janssen. Doing a diff there are 171 vs 57 LOC and nothing is shared at all. It does do 1 thing the std lib doesn't do, it TypeError's when SSLContext is
used by 2.7, 3.2, or earlier versions of Python which did not support setting a custom cipher suite. 2a and 2b will instead ImportError for `import _ssl` even when openssl is installed. interestingly this non-standard SSLContext is only imported when it is run from Python 2 (all versions it seems) & 3.1 (botocore will not support) therefore with v2 deprecated and EOL in 3 months i'll continue assuming we are running botocore supported python3.3 or higher only. So, for the task of supporting custom ciphers in a botocore session, we are going to need only to expose the std SSL lib which provides SSLContext, because ciphers are set using the `wrap_socket` method. **Looking good finally** 3) Let's learn where botocore utilises `wrap_socket` with regards to a `URLLib3Session`. Because `create_urllib3_context` is called from `URLLib3Session._get_ssl_context`. I can see `URLLib3Session` is used in `Endpoint` and interestingly represented as an argument `http_session`, it is also in `EndpointCreator.create_endpoint` (where Endpoint is also used). So `ClientArgsCreator.get_client_args` seems to be a good place to provide an end-user configurable, let's name it `enforce_pfs`, which essentially instantiates a version of the botocore custom SSLContext using existing methods without deleting or modifying existing arguments. essentially making no side effects to ensure backwards compatibility. Based on the rules of contributing, i've started the discussion. will push the PR that references this issue. 
used by or earlier versions of python which did not support setting a custom cipher suite and will instead importerror for import ssl even when openssl is installed interestingly this non standard sslcontext is it only imported when it is run from python all versions it seems botocore will not support therefore with deprecated and eol in months i ll continue assuming we are running botocore supported or higher only so for the task of supporting custom ciphers in a botocore session we are going to need only to expose the std ssl lib which provides sslcontext because ciphers are set using the wrap socket method looking good finally let s learn where botocore utilises wrap socket with regards to a because create context is called from get ssl context i can see is used in endpoint and interestingly represented as an argument http session it is also in endpointcreator create endpoint where endpoint is also used so clientargscreator get client args seems to be a good place to provide an end user configurable let s name it enforce pfs which essentially instantiates a version of the botocore custom sslcontext using existing methods without deleting or modifying existing arguments essentially making no side effects to ensure backwards compatibility based on the rules of contributing i ve started the discussion will push the pr that references this issue ,1 172022,21031018594.0,IssuesEvent,2022-03-31 01:01:23,samq-ghdemo/SEARCH-NCJIS-nibrs,https://api.github.com/repos/samq-ghdemo/SEARCH-NCJIS-nibrs,opened,CVE-2022-22950 (Medium) detected in multiple libraries,security vulnerability,"## CVE-2022-22950 - Medium Severity Vulnerability
Vulnerable Libraries - spring-expression-5.0.9.RELEASE.jar, spring-expression-3.2.16.RELEASE.jar, spring-expression-5.1.7.RELEASE.jar, spring-expression-4.3.11.RELEASE.jar

spring-expression-5.0.9.RELEASE.jar

Spring Expression Language (SpEL)

Library home page: https://github.com/spring-projects/spring-framework

Path to dependency file: /tools/nibrs-summary-report/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/web/nibrs-web/target/nibrs-web/WEB-INF/lib/spring-expression-5.0.9.RELEASE.jar

Dependency Hierarchy: - :x: **spring-expression-5.0.9.RELEASE.jar** (Vulnerable Library)

spring-expression-3.2.16.RELEASE.jar

Spring Expression Language (SpEL)

Library home page: https://github.com/SpringSource/spring-framework

Path to dependency file: /tools/nibrs-common/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/3.2.16.RELEASE/spring-expression-3.2.16.RELEASE.jar

Dependency Hierarchy: - tika-parsers-1.18.jar (Root Library) - uimafit-core-2.2.0.jar - spring-context-3.2.16.RELEASE.jar - :x: **spring-expression-3.2.16.RELEASE.jar** (Vulnerable Library)

spring-expression-5.1.7.RELEASE.jar

Spring Expression Language (SpEL)

Library home page: https://github.com/spring-projects/spring-framework

Path to dependency file: /tools/nibrs-summary-report-common/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.1.7.RELEASE/spring-expression-5.1.7.RELEASE.jar

Dependency Hierarchy: - spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library) - spring-webmvc-5.1.7.RELEASE.jar - :x: **spring-expression-5.1.7.RELEASE.jar** (Vulnerable Library)

spring-expression-4.3.11.RELEASE.jar

Spring Expression Language (SpEL)

Library home page: https://github.com/spring-projects/spring-framework

Path to dependency file: /tools/nibrs-fbi-service/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/4.3.11.RELEASE/spring-expression-4.3.11.RELEASE.jar,/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/spring-expression-4.3.11.RELEASE.jar

Dependency Hierarchy: - :x: **spring-expression-4.3.11.RELEASE.jar** (Vulnerable Library)

Found in base branch: master

Vulnerability Details

In Spring Framework versions 5.3.0 - 5.3.16 and older unsupported versions, it is possible for a user to provide a specially crafted SpEL expression that may cause a denial of service condition.

Publish Date: 2022-01-11

URL: CVE-2022-22950

CVSS 3 Score Details (5.4)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: Low
  - Availability Impact: Low

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://tanzu.vmware.com/security/cve-2022-22950

Release Date: 2022-01-11

Fix Resolution: org.springframework:spring-expression:5.3.17
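A minimal sketch of how the fix resolution above could be applied in the affected Maven modules, assuming a `dependencyManagement` override is acceptable in these poms (the placement is an assumption, not part of the scanner output):

```xml
<!-- Illustrative pom.xml override: pin the transitive spring-expression
     artifact to the fixed release named in the fix resolution. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-expression</artifactId>
      <version>5.3.17</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```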

*** - [ ] Check this box to open an automated fix PR ",True,"CVE-2022-22950 (Medium) detected in multiple libraries - ## CVE-2022-22950 - Medium Severity Vulnerability
Vulnerable Libraries - spring-expression-5.0.9.RELEASE.jar, spring-expression-3.2.16.RELEASE.jar, spring-expression-5.1.7.RELEASE.jar, spring-expression-4.3.11.RELEASE.jar

spring-expression-5.0.9.RELEASE.jar

Spring Expression Language (SpEL)

Library home page: https://github.com/spring-projects/spring-framework

Path to dependency file: /tools/nibrs-summary-report/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/web/nibrs-web/target/nibrs-web/WEB-INF/lib/spring-expression-5.0.9.RELEASE.jar

Dependency Hierarchy: - :x: **spring-expression-5.0.9.RELEASE.jar** (Vulnerable Library)

spring-expression-3.2.16.RELEASE.jar

Spring Expression Language (SpEL)

Library home page: https://github.com/SpringSource/spring-framework

Path to dependency file: /tools/nibrs-common/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/3.2.16.RELEASE/spring-expression-3.2.16.RELEASE.jar

Dependency Hierarchy: - tika-parsers-1.18.jar (Root Library) - uimafit-core-2.2.0.jar - spring-context-3.2.16.RELEASE.jar - :x: **spring-expression-3.2.16.RELEASE.jar** (Vulnerable Library)

spring-expression-5.1.7.RELEASE.jar

Spring Expression Language (SpEL)

Library home page: https://github.com/spring-projects/spring-framework

Path to dependency file: /tools/nibrs-summary-report-common/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.1.7.RELEASE/spring-expression-5.1.7.RELEASE.jar

Dependency Hierarchy: - spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library) - spring-webmvc-5.1.7.RELEASE.jar - :x: **spring-expression-5.1.7.RELEASE.jar** (Vulnerable Library)

spring-expression-4.3.11.RELEASE.jar

Spring Expression Language (SpEL)

Library home page: https://github.com/spring-projects/spring-framework

Path to dependency file: /tools/nibrs-fbi-service/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/4.3.11.RELEASE/spring-expression-4.3.11.RELEASE.jar,/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/spring-expression-4.3.11.RELEASE.jar

Dependency Hierarchy: - :x: **spring-expression-4.3.11.RELEASE.jar** (Vulnerable Library)

Found in base branch: master

Vulnerability Details

In Spring Framework versions 5.3.0 - 5.3.16 and older unsupported versions, it is possible for a user to provide a specially crafted SpEL expression that may cause a denial of service condition.

Publish Date: 2022-01-11

URL: CVE-2022-22950

CVSS 3 Score Details (5.4)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: Low
  - Availability Impact: Low

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://tanzu.vmware.com/security/cve-2022-22950

Release Date: 2022-01-11

Fix Resolution: org.springframework:spring-expression:5.3.17

*** - [ ] Check this box to open an automated fix PR ",0,cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries spring expression release jar spring expression release jar spring expression release jar spring expression release jar spring expression release jar spring expression language spel library home page a href path to dependency file tools nibrs summary report pom xml path to vulnerable library home wss scanner repository org springframework spring expression release spring expression release jar home wss scanner repository org springframework spring expression release spring expression release jar home wss scanner repository org springframework spring expression release spring expression release jar home wss scanner repository org springframework spring expression release spring expression release jar home wss scanner repository org springframework spring expression release spring expression release jar home wss scanner repository org springframework spring expression release spring expression release jar home wss scanner repository org springframework spring expression release spring expression release jar home wss scanner repository org springframework spring expression release spring expression release jar home wss scanner repository org springframework spring expression release spring expression release jar web nibrs web target nibrs web web inf lib spring expression release jar dependency hierarchy x spring expression release jar vulnerable library spring expression release jar spring expression language spel library home page a href path to dependency file tools nibrs common pom xml path to vulnerable library home wss scanner repository org springframework spring expression release spring expression release jar dependency hierarchy tika parsers jar root library uimafit core jar spring context release jar x spring expression release jar vulnerable library spring expression release jar spring expression language spel library home page a href path to dependency file tools nibrs summary report common pom xml path to vulnerable library home wss scanner repository org springframework spring expression release spring expression release jar dependency hierarchy spring boot starter web release jar root library spring webmvc release jar x spring expression release jar vulnerable library spring expression release jar spring expression language spel library home page a href path to dependency file tools nibrs fbi service pom xml path to vulnerable library home wss scanner repository org springframework spring expression release spring expression release jar tools nibrs fbi service target nibrs fbi service web inf lib spring expression release jar dependency hierarchy x spring expression release jar vulnerable library found in base branch master vulnerability details in spring framework versions and older unsupported versions it is possible for a user to provide a specially crafted spel expression that may cause a denial of service condition publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring expression check this box to open an automated fix pr isopenpronvulnerability false 
ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org springframework spring expression release isminimumfixversionavailable true minimumfixversion org springframework spring expression isbinary false packagetype java groupid org springframework packagename spring expression packageversion release packagefilepaths istransitivedependency true dependencytree org apache tika tika parsers org apache uima uimafit core org springframework spring context release org springframework spring expression release isminimumfixversionavailable true minimumfixversion org springframework spring expression isbinary false packagetype java groupid org springframework packagename spring expression packageversion release packagefilepaths istransitivedependency true dependencytree org springframework boot spring boot starter web release org springframework spring webmvc release org springframework spring expression release isminimumfixversionavailable true minimumfixversion org springframework spring expression isbinary false packagetype java groupid org springframework packagename spring expression packageversion release packagefilepaths istransitivedependency false dependencytree org springframework spring expression release isminimumfixversionavailable true minimumfixversion org springframework spring expression isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails in spring framework versions and older unsupported versions it is possible for a user to provide a specially crafted spel expression that may cause a denial of service condition vulnerabilityurl ,0 194669,14684657366.0,IssuesEvent,2021-01-01 04:19:04,github-vet/rangeloop-pointer-findings,https://api.github.com/repos/github-vet/rangeloop-pointer-findings,closed,srackham/go-rimu: rimugo/rimugo_test.go; 67 LoC,fresh medium test," Found a possible issue in [srackham/go-rimu](https://www.github.com/srackham/go-rimu) at [rimugo/rimugo_test.go](https://github.com/srackham/go-rimu/blob/56277b6cde5f725036cda89006b7c9b16814659d/rimugo/rimugo_test.go#L35-L101) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > range-loop variable tt used in defer or goroutine at line 67 [Click here to see the code in its original context.](https://github.com/srackham/go-rimu/blob/56277b6cde5f725036cda89006b7c9b16814659d/rimugo/rimugo_test.go#L35-L101)
Click here to show the 67 line(s) of Go which triggered the analyzer. ```go for _, tt := range tests { if strings.Contains(tt.Unsupported, ""go"") { continue } for _, layout := range []string{"""", ""classic"", ""flex"", ""sequel""} { // Skip if not a layouts test and we have a layout, or if it is a layouts test but no layout is specified. if !tt.Layouts && layout != """" || tt.Layouts && layout == """" { continue } tt.Expected = strings.Replace(tt.Expected, ""./test/fixtures/"", ""./testdata/"", -1) tt.Args = strings.Replace(tt.Args, ""./test/fixtures/"", ""./testdata/"", -1) tt.Args = strings.Replace(tt.Args, ""./examples/example-rimurc.rmu"", ""./testdata/example-rimurc.rmu"", -1) command := ""rimugo --no-rimurc"" if layout != """" { command += "" --layout "" + layout } command += "" "" + tt.Args var cmd *exec.Cmd if runtime.GOOS == ""windows"" { cmd = exec.Command(""PowerShell.exe"", ""-Command"", command) } else { cmd = exec.Command(""bash"", ""-c"", command) } var outb, errb bytes.Buffer cmd.Stdout = &outb cmd.Stderr = &errb stdin, err := cmd.StdinPipe() if err != nil { panic(err.Error()) } go func() { defer stdin.Close() io.WriteString(stdin, tt.Input) }() err = cmd.Run() exitCode := 0 if err != nil { exitCode = 1 } out := outb.String() + errb.String() out = strings.Replace(out, ""\r"", """", -1) // Strip Windows return characters. passed := false switch tt.Predicate { case ""contains"": passed = strings.Contains(out, tt.Expected) case ""!contains"": passed = !strings.Contains(out, tt.Expected) case ""equals"": passed = out == tt.Expected case ""!equals"": passed = out != tt.Expected case ""startsWith"": passed = strings.HasPrefix(out, tt.Expected) default: panic(tt.Description + "": illegal predicate: "" + tt.Predicate) } if !passed { t.Errorf(""\n%-15s: %s\n%-15s: %s\n%-15s: %s\n%-15s: %s\n%-15s: %s\n%-15s: %s\n\n"", ""description"", tt.Description, ""args"", tt.Args, ""input"", tt.Input, ""predicate:"", tt.Predicate, ""expected"", tt.Expected, ""got"", out) } if exitCode != tt.ExitCode { t.Errorf(""\n%-15s: %s\n%-15s: %d (expected %d)\n\n"", ""description"", tt.Description, ""exitcode"", exitCode, tt.ExitCode) } } } ```
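For context: the finding is the classic pre-Go-1.22 capture of a range variable by a goroutine. In this particular snippet it is arguably mitigated in practice, since `cmd.Run()` blocks before the next iteration begins, but the conventional shadowing idiom removes the warning either way. A minimal self-contained sketch (the `tests` slice and `WaitGroup` scaffolding are illustrative, not from the repository):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	tests := []string{"a", "b", "c"}
	var wg sync.WaitGroup
	for _, tt := range tests {
		tt := tt // shadow the range variable so each goroutine gets its own copy
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(tt) // safe: refers to this iteration's copy
		}()
	}
	wg.Wait()
}
```

Since Go 1.22 the range variable is scoped per iteration, which makes the shadowing line redundant but still harmless.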
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 56277b6cde5f725036cda89006b7c9b16814659d ",1.0,"srackham/go-rimu: rimugo/rimugo_test.go; 67 LoC - Found a possible issue in [srackham/go-rimu](https://www.github.com/srackham/go-rimu) at [rimugo/rimugo_test.go](https://github.com/srackham/go-rimu/blob/56277b6cde5f725036cda89006b7c9b16814659d/rimugo/rimugo_test.go#L35-L101) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > range-loop variable tt used in defer or goroutine at line 67 [Click here to see the code in its original context.](https://github.com/srackham/go-rimu/blob/56277b6cde5f725036cda89006b7c9b16814659d/rimugo/rimugo_test.go#L35-L101)
Click here to show the 67 line(s) of Go which triggered the analyzer. ```go for _, tt := range tests { if strings.Contains(tt.Unsupported, ""go"") { continue } for _, layout := range []string{"""", ""classic"", ""flex"", ""sequel""} { // Skip if not a layouts test and we have a layout, or if it is a layouts test but no layout is specified. if !tt.Layouts && layout != """" || tt.Layouts && layout == """" { continue } tt.Expected = strings.Replace(tt.Expected, ""./test/fixtures/"", ""./testdata/"", -1) tt.Args = strings.Replace(tt.Args, ""./test/fixtures/"", ""./testdata/"", -1) tt.Args = strings.Replace(tt.Args, ""./examples/example-rimurc.rmu"", ""./testdata/example-rimurc.rmu"", -1) command := ""rimugo --no-rimurc"" if layout != """" { command += "" --layout "" + layout } command += "" "" + tt.Args var cmd *exec.Cmd if runtime.GOOS == ""windows"" { cmd = exec.Command(""PowerShell.exe"", ""-Command"", command) } else { cmd = exec.Command(""bash"", ""-c"", command) } var outb, errb bytes.Buffer cmd.Stdout = &outb cmd.Stderr = &errb stdin, err := cmd.StdinPipe() if err != nil { panic(err.Error()) } go func() { defer stdin.Close() io.WriteString(stdin, tt.Input) }() err = cmd.Run() exitCode := 0 if err != nil { exitCode = 1 } out := outb.String() + errb.String() out = strings.Replace(out, ""\r"", """", -1) // Strip Windows return characters. passed := false switch tt.Predicate { case ""contains"": passed = strings.Contains(out, tt.Expected) case ""!contains"": passed = !strings.Contains(out, tt.Expected) case ""equals"": passed = out == tt.Expected case ""!equals"": passed = out != tt.Expected case ""startsWith"": passed = strings.HasPrefix(out, tt.Expected) default: panic(tt.Description + "": illegal predicate: "" + tt.Predicate) } if !passed { t.Errorf(""\n%-15s: %s\n%-15s: %s\n%-15s: %s\n%-15s: %s\n%-15s: %s\n%-15s: %s\n\n"", ""description"", tt.Description, ""args"", tt.Args, ""input"", tt.Input, ""predicate:"", tt.Predicate, ""expected"", tt.Expected, ""got"", out) } if exitCode != tt.ExitCode { t.Errorf(""\n%-15s: %s\n%-15s: %d (expected %d)\n\n"", ""description"", tt.Description, ""exitcode"", exitCode, tt.ExitCode) } } } ```
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 56277b6cde5f725036cda89006b7c9b16814659d ",0,srackham go rimu rimugo rimugo test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message range loop variable tt used in defer or goroutine at line click here to show the line s of go which triggered the analyzer go for tt range tests if strings contains tt unsupported go continue for layout range string classic flex sequel skip if not a layouts test and we have a layout or if it is a layouts test but no layout is specified if tt layouts layout tt layouts layout continue tt expected strings replace tt expected test fixtures testdata tt args strings replace tt args test fixtures testdata tt args strings replace tt args examples example rimurc rmu testdata example rimurc rmu command rimugo no rimurc if layout command layout layout command tt args var cmd exec cmd if runtime goos windows cmd exec command powershell exe command command else cmd exec command bash c command var outb errb bytes buffer cmd stdout outb cmd stderr errb stdin err cmd stdinpipe if err nil panic err error go func defer stdin close io writestring stdin tt input err cmd run exitcode if err nil exitcode out outb string errb string out strings replace out r strip windows return characters passed false switch tt predicate case contains passed strings contains out tt expected case contains passed strings contains out tt expected case equals passed out tt expected case equals passed out tt expected case startswith passed strings hasprefix out tt expected default panic tt description illegal predicate tt predicate if passed t errorf n s n s n s n s n s n s n n description tt description args tt args input tt input predicate tt predicate expected tt expected got out if exitcode tt exitcode t errorf n s n d expected d n n description tt description exitcode exitcode tt exitcode leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id ,0 276748,8608953692.0,IssuesEvent,2018-11-18 16:56:24,CzolgIT/tanksgame,https://api.github.com/repos/CzolgIT/tanksgame,opened,should we add drawing layers? or maybe sort the gameObjects list?,help wanted invalid priority: high,"the tank has to be drawn last... no idea how to handle it, for now it gets drawn twice ``` for (int i = 0; i < gameObjects.size();i++) { gameObjects[i]->draw( x0 , y0 ); } player->draw( x0 , y0 ); ``` ",1.0,"should we add drawing layers? or maybe sort the gameObjects list? - the tank has to be drawn last... 
nwm jak to rozegrac narazie rysuje sie dwa razy ``` for (int i = 0; i < gameObjects.size();i++) { gameObjects[i]->draw( x0 , y0 ); } player->draw( x0 , y0 ); ``` ",0,czy dodać warstwy rysowania czy może sortować listę gameobjectów czolg musi byc rysowany ostatni nwm jak to rozegrac narazie rysuje sie dwa razy for int i i gameobjects size i gameobjects draw player draw ,0 8597,27169802872.0,IssuesEvent,2023-02-17 18:18:42,zowe/zowe-chat,https://api.github.com/repos/zowe/zowe-chat,closed,Automated Pipeline Development,Epic automation 22P4,"- Building: - Code scan: SonarQube / CodeQL - Write building script - Implement building pipeline - Testing: - Write UT/FVT testing case - Implement testing pipeline automation",1.0,"Automated Pipeline Development - - Building: - Code scan: SonarQube / CodeQL - Write building script - Implement building pipeline - Testing: - Write UT/FVT testing case - Implement testing pipeline automation",1,automated pipeline development building code scan sonarqube codeql write building script implement building pipeline testing write ut fvt testing case implement testing pipeline automation,1 7832,25771394673.0,IssuesEvent,2022-12-09 08:18:09,easystats/easystats,https://api.github.com/repos/easystats/easystats,opened,About the new `R-CMD-check-devel-easystats` workflow,Core Packages :package: Dissemination :speaker: automation :robot:,"## Preamble In the book [Accelerate](https://www.amazon.de/-/en/Nicole-Ph-D-Forsgren/dp/1942788339), the authors demonstrate that the key difference between high-performing vs low-performing software teams is the ability to make a release at the drop of a hat. For us, this translates to making sure that the `main`-branch is always in sync with the rest of the development version of `easystats`, and being ready to make a CRAN release with the latest commit on the `main`, without having to worry whether the reverse `easystats` dependencies have already been updated for any potential breaking changes. For example, `{datawizard}` relies on `{insight}`, and so this check would make sure that the latest commit on `{datawizard}`'s `main`-branch playes nicely with the latest commit on `{insight}`'s `main`-branch. This is currently [the case](https://github.com/easystats/datawizard/actions/workflows/R-CMD-check-devel-easystats.yaml). This is also important because we provide `easystats::install_latest()` function, which assumes that all packages are in sync with each other, and it would be good to have an objective check for this. ## Workflow  To make sure that all packages are in sync with each other, I have created and added a new `R-CMD-check-devel-easystats`, which runs `R CMD check` on the given package using the development versions of `easystats` packages. It's run once every day at midnight. So you just need to make sure that this workflow isn't failing. If not daily, at least check once a week. ## Progress tracker There is no progress tracker here, since making sure that this check pass is a continuous process. This post is meant merely to inform you of this check. ",1.0,"About the new `R-CMD-check-devel-easystats` workflow - ## Preamble In the book [Accelerate](https://www.amazon.de/-/en/Nicole-Ph-D-Forsgren/dp/1942788339), the authors demonstrate that the key difference between high-performing vs low-performing software teams is the ability to make a release at the drop of a hat. 
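Either option from the title works; a minimal C++ sketch of the sorting variant is below. It assumes a hypothetical `layer` field on the game objects (the repository may name or store this differently) and keeps a single draw loop that guarantees the tank draws last:
```cpp
#include <algorithm>
#include <vector>

// Hypothetical: give every GameObject a layer; higher layers draw later (on top).
struct GameObject {
    int layer = 0;
    virtual void draw(int x0, int y0) = 0;
    virtual ~GameObject() = default;
};

void drawAll(std::vector<GameObject*>& gameObjects, int x0, int y0) {
    // Stable sort keeps insertion order within the same layer.
    std::stable_sort(gameObjects.begin(), gameObjects.end(),
                     [](const GameObject* a, const GameObject* b) {
                         return a->layer < b->layer;
                     });
    for (GameObject* obj : gameObjects)
        obj->draw(x0, y0);
}
```
With the player tank assigned the highest layer, the separate `player->draw(x0, y0)` call — and the current double draw — goes away.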
",1.0,0
8597,27169802872.0,IssuesEvent,2023-02-17 18:18:42,zowe/zowe-chat,https://api.github.com/repos/zowe/zowe-chat,closed,Automated Pipeline Development,Epic automation 22P4,"- Building:
  - Code scan: SonarQube / CodeQL
  - Write the build script
  - Implement the build pipeline
- Testing:
  - Write UT/FVT test cases
  - Implement testing pipeline automation",1.0,1
7832,25771394673.0,IssuesEvent,2022-12-09 08:18:09,easystats/easystats,https://api.github.com/repos/easystats/easystats,opened,About the new `R-CMD-check-devel-easystats` workflow,Core Packages :package: Dissemination :speaker: automation :robot:,"## Preamble In the book [Accelerate](https://www.amazon.de/-/en/Nicole-Ph-D-Forsgren/dp/1942788339), the authors demonstrate that the key difference between high-performing and low-performing software teams is the ability to make a release at the drop of a hat. For us, this translates to making sure that the `main` branch is always in sync with the rest of the development version of `easystats`, and being ready to make a CRAN release with the latest commit on `main`, without having to worry whether the reverse `easystats` dependencies have already been updated for any potential breaking changes. For example, `{datawizard}` relies on `{insight}`, so this check makes sure that the latest commit on `{datawizard}`'s `main` branch plays nicely with the latest commit on `{insight}`'s `main` branch. This is currently [the case](https://github.com/easystats/datawizard/actions/workflows/R-CMD-check-devel-easystats.yaml). This is also important because we provide the `easystats::install_latest()` function, which assumes that all packages are in sync with each other, and it would be good to have an objective check for this. ## Workflow To make sure that all packages are in sync with each other, I have created and added a new `R-CMD-check-devel-easystats` workflow, which runs `R CMD check` on the given package using the development versions of the `easystats` packages. It is run once every day at midnight, so you just need to make sure that this workflow isn't failing; if not daily, check at least once a week. ## Progress tracker There is no progress tracker here, since making sure that this check passes is a continuous process. This post is meant merely to inform you of this check.",1.0,1
499648,14475360628.0,IssuesEvent,2020-12-10 01:25:51,microsoft/terminal,https://api.github.com/repos/microsoft/terminal,closed,Tab close icon is too big,Area-User Interface In-PR Issue-Bug Priority-3 Product-Terminal,"# Environment
```none
Windows build number: 10.0.19042.630
Windows Terminal version (if applicable): 1.4.3243.0
Any other software? No
```
# Steps to reproduce
Right-click on an open tab and notice the close option.
# Expected behavior
The tab close icon should be the same size as the other icons.
# Actual behavior
The tab close icon is too large and almost distracting.
![image](https://user-images.githubusercontent.com/22644543/100432334-c0572c00-3099-11eb-9aa0-d0670b0fa207.png)",1.0,0
768430,26963043199.0,IssuesEvent,2023-02-08 19:48:02,Tedeapolis/development,https://api.github.com/repos/Tedeapolis/development,closed,[Bug]: Justitie as job2,bug new low priority,"### Contact Details
Nubudi#1907
### What happened?
When someone has Justitie as job2, almost nothing works any more, namely:
- [x] No lawyer pass in F2
- [ ] No access to the duty vehicle
- [x] No access to EUP
- [x] Not counted in F10
With the job as secondary, however, you do still have access to the locks and to the incoming reports.
### What should have happened?
Please enable all of the points above, so that people like me can also have a different primary job (in my case Radio Baas). The most annoying part is that I am no longer counted in F10, so officers cannot see that a lawyer is on duty.
### What causes the bug?
Having the job as secondary instead of primary
### Frequency of the bug
Always
### What is the impact of this bug?
Low
### Relevant log files
_No response_",1.0,0
210459,16103005879.0,IssuesEvent,2021-04-27 11:50:22,camunda-cloud/zeebe,https://api.github.com/repos/camunda-cloud/zeebe,opened,Tests which verify incident resolving on termination take too long,Impact: Testing Scope: broker Status: Needs Priority Type: Maintenance,"**Description** It seems that we have several tests (I counted 3) which share the same structure, and each of them takes ~20 seconds to run. It looks like they are waiting on something and repeatedly hitting a timeout.
![test](https://user-images.githubusercontent.com/2758593/116236496-6c8f6280-a75f-11eb-8af5-8a427e302a7d.png)
Example test:
```java
@Test
public void shouldResolveIncidentsWhenTerminating() {
  // given
  final String processId = Strings.newRandomValidBpmnId();
  ENGINE
      .deployment()
      .withXmlResource(
          Bpmn.createExecutableProcess(processId)
              .startEvent()
              .exclusiveGateway("xor")
              .sequenceFlowId("s1")
              .defaultFlow()
              .endEvent("default-end")
              .moveToLastGateway()
              .sequenceFlowId("s2")
              .conditionExpression("nonexisting_variable")
              .endEvent("non-default-end")
              .done())
      .deploy();
  final long processInstanceKey =
      ENGINE.processInstance().ofBpmnProcessId(processId).withVariable("foo", 10).create();

  assertThat(
          RecordingExporter.incidentRecords().withProcessInstanceKey(processInstanceKey).limit(2))
      .extracting(Record::getIntent)
      .containsExactly(IncidentIntent.CREATED);

  // when
  ENGINE.processInstance().withInstanceKey(processInstanceKey).cancel();

  // then
  assertThat(
          RecordingExporter.processInstanceRecords()
              .withProcessInstanceKey(processInstanceKey)
              .limitToProcessInstanceTerminated())
      .extracting(r -> tuple(r.getValue().getBpmnElementType(), r.getIntent()))
      .containsSubsequence(
          tuple(BpmnElementType.PROCESS, ProcessInstanceIntent.ELEMENT_TERMINATING),
          tuple(BpmnElementType.EXCLUSIVE_GATEWAY, ProcessInstanceIntent.ELEMENT_TERMINATING),
          tuple(BpmnElementType.EXCLUSIVE_GATEWAY, ProcessInstanceIntent.ELEMENT_TERMINATED),
          tuple(BpmnElementType.PROCESS, ProcessInstanceIntent.ELEMENT_TERMINATED));

  assertThat(
          RecordingExporter.incidentRecords().withProcessInstanceKey(processInstanceKey).limit(3))
      .extracting(Record::getIntent)
      .containsExactly(IncidentIntent.CREATED, IncidentIntent.RESOLVED);
}
```
Would be cool if we could reduce the execution time of these tests.
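One plausible culprit, reading the assertions above: the first stream awaits `limit(2)` incident records but `containsExactly` expects only `CREATED`, and the last awaits `limit(3)` while expecting two — so `RecordingExporter` keeps waiting for a record that never arrives until its maximum wait time fires. A hedged sketch of the adjustment, assuming the exporter's stream really does block until `limit` is reached:
```java
// Await exactly as many records as the assertion expects, so the
// RecordingExporter stream completes instead of running into its timeout.
assertThat(
        RecordingExporter.incidentRecords().withProcessInstanceKey(processInstanceKey).limit(1))
    .extracting(Record::getIntent)
    .containsExactly(IncidentIntent.CREATED);

// ... and at the end of the test:
assertThat(
        RecordingExporter.incidentRecords().withProcessInstanceKey(processInstanceKey).limit(2))
    .extracting(Record::getIntent)
    .containsExactly(IncidentIntent.CREATED, IncidentIntent.RESOLVED);
```
If that reading is right, the three tests would each lose the ~timeout-long stall without changing what they verify.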
",1.0,0
289538,31933096557.0,IssuesEvent,2023-09-19 08:44:40,Trinadh465/linux-4.1.15_CVE-2023-4128,https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-4128,opened,CVE-2015-8787 (Critical) detected in linuxlinux-4.6,Mend: dependency security vulnerability,"## CVE-2015-8787 - Critical Severity Vulnerability
Vulnerable Library - linuxlinux-4.6

The Linux Kernel

Library home page: https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux

Found in HEAD commit: 0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8

Found in base branch: master

Vulnerable Source Files (2)

/net/netfilter/nf_nat_redirect.c /net/netfilter/nf_nat_redirect.c

Vulnerability Details

The nf_nat_redirect_ipv4 function in net/netfilter/nf_nat_redirect.c in the Linux kernel before 4.4 allows remote attackers to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact by sending certain IPv4 packets to an incompletely configured interface, a related issue to CVE-2003-1604.
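For context, a C sketch of the shape of the 4.4-era hardening follows. Field and helper names follow kernels of that vintage, but treat this as an illustration paraphrased from memory, not the literal upstream patch:

```c
/* Illustrative only: the redirect target must be read from the interface's
 * address list only after checking that the list is non-empty. */
static __be32 pick_redirect_addr(const struct sk_buff *skb)
{
	const struct in_device *indev;
	__be32 newdst = 0;

	rcu_read_lock();
	indev = __in_dev_get_rcu(skb->dev);
	if (indev != NULL) {
		const struct in_ifaddr *ifa = indev->ifa_list;

		if (ifa != NULL)                 /* the missing check: an incompletely */
			newdst = ifa->ifa_local; /* configured interface has no address */
	}
	rcu_read_unlock();

	return newdst; /* caller treats 0 as "drop the packet" instead of dereferencing NULL */
}
```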

Publish Date: 2016-02-08

URL: CVE-2015-8787

CVSS 3 Score Details (9.8)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://nvd.nist.gov/vuln/detail/CVE-2015-8787

Release Date: 2016-02-08

Fix Resolution: 4.4

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,0
112495,24281026726.0,IssuesEvent,2022-09-28 17:24:50,roanlinde/nodegoat,https://api.github.com/repos/roanlinde/nodegoat,opened,Cross-Site Request Forgery (CSRF) ('Authentication Issues') [VID:18],VeracodeFlaw: Medium Veracode Policy Scan,"**Filename:** server.js **Line:** 15 **CWE:** 352 (Cross-Site Request Forgery (CSRF) ('Authentication Issues')) This Express application does not appear to use a known library or tool to protect against cross-site request forgery. Ensure that all actions and routes that modify data are either protected with anti-CSRF tokens, or are designed in such a way to eliminate CSRF risk.
References: CWE OWASP
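A minimal TypeScript/Express sketch of one common remediation, using the `csurf` middleware (the wiring and route names are illustrative; the nodegoat app may organize this differently):
```typescript
import express from "express";
import cookieParser from "cookie-parser";
import csurf from "csurf";

const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(cookieParser());

// Every state-changing route below this line now requires a valid CSRF token.
app.use(csurf({ cookie: true }));

app.get("/profile", (req, res) => {
  // Embed the per-request token in the form so the POST can echo it back.
  res.send(`<form method="POST" action="/profile">
    <input type="hidden" name="_csrf" value="${req.csrfToken()}">
    <button>Save</button>
  </form>`);
});

app.post("/profile", (req, res) => {
  res.send("saved"); // reached only when the _csrf token validates
});
```
(`csurf` has since been deprecated upstream; any maintained anti-CSRF middleware following the same synchronizer-token pattern serves equally well.)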
",1, upload test result files the test suit generates some files that are needed for diagnosing issues we have to upload those files to jenkins to help to diagnose when an error happens ,1 2090,11360349979.0,IssuesEvent,2020-01-26 05:56:52,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,closed,a8n: Automatically revoke found credentials / tokens,automation,"From [RFC 73](https://docs.google.com/document/d/143FIe1Tak4_YPPuOuEzuobTQW09ZU85WfOCdZkNwIVQ/edit#heading=h.b5eblutsndzz) A way to automatically revoke tokens that are found (similar to GitHub's token scanning for public repositories).",1.0,"a8n: Automatically revoke found credentials / tokens - From [RFC 73](https://docs.google.com/document/d/143FIe1Tak4_YPPuOuEzuobTQW09ZU85WfOCdZkNwIVQ/edit#heading=h.b5eblutsndzz) A way to automatically revoke tokens that are found (similar to GitHub's token scanning for public repositories).",1, automatically revoke found credentials tokens from a way to automatically revoke tokens that are found similar to github s token scanning for public repositories ,1 10123,31760106545.0,IssuesEvent,2023-09-12 04:01:13,AzureAD/microsoft-authentication-library-for-objc,https://api.github.com/repos/AzureAD/microsoft-authentication-library-for-objc,opened,Automation tests failure,automation failure,"@AzureAD/appleidentity Automation failed for [AzureAD/microsoft-authentication-library-for-objc](https://github.com/AzureAD/microsoft-authentication-library-for-objc) ran against commit : Remove unused param [407e29cb9041f829257429e70994475017983ce8] Pipeline URL : [https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1171196&view=logs](https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1171196&view=logs)",1.0,"Automation tests failure - @AzureAD/appleidentity Automation failed for [AzureAD/microsoft-authentication-library-for-objc](https://github.com/AzureAD/microsoft-authentication-library-for-objc) ran against commit : Remove unused param [407e29cb9041f829257429e70994475017983ce8] Pipeline URL : [https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1171196&view=logs](https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1171196&view=logs)",1,automation tests failure azuread appleidentity automation failed for ran against commit remove unused param pipeline url ,1 239098,19808953470.0,IssuesEvent,2022-01-19 10:06:23,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,opened,Intermittent UI test failure - ,eng:ui-test,"### Firebase Test Run: [Firebase link](https://console.firebase.google.com/u/0/project/moz-fenix/testlab/histories/bh.66b7091e15d53d45/matrices/7249116838797798859/executions/bs.a2aa7e97e6be75f8/testcases/1/test-cases) ### Stacktrace: androidx.test.espresso.PerformException: Error performing 'single click - At Coordinates: 366, 1710 and precision: 16, 16' on view '(with id is and with text: is ""First Collection"")'. 
    at androidx.test.espresso.PerformException$Builder.build(PerformException.java:5)
    at androidx.test.espresso.base.DefaultFailureHandler.getUserFriendlyError(DefaultFailureHandler.java:25)
    at androidx.test.espresso.base.DefaultFailureHandler.handle(DefaultFailureHandler.java:36)
    at androidx.test.espresso.ViewInteraction.waitForAndHandleInteractionResults(ViewInteraction.java:106)
    at androidx.test.espresso.ViewInteraction.desugaredPerform(ViewInteraction.java:43)
    at androidx.test.espresso.ViewInteraction.perform(ViewInteraction.java:94)
    at org.mozilla.fenix.helpers.ViewInteractionKt.click(ViewInteraction.kt:18)
    at org.mozilla.fenix.ui.robots.HomeScreenRobot$Transition.expandCollection(HomeScreenRobot.kt:331)
    at org.mozilla.fenix.ui.SmokeTest.deleteCollectionTest(SmokeTest.kt:887)
    at java.lang.reflect.Method.invoke(Native Method)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at androidx.test.internal.runner.junit4.statement.RunBefores.evaluate(RunBefores.java:80)
    at androidx.test.internal.runner.junit4.statement.RunAfters.evaluate(RunAfters.java:61)
    at androidx.test.rule.ActivityTestRule$ActivityStatement.evaluate(ActivityTestRule.java:531)
    at androidx.test.rule.ActivityTestRule$ActivityStatement.evaluate(ActivityTestRule.java:531)
    at androidx.compose.ui.test.junit4.AndroidComposeTestRule$AndroidComposeStatement.evaluateInner(AndroidComposeTestRule.android.kt:357)
    at androidx.compose.ui.test.junit4.AndroidComposeTestRule$AndroidComposeStatement.evaluate(AndroidComposeTestRule.android.kt:346)
    at androidx.compose.ui.test.junit4.android.EspressoLink$getStatementFor$1.evaluate(EspressoLink.android.kt:63)
    at androidx.compose.ui.test.junit4.IdlingResourceRegistry$getStatementFor$1.evaluate(IdlingResourceRegistry.jvm.kt:160)
    at androidx.compose.ui.test.junit4.android.ComposeRootRegistry$getStatementFor$1.evaluate(ComposeRootRegistry.android.kt:150)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at androidx.test.rule.GrantPermissionRule$RequestPermissionStatement.evaluate(GrantPermissionRule.java:134)
    at org.mozilla.fenix.helpers.RetryTestRule$apply$$inlined$statement$1.evaluate(RetryTestRule.kt:38)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    at org.junit.runners.Suite.runChild(Suite.java:128)
    at org.junit.runners.Suite.runChild(Suite.java:27)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
    at androidx.test.internal.runner.TestExecutor.execute(TestExecutor.java:56)
    at androidx.test.runner.AndroidJUnitRunner.onStart(AndroidJUnitRunner.java:395)
    at android.app.Instrumentation$InstrumentationThread.run(Instrumentation.java:2145)
Caused by: androidx.test.espresso.IdlingResourceTimeoutException: Wait for [Compose-Espresso link] to become idle timed out
    at androidx.test.espresso.IdlingPolicy.handleTimeout(IdlingPolicy.java:16)
    at androidx.test.espresso.base.UiControllerImpl$5.resourcesHaveTimedOut(UiControllerImpl.java:4)
    at androidx.test.espresso.base.IdlingResourceRegistry$Dispatcher.handleTimeout(IdlingResourceRegistry.java:44)
    at androidx.test.espresso.base.IdlingResourceRegistry$Dispatcher.handleMessage(IdlingResourceRegistry.java:12)
    at android.os.Handler.dispatchMessage(Handler.java:102)
    at androidx.test.espresso.base.Interrogator.loopAndInterrogate(Interrogator.java:53)
    at androidx.test.espresso.base.UiControllerImpl.loopUntil(UiControllerImpl.java:155)
    at androidx.test.espresso.base.UiControllerImpl.loopMainThreadUntilIdle(UiControllerImpl.java:129)
    at androidx.test.espresso.base.UiControllerImpl.injectMotionEvent(UiControllerImpl.java:56)
    at androidx.test.espresso.action.MotionEvents.sendUp(MotionEvents.java:122)
    at androidx.test.espresso.action.MotionEvents.sendUp(MotionEvents.java:117)
    at androidx.test.espresso.action.Tap.sendSingleTap(Tap.java:27)
    at androidx.test.espresso.action.Tap.access$100(Tap.java:21)
    at androidx.test.espresso.action.Tap$1.sendTap(Tap.java:3)
    at androidx.test.espresso.action.GeneralClickAction.perform(GeneralClickAction.java:23)
    at androidx.test.espresso.ViewInteraction$SingleExecutionViewAction.perform(ViewInteraction.java:16)
    at androidx.test.espresso.ViewInteraction.doPerform(ViewInteraction.java:65)
    at androidx.test.espresso.ViewInteraction.access$100(ViewInteraction.java:15)
    at androidx.test.espresso.ViewInteraction$1.call(ViewInteraction.java:3)
    at androidx.test.espresso.ViewInteraction$1.call(ViewInteraction.java:2)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at android.os.Handler.handleCallback(Handler.java:873)
    at android.os.Handler.dispatchMessage(Handler.java:99)
    at android.os.Looper.loop(Looper.java:193)
    at android.app.ActivityThread.main(ActivityThread.java:6669)
    at java.lang.reflect.Method.invoke(Native Method)
    at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:493)
    at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:858)
```
### Build: 1/19 Main
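The root cause line is the `IdlingResourceTimeoutException` for the [Compose-Espresso link] resource: the click itself was injected, but the Compose screen never reported idle within Espresso's budget. When a screen legitimately takes long to settle, one hedged mitigation — a sketch only, since the fenix test harness may already centralize this — is to widen Espresso's idling budget for the affected flows:
```kotlin
import androidx.test.espresso.IdlingPolicies
import java.util.concurrent.TimeUnit

// Give slow-settling Compose screens more room before Espresso gives up.
// Call once, e.g. from a @Before method of the affected test class.
fun relaxIdlingTimeouts() {
    IdlingPolicies.setMasterPolicyTimeout(60, TimeUnit.SECONDS)
    IdlingPolicies.setIdlingResourceTimeout(60, TimeUnit.SECONDS)
}
```
This trades faster failure for fewer false alarms; if the screen genuinely hangs, the test still fails, just later.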
",1.0,0
239541,26231862692.0,IssuesEvent,2023-01-05 01:25:04,barranquerox/template-framework,https://api.github.com/repos/barranquerox/template-framework,opened,CVE-2022-1471 (High) detected in snakeyaml-1.29.jar,security vulnerability,"## CVE-2022-1471 - High Severity Vulnerability
Vulnerable Library - snakeyaml-1.29.jar

YAML 1.1 parser and emitter for Java

Library home page: http://www.snakeyaml.org

Path to dependency file: /pom.xml

Path to vulnerable library: /ry/org/yaml/snakeyaml/1.29/snakeyaml-1.29.jar,/es/modules-2/files-2.1/org.yaml/snakeyaml/1.29/6d0cdafb2010f1297e574656551d7145240f6e25/snakeyaml-1.29.jar

Dependency Hierarchy: - :x: **snakeyaml-1.29.jar** (Vulnerable Library)

Found in base branch: master

Vulnerability Details

SnakeYaml's Constructor() class does not restrict types which can be instantiated during deserialization. Deserializing yaml content provided by an attacker can lead to remote code execution. We recommend using SnakeYaml's SafeConsturctor when parsing untrusted content to restrict deserialization.
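A minimal Java sketch of the recommended safe parse — `SafeConstructor` is SnakeYAML's own API; the surrounding code is illustrative:

```java
import org.yaml.snakeyaml.Yaml;
import org.yaml.snakeyaml.constructor.SafeConstructor;

public class SafeYamlExample {
    public static void main(String[] args) {
        // SafeConstructor only builds standard Java types (maps, lists,
        // strings, numbers), so attacker-controlled YAML cannot name an
        // arbitrary class to instantiate.
        Yaml yaml = new Yaml(new SafeConstructor());
        Object data = yaml.load("key: [1, 2, 3]");
        System.out.println(data); // {key=[1, 2, 3]}
    }
}
```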

Publish Date: 2022-12-01

URL: CVE-2022-1471

CVSS 3 Score Details (9.8)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,0
91643,26447536459.0,IssuesEvent,2023-01-16 08:43:44,Snapmaker/Luban,https://api.github.com/repos/Snapmaker/Luban,closed,Feature: M1 native binary for macOS,Type: Build Priority: Medium,Please consider compiling with M1 / Arm processor support for macOS.,1.0,0
2382,11857598463.0,IssuesEvent,2020-03-25 09:53:10,coolOrangeLabs/powerGateTemplate,https://api.github.com/repos/coolOrangeLabs/powerGateTemplate,closed,7 - Plugin implement QUERY for Items,PGServer_Automation,"### ToDo Implement the `QUERY` for the item side: + [ ] Return all items",1.0,1
1859,10968867286.0,IssuesEvent,2019-11-28 12:38:24,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,closed,a8n: replacer service responds with 503 with cold cache,automation bug,"When previewing a campaign over a large number of repositories and the `replacer` cache in `/tmp/replacer-archives` is empty, `replacer` might respond with `503 Service Unavailable`. Example: with a fresh Sourcegraph instance, locally, I ran the following CampaignPlan:
```
{
  "scopeQuery": "repo:github",
  "matchTemplate": "foobar",
  "rewriteTemplate": "barfoo"
}
```
That resulted in 173 campaign jobs.
Out of those, 17 resulted in the mentioned error: ``` # SELECT error FrOM campaign_jobs WHERE error != ''; error ----------------------------------------------------------------------------- unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" ```",1.0,"a8n: replacer service responds with 503 with cold cache - When previewing a campaign over a large number of repositories and the `replacer` cache in `/tmp/replacer-archives` is empty, `replacer` might respond with `503 Service Unavailable`. Example: with a fresh Sourcegraph instance, locally, I ran the following CampaignPlan: ``` { ""scopeQuery"": ""repo:github"", ""matchTemplate"": ""foobar"", ""rewriteTemplate"": ""barfoo"" } ``` That resulted in 173 campaign jobs. 
Out of those, 17 resulted in the mentioned error: ``` # SELECT error FrOM campaign_jobs WHERE error != ''; error ----------------------------------------------------------------------------- unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" unexpected response status from replacer service: ""503 Service Unavailable"" ```",1, replacer service responds with with cold cache when previewing a campaign over a large number of repositories and the replacer cache in tmp replacer archives is empty replacer might respond with service unavailable example with a fresh sourcegraph instance locally i ran the following campaignplan scopequery repo github matchtemplate foobar rewritetemplate barfoo that resulted in campaign jobs out of those resulted in the mentioned error select error from campaign jobs where error error unexpected response status from replacer service service unavailable unexpected response status from replacer service service unavailable unexpected response status from replacer service service unavailable unexpected response status from replacer service service unavailable unexpected response status from replacer service service unavailable unexpected response status from replacer service service unavailable unexpected response status from replacer service service unavailable unexpected response status from replacer service service unavailable unexpected response status from replacer service service unavailable unexpected response status from replacer service service unavailable unexpected response status from replacer service service unavailable unexpected response status from replacer service service unavailable unexpected response status from replacer service service unavailable unexpected response status from replacer service service unavailable unexpected response status from replacer service service unavailable unexpected response status from replacer service service unavailable unexpected response status from replacer service service unavailable ,1 201087,15801899721.0,IssuesEvent,2021-04-03 07:04:08,benedictkhoomw/ped,https://api.github.com/repos/benedictkhoomw/ped,opened,UG: Redo command has a typo for format,severity.VeryLow type.DocumentationBug,"## How to reproduce 1. Open the UG 1. Navigate to the `redo` section. 1. 
Observe the format ## Expected result Format: `redo` ## Actual result Format: `redo’ The screenshot below shows the observed behavior: ![image.png](https://raw.githubusercontent.com/benedictkhoomw/ped/main/files/62c8b6bc-fc38-4625-814f-3c80943808ff.png) ## Justification for severity It's a minor typo so very low was given ",1.0,"UG: Redo command has a typo for format - ## How to reproduce 1. Open the UG 1. Navigate to the `redo` section. 1. Observe the format ## Expected result Format: `redo` ## Actual result Format: `redo’ The screenshot below shows the observed behavior: ![image.png](https://raw.githubusercontent.com/benedictkhoomw/ped/main/files/62c8b6bc-fc38-4625-814f-3c80943808ff.png) ## Justification for severity It's a minor typo so very low was given ",0,ug redo command has a typo for format how to reproduce open the ug navigate to the redo section observe the format expected result format redo actual result format redo’ the screenshot below shows the observed behavior justification for severity it s a minor typo so very low was given ,0 49475,12345381157.0,IssuesEvent,2020-05-15 08:51:10,ClickHouse/ClickHouse,https://api.github.com/repos/ClickHouse/ClickHouse,opened,"There is no synchronization between replicas, neither in readonly state nor in synchronization",build,"2020.05.15 16:50:06.800472 [ 84 ] {} k19_test.replica_shard (ReplicatedMergeTreeRestartingThread): void DB::ReplicatedMergeTreeRestartingThread::run(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected format version: at end of stream., Stack trace (when copying this message, always include the lines below): 0. 0x100ac1bc Poco::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int) in /usr/bin/clickhouse 1. 0x8e74849 DB::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int) in /usr/bin/clickhouse 2. 0x8eaacd5 ? in /usr/bin/clickhouse 3. 0x8ea8caa DB::assertString(char const*, DB::ReadBuffer&) in /usr/bin/clickhouse 4. 0xd78dd1b DB::ReplicatedMergeTreeLogEntryData::readText(DB::ReadBuffer&) in /usr/bin/clickhouse 5. 0xd78f04b DB::ReplicatedMergeTreeLogEntry::parse(std::__1::basic_string, std::__1::allocator > const&, Coordination::Stat const&) in /usr/bin/clickhouse 6. 0xd7b49c1 DB::ReplicatedMergeTreeQueue::load(std::__1::shared_ptr) in /usr/bin/clickhouse 7. 0xd7d7393 DB::ReplicatedMergeTreeRestartingThread::tryStartup() in /usr/bin/clickhouse 8. 0xd7d7cf8 DB::ReplicatedMergeTreeRestartingThread::run() in /usr/bin/clickhouse 9. 0xcd939f1 DB::BackgroundSchedulePoolTaskInfo::execute() in /usr/bin/clickhouse 10. 0xcd93fca DB::BackgroundSchedulePool::threadFunction() in /usr/bin/clickhouse 11. 0xcd94100 ? in /usr/bin/clickhouse 12. 0x8e97347 ThreadPoolImpl::worker(std::__1::__list_iterator) in /usr/bin/clickhouse 13. 0x8e9580f ? in /usr/bin/clickhouse 14. 0x7e25 start_thread in /usr/lib64/libpthread-2.17.so 15. 0xfebad __clone in /usr/lib64/libc-2.17.so (version 20.1.6.30 (official build)) ",1.0,"There is no synchronization between replicas, neither in readonly state nor in synchronization - 2020.05.15 16:50:06.800472 [ 84 ] {} k19_test.replica_shard (ReplicatedMergeTreeRestartingThread): void DB::ReplicatedMergeTreeRestartingThread::run(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected format version: at end of stream., Stack trace (when copying this message, always include the lines below): 0. 0x100ac1bc Poco::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int) in /usr/bin/clickhouse 1. 
0x8e74849 DB::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int) in /usr/bin/clickhouse 2. 0x8eaacd5 ? in /usr/bin/clickhouse 3. 0x8ea8caa DB::assertString(char const*, DB::ReadBuffer&) in /usr/bin/clickhouse 4. 0xd78dd1b DB::ReplicatedMergeTreeLogEntryData::readText(DB::ReadBuffer&) in /usr/bin/clickhouse 5. 0xd78f04b DB::ReplicatedMergeTreeLogEntry::parse(std::__1::basic_string, std::__1::allocator > const&, Coordination::Stat const&) in /usr/bin/clickhouse 6. 0xd7b49c1 DB::ReplicatedMergeTreeQueue::load(std::__1::shared_ptr) in /usr/bin/clickhouse 7. 0xd7d7393 DB::ReplicatedMergeTreeRestartingThread::tryStartup() in /usr/bin/clickhouse 8. 0xd7d7cf8 DB::ReplicatedMergeTreeRestartingThread::run() in /usr/bin/clickhouse 9. 0xcd939f1 DB::BackgroundSchedulePoolTaskInfo::execute() in /usr/bin/clickhouse 10. 0xcd93fca DB::BackgroundSchedulePool::threadFunction() in /usr/bin/clickhouse 11. 0xcd94100 ? in /usr/bin/clickhouse 12. 0x8e97347 ThreadPoolImpl::worker(std::__1::__list_iterator) in /usr/bin/clickhouse 13. 0x8e9580f ? in /usr/bin/clickhouse 14. 0x7e25 start_thread in /usr/lib64/libpthread-2.17.so 15. 0xfebad __clone in /usr/lib64/libc-2.17.so (version 20.1.6.30 (official build)) ",0,there is no synchronization between replicas neither in readonly state nor in synchronization test replica shard replicatedmergetreerestartingthread void db replicatedmergetreerestartingthread run code e displaytext db exception cannot parse input expected format version at end of stream stack trace when copying this message always include the lines below poco exception exception std basic string std allocator const int in usr bin clickhouse db exception exception std basic string std allocator const int in usr bin clickhouse in usr bin clickhouse db assertstring char const db readbuffer in usr bin clickhouse db replicatedmergetreelogentrydata readtext db readbuffer in usr bin clickhouse db replicatedmergetreelogentry parse std basic string std allocator const coordination stat const in usr bin clickhouse db replicatedmergetreequeue load std shared ptr in usr bin clickhouse db replicatedmergetreerestartingthread trystartup in usr bin clickhouse db replicatedmergetreerestartingthread run in usr bin clickhouse db backgroundschedulepooltaskinfo execute in usr bin clickhouse db backgroundschedulepool threadfunction in usr bin clickhouse in usr bin clickhouse threadpoolimpl worker std list iterator in usr bin clickhouse in usr bin clickhouse start thread in usr libpthread so clone in usr libc so version official build ,0 3684,14286278087.0,IssuesEvent,2020-11-23 14:57:46,rpa-tomorrow/substorm-nlp,https://api.github.com/repos/rpa-tomorrow/substorm-nlp,opened,Remove meeting no longer working ,automation bug,"# Expected behavior ""remove meeting at 16"" # Actual behavior ""Failed to execute action."" # Steps to reproduce 1. Start CLI 2. ""remove meeting at 16"" 3. # Additional context ",1.0,"Remove meeting no longer working - # Expected behavior ""remove meeting at 16"" # Actual behavior ""Failed to execute action."" # Steps to reproduce 1. Start CLI 2. ""remove meeting at 16"" 3. 
# Additional context ",1,remove meeting no longer working expected behavior remove meeting at actual behavior failed to execute action steps to reproduce start cli remove meeting at additional context ,1 9758,30495328219.0,IssuesEvent,2023-07-18 10:24:30,figuren-theater/ft-maintenance,https://api.github.com/repos/figuren-theater/ft-maintenance,reopened,Establish quality standards,automation,"```[tasklist] ### Repository Standards - [x] Has nice [README.md](https://github.com/figuren-theater/new-ft-module/blob/main/README.md) - [x] Add [`.github/workflows/ft-issue-gardening.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/ft-issue-gardening.yml) file (if not exists) - [x] Add [`.github/workflows/release-drafter.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/release-drafter.yml) file - [x] Delete [`.github/workflows/update-changelog.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/update-changelog.yml) file - [x] Add [`.github/workflows/prerelease-changelog.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/prerelease-changelog.yml) file - [x] Add [`.editorconfig`](https://github.com/figuren-theater/new-ft-module/blob/main/.editorconfig) file - [x] Add [`.phpcs.xml`](https://github.com/figuren-theater/new-ft-module/blob/main/.phpcs.xml) file - [x] Check that `.phpcs.xml` file is not present in `.gitignore` - [x] Add [`CHANGELOG.md`](https://github.com/figuren-theater/new-ft-module/blob/main/CHANGELOG.md) file with an *Unreleased-Heading* - [x] Add [`phpstan.neon`](https://github.com/figuren-theater/new-ft-module/blob/main/phpstan.neon) file - [x] Run `composer require --dev figuren-theater/code-quality` - [x] Run `composer normalize` - [x] Run `vendor/bin/phpstan analyze .` - [x] Run `vendor/bin/phpcs .` - [ ] Fix all errors ;) - [ ] commit, PR & merge all (additional) changes - [x] Has branch protection enabled - [x] Add [`.github/workflows/build-test-measure.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/build-test-measure.yml) file - [x] Enable repo for required **Build, test & measure** status checks via [Repo Settings](/settings/actions) - [x] Add **Build, test & measure** badge to the [code-quality](https://github.com/figuren-theater/code-quality) README - [x] Submit repo to [packagist.org](https://packagist.org/packages/figuren-theater/) - [x] Remove explicit `repositories` entry from [ft-platform](https://github.com/figuren-theater/ft-platform)s `composer.json` - [x] Update `README.md` to see all workflows running - [ ] Publish the new drafted Release as Prerelease to trigger auto-updating versions in CHANGELOG.md and plugin.php ``` ",1.0,"Establish quality standards - ```[tasklist] ### Repository Standards - [x] Has nice [README.md](https://github.com/figuren-theater/new-ft-module/blob/main/README.md) - [x] Add [`.github/workflows/ft-issue-gardening.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/ft-issue-gardening.yml) file (if not exists) - [x] Add [`.github/workflows/release-drafter.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/release-drafter.yml) file - [x] Delete [`.github/workflows/update-changelog.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/update-changelog.yml) file - [x] Add 
[`.github/workflows/prerelease-changelog.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/prerelease-changelog.yml) file - [x] Add [`.editorconfig`](https://github.com/figuren-theater/new-ft-module/blob/main/.editorconfig) file - [x] Add [`.phpcs.xml`](https://github.com/figuren-theater/new-ft-module/blob/main/.phpcs.xml) file - [x] Check that `.phpcs.xml` file is not present in `.gitignore` - [x] Add [`CHANGELOG.md`](https://github.com/figuren-theater/new-ft-module/blob/main/CHANGELOG.md) file with an *Unreleased-Heading* - [x] Add [`phpstan.neon`](https://github.com/figuren-theater/new-ft-module/blob/main/phpstan.neon) file - [x] Run `composer require --dev figuren-theater/code-quality` - [x] Run `composer normalize` - [x] Run `vendor/bin/phpstan analyze .` - [x] Run `vendor/bin/phpcs .` - [ ] Fix all errors ;) - [ ] commit, PR & merge all (additional) changes - [x] Has branch protection enabled - [x] Add [`.github/workflows/build-test-measure.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/build-test-measure.yml) file - [x] Enable repo for required **Build, test & measure** status checks via [Repo Settings](/settings/actions) - [x] Add **Build, test & measure** badge to the [code-quality](https://github.com/figuren-theater/code-quality) README - [x] Submit repo to [packagist.org](https://packagist.org/packages/figuren-theater/) - [x] Remove explicit `repositories` entry from [ft-platform](https://github.com/figuren-theater/ft-platform)s `composer.json` - [x] Update `README.md` to see all workflows running - [ ] Publish the new drafted Release as Prerelease to trigger auto-updating versions in CHANGELOG.md and plugin.php ``` ",1,establish quality standards repository standards has nice add file if not exists add file delete file add file add file add file check that phpcs xml file is not present in gitignore add file with an unreleased heading add file run composer require dev figuren theater code quality run composer normalize run vendor bin phpstan analyze run vendor bin phpcs fix all errors commit pr merge all additional changes has branch protection enabled add file enable repo for required build test measure status checks via settings actions add build test measure badge to the readme submit repo to remove explicit repositories entry from composer json update readme md to see all workflows running publish the new drafted release as prerelease to trigger auto updating versions in changelog md and plugin php ,1 2909,12797106102.0,IssuesEvent,2020-07-02 11:42:50,GoodDollar/GoodDAPP,https://api.github.com/repos/GoodDollar/GoodDAPP,closed,[BUG] An error message is appears when try to create a new wallet,automation bug,"Steps: - go to https://gooddev.netlify.app/ - click on ""Agree & Continue with self custody wallet"" link - enter valid data - pay attention to the screen ![bug_create_wallet-2020-07-01.jpg](https://images.zenhubusercontent.com/5eb529c8c90bb26b8aaf9d9d/3224aa6e-1a38-48ee-8203-13df13ef60d4) video: https://www.screencast.com/t/7NPbz0qpxlG",1.0,"[BUG] An error message is appears when try to create a new wallet - Steps: - go to https://gooddev.netlify.app/ - click on ""Agree & Continue with self custody wallet"" link - enter valid data - pay attention to the screen ![bug_create_wallet-2020-07-01.jpg](https://images.zenhubusercontent.com/5eb529c8c90bb26b8aaf9d9d/3224aa6e-1a38-48ee-8203-13df13ef60d4) video: https://www.screencast.com/t/7NPbz0qpxlG",1, an error message is appears when try to create a new wallet 
steps go to click on agree continue with self custody wallet link enter valid data pay attention to the screen video ,1 13361,8198814521.0,IssuesEvent,2018-08-31 17:47:07,playcanvas/engine,https://api.github.com/repos/playcanvas/engine,opened,Vector types (and possibly pc.Color) may perform better without a Float32Array,performance,"### Description Better performance may potentially be achieved if pc.Vec2/3/4 and pc.Color are stripped of Float32Arrays to hold component data. Things to analyse: load time performance and runtime FPS.",True,"Vector types (and possibly pc.Color) may perform better without a Float32Array - ### Description Better performance may potentially be achieved if pc.Vec2/3/4 and pc.Color are stripped of Float32Arrays to hold component data. Things to analyse: load time performance and runtime FPS.",0,vector types and possibly pc color may perform better without a description better performance may potentially be achieved if pc and pc color are stripped of to hold component data things to analyse load time performance and runtime fps ,0 233635,17873541711.0,IssuesEvent,2021-09-06 20:43:57,racklet/kicad-rs,https://api.github.com/repos/racklet/kicad-rs,opened,"Explain the ""pipeline"" formed by chaining the tools",documentation,"As noted in #5, an explanation on how the tools in this repository can be chained (evaluator -> parser -> classifier -> \ -> \) to form what has unofficially been called the ""pipeline"" is very much necessary to have in the README (and maybe a separate detailed document as well). A supportive visualization using the diagrams.net rendering integration would also be very helpful, with the format of the data (e.g. KiCad schematic, YAML file, Markdown document) visible between each stage.",1.0,"Explain the ""pipeline"" formed by chaining the tools - As noted in #5, an explanation on how the tools in this repository can be chained (evaluator -> parser -> classifier -> \ -> \) to form what has unofficially been called the ""pipeline"" is very much necessary to have in the README (and maybe a separate detailed document as well). A supportive visualization using the diagrams.net rendering integration would also be very helpful, with the format of the data (e.g. KiCad schematic, YAML file, Markdown document) visible between each stage.",0,explain the pipeline formed by chaining the tools as noted in an explanation on how the tools in this repository can be chained evaluator parser classifier to form what has unofficially been called the pipeline is very much necessary to have in the readme and maybe a separate detailed document as well a supportive visualization using the diagrams net rendering integration would also be very helpful with the format of the data e g kicad schematic yaml file markdown document visible between each stage ,0 2611,12341394884.0,IssuesEvent,2020-05-14 21:50:25,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,closed,[BUG] Deleting a snapshot shows > 100% progress on the replica in the volume details page,area/engine bug priority/2 require-automation-e2e require-automation-engine,"**Describe the bug** Deleting a snapshot shows > 100% progress on the replica in the volume details page **To Reproduce** - Create a volume and attach it to a workload. 
- Go to volume details page - Create few snapshots - Delete snapshots - Hover over the progress bar of a replica - It shows Deleting Snapshot 115% - Snapshot Section shows Deleting 100% **Expected behavior** Deletion progress on the replica should match the % in the snapshot section. **Environment:** - Longhorn version: master-04/27/2020 - Kubernetes version: 1.17.4 - Node OS type and version: RKE Linux DO cluster ",2.0,"[BUG] Deleting a snapshot shows > 100% progress on the replica in the volume details page - **Describe the bug** Deleting a snapshot shows > 100% progress on the replica in the volume details page **To Reproduce** - Create a volume and attach it to a workload. - Go to volume details page - Create few snapshots - Delete snapshots - Hover over the progress bar of a replica - It shows Deleting Snapshot 115% - Snapshot Section shows Deleting 100% **Expected behavior** Deletion progress on the replica should match the % in the snapshot section. **Environment:** - Longhorn version: master-04/27/2020 - Kubernetes version: 1.17.4 - Node OS type and version: RKE Linux DO cluster ",1, deleting a snapshot shows progress on the replica in the volume details page describe the bug deleting a snapshot shows progress on the replica in the volume details page to reproduce create a volume and attach it to a workload go to volume details page create few snapshots delete snapshots hover over the progress bar of a replica it shows deleting snapshot snapshot section shows deleting img width alt screen shot at pm src expected behavior deletion progress on the replica should match the in the snapshot section environment longhorn version master kubernetes version node os type and version rke linux do cluster ,1 293400,25289588177.0,IssuesEvent,2022-11-16 22:33:57,modin-project/modin,https://api.github.com/repos/modin-project/modin,opened,REFACTOR: Fix CodeQL errors in HDK-on-Native test utils code,Testing 📈 P2,"We are currently skipping CodeQL checks for modin/experimental/core/execution/native/implementations/hdk_on_native/test/utils.py, so we should address the issues raised here. ",1.0,"REFACTOR: Fix CodeQL errors in HDK-on-Native test utils code - We are currently skipping CodeQL checks for modin/experimental/core/execution/native/implementations/hdk_on_native/test/utils.py, so we should address the issues raised here. ",0,refactor fix codeql errors in hdk on native test utils code we are currently skipping codeql checks for modin experimental core execution native implementations hdk on native test utils py so we should address the issues raised here img width alt image src ,0 291930,25185971458.0,IssuesEvent,2022-11-11 18:00:19,infinitest/infinitest,https://api.github.com/repos/infinitest/infinitest,closed,[infinitest-eclipse] Provide configuration of status bar text and colors,type: feature comp:infinitest-eclipse,"Currently the Eclipse plug-in occupies a large portion of the status bar. It would be better if this could be shortened somehow (only an icon and a word). Additionally, the colors of this status bar updates should be configurable. Especially if there are no tests pending (waiting) the black background is drawing too much attention (it would be better to not change the background and use the default colors). ",1.0,"[infinitest-eclipse] Provide configuration of status bar text and colors - Currently the Eclipse plug-in occupies a large portion of the status bar. It would be better if this could be shortened somehow (only an icon and a word). 
Additionally, the colors of this status bar updates should be configurable. Especially if there are no tests pending (waiting) the black background is drawing too much attention (it would be better to not change the background and use the default colors). ",0, provide configuration of status bar text and colors currently the eclipse plug in occupies a large portion of the status bar it would be better if this could be shortened somehow only an icon and a word additionally the colors of this status bar updates should be configurable especially if there are no tests pending waiting the black background is drawing too much attention it would be better to not change the background and use the default colors ,0 15557,19703503472.0,IssuesEvent,2022-01-12 19:08:00,googleapis/java-securitycenter-settings,https://api.github.com/repos/googleapis/java-securitycenter-settings,opened,Your .repo-metadata.json file has a problem 🤒,type: process repo-metadata: lint,"You have a problem with your .repo-metadata.json file: Result of scan 📈: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'securitycenter-settings' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.",1.0,"Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file: Result of scan 📈: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'securitycenter-settings' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.",0,your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 release level must be equal to one of the allowed values in repo metadata json api shortname securitycenter settings invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions ,0 4483,16657611390.0,IssuesEvent,2021-06-05 20:22:29,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,opened,Consider gRPC based Automation API ,area/automation-api,"Today, standing up Automation API (inline programs aside) requires hand authoring quite a bit of boilerplate. With the work outlined in https://github.com/pulumi/pulumi/issues/7219 we would have a clean interface directly to the engine for all the functionality the automation api needs. At that point, we could write protobufs and generate gRPC clients in multiple languages. These clients may still need to be augmented by hand to support inline programs, but overall this approach greatly lowers the maintenance burden, as well as the burden to stand up a new language. For languages that pulumi doesn't currently support (like Java) this could be easy way to offer some interop.",1.0,"Consider gRPC based Automation API - Today, standing up Automation API (inline programs aside) requires hand authoring quite a bit of boilerplate. With the work outlined in https://github.com/pulumi/pulumi/issues/7219 we would have a clean interface directly to the engine for all the functionality the automation api needs. At that point, we could write protobufs and generate gRPC clients in multiple languages. 
These clients may still need to be augmented by hand to support inline programs, but overall this approach greatly lowers the maintenance burden, as well as the burden to stand up a new language. For languages that pulumi doesn't currently support (like Java) this could be easy way to offer some interop.",1,consider grpc based automation api today standing up automation api inline programs aside requires hand authoring quite a bit of boilerplate with the work outlined in we would have a clean interface directly to the engine for all the functionality the automation api needs at that point we could write protobufs and generate grpc clients in multiple languages these clients may still need to be augmented by hand to support inline programs but overall this approach greatly lowers the maintenance burden as well as the burden to stand up a new language for languages that pulumi doesn t currently support like java this could be easy way to offer some interop ,1 90451,15856158066.0,IssuesEvent,2021-04-08 01:39:53,heholek/practical-aspnetcore,https://api.github.com/repos/heholek/practical-aspnetcore,opened,CVE-2019-0564 (High) detected in microsoft.aspnetcore.app.2.1.1.nupkg,security vulnerability,"## CVE-2019-0564 - High Severity Vulnerability
Vulnerable Library - microsoft.aspnetcore.app.2.1.1.nupkg

Microsoft.AspNetCore.App

Library home page: https://api.nuget.org/packages/microsoft.aspnetcore.app.2.1.1.nupkg

Path to dependency file: practical-aspnetcore/projects/localization-5/localization-5.csproj

Path to vulnerable library: practical-aspnetcore/projects/localization-5/localization-5.csproj,practical-aspnetcore/projects/localization-6/localization-6.csproj

Dependency Hierarchy: - :x: **microsoft.aspnetcore.app.2.1.1.nupkg** (Vulnerable Library)

Vulnerability Details

A denial of service vulnerability exists when ASP.NET Core improperly handles web requests, aka ""ASP.NET Core Denial of Service Vulnerability."" This affects ASP.NET Core 2.1. This CVE ID is unique from CVE-2019-0548.

Publish Date: 2019-01-08

URL: CVE-2019-0564

CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/aspnet/Announcements/issues/334

Release Date: 2019-01-08

Fix Resolution: Microsoft.AspNetCore.WebSockets - 2.1.7,2.2.1;Microsoft.AspNetCore.Server.Kestrel.Core - 2.1.7;System.Net.WebSockets.WebSocketProtocol - 4.5.3;Microsoft.NETCore.App - 2.1.7,2.2.1;Microsoft.AspNetCore.App - 2.1.7,2.2.1;Microsoft.AspNetCore.All - 2.1.7,2.2.1

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2019-0564 (High) detected in microsoft.aspnetcore.app.2.1.1.nupkg - ## CVE-2019-0564 - High Severity Vulnerability
Vulnerable Library - microsoft.aspnetcore.app.2.1.1.nupkg

Microsoft.AspNetCore.App

Library home page: https://api.nuget.org/packages/microsoft.aspnetcore.app.2.1.1.nupkg

Path to dependency file: practical-aspnetcore/projects/localization-5/localization-5.csproj

Path to vulnerable library: practical-aspnetcore/projects/localization-5/localization-5.csproj,practical-aspnetcore/projects/localization-6/localization-6.csproj

Dependency Hierarchy: - :x: **microsoft.aspnetcore.app.2.1.1.nupkg** (Vulnerable Library)

Vulnerability Details

A denial of service vulnerability exists when ASP.NET Core improperly handles web requests, aka ""ASP.NET Core Denial of Service Vulnerability."" This affects ASP.NET Core 2.1. This CVE ID is unique from CVE-2019-0548.

Publish Date: 2019-01-08

URL: CVE-2019-0564

CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/aspnet/Announcements/issues/334

Release Date: 2019-01-08

Fix Resolution: Microsoft.AspNetCore.WebSockets - 2.1.7,2.2.1;Microsoft.AspNetCore.Server.Kestrel.Core - 2.1.7;System.Net.WebSockets.WebSocketProtocol - 4.5.3;Microsoft.NETCore.App - 2.1.7,2.2.1;Microsoft.AspNetCore.App - 2.1.7,2.2.1;Microsoft.AspNetCore.All - 2.1.7,2.2.1

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in microsoft aspnetcore app nupkg cve high severity vulnerability vulnerable library microsoft aspnetcore app nupkg microsoft aspnetcore app library home page a href path to dependency file practical aspnetcore projects localization localization csproj path to vulnerable library practical aspnetcore projects localization localization csproj practical aspnetcore projects localization localization csproj dependency hierarchy x microsoft aspnetcore app nupkg vulnerable library vulnerability details a denial of service vulnerability exists when asp net core improperly handles web requests aka asp net core denial of service vulnerability this affects asp net core this cve id is unique from cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution microsoft aspnetcore websockets microsoft aspnetcore server kestrel core system net websockets websocketprotocol microsoft netcore app microsoft aspnetcore app microsoft aspnetcore all step up your open source security game with whitesource ,0 1244,9762758405.0,IssuesEvent,2019-06-05 12:16:45,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,opened,Limit beta/production releases to the `releases/*` branches,RelEng 🤖 automation,"[Reference](https://mozilla.slack.com/archives/CK6TXNDQU/p1559736727000800) We don't want to release beta/production from the `master` branch. By forcing the releaser to create a separate branch (e.g.: `releases/1.0.0`) first, we can increase confidence that we're releasing ready-to-ship code",1.0,"Limit beta/production releases to the `releases/*` branches - [Reference](https://mozilla.slack.com/archives/CK6TXNDQU/p1559736727000800) We don't want to release beta/production from the `master` branch. 
By forcing the releaser to create a separate branch (e.g.: `releases/1.0.0`) first, we can increase confidence that we're releasing ready-to-ship code",1,limit beta production releases to the releases branches we don t want to release beta production from the master branch by forcing the releaser to create a separate branch e g releases first we can increase confidence that we re releasing ready to ship code,1 9912,30744755596.0,IssuesEvent,2023-07-28 14:14:11,nullawu/upptime-monit,https://api.github.com/repos/nullawu/upptime-monit,closed,🛑 WonderAutomation no-HTTPs is down,status wonder-automation-no-htt-ps,"In [`4c31572`](https://github.com/nullawu/upptime-monit/commit/4c31572d7eac978a747c670a04a1f3d1f96a49aa ), WonderAutomation no-HTTPs (http://wonderautomation.com) was **down**: - HTTP code: 0 - Response time: 0 ms ",1.0,"🛑 WonderAutomation no-HTTPs is down - In [`4c31572`](https://github.com/nullawu/upptime-monit/commit/4c31572d7eac978a747c670a04a1f3d1f96a49aa ), WonderAutomation no-HTTPs (http://wonderautomation.com) was **down**: - HTTP code: 0 - Response time: 0 ms ",1,🛑 wonderautomation no https is down in wonderautomation no https was down http code response time ms ,1 2715,12468020714.0,IssuesEvent,2020-05-28 18:05:52,dotnet/interactive,https://api.github.com/repos/dotnet/interactive,opened,Enable automated execution of .dib files,Area-Automation enhancement,"Automation of notebooks (i.e., direct execution from the command line) is a common use case and is available to .NET Interactive Jupyter users through [Papermill](https://github.com/nteract/papermill). Directly supporting automation has the potential to simplify authoring and deployment automation for .NET-only users. Goals include: 1) Make code as portable as possible between the interactive authoring and automation modes. 2) Provide a way to accept input such that same code works on both the command line and during interactive use, and is capable of displaying command line help, informative errors, etc. Example: ```csharp #!csharp #!args ParseResult ParseCommandLine(string args) { var root = new RootCommand { new Option(""--number-of-things""), new Argument() }; return root.Parse(args); } #!csharp var something = DoSomethingWith(Args(""--number-of-things"")); ``` In this example: * Command line automation would trigger argument parsing and store the parsed values for use by all subkernels. * Interactive use would trigger a user prompt for the complete command line input. * Identical strings, whether provided on the command line or via a user prompt, should result in identical parsed values when accessed in subsequent submissions. 3) Output formats should be specifiable so that the automation run can control, for example, output formats (e.g. plaintext, HTML, Markdown, `.ipynb`)",1.0,"Enable automated execution of .dib files - Automation of notebooks (i.e., direct execution from the command line) is a common use case and is available to .NET Interactive Jupyter users through [Papermill](https://github.com/nteract/papermill). Directly supporting automation has the potential to simplify authoring and deployment automation for .NET-only users. Goals include: 1) Make code as portable as possible between the interactive authoring and automation modes. 2) Provide a way to accept input such that same code works on both the command line and during interactive use, and is capable of displaying command line help, informative errors, etc. 
Example: ```csharp #!csharp #!args ParseResult ParseCommandLine(string args) { var root = new RootCommand { new Option(""--number-of-things""), new Argument() }; return root.Parse(args); } #!csharp var something = DoSomethingWith(Args(""--number-of-things"")); ``` In this example: * Command line automation would trigger argument parsing and store the parsed values for use by all subkernels. * Interactive use would trigger a user prompt for the complete command line input. * Identical strings, whether provided on the command line or via a user prompt, should result in identical parsed values when accessed in subsequent submissions. 3) Output formats should be specifiable so that the automation run can control, for example, output formats (e.g. plaintext, HTML, Markdown, `.ipynb`)",1,enable automated execution of dib files automation of notebooks i e direct execution from the command line is a common use case and is available to net interactive jupyter users through directly supporting automation has the potential to simplify authoring and deployment automation for net only users goals include make code as portable as possible between the interactive authoring and automation modes provide a way to accept input such that same code works on both the command line and during interactive use and is capable of displaying command line help informative errors etc example csharp csharp args parseresult parsecommandline string args var root new rootcommand new option number of things new argument return root parse args csharp var something dosomethingwith args number of things in this example command line automation would trigger argument parsing and store the parsed values for use by all subkernels interactive use would trigger a user prompt for the complete command line input identical strings whether provided on the command line or via a user prompt should result in identical parsed values when accessed in subsequent submissions output formats should be specifiable so that the automation run can control for example output formats e g plaintext html markdown ipynb ,1 318326,9690890427.0,IssuesEvent,2019-05-24 09:44:04,bbc/simorgh,https://api.github.com/repos/bbc/simorgh,opened,Cypress tests shouldn't access array data using index numbers,Refinement Needed high priority,"**Is your feature request related to a problem? Please describe.** Following https://github.com/bbc/simorgh/issues/1777 we should change all cypress tests to access array's using the new `getBlockByType` function. This means that any use case of `array[0]` should be changed to access the item by it's `type` using the aforementioned function. **Describe the solution you'd like** Changes all occurrences in the cypress tests that access array's using index values. **Describe alternatives you've considered** N/A **Testing notes** [Tester to complete] Dev insight: the cypress tests should pass regardless of the array items order in the data payload. Could be checked by changing the local fixture data **Additional context** This is high-priority to stop a situation like https://github.com/bbc/simorgh/issues/1777 ",1.0,"Cypress tests shouldn't access array data using index numbers - **Is your feature request related to a problem? Please describe.** Following https://github.com/bbc/simorgh/issues/1777 we should change all cypress tests to access array's using the new `getBlockByType` function. This means that any use case of `array[0]` should be changed to access the item by it's `type` using the aforementioned function. 
**Describe the solution you'd like** Changes all occurrences in the cypress tests that access array's using index values. **Describe alternatives you've considered** N/A **Testing notes** [Tester to complete] Dev insight: the cypress tests should pass regardless of the array items order in the data payload. Could be checked by changing the local fixture data **Additional context** This is high-priority to stop a situation like https://github.com/bbc/simorgh/issues/1777 ",0,cypress tests shouldn t access array data using index numbers is your feature request related to a problem please describe following we should change all cypress tests to access array s using the new getblockbytype function this means that any use case of array should be changed to access the item by it s type using the aforementioned function describe the solution you d like changes all occurrences in the cypress tests that access array s using index values describe alternatives you ve considered n a testing notes dev insight the cypress tests should pass regardless of the array items order in the data payload could be checked by changing the local fixture data additional context this is high priority to stop a situation like ,0 7224,24481335529.0,IssuesEvent,2022-10-08 21:54:00,Studio-Lovelies/GG-JointJustice-Unity,https://api.github.com/repos/Studio-Lovelies/GG-JointJustice-Unity,closed,[BUG][ACTIONS] PlayMode Scripts and PlayMode Scenes are mislabeled,bug good first issue automation github_actions,"## Describe the bug The results of `PlayMode Scenes` and `PlayMode Scripts` are mixed up. Notice `name` and `with.checkName` using different labels under [Screenshots](#Screenshots). ## Steps To Reproduce 1. Run the `Generate game builds` action on GitHub on any branch 2. Wait for both `PlayMode` jobs to finish 3. Notice `PlayMode Scenes` and `PlayMode Scripts` actually running the tests of the other label ## Expected behavior Tests under `/Scenes` are marked as `PlayMode Scenes` and tests under `/Scripts` are marked as `PlayMode Scripts` on GitHub and codecov.io. ## Screenshots ![image](https://user-images.githubusercontent.com/1689033/161401906-acc5451a-3be7-4a2a-936e-b47fdca1e058.png) ## Additional context Found in 8152da8121202c1ac3cae8f933ff215bd1a08562 ",1.0,"[BUG][ACTIONS] PlayMode Scripts and PlayMode Scenes are mislabeled - ## Describe the bug The results of `PlayMode Scenes` and `PlayMode Scripts` are mixed up. Notice `name` and `with.checkName` using different labels under [Screenshots](#Screenshots). ## Steps To Reproduce 1. Run the `Generate game builds` action on GitHub on any branch 2. Wait for both `PlayMode` jobs to finish 3. Notice `PlayMode Scenes` and `PlayMode Scripts` actually running the tests of the other label ## Expected behavior Tests under `/Scenes` are marked as `PlayMode Scenes` and tests under `/Scripts` are marked as `PlayMode Scripts` on GitHub and codecov.io. 
## Screenshots ![image](https://user-images.githubusercontent.com/1689033/161401906-acc5451a-3be7-4a2a-936e-b47fdca1e058.png) ## Additional context Found in 8152da8121202c1ac3cae8f933ff215bd1a08562 ",1, playmode scripts and playmode scenes are mislabeled describe the bug the results of playmode scenes and playmode scripts are mixed up notice name and with checkname using different labels under screenshots steps to reproduce run the generate game builds action on github on any branch wait for both playmode jobs to finish notice playmode scenes and playmode scripts actually running the tests of the other label expected behavior tests under scenes are marked as playmode scenes and tests under scripts are marked as playmode scripts on github and codecov io screenshots additional context found in ,1 63979,15773468015.0,IssuesEvent,2021-03-31 23:21:31,zcash/zcash,https://api.github.com/repos/zcash/zcash,closed,Building from source is painfully slow on macOS,A-build O-macos,"I'll get timings when I can, but it seems to be due to https://github.com/rust-lang/rust/issues/80684 . We don't actually need to build the Rust docs. So we could speed up Zcash builds on all platforms by patching `config.toml` as described in that ticket: ``` [build] docs = false ```",1.0,"Building from source is painfully slow on macOS - I'll get timings when I can, but it seems to be due to https://github.com/rust-lang/rust/issues/80684 . We don't actually need to build the Rust docs. So we could speed up Zcash builds on all platforms by patching `config.toml` as described in that ticket: ``` [build] docs = false ```",0,building from source is painfully slow on macos i ll get timings when i can but it seems to be due to we don t actually need to build the rust docs so we could speed up zcash builds on all platforms by patching config toml as described in that ticket docs false ,0 182908,14170239598.0,IssuesEvent,2020-11-12 14:17:51,eclipse/openj9,https://api.github.com/repos/eclipse/openj9,closed,JDK8 JITServer : ClassCastException: [I incompatible with java.lang.reflect.Method ,comp:jitserver test failure,"Failure link ------------ From an internal build `Test_openjdk8_j9_special.system_x86-64_linux_jit_Nightly_mathLoadTest/10`: ``` 01:01:17 openjdk version ""1.8.0_262-internal"" 01:01:17 OpenJDK Runtime Environment (build 1.8.0_262-internal-jenkins_2020_05_11_21_17-b00) 01:01:17 Eclipse OpenJ9 VM (build ibm_sdk-3abfb62305, JRE 1.8.0 Linux amd64-64-Bit Compressed References 20200511_77 (JIT enabled, AOT enabled) 01:01:17 OpenJ9 - 3abfb62305 01:01:17 OMR - 295075ec1 01:01:17 JCL - d0099f2c689 based on jdk8u262-b01) ``` Optional info ------------- Failure output (captured from console output) --------------------------------------------- ``` =============================================== Running test MathLoadTest_all_special_12 ... =============================================== MLT 22:06:26.582 - Completed 66.9%. Number of tests started=3510 (+485) MLT 22:06:39.705 - First failure detected by thread: load-2. Not creating dumps as no dump generation is requested for this load test MLT 22:06:39.706 - Test failed MLT Failure num. 
= 1 MLT Test number = 19 MLT Test details = 'JUnit[net.adoptopenjdk.test.bigdecimal.TestSuite019]' MLT Suite number = 0 MLT Thread number = 2 MLT >>> Captured test output >>> MLT Test failed: MLT java.lang.ClassCastException: [I incompatible with java.lang.reflect.Method MLT at java.lang.Class.lookupCachedMethod(Class.java:3636) MLT at java.lang.Class.getMethodHelper(Class.java:1239) MLT at java.lang.Class.getMethod(Class.java:1191) MLT at org.junit.internal.runners.JUnit38ClassRunner.getAnnotations(JUnit38ClassRunner.java:131) MLT at org.junit.internal.runners.JUnit38ClassRunner.makeDescription(JUnit38ClassRunner.java:101) MLT at org.junit.internal.runners.JUnit38ClassRunner.makeDescription(JUnit38ClassRunner.java:109) MLT at org.junit.internal.runners.JUnit38ClassRunner.getDescription(JUnit38ClassRunner.java:95) MLT at org.junit.runners.Suite.describeChild(Suite.java:123) MLT at org.junit.runners.Suite.describeChild(Suite.java:27) MLT at org.junit.runners.ParentRunner.getDescription(ParentRunner.java:352) MLT at org.junit.runner.JUnitCore.run(JUnitCore.java:136) MLT at org.junit.runner.JUnitCore.run(JUnitCore.java:115) MLT at net.adoptopenjdk.loadTest.adaptors.JUnitAdaptor.executeTest(JUnitAdaptor.java:130) MLT at net.adoptopenjdk.loadTest.LoadTestRunner$2.run(LoadTestRunner.java:182) MLT at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) MLT at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) MLT at java.lang.Thread.run(Thread.java:823) MLT <<< MLT MLT 22:06:43.428 - Test failed. Details recorded in execution log. MLT 22:06:46.566 - Completed 76.5%. Number of tests started=4017 (+507) (with 2 failure(s)) MLT 22:06:51.631 - Test failed. Details recorded in execution log. ``` For example, to rebuild the failed tests in =https://hyc-runtimes-jenkins.swg-devops.com/job/Grinder, use the following links: https://hyc-runtimes-jenkins.swg-devops.com/job/Grinder/parambuild/?JDK_VERSION=8&JDK_IMPL=openj9&BUILD_LIST=system/mathLoadTest&PLATFORM=x86-64_linux&TARGET=MathLoadTest_all_special_12",1.0,"JDK8 JITServer : ClassCastException: [I incompatible with java.lang.reflect.Method - Failure link ------------ From an internal build `Test_openjdk8_j9_special.system_x86-64_linux_jit_Nightly_mathLoadTest/10`: ``` 01:01:17 openjdk version ""1.8.0_262-internal"" 01:01:17 OpenJDK Runtime Environment (build 1.8.0_262-internal-jenkins_2020_05_11_21_17-b00) 01:01:17 Eclipse OpenJ9 VM (build ibm_sdk-3abfb62305, JRE 1.8.0 Linux amd64-64-Bit Compressed References 20200511_77 (JIT enabled, AOT enabled) 01:01:17 OpenJ9 - 3abfb62305 01:01:17 OMR - 295075ec1 01:01:17 JCL - d0099f2c689 based on jdk8u262-b01) ``` Optional info ------------- Failure output (captured from console output) --------------------------------------------- ``` =============================================== Running test MathLoadTest_all_special_12 ... =============================================== MLT 22:06:26.582 - Completed 66.9%. Number of tests started=3510 (+485) MLT 22:06:39.705 - First failure detected by thread: load-2. Not creating dumps as no dump generation is requested for this load test MLT 22:06:39.706 - Test failed MLT Failure num. 
= 1 MLT Test number = 19 MLT Test details = 'JUnit[net.adoptopenjdk.test.bigdecimal.TestSuite019]' MLT Suite number = 0 MLT Thread number = 2 MLT >>> Captured test output >>> MLT Test failed: MLT java.lang.ClassCastException: [I incompatible with java.lang.reflect.Method MLT at java.lang.Class.lookupCachedMethod(Class.java:3636) MLT at java.lang.Class.getMethodHelper(Class.java:1239) MLT at java.lang.Class.getMethod(Class.java:1191) MLT at org.junit.internal.runners.JUnit38ClassRunner.getAnnotations(JUnit38ClassRunner.java:131) MLT at org.junit.internal.runners.JUnit38ClassRunner.makeDescription(JUnit38ClassRunner.java:101) MLT at org.junit.internal.runners.JUnit38ClassRunner.makeDescription(JUnit38ClassRunner.java:109) MLT at org.junit.internal.runners.JUnit38ClassRunner.getDescription(JUnit38ClassRunner.java:95) MLT at org.junit.runners.Suite.describeChild(Suite.java:123) MLT at org.junit.runners.Suite.describeChild(Suite.java:27) MLT at org.junit.runners.ParentRunner.getDescription(ParentRunner.java:352) MLT at org.junit.runner.JUnitCore.run(JUnitCore.java:136) MLT at org.junit.runner.JUnitCore.run(JUnitCore.java:115) MLT at net.adoptopenjdk.loadTest.adaptors.JUnitAdaptor.executeTest(JUnitAdaptor.java:130) MLT at net.adoptopenjdk.loadTest.LoadTestRunner$2.run(LoadTestRunner.java:182) MLT at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) MLT at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) MLT at java.lang.Thread.run(Thread.java:823) MLT <<< MLT MLT 22:06:43.428 - Test failed. Details recorded in execution log. MLT 22:06:46.566 - Completed 76.5%. Number of tests started=4017 (+507) (with 2 failure(s)) MLT 22:06:51.631 - Test failed. Details recorded in execution log. ``` For example, to rebuild the failed tests in =https://hyc-runtimes-jenkins.swg-devops.com/job/Grinder, use the following links: https://hyc-runtimes-jenkins.swg-devops.com/job/Grinder/parambuild/?JDK_VERSION=8&JDK_IMPL=openj9&BUILD_LIST=system/mathLoadTest&PLATFORM=x86-64_linux&TARGET=MathLoadTest_all_special_12",0, jitserver classcastexception i incompatible with java lang reflect method failure link from an internal build test special system linux jit nightly mathloadtest openjdk version internal openjdk runtime environment build internal jenkins eclipse vm build ibm sdk jre linux bit compressed references jit enabled aot enabled omr jcl based on optional info failure output captured from console output running test mathloadtest all special mlt completed number of tests started mlt first failure detected by thread load not creating dumps as no dump generation is requested for this load test mlt test failed mlt failure num mlt test number mlt test details junit mlt suite number mlt thread number mlt captured test output mlt test failed mlt java lang classcastexception i incompatible with java lang reflect method mlt at java lang class lookupcachedmethod class java mlt at java lang class getmethodhelper class java mlt at java lang class getmethod class java mlt at org junit internal runners getannotations java mlt at org junit internal runners makedescription java mlt at org junit internal runners makedescription java mlt at org junit internal runners getdescription java mlt at org junit runners suite describechild suite java mlt at org junit runners suite describechild suite java mlt at org junit runners parentrunner getdescription parentrunner java mlt at org junit runner junitcore run junitcore java mlt at org junit runner junitcore run junitcore java 
mlt at net adoptopenjdk loadtest adaptors junitadaptor executetest junitadaptor java mlt at net adoptopenjdk loadtest loadtestrunner run loadtestrunner java mlt at java util concurrent threadpoolexecutor runworker threadpoolexecutor java mlt at java util concurrent threadpoolexecutor worker run threadpoolexecutor java mlt at java lang thread run thread java mlt mlt mlt test failed details recorded in execution log mlt completed number of tests started with failure s mlt test failed details recorded in execution log for example to rebuild the failed tests in use the following links ,0 57965,16195399391.0,IssuesEvent,2021-05-04 14:01:37,vector-im/element-web,https://api.github.com/repos/vector-im/element-web,closed,Spaces no longer take up full width of panel,A-Spaces T-Defect X-Regression,"### Description ![Screenshot_2021-05-04 Element Duo](https://user-images.githubusercontent.com/48614497/117001970-fb950100-acb0-11eb-9b2e-e5d24e51a59f.png) As you can see, the notification dots are no longer aligned due to matrix-org/matrix-react-sdk#5964. Note that despite being on Firefox, I never experienced the visual bug supposedly fixed by that PR. ### Version information - **Platform**: web - **Browser**: Firefox 88.0 - **OS**: NixOS unstable - **URL**: https://develop.element.io/",1.0,"Spaces no longer take up full width of panel - ### Description ![Screenshot_2021-05-04 Element Duo](https://user-images.githubusercontent.com/48614497/117001970-fb950100-acb0-11eb-9b2e-e5d24e51a59f.png) As you can see, the notification dots are no longer aligned due to matrix-org/matrix-react-sdk#5964. Note that despite being on Firefox, I never experienced the visual bug supposedly fixed by that PR. ### Version information - **Platform**: web - **Browser**: Firefox 88.0 - **OS**: NixOS unstable - **URL**: https://develop.element.io/",0,spaces no longer take up full width of panel description as you can see the notification dots are no longer aligned due to matrix org matrix react sdk note that despite being on firefox i never experienced the visual bug supposedly fixed by that pr version information platform web browser firefox os nixos unstable url ,0 212727,16476454210.0,IssuesEvent,2021-05-24 06:14:25,zerolab-fe/awesome-nodejs,https://api.github.com/repos/zerolab-fe/awesome-nodejs,closed,msw,Testing,"在👆 Title 处填写包名,并补充下面信息: ```json { ""repoUrl"": ""https://github.com/mswjs/msw"", ""description"": ""用于浏览器和 Node.js 的 REST/GraphQL API模拟库。"" } ``` ",1.0,"msw - 在👆 Title 处填写包名,并补充下面信息: ```json { ""repoUrl"": ""https://github.com/mswjs/msw"", ""description"": ""用于浏览器和 Node.js 的 REST/GraphQL API模拟库。"" } ``` ",0,msw 在👆 title 处填写包名,并补充下面信息: json repourl description 用于浏览器和 node js 的 rest graphql api模拟库。 ,0 1968,11205219421.0,IssuesEvent,2020-01-05 12:46:07,openfoodfacts/openfoodfacts-server,https://api.github.com/repos/openfoodfacts/openfoodfacts-server,closed,Expand automatic import from wiki to new languages,P3 automation installation,"product-opener / lib / ProductOpener / Config_off.pm lines 152 to 168 add pages marked as done http://en.wiki.openfoodfacts.org/Translations_-_Discover_page http://en.wiki.openfoodfacts.org/Translations_-_Contribute_page http://en.wiki.openfoodfacts.org/Translations_-_Press ",1.0,"Expand automatic import from wiki to new languages - product-opener / lib / ProductOpener / Config_off.pm lines 152 to 168 add pages marked as done http://en.wiki.openfoodfacts.org/Translations_-_Discover_page http://en.wiki.openfoodfacts.org/Translations_-_Contribute_page 
http://en.wiki.openfoodfacts.org/Translations_-_Press ",1,expand automatic import from wiki to new languages product opener lib productopener config off pm lines to add pages marked as done ,1 264344,23112635501.0,IssuesEvent,2022-07-27 14:11:49,knative/pkg,https://api.github.com/repos/knative/pkg,closed,For `sharedmain` zap's Fatal should deterministically calculate the POSIX return code,area/test-and-release kind/feature lifecycle/stale,"/area test-and-release /kind feature ## Actual Behavior The _Zap_'s _Fatal_ method is calling `os.Exit(1)` regardless of a message. This isn't the best _UX_ choice. The _POSIX_ return code could be used for quick assessment about a process failure. When the retcode is always equal `1` for any error, this can't be utilized. Tools like Docker or Kubernetes report back the return code of the process. By using a deterministic algorithm to calculate the _POSIX_ return code, a quick debugging is possible. Users could indicate if the process is failing because of the same reason, without looking into logs. ## Expected Behavior The best would be to rely on _Fatal_'s message or on an _error_ object if it's attached as a _Field_. ## Additional Info Related to https://github.com/uber-go/zap/issues/1086",1.0,"For `sharedmain` zap's Fatal should deterministically calculate the POSIX return code - /area test-and-release /kind feature ## Actual Behavior The _Zap_'s _Fatal_ method is calling `os.Exit(1)` regardless of a message. This isn't the best _UX_ choice. The _POSIX_ return code could be used for quick assessment about a process failure. When the retcode is always equal `1` for any error, this can't be utilized. Tools like Docker or Kubernetes report back the return code of the process. By using a deterministic algorithm to calculate the _POSIX_ return code, a quick debugging is possible. Users could indicate if the process is failing because of the same reason, without looking into logs. ## Expected Behavior The best would be to rely on _Fatal_'s message or on an _error_ object if it's attached as a _Field_. 
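The knative/pkg request above is concrete enough to sketch: derive the process return code deterministically from the fatal error instead of always exiting 1. A minimal illustration, written in Python rather than Go and not using zap's API — `exit_code_for` and `fatal` are hypothetical helpers:
```python
import hashlib
import sys

def exit_code_for(err: Exception) -> int:
    # Map the error message to a stable POSIX return code in 2..125.
    # 0 (success), 1 (generic failure) and 126+ (shell/signal conventions)
    # are excluded so distinct failures stay distinguishable.
    digest = hashlib.sha256(str(err).encode("utf-8")).digest()
    return 2 + digest[0] % 124

def fatal(err: Exception) -> None:
    # Log, then exit with a code that is identical for identical messages,
    # so Docker or Kubernetes can group repeated failures without log diving.
    print(f"FATAL: {err}", file=sys.stderr)
    sys.exit(exit_code_for(err))
```
With this scheme the same failure produces the same code on every run, which is exactly the quick triage signal the issue describes.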
## Additional Info Related to https://github.com/uber-go/zap/issues/1086",0,for sharedmain zap s fatal should deterministically calculate the posix return code area test and release kind feature actual behavior the zap s fatal method is calling os exit regardless of a message this isn t the best ux choice the posix return code could be used for quick assessment about a process failure when the retcode is always equal for any error this can t be utilized tools like docker or kubernetes report back the return code of the process by using a deterministic algorithm to calculate the posix return code a quick debugging is possible users could indicate if the process is failing because of the same reason without looking into logs expected behavior the best would be to rely on fatal s message or on an error object if it s attached as a field additional info related to ,0 2094,11363535774.0,IssuesEvent,2020-01-27 04:37:20,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,cx would add proxy password/url for DSC configuration,Pri2 automation/svc cxp dsc/subsvc product-question triaged,"per testing, we can realize it by modifying DSC configuration as below: $password = ""ThisIsAPlaintextPassword"" | ConvertTo-SecureString -asPlainText -Force $username = ""contoso\Administrator"" [PSCredential] $Cred = New-Object System.Management.Automation.PSCredential($username,$password) $cd = @{ AllNodes = @( @{ NodeName = 'localhost' PSDscAllowPlainTextPassword = $true } ) } DscMetaConfigs -ConfigurationData $cd @Params --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: a2c40f2e-91ef-eafc-1288-094266885d41 * Version Independent ID: 4e8f9a62-69fc-a6a4-2a3c-afca18dcd253 * Content: [Onboarding machines for management by Azure Automation State Configuration](https://docs.microsoft.com/en-us/azure/automation/automation-dsc-onboarding#using-a-dsc-configuration) * Content Source: [articles/automation/automation-dsc-onboarding.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-dsc-onboarding.md) * Service: **automation** * Sub-service: **dsc** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**",1.0,"cx would add proxy password/url for DSC configuration - per testing, we can realize it by modifying DSC configuration as below: $password = ""ThisIsAPlaintextPassword"" | ConvertTo-SecureString -asPlainText -Force $username = ""contoso\Administrator"" [PSCredential] $Cred = New-Object System.Management.Automation.PSCredential($username,$password) $cd = @{ AllNodes = @( @{ NodeName = 'localhost' PSDscAllowPlainTextPassword = $true } ) } DscMetaConfigs -ConfigurationData $cd @Params --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: a2c40f2e-91ef-eafc-1288-094266885d41 * Version Independent ID: 4e8f9a62-69fc-a6a4-2a3c-afca18dcd253 * Content: [Onboarding machines for management by Azure Automation State Configuration](https://docs.microsoft.com/en-us/azure/automation/automation-dsc-onboarding#using-a-dsc-configuration) * Content Source: [articles/automation/automation-dsc-onboarding.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-dsc-onboarding.md) * Service: **automation** * Sub-service: **dsc** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**",1,cx would add proxy password url for dsc configuration per testing we can realize it by modifying dsc configuration as below password thisisaplaintextpassword convertto securestring asplaintext force username contoso administrator cred new object system management automation pscredential username password cd allnodes nodename localhost psdscallowplaintextpassword true dscmetaconfigs configurationdata cd params document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id eafc version independent id content content source service automation sub service dsc github login mgoedtel microsoft alias magoedte ,1 82401,10280002299.0,IssuesEvent,2019-08-26 03:03:40,rcarson3/rust_data_reader,https://api.github.com/repos/rcarson3/rust_data_reader,closed,Behaviour of row/column restriction,bug documentation,"The `num_lines` and `num_fields` fields in the `ReaderResults` don't scale consistently to the result when the `ReaderParams` restricts the lines/fields to be read. input: a 5*8 matrix ``` 1,2,3,4,5,6,7,8 1,2,3,4,5,6,7,8 1,2,3,4,5,6,7,8 1,2,3,4,5,6,7,8 1,2,3,4,5,6,7,8 ``` # No restriction ``` let source = ""mat.txt""; let params = ReaderParams { comments: b'%', delimiter: Delimiter::Any(b','), skip_header: None, skip_footer: None, usecols: None, max_rows: None, }; let input = load_txt_i64(&source, ¶ms).unwrap(); println!(""{:?}"", input); ``` ReaderResults { num_fields: 8, num_lines: 5, results: [1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8] } `num_*` match the dimension of the matrix. # Row restriction ` max_rows: Some(2),` ReaderResults { num_fields: 8, num_lines: 2, results: [1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8] } **`num_lines` scales** # Column restriction ## Sane column numbers usecols: Some(vec![1, 4]), ReaderResults { num_fields: 8, num_lines: 5, results: [1, 4, 1, 4, 1, 4, 1, 4, 1, 4] } **`num_fields` does not scale** ## Inconsistent column numbers ` usecols: Some(vec![0, 10, 11]),` ReaderResults { num_fields: 8, num_lines: 5, results: [] } inconsistent column numbers are ignored, **`num_fields` does not scale** and **result is empty**, moreover **column indexing starts at 1** and not 0 as one would probably expect in Rust context in absence of further documentation ",1.0,"Behaviour of row/column restriction - The `num_lines` and `num_fields` fields in the `ReaderResults` don't scale consistently to the result when the `ReaderParams` restricts the lines/fields to be read. 
input: a 5*8 matrix ``` 1,2,3,4,5,6,7,8 1,2,3,4,5,6,7,8 1,2,3,4,5,6,7,8 1,2,3,4,5,6,7,8 1,2,3,4,5,6,7,8 ``` # No restriction ``` let source = ""mat.txt""; let params = ReaderParams { comments: b'%', delimiter: Delimiter::Any(b','), skip_header: None, skip_footer: None, usecols: None, max_rows: None, }; let input = load_txt_i64(&source, ¶ms).unwrap(); println!(""{:?}"", input); ``` ReaderResults { num_fields: 8, num_lines: 5, results: [1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8] } `num_*` match the dimension of the matrix. # Row restriction ` max_rows: Some(2),` ReaderResults { num_fields: 8, num_lines: 2, results: [1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8] } **`num_lines` scales** # Column restriction ## Sane column numbers usecols: Some(vec![1, 4]), ReaderResults { num_fields: 8, num_lines: 5, results: [1, 4, 1, 4, 1, 4, 1, 4, 1, 4] } **`num_fields` does not scale** ## Inconsistent column numbers ` usecols: Some(vec![0, 10, 11]),` ReaderResults { num_fields: 8, num_lines: 5, results: [] } inconsistent column numbers are ignored, **`num_fields` does not scale** and **result is empty**, moreover **column indexing starts at 1** and not 0 as one would probably expect in Rust context in absence of further documentation ",0,behaviour of row column restriction the num lines and num fields fields in the readerresults don t scale consistently to the result when the readerparams restricts the lines fields to be read input a matrix no restriction let source mat txt let params readerparams comments b delimiter delimiter any b skip header none skip footer none usecols none max rows none let input load txt source params unwrap println input readerresults num fields num lines results num match the dimension of the matrix row restriction max rows some readerresults num fields num lines results num lines scales column restriction sane column numbers usecols some vec readerresults num fields num lines results num fields does not scale inconsistent column numbers usecols some vec readerresults num fields num lines results inconsistent column numbers are ignored num fields does not scale and result is empty moreover column indexing starts at and not as one would probably expect in rust context in absence of further documentation ,0 1789,10765683921.0,IssuesEvent,2019-11-01 11:45:07,a-t-0/CoursePlanningTemplate,https://api.github.com/repos/a-t-0/CoursePlanningTemplate,opened,"Automate generation of 2 tikkies, one for payment, 2nd for refund using api",automation,"When the user commits to making an exam solution, and you generate the tikkie to let them pay their commitment, automatically generate the return tikkie, for when they submit their solution so that it can be refunded. (Preferably do this automatically as soon as the private branch of the exam date is filled with the compiled pdf).",1.0,"Automate generation of 2 tikkies, one for payment, 2nd for refund using api - When the user commits to making an exam solution, and you generate the tikkie to let them pay their commitment, automatically generate the return tikkie, for when they submit their solution so that it can be refunded. 
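Stepping back to the `rust_data_reader` report above, its three complaints reduce to two invariants: `num_lines` and `num_fields` should track the applied restrictions, and out-of-range `usecols` should fail loudly rather than return an empty result. A rough Python stand-in (not the crate's Rust API) encoding those expectations:
```python
def read_txt(lines, usecols=None, max_rows=None):
    # Parse comma-delimited integer rows, mirroring load_txt_i64's options.
    rows = [[int(v) for v in line.split(",")] for line in lines]
    if max_rows is not None:
        rows = rows[:max_rows]
    if usecols is not None:
        # 0-based, as a Rust user would expect; an out-of-range column
        # raises IndexError instead of silently yielding an empty result.
        rows = [[row[c] for c in usecols] for row in rows]
    num_fields = len(rows[0]) if rows else 0   # scales with usecols
    num_lines = len(rows)                      # scales with max_rows
    return num_fields, num_lines, [v for row in rows for v in row]

data = ["1,2,3,4,5,6,7,8"] * 5                 # the 5*8 matrix from the report
assert read_txt(data, max_rows=2)[1] == 2      # num_lines scales
assert read_txt(data, usecols=[1, 4])[0] == 2  # num_fields scales too
```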
(Preferably do this automatically as soon as the private branch of the exam date is filled with the compiled pdf).",1,automate generation of tikkies one for payment for refund using api when the user commits to making an exam solution and you generate the tikkie to let them pay their commitment automatically generate the return tikkie for when they submit their solution so that it can be refunded preferably do this automatically as soon as the private branch of the exam date is filled with the compiled pdf ,1 175526,6551771270.0,IssuesEvent,2017-09-05 15:48:14,coreos/tectonic-installer,https://api.github.com/repos/coreos/tectonic-installer,closed,Remove critical-pod annotation from scheduler and controller-manager,kind/bug migrate-issue priority/P1 tectonic/terraform,"This is somewhat of an unknown -- just behavior that should be validated prior to running a [rescheduler](https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/) and possibly removing the critical annotation from scheduler/controller-manager. If we mark the scheduler as ""[critical](https://github.com/coreos/tectonic-installer/blob/master/modules/bootkube/resources/manifests/kube-scheduler.yaml#L10)"", for example, then it happens to get in a state where a pod is unscheduled because all schedulers are down (has happened when all schedulers end up co-located on same node and that node dies) -- will the rescheduler start evicting other workloads? This would be pretty dangerous as it would likely just keep evicting workloads even though it will never be scheduled (no scheduler to schedule it). Ideally the rescheduler would only evict based on ""known"" scheduling failures (e.g. no more allocatable CPU), not just on ""can't schedule"" -- but I'm unsure of the exact behavior. We don't currently deploy the rescheduler, but we should evaluate this behavior before doing so (possibly remove the critical pod annotation from controller-manager / scheduler). xref: https://github.com/kubernetes-incubator/bootkube/issues/519",1.0,"Remove critical-pod annotation from scheduler and controller-manager - This is somewhat of an unknown -- just behavior that should be validated prior to running a [rescheduler](https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/) and possibly removing the critical annotation from scheduler/controller-manager. If we mark the scheduler as ""[critical](https://github.com/coreos/tectonic-installer/blob/master/modules/bootkube/resources/manifests/kube-scheduler.yaml#L10)"", for example, then it happens to get in a state where a pod is unscheduled because all schedulers are down (has happened when all schedulers end up co-located on same node and that node dies) -- will the rescheduler start evicting other workloads? This would be pretty dangerous as it would likely just keep evicting workloads even though it will never be scheduled (no scheduler to schedule it). Ideally the rescheduler would only evict based on ""known"" scheduling failures (e.g. no more allocatable CPU), not just on ""can't schedule"" -- but I'm unsure of the exact behavior. We don't currently deploy the rescheduler, but we should evaluate this behavior before doing so (possibly remove the critical pod annotation from controller-manager / scheduler). 
xref: https://github.com/kubernetes-incubator/bootkube/issues/519",0,remove critical pod annotation from scheduler and controller manager this is somewhat of an unknown just behavior that should be validated prior to running a and possibly removing the critical annotation from scheduler controller manager if we mark the scheduler as for example then it happens to get in a state where a pod is unscheduled because all schedulers are down has happened when all schedulers end up co located on same node and that node dies will the rescheduler start evicting other workloads this would be pretty dangerous as it would likely just keep evicting workloads even though it will never be scheduled no scheduler to schedule it ideally the rescheduler would only evict based on known scheduling failures e g no more allocatable cpu not just on can t schedule but i m unsure of the exact behavior we don t currently deploy the rescheduler but we should evaluate this behavior before doing so possibly remove the critical pod annotation from controller manager scheduler xref ,0 4941,18071560879.0,IssuesEvent,2021-09-21 03:56:42,home-assistant/core,https://api.github.com/repos/home-assistant/core,closed,Automation Last Triggered Does Not Update When Automation Stops as Condition Fails,integration: automation,"### Checklist - [X] I have updated to the latest available Home Assistant version. - [X] I have cleared the cache of my browser. - [X] I have tried a different browser to see if it is related to my browser. ### Describe the issue you are experiencing Last Triggered date and time is not updated for all automations: ![image](https://user-images.githubusercontent.com/10042655/133779707-e47ed915-5449-433d-82c9-28f11859a78b.png) Both scripts have been triggered: ![image](https://user-images.githubusercontent.com/10042655/133779805-ac4b79ac-f31b-4e94-a6db-2d7f616ded08.png) ![image](https://user-images.githubusercontent.com/10042655/133779907-cb35422e-ba03-4345-9538-81858cbfc178.png) The only difference is automation Alex is Up v1.0 fails due to condition, but Alex is Up v2.0 completes the execution ### Describe the behavior you expected Last Trigerred shows the time when an automation has been trigerred regardless of the automation execution state ### Steps to reproduce the issue 1. Create an automation 1 with failing condition and automation 2 without condition 2. Wait for both automations to be triggered 3. Lat Trigerred is updated only for automation 2 ... ### What version of Home Assistant Core has the issue? Home Assistant 2021.9.6 ### What was the last working version of Home Assistant Core? _No response_ ### In which browser are you experiencing the issue with? Microsoft Edge version 93.0.961.47 ### Which operating system are you using to run this browser? 
Windows 10 ### State of relevant entities _No response_ ### Problem-relevant frontend configuration ```yaml alias: Alex is up v1.0 description: '' trigger: - platform: state attribute: motion_video_time entity_id: camera.chuangmi_ipc019_7398_camera_control condition: - condition: numeric_state entity_id: sun.sun attribute: elevation above: '0' - condition: template value_template: >- '{{ as_timestamp(trigger.to_state.attributes.motion_video_time) - as_timestamp(trigger.from_state.attributes.motion_video_time) <= 420 }}' action: - service: notify.mobile_app_z_flip data: message: >- Alex is up '{{as_timestamp(trigger.to_state.attributes.motion_video_time)}}', '{{as_timestamp(trigger.from_state.attributes.motion_video_time)}}'; '{{ trigger.to_state.attributes.motion_video_time }}','{{trigger.from_state.attributes.motion_video_time}}' title: Alex is up mode: single alias: Alex is up v2.0 description: '' trigger: - platform: state attribute: motion_video_time entity_id: camera.chuangmi_ipc019_7398_camera_control condition: [] action: - service: notify.mobile_app_z_flip data: message: >- Alex is up '{{ trigger.from_state.attributes.motion_video_time }}', '{{ trigger.to_state.attributes.motion_video_time }} {{ as_timestamp(trigger.from_state.attributes.motion_video_time)-as_timestamp(trigger.to_state.attributes.motion_video_time) }}' title: Alex is up mode: single ``` ### Javascript errors shown in your browser console/inspector _No response_ ### Additional information _No response_",1.0,"Automation Last Triggered Does Not Update When Automation Stops as Condition Fails - ### Checklist - [X] I have updated to the latest available Home Assistant version. - [X] I have cleared the cache of my browser. - [X] I have tried a different browser to see if it is related to my browser. ### Describe the issue you are experiencing Last Triggered date and time is not updated for all automations: ![image](https://user-images.githubusercontent.com/10042655/133779707-e47ed915-5449-433d-82c9-28f11859a78b.png) Both scripts have been triggered: ![image](https://user-images.githubusercontent.com/10042655/133779805-ac4b79ac-f31b-4e94-a6db-2d7f616ded08.png) ![image](https://user-images.githubusercontent.com/10042655/133779907-cb35422e-ba03-4345-9538-81858cbfc178.png) The only difference is automation Alex is Up v1.0 fails due to condition, but Alex is Up v2.0 completes the execution ### Describe the behavior you expected Last Trigerred shows the time when an automation has been trigerred regardless of the automation execution state ### Steps to reproduce the issue 1. Create an automation 1 with failing condition and automation 2 without condition 2. Wait for both automations to be triggered 3. Lat Trigerred is updated only for automation 2 ... ### What version of Home Assistant Core has the issue? Home Assistant 2021.9.6 ### What was the last working version of Home Assistant Core? _No response_ ### In which browser are you experiencing the issue with? Microsoft Edge version 93.0.961.47 ### Which operating system are you using to run this browser? 
Windows 10 ### State of relevant entities _No response_ ### Problem-relevant frontend configuration ```yaml alias: Alex is up v1.0 description: '' trigger: - platform: state attribute: motion_video_time entity_id: camera.chuangmi_ipc019_7398_camera_control condition: - condition: numeric_state entity_id: sun.sun attribute: elevation above: '0' - condition: template value_template: >- '{{ as_timestamp(trigger.to_state.attributes.motion_video_time) - as_timestamp(trigger.from_state.attributes.motion_video_time) <= 420 }}' action: - service: notify.mobile_app_z_flip data: message: >- Alex is up '{{as_timestamp(trigger.to_state.attributes.motion_video_time)}}', '{{as_timestamp(trigger.from_state.attributes.motion_video_time)}}'; '{{ trigger.to_state.attributes.motion_video_time }}','{{trigger.from_state.attributes.motion_video_time}}' title: Alex is up mode: single alias: Alex is up v2.0 description: '' trigger: - platform: state attribute: motion_video_time entity_id: camera.chuangmi_ipc019_7398_camera_control condition: [] action: - service: notify.mobile_app_z_flip data: message: >- Alex is up '{{ trigger.from_state.attributes.motion_video_time }}', '{{ trigger.to_state.attributes.motion_video_time }} {{ as_timestamp(trigger.from_state.attributes.motion_video_time)-as_timestamp(trigger.to_state.attributes.motion_video_time) }}' title: Alex is up mode: single ``` ### Javascript errors shown in your browser console/inspector _No response_ ### Additional information _No response_",1,automation last triggered does not update when automation stops as condition fails checklist i have updated to the latest available home assistant version i have cleared the cache of my browser i have tried a different browser to see if it is related to my browser describe the issue you are experiencing last triggered date and time is not updated for all automations both scripts have been triggered the only difference is automation alex is up fails due to condition but alex is up completes the execution describe the behavior you expected last trigerred shows the time when an automation has been trigerred regardless of the automation execution state steps to reproduce the issue create an automation with failing condition and automation without condition wait for both automations to be triggered lat trigerred is updated only for automation what version of home assistant core has the issue home assistant what was the last working version of home assistant core no response in which browser are you experiencing the issue with microsoft edge version which operating system are you using to run this browser windows state of relevant entities no response problem relevant frontend configuration yaml alias alex is up description trigger platform state attribute motion video time entity id camera chuangmi camera control condition condition numeric state entity id sun sun attribute elevation above condition template value template as timestamp trigger to state attributes motion video time as timestamp trigger from state attributes motion video time action service notify mobile app z flip data message alex is up as timestamp trigger to state attributes motion video time as timestamp trigger from state attributes motion video time trigger to state attributes motion video time trigger from state attributes motion video time title alex is up mode single alias alex is up description trigger platform state attribute motion video time entity id camera chuangmi camera control condition action service notify mobile app z flip data 
message alex is up trigger from state attributes motion video time trigger to state attributes motion video time as timestamp trigger from state attributes motion video time as timestamp trigger to state attributes motion video time title alex is up mode single javascript errors shown in your browser console inspector no response additional information no response ,1 6902,24023758996.0,IssuesEvent,2022-09-15 09:45:17,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,closed,[YSQL][Colocation] FATAL: Check failed: colocation_it != kv_store_.colocation_to_table.end() during drop table,kind/bug area/ysql priority/medium qa_automation,"Jira Link: [DB-3286](https://yugabyte.atlassian.net/browse/DB-3286) ### Description itest-system's upgrade test started failing in https://phabricator.dev.yugabyte.com/D19079, which adds a `drop table` after upgrade from 2.12.4.2-b1 to 2.15.3.0-b77. The tables are colocated. There is a similar bug with tablegroups at https://github.com/yugabyte/yugabyte-db/issues/13317, which might have the same root cause. @frozenspider should this be considered a duplicate? Test failure: ``` 2022-08-24 15:13:58,087 test_base.py:163 ERROR testupgrade-aws-rf3-upgrade-2.12.4.2_1 ITEST FAILED testupgrade-aws-rf3-upgrade-2.12.4.2_1 : AdminShutdown('terminating connection due to administrator command\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\n') 2022-08-24 15:13:58,088 test_base.py:164 INFO testupgrade-aws-rf3-upgrade-2.12.4.2_1 Traceback (most recent call last): File ""/var/lib/jenkins/code/internal-services/itest/src/test_base.py"", line 159, in execute_steps step.call() File ""/var/lib/jenkins/code/internal-services/itest/src/test_base.py"", line 51, in call ret = self.function() File ""/var/lib/jenkins/code/internal-services/itest/src/universe_tests/system_tests/test_upgrade.py"", line 148, in do_custom_work self.drop_ysql_objects(universe, release_version) File ""/var/lib/jenkins/code/internal-services/itest/src/universe_tests/system_tests/test_upgrade.py"", line 270, in drop_ysql_objects session.execute(query) psycopg2.errors.AdminShutdown: terminating connection due to administrator command server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. 
``` ``` F20220824 15:13:58 ../../src/yb/tablet/tablet_metadata.cc:850] Check failed: colocation_it != kv_store_.colocation_to_table.end() @ 0x2a74fc8 google::LogMessage::SendToLog() @ 0x2a75e73 google::LogMessage::Flush() @ 0x2a763ff google::LogMessageFatal::~LogMessageFatal() @ 0x36519bf yb::tablet::RaftGroupMetadata::SetSchema() @ 0x360d80d yb::tablet::Tablet::AlterSchema() @ 0x35ce0cb yb::tablet::ChangeMetadataOperation::DoReplicated() @ 0x35d06df yb::tablet::Operation::Replicated() @ 0x35d1b1e yb::tablet::OperationDriver::ReplicationFinished() @ 0x2e876b5 yb::consensus::ConsensusRound::NotifyReplicationFinished() @ 0x2ed603b yb::consensus::ReplicaState::ApplyPendingOperationsUnlocked() @ 0x2ed543d yb::consensus::ReplicaState::AdvanceCommittedOpIdUnlocked() @ 0x2eb2de7 yb::consensus::RaftConsensus::UpdateMajorityReplicated() @ 0x2e7d0b1 yb::consensus::PeerMessageQueue::NotifyObserversOfMajorityReplOpChangeTask() @ 0x3b42211 yb::ThreadPool::DispatchThread() @ 0x3b3d9a1 yb::Thread::SuperviseThread() @ 0x7f734519e694 start_thread @ 0x7f73456a041d __clone ``` On a side note, when I don't drop the tables and restore the backup, I noticed that the backup is actually not restored, which is why added the drop. I guess this is also unexpected?",1.0,"[YSQL][Colocation] FATAL: Check failed: colocation_it != kv_store_.colocation_to_table.end() during drop table - Jira Link: [DB-3286](https://yugabyte.atlassian.net/browse/DB-3286) ### Description itest-system's upgrade test started failing in https://phabricator.dev.yugabyte.com/D19079, which adds a `drop table` after upgrade from 2.12.4.2-b1 to 2.15.3.0-b77. The tables are colocated. There is a similar bug with tablegroups at https://github.com/yugabyte/yugabyte-db/issues/13317, which might have the same root cause. @frozenspider should this be considered a duplicate? Test failure: ``` 2022-08-24 15:13:58,087 test_base.py:163 ERROR testupgrade-aws-rf3-upgrade-2.12.4.2_1 ITEST FAILED testupgrade-aws-rf3-upgrade-2.12.4.2_1 : AdminShutdown('terminating connection due to administrator command\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\n') 2022-08-24 15:13:58,088 test_base.py:164 INFO testupgrade-aws-rf3-upgrade-2.12.4.2_1 Traceback (most recent call last): File ""/var/lib/jenkins/code/internal-services/itest/src/test_base.py"", line 159, in execute_steps step.call() File ""/var/lib/jenkins/code/internal-services/itest/src/test_base.py"", line 51, in call ret = self.function() File ""/var/lib/jenkins/code/internal-services/itest/src/universe_tests/system_tests/test_upgrade.py"", line 148, in do_custom_work self.drop_ysql_objects(universe, release_version) File ""/var/lib/jenkins/code/internal-services/itest/src/universe_tests/system_tests/test_upgrade.py"", line 270, in drop_ysql_objects session.execute(query) psycopg2.errors.AdminShutdown: terminating connection due to administrator command server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. 
``` ``` F20220824 15:13:58 ../../src/yb/tablet/tablet_metadata.cc:850] Check failed: colocation_it != kv_store_.colocation_to_table.end() @ 0x2a74fc8 google::LogMessage::SendToLog() @ 0x2a75e73 google::LogMessage::Flush() @ 0x2a763ff google::LogMessageFatal::~LogMessageFatal() @ 0x36519bf yb::tablet::RaftGroupMetadata::SetSchema() @ 0x360d80d yb::tablet::Tablet::AlterSchema() @ 0x35ce0cb yb::tablet::ChangeMetadataOperation::DoReplicated() @ 0x35d06df yb::tablet::Operation::Replicated() @ 0x35d1b1e yb::tablet::OperationDriver::ReplicationFinished() @ 0x2e876b5 yb::consensus::ConsensusRound::NotifyReplicationFinished() @ 0x2ed603b yb::consensus::ReplicaState::ApplyPendingOperationsUnlocked() @ 0x2ed543d yb::consensus::ReplicaState::AdvanceCommittedOpIdUnlocked() @ 0x2eb2de7 yb::consensus::RaftConsensus::UpdateMajorityReplicated() @ 0x2e7d0b1 yb::consensus::PeerMessageQueue::NotifyObserversOfMajorityReplOpChangeTask() @ 0x3b42211 yb::ThreadPool::DispatchThread() @ 0x3b3d9a1 yb::Thread::SuperviseThread() @ 0x7f734519e694 start_thread @ 0x7f73456a041d __clone ``` On a side note, when I don't drop the tables and restore the backup, I noticed that the backup is actually not restored, which is why added the drop. I guess this is also unexpected?",1, fatal check failed colocation it kv store colocation to table end during drop table jira link description itest system s upgrade test started failing in which adds a drop table after upgrade from to the tables are colocated there is a similar bug with tablegroups at which might have the same root cause frozenspider should this be considered a duplicate test failure test base py error testupgrade aws upgrade itest failed testupgrade aws upgrade adminshutdown terminating connection due to administrator command nserver closed the connection unexpectedly n tthis probably means the server terminated abnormally n tbefore or while processing the request n test base py info testupgrade aws upgrade traceback most recent call last file var lib jenkins code internal services itest src test base py line in execute steps step call file var lib jenkins code internal services itest src test base py line in call ret self function file var lib jenkins code internal services itest src universe tests system tests test upgrade py line in do custom work self drop ysql objects universe release version file var lib jenkins code internal services itest src universe tests system tests test upgrade py line in drop ysql objects session execute query errors adminshutdown terminating connection due to administrator command server closed the connection unexpectedly this probably means the server terminated abnormally before or while processing the request src yb tablet tablet metadata cc check failed colocation it kv store colocation to table end google logmessage sendtolog google logmessage flush google logmessagefatal logmessagefatal yb tablet raftgroupmetadata setschema yb tablet tablet alterschema yb tablet changemetadataoperation doreplicated yb tablet operation replicated yb tablet operationdriver replicationfinished yb consensus consensusround notifyreplicationfinished yb consensus replicastate applypendingoperationsunlocked yb consensus replicastate advancecommittedopidunlocked yb consensus raftconsensus updatemajorityreplicated yb consensus peermessagequeue notifyobserversofmajorityreplopchangetask yb threadpool dispatchthread yb thread supervisethread start thread clone on a side note when i don t drop the tables and restore the backup i noticed that the backup is 
actually not restored which is why added the drop i guess this is also unexpected ,1 5549,20048393696.0,IssuesEvent,2022-02-03 01:12:35,bcgov/foi-flow,https://api.github.com/repos/bcgov/foi-flow,closed,QA Automation: Divisional Tracking,Task QA automation,"QA Automation: Divisional Tracking #### Description Summarize issue #### Dependencies Are there any dependencies? #### DOD - [x] Ministry View comments, 247 -250 - [x] IAO view 251 - 252 - [ ] - [ ] - [ ] ",1.0,"QA Automation: Divisional Tracking - QA Automation: Divisional Tracking #### Description Summarize issue #### Dependencies Are there any dependencies? #### DOD - [x] Ministry View comments, 247 -250 - [x] IAO view 251 - 252 - [ ] - [ ] - [ ] ",1,qa automation divisional tracking qa automation divisional tracking description summarize issue dependencies are there any dependencies dod ministry view comments iao view ,1 20125,26659983949.0,IssuesEvent,2023-01-25 20:10:12,keras-team/keras-cv,https://api.github.com/repos/keras-team/keras-cv,reopened,Reorganize [ops] and [custom_ops],high-priority process cleanup api-polish,"Currently, we have some tech debt here: ops contains some custom layers as well as some ops. We should really fix this and put the components that are like layers under layers instead of ops. Then we can merge custom_ops into the ops directory. Its confusing to have both.",1.0,"Reorganize [ops] and [custom_ops] - Currently, we have some tech debt here: ops contains some custom layers as well as some ops. We should really fix this and put the components that are like layers under layers instead of ops. Then we can merge custom_ops into the ops directory. Its confusing to have both.",0,reorganize and currently we have some tech debt here ops contains some custom layers as well as some ops we should really fix this and put the components that are like layers under layers instead of ops then we can merge custom ops into the ops directory its confusing to have both ,0 215033,16587995956.0,IssuesEvent,2021-06-01 01:46:14,RaRe-Technologies/gensim,https://api.github.com/repos/RaRe-Technologies/gensim,closed,Fix documentation link to mycorpus.txt download,bug difficulty easy documentation impact LOW reach LOW," #### Problem description Trying to reproduce Corpora and Vector Space tutorial given in the documentation, but the link to download txt file is not working. The link given in the tutorial [here](https://radimrehurek.com/gensim/auto_examples/core/run_corpora_and_vector_spaces.html#corpus-streaming-one-document-at-a-time) is giving 404 error. #### Steps/code/corpus to reproduce Just visit this [link](https://radimrehurek.com/gensim/mycorpus.txt) which is used in the code given in the documentation, it is not working. ",1.0,"Fix documentation link to mycorpus.txt download - #### Problem description Trying to reproduce Corpora and Vector Space tutorial given in the documentation, but the link to download txt file is not working. The link given in the tutorial [here](https://radimrehurek.com/gensim/auto_examples/core/run_corpora_and_vector_spaces.html#corpus-streaming-one-document-at-a-time) is giving 404 error. #### Steps/code/corpus to reproduce Just visit this [link](https://radimrehurek.com/gensim/mycorpus.txt) which is used in the code given in the documentation, it is not working. 
",0,fix documentation link to mycorpus txt download important use the to ask general or usage questions github issues are only for bug reports check first for common answers github bug reports that do not include relevant information and context will be closed without an answer thanks problem description trying to reproduce corpora and vector space tutorial given in the documentation but the link to download txt file is not working the link given in the tutorial is giving error steps code corpus to reproduce just visit this which is used in the code given in the documentation it is not working ,0 98468,12325640799.0,IssuesEvent,2020-05-13 15:19:52,department-of-veterans-affairs/va.gov-cms,https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms,closed,Content proofing prototype for benefits detail page content ,All products Content model Content proofing Design Drupal engineering,"Iterate on http://pr1228.ci.cms.va.gov/health-care/about-va-health-benefits/dental-care AC * We have a prototype to use in usability testing, in a PR environment. * We have an understanding LOE to get this done. To do - [ ] Create a PR environment with latest and share out to team - [ ] Design crit end of Sprint 2 week 1 with design pod and one more drupal engineer - [ ] Decide if the prototype goes into production sooner than later as an MVP, or to a PR environment for user testing. Is this change too disruptive? - [ ] - [ ] Iteration Sprint 2 week 2",1.0,"Content proofing prototype for benefits detail page content - Iterate on http://pr1228.ci.cms.va.gov/health-care/about-va-health-benefits/dental-care AC * We have a prototype to use in usability testing, in a PR environment. * We have an understanding LOE to get this done. To do - [ ] Create a PR environment with latest and share out to team - [ ] Design crit end of Sprint 2 week 1 with design pod and one more drupal engineer - [ ] Decide if the prototype goes into production sooner than later as an MVP, or to a PR environment for user testing. Is this change too disruptive? - [ ] - [ ] Iteration Sprint 2 week 2",0,content proofing prototype for benefits detail page content iterate on ac we have a prototype to use in usability testing in a pr environment we have an understanding loe to get this done to do create a pr environment with latest and share out to team design crit end of sprint week with design pod and one more drupal engineer decide if the prototype goes into production sooner than later as an mvp or to a pr environment for user testing is this change too disruptive iteration sprint week ,0 58581,7163774661.0,IssuesEvent,2018-01-29 08:54:03,maevanapcontact/Ulight,https://api.github.com/repos/maevanapcontact/Ulight,closed,Create design for TABLE and IMG elements matching BASIC template,design help wanted,"We need to have the design for the **``** and **``** elements, matching the **basic template**. The basic template: https://github.com/maevanapcontact/Ulight/blob/master/docs/images/templates/t-basic.png You can create the files - t-basic-table.png - t-basic-img.png and add them to the folder ""t-basic-elements"" in here https://github.com/maevanapcontact/Ulight/blob/master/docs/images/templates ",1.0,"Create design for TABLE and IMG elements matching BASIC template - We need to have the design for the **`
`** and **``** elements, matching the **basic template**. The basic template: https://github.com/maevanapcontact/Ulight/blob/master/docs/images/templates/t-basic.png You can create the files - t-basic-table.png - t-basic-img.png and add them to the folder ""t-basic-elements"" in here https://github.com/maevanapcontact/Ulight/blob/master/docs/images/templates ",0,create design for table and img elements matching basic template we need to have the design for the and elements matching the basic template the basic template you can create the files t basic table png t basic img png and add them to the folder t basic elements in here ,0 5425,19580056580.0,IssuesEvent,2022-01-04 20:01:18,jgyates/genmon,https://api.github.com/repos/jgyates/genmon,closed,MQTT retain message,question automation - monitoring apps AddOn,"I'm setting up mqtt with Home Assistant and it is working great. The only issue I am having is that when the generator is not running all of my dashboards error out. This is because the sensors are not reporting a value to HA via mqtt. Is it possible to add the retain option to the mqtt module? I think this would solve things as the broker would retain the last message which wold be zero. ![image](https://user-images.githubusercontent.com/2295127/148101204-87206b70-6af8-42b3-a68c-b2780e4b729b.png) ",1.0,"MQTT retain message - I'm setting up mqtt with Home Assistant and it is working great. The only issue I am having is that when the generator is not running all of my dashboards error out. This is because the sensors are not reporting a value to HA via mqtt. Is it possible to add the retain option to the mqtt module? I think this would solve things as the broker would retain the last message which wold be zero. ![image](https://user-images.githubusercontent.com/2295127/148101204-87206b70-6af8-42b3-a68c-b2780e4b729b.png) ",1,mqtt retain message i m setting up mqtt with home assistant and it is working great the only issue i am having is that when the generator is not running all of my dashboards error out this is because the sensors are not reporting a value to ha via mqtt is it possible to add the retain option to the mqtt module i think this would solve things as the broker would retain the last message which wold be zero ,1 43105,17404746118.0,IssuesEvent,2021-08-03 03:11:03,browny/gcp-daily,https://api.github.com/repos/browny/gcp-daily,opened,2021-08-03,Cloud NAT Security Service account,"🗞️ Cloud NAT rules [[link](https://cloud.google.com/nat/docs/overview#nat-rules)] 「Cloud NAT 支援設定 egrees rules 啦!」👏 🗞️ View recent usage for service accounts and keys [[link](https://cloud.google.com/iam/docs/service-account-recent-usage)] 「可以稽核 service account 和 keys 的使用啦!」👏",1.0,"2021-08-03 - 🗞️ Cloud NAT rules [[link](https://cloud.google.com/nat/docs/overview#nat-rules)] 「Cloud NAT 支援設定 egrees rules 啦!」👏 🗞️ View recent usage for service accounts and keys [[link](https://cloud.google.com/iam/docs/service-account-recent-usage)] 「可以稽核 service account 和 keys 的使用啦!」👏",0, 🗞️ cloud nat rules 「cloud nat 支援設定 egrees rules 啦!」👏 🗞️ view recent usage for service accounts and keys 「可以稽核 service account 和 keys 的使用啦!」👏,0 373520,11045265245.0,IssuesEvent,2019-12-09 14:49:42,vigetlabs/npm,https://api.github.com/repos/vigetlabs/npm,closed,[Visual QA] Check the blockquote carousel,Medium Priority Needs QA Fixes,"Can we make the carousel either change height when a larger card loads or give NPM a character limit? This is looking odd - they've added long quotes so the whole carousel is taller to accommodate. 
We should also make sure the quote is always aligned middle (vs top) so that when this happens, it'll appear centered. ![Screen Shot 2019-11-13 at 9 29 35 AM](https://user-images.githubusercontent.com/67819/68772753-5687f380-05f8-11ea-8d1a-0ae20394baeb.png)",1.0,"[Visual QA] Check the blockquote carousel - Can we make the carousel either change height when a larger card loads or give NPM a character limit? This is looking odd - they've added long quotes so the whole carousel is taller to accommodate. We should also make sure the quote is always aligned middle (vs top) so that when this happens, it'll appear centered. ![Screen Shot 2019-11-13 at 9 29 35 AM](https://user-images.githubusercontent.com/67819/68772753-5687f380-05f8-11ea-8d1a-0ae20394baeb.png)",0, check the blockquote carousel can we make the carousel either change height when a larger card loads or give npm a character limit this is looking odd they ve added long quotes so the whole carousel is taller to accommodate we should also make sure the quote is always aligned middle vs top so that when this happens it ll appear centered ,0 46873,7294032109.0,IssuesEvent,2018-02-25 19:53:18,saltstack/salt,https://api.github.com/repos/saltstack/salt,closed,Redhat has moved python-jinja2 package from the EPEL to optional repository,Bug Documentation High Severity P2 Packaging stale,"Please update the documentation to reflect the fact that python-jinja2 moved from epel to optional. Users who wish to install or update saltstack using only RedHat-supplied repos must now enable both repos to successfully install saltstack. ",1.0,"Redhat has moved python-jinja2 package from the EPEL to optional repository - Please update the documentation to reflect the fact that python-jinja2 moved from epel to optional. Users who wish to install or update saltstack using only RedHat-supplied repos must now enable both repos to successfully install saltstack. ",0,redhat has moved python package from the epel to optional repository please update the documentation to reflect the fact that python moved from epel to optional users who wish to install or update saltstack using only redhat supplied repos must now enable both repos to successfully install saltstack ,0 141964,21648189702.0,IssuesEvent,2022-05-06 06:13:46,stores-cedcommerce/Charles-Nelson-Store-redesign,https://api.github.com/repos/stores-cedcommerce/Charles-Nelson-Store-redesign,closed,Wishlist login popup UI issue,Product page Desktop Ready to test Design / UI / UX,"Bug - When user click on add wishlist button without login from quickview popup then login popup appearing behind quickview popup. Exp - Login popup should show above quickview popup when user click on add wishlist button on without login. Ref Link - https://drive.google.com/file/d/1knakXQXi6oUaIqTN7oB5eD5BSZ22pge4/view",1.0,"Wishlist login popup UI issue - Bug - When user click on add wishlist button without login from quickview popup then login popup appearing behind quickview popup. Exp - Login popup should show above quickview popup when user click on add wishlist button on without login. 
Ref Link - https://drive.google.com/file/d/1knakXQXi6oUaIqTN7oB5eD5BSZ22pge4/view",0,wishlist login popup ui issue bug when user click on add wishlist button without login from quickview popup then login popup appearing behind quickview popup exp login popup should show above quickview popup when user click on add wishlist button on without login ref link ,0 192132,22215903365.0,IssuesEvent,2022-06-08 01:35:31,panasalap/linux-4.1.15,https://api.github.com/repos/panasalap/linux-4.1.15,reopened,CVE-2019-18198 (High) detected in linux179e72b561d3d331c850e1a5779688d7a7de5246,security vulnerability,"## CVE-2019-18198 - High Severity Vulnerability
Vulnerable Library - linux179e72b561d3d331c850e1a5779688d7a7de5246

Linux kernel stable tree mirror

Library home page: https://github.com/gregkh/linux.git

Found in HEAD commit: aae4c2fa46027fd4c477372871df090c6b94f3f1

Found in base branch: master

Vulnerable Source Files (2)

/net/ipv6/fib6_rules.c
/net/ipv6/fib6_rules.c

Vulnerability Details

In the Linux kernel before 5.3.4, a reference count usage error in the fib6_rule_suppress() function in the fib6 suppression feature of net/ipv6/fib6_rules.c, when handling the FIB_LOOKUP_NOREF flag, can be exploited by a local attacker to corrupt memory, aka CID-ca7a03c41753.
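The failure mode is a general one: a lookup flagged "no reference taken" whose release path decrements anyway. An illustrative, non-kernel sketch of the pattern — all names here are hypothetical, not kernel code:
```python
class RefCounted:
    def __init__(self) -> None:
        self.refs = 1

    def hold(self) -> None:
        self.refs += 1

    def release(self) -> None:
        self.refs -= 1
        assert self.refs >= 0, "over-release: count corrupted"

NOREF = True   # analogue of FIB_LOOKUP_NOREF: the caller took no reference

rule = RefCounted()
if not NOREF:
    rule.hold()
# ... use the looked-up rule ...
if not NOREF:  # the release must honor the same flag as the acquire,
    rule.release()  # or the count drifts and the object is corrupted
```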

Publish Date: 2019-10-18

URL: CVE-2019-18198

CVSS 3 Score Details (7.8)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: Low
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High
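The 7.8 figure follows mechanically from these metrics. A quick check against the published CVSS v3 base-score formula (constants from the FIRST specification):
```python
import math

AV, AC, PR, UI = 0.55, 0.77, 0.62, 0.85  # Local / Low / Low (scope unchanged) / None
C = I = A = 0.56                          # High impact on all three

iss = 1 - (1 - C) * (1 - I) * (1 - A)     # impact sub-score
impact = 6.42 * iss                       # scope-unchanged form
exploitability = 8.22 * AV * AC * PR * UI

def roundup(x: float) -> float:
    return math.ceil(x * 10) / 10         # CVSS rounds up to one decimal

print(roundup(min(impact + exploitability, 10)))  # 7.8
```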

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18198

Release Date: 2019-10-31

Fix Resolution: v5.4-rc1
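Since the advisory text says "before 5.3.4" and the mainline fix landed in v5.4-rc1, triage reduces to a version comparison. A hedged helper — not part of any scanner's API — for checking a kernel tree such as this repository's 4.1.15:
```python
def is_vulnerable(version: str) -> bool:
    # CVE-2019-18198 affects kernels before 5.3.4 (mainline fix: v5.4-rc1).
    parts = tuple(int(p) for p in version.split(".")[:3])
    parts = parts + (0,) * (3 - len(parts))   # pad e.g. "5.4" -> (5, 4, 0)
    return parts < (5, 3, 4)

assert is_vulnerable("4.1.15")      # the vulnerable tree in this report
assert not is_vulnerable("5.3.4")   # first fixed stable release
```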

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",1.0,"CVE-2019-18198 (High) detected in linux179e72b561d3d331c850e1a5779688d7a7de5246 - ## CVE-2019-18198 - High Severity Vulnerability
*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in cve high severity vulnerability vulnerable library linux kernel stable tree mirror library home page a href found in head commit a href found in base branch master vulnerable source files net rules c net rules c vulnerability details in the linux kernel before a reference count usage error in the rule suppress function in the suppression feature of net rules c when handling the fib lookup noref flag can be exploited by a local attacker to corrupt memory aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0 3364,13577517318.0,IssuesEvent,2020-09-20 01:53:44,tkottke90/bin-inventory,https://api.github.com/repos/tkottke90/bin-inventory,closed,Implement GH action with linter on push to branch,API automation,"To support good code practices, linting should be done by the repository when a user pushes new code to a branch that is not master. This linting should take place as a Github action. If a pull request is open for the branch, a comment should be added to the pull request with the results of the linter",1.0,"Implement GH action with linter on push to branch - To support good code practices, linting should be done by the repository when a user pushes new code to a branch that is not master. This linting should take place as a Github action. 
If a pull request is open for the branch, a comment should be added to the pull request with the results of the linter",1,implement gh action with linter on push to branch to support good code practices linting should be done by the repository when a user pushes new code to a branch that is not master this linting should take place as a github action if a pull request is open for the branch a comment should be added to the pull request with the results of the linter,1 375,5887856792.0,IssuesEvent,2017-05-17 08:40:06,eclipse/smarthome,https://api.github.com/repos/eclipse/smarthome,closed,ScriptFileWatcher does not cancel scheduled jobs,Automation bug,"When uninstalling the scripted rule support, the following exception is thrown: ``` 16:25:26.849 [ERROR] [automation.module.script.rulesupport] - FrameworkEvent ERROR - org.eclipse.smarthome.automation.module.script.rulesupport java.io.IOException: Exception in opening zip file: /Users/kai/Downloads/oh/userdata/cache/org.eclipse.osgi/198/0/bundleFile at org.eclipse.osgi.framework.util.SecureAction.getZipFile(SecureAction.java:305)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.storage.bundlefile.ZipBundleFile.basicOpen(ZipBundleFile.java:85)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.storage.bundlefile.ZipBundleFile.getZipFile(ZipBundleFile.java:98)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.storage.bundlefile.ZipBundleFile.checkedOpen(ZipBundleFile.java:65)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.storage.bundlefile.ZipBundleFile.getEntry(ZipBundleFile.java:232)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.internal.loader.classpath.ClasspathManager.findClassImpl(ClasspathManager.java:562)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.internal.loader.classpath.ClasspathManager.findLocalClassImpl(ClasspathManager.java:540)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.internal.loader.classpath.ClasspathManager.findLocalClass(ClasspathManager.java:527)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.internal.loader.ModuleClassLoader.findLocalClass(ModuleClassLoader.java:324)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:327)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:402)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:352)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:344)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(ModuleClassLoader.java:160)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at java.lang.ClassLoader.loadClass(ClassLoader.java:357)[:1.8.0_20] at org.eclipse.smarthome.automation.module.script.rulesupport.internal.loader.ScriptFileWatcher.checkFiles(ScriptFileWatcher.java:225)[198:org.eclipse.smarthome.automation.module.script.rulesupport:0.9.0.201705120951] at org.eclipse.smarthome.automation.module.script.rulesupport.internal.loader.ScriptFileWatcher$$Lambda$12/1692515238.run(Unknown Source)[198:org.eclipse.smarthome.automation.module.script.rulesupport:0.9.0.201705120951] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)[:1.8.0_20] at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)[:1.8.0_20] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)[:1.8.0_20] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)[:1.8.0_20] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[:1.8.0_20] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[:1.8.0_20] at java.lang.Thread.run(Thread.java:745)[:1.8.0_20] Caused by: java.io.FileNotFoundException: /Users/kai/Downloads/oh/userdata/cache/org.eclipse.osgi/198/0/bundleFile (No such file or directory) at java.util.zip.ZipFile.open(Native Method)[:1.8.0_20] at java.util.zip.ZipFile.(ZipFile.java:220)[:1.8.0_20] at java.util.zip.ZipFile.(ZipFile.java:150)[:1.8.0_20] at java.util.zip.ZipFile.(ZipFile.java:164)[:1.8.0_20] at org.eclipse.osgi.framework.util.SecureAction.getZipFile(SecureAction.java:288)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] ... 23 more ``` ",1.0,"ScriptFileWatcher does not cancel scheduled jobs - When uninstalling the scripted rule support, the following exception is thrown: ``` 16:25:26.849 [ERROR] [automation.module.script.rulesupport] - FrameworkEvent ERROR - org.eclipse.smarthome.automation.module.script.rulesupport java.io.IOException: Exception in opening zip file: /Users/kai/Downloads/oh/userdata/cache/org.eclipse.osgi/198/0/bundleFile at org.eclipse.osgi.framework.util.SecureAction.getZipFile(SecureAction.java:305)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.storage.bundlefile.ZipBundleFile.basicOpen(ZipBundleFile.java:85)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.storage.bundlefile.ZipBundleFile.getZipFile(ZipBundleFile.java:98)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.storage.bundlefile.ZipBundleFile.checkedOpen(ZipBundleFile.java:65)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.storage.bundlefile.ZipBundleFile.getEntry(ZipBundleFile.java:232)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.internal.loader.classpath.ClasspathManager.findClassImpl(ClasspathManager.java:562)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.internal.loader.classpath.ClasspathManager.findLocalClassImpl(ClasspathManager.java:540)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.internal.loader.classpath.ClasspathManager.findLocalClass(ClasspathManager.java:527)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.internal.loader.ModuleClassLoader.findLocalClass(ModuleClassLoader.java:324)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:327)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:402)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:352)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:344)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(ModuleClassLoader.java:160)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] at java.lang.ClassLoader.loadClass(ClassLoader.java:357)[:1.8.0_20] at 
org.eclipse.smarthome.automation.module.script.rulesupport.internal.loader.ScriptFileWatcher.checkFiles(ScriptFileWatcher.java:225)[198:org.eclipse.smarthome.automation.module.script.rulesupport:0.9.0.201705120951] at org.eclipse.smarthome.automation.module.script.rulesupport.internal.loader.ScriptFileWatcher$$Lambda$12/1692515238.run(Unknown Source)[198:org.eclipse.smarthome.automation.module.script.rulesupport:0.9.0.201705120951] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)[:1.8.0_20] at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)[:1.8.0_20] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)[:1.8.0_20] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)[:1.8.0_20] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[:1.8.0_20] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[:1.8.0_20] at java.lang.Thread.run(Thread.java:745)[:1.8.0_20] Caused by: java.io.FileNotFoundException: /Users/kai/Downloads/oh/userdata/cache/org.eclipse.osgi/198/0/bundleFile (No such file or directory) at java.util.zip.ZipFile.open(Native Method)[:1.8.0_20] at java.util.zip.ZipFile.(ZipFile.java:220)[:1.8.0_20] at java.util.zip.ZipFile.(ZipFile.java:150)[:1.8.0_20] at java.util.zip.ZipFile.(ZipFile.java:164)[:1.8.0_20] at org.eclipse.osgi.framework.util.SecureAction.getZipFile(SecureAction.java:288)[org.eclipse.osgi-3.10.101.v20150820-1432.jar:] ... 23 more ``` ",1,scriptfilewatcher does not cancel scheduled jobs when uninstalling the scripted rule support the following exception is thrown frameworkevent error org eclipse smarthome automation module script rulesupport java io ioexception exception in opening zip file users kai downloads oh userdata cache org eclipse osgi bundlefile at org eclipse osgi framework util secureaction getzipfile secureaction java at org eclipse osgi storage bundlefile zipbundlefile basicopen zipbundlefile java at org eclipse osgi storage bundlefile zipbundlefile getzipfile zipbundlefile java at org eclipse osgi storage bundlefile zipbundlefile checkedopen zipbundlefile java at org eclipse osgi storage bundlefile zipbundlefile getentry zipbundlefile java at org eclipse osgi internal loader classpath classpathmanager findclassimpl classpathmanager java at org eclipse osgi internal loader classpath classpathmanager findlocalclassimpl classpathmanager java at org eclipse osgi internal loader classpath classpathmanager findlocalclass classpathmanager java at org eclipse osgi internal loader moduleclassloader findlocalclass moduleclassloader java at org eclipse osgi internal loader bundleloader findlocalclass bundleloader java at org eclipse osgi internal loader bundleloader findclassinternal bundleloader java at org eclipse osgi internal loader bundleloader findclass bundleloader java at org eclipse osgi internal loader bundleloader findclass bundleloader java at org eclipse osgi internal loader moduleclassloader loadclass moduleclassloader java at java lang classloader loadclass classloader java at org eclipse smarthome automation module script rulesupport internal loader scriptfilewatcher checkfiles scriptfilewatcher java at org eclipse smarthome automation module script rulesupport internal loader scriptfilewatcher lambda run unknown source at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask 
runandreset futuretask java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask access scheduledthreadpoolexecutor java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask run scheduledthreadpoolexecutor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by java io filenotfoundexception users kai downloads oh userdata cache org eclipse osgi bundlefile no such file or directory at java util zip zipfile open native method at java util zip zipfile zipfile java at java util zip zipfile zipfile java at java util zip zipfile zipfile java at org eclipse osgi framework util secureaction getzipfile secureaction java more ,1 5935,21689619220.0,IssuesEvent,2022-05-09 14:19:34,o3de/o3de,https://api.github.com/repos/o3de/o3de,closed,AR Bug Report: Collider_DiffCollisionGroupDiffCollidingLayersNotCollide periodically fails on Linux,kind/bug needs-triage kind/automation,"15:36:50 =================================== FAILURES =================================== 15:36:50 _ TestAutomation.Collider_DiffCollisionGroupDiffCollidingLayersNotCollide[linux-crash_log_watchdog0-AutomatedTesting-windows_editor] _ 15:36:50 ../../../../../../Tools/LyTestTools/ly_test_tools/o3de/editor_test.py:440: in single_run 15:36:50 self._run_single_test(request, workspace, editor, editor_test_data, test_spec) 15:36:50 ../../../../../../Tools/LyTestTools/ly_test_tools/o3de/editor_test.py:1007: in _run_single_test 15:36:50 self._report_result(test_name, test_result) 15:36:50 ../../../../../../Tools/LyTestTools/ly_test_tools/o3de/editor_test.py:756: in _report_result 15:36:50 pytest.fail(error_str) 15:36:50 E Failed: Test Collider_DiffCollisionGroupDiffCollidingLayersNotCollide: 15:36:50 E Test CRASHED, return code -0xb 15:36:50 E --------------- 15:36:50 E | Stacktrace | 15:36:50 E --------------- 15:36:50 E -- No crash log available -- 15:36:50 E [Errno 2] No such file or directory: '/data/workspace/o3de/AutomatedTesting/user/log_test_1/crash.log'------------ 15:36:50 E | Output | 15:36:50 E ------------ 15:36:50 E -- No output -- 15:36:50 E -------------- 15:36:50 E | Editor log | 15:36:50 E -------------- 15:36:50 E -- No editor log found --",1.0,"AR Bug Report: Collider_DiffCollisionGroupDiffCollidingLayersNotCollide periodically fails on Linux - 15:36:50 =================================== FAILURES =================================== 15:36:50 _ TestAutomation.Collider_DiffCollisionGroupDiffCollidingLayersNotCollide[linux-crash_log_watchdog0-AutomatedTesting-windows_editor] _ 15:36:50 ../../../../../../Tools/LyTestTools/ly_test_tools/o3de/editor_test.py:440: in single_run 15:36:50 self._run_single_test(request, workspace, editor, editor_test_data, test_spec) 15:36:50 ../../../../../../Tools/LyTestTools/ly_test_tools/o3de/editor_test.py:1007: in _run_single_test 15:36:50 self._report_result(test_name, test_result) 15:36:50 ../../../../../../Tools/LyTestTools/ly_test_tools/o3de/editor_test.py:756: in _report_result 15:36:50 pytest.fail(error_str) 15:36:50 E Failed: Test Collider_DiffCollisionGroupDiffCollidingLayersNotCollide: 15:36:50 E Test CRASHED, return code -0xb 15:36:50 E --------------- 15:36:50 E | Stacktrace | 15:36:50 E --------------- 15:36:50 E -- No crash log available -- 15:36:50 E [Errno 2] No such file or directory: '/data/workspace/o3de/AutomatedTesting/user/log_test_1/crash.log'------------ 15:36:50 E | Output | 15:36:50 E 
------------ 15:36:50 E -- No output -- 15:36:50 E -------------- 15:36:50 E | Editor log | 15:36:50 E -------------- 15:36:50 E -- No editor log found --",1,ar bug report collider diffcollisiongroupdiffcollidinglayersnotcollide periodically fails on linux failures testautomation collider diffcollisiongroupdiffcollidinglayersnotcollide tools lytesttools ly test tools editor test py in single run self run single test request workspace editor editor test data test spec tools lytesttools ly test tools editor test py in run single test self report result test name test result tools lytesttools ly test tools editor test py in report result pytest fail error str e failed test collider diffcollisiongroupdiffcollidinglayersnotcollide e test crashed return code e e stacktrace e e no crash log available e no such file or directory data workspace automatedtesting user log test crash log e output e e no output e e editor log e e no editor log found ,1 5232,18892380030.0,IssuesEvent,2021-11-15 14:32:48,betagouv/preuve-covoiturage,https://api.github.com/repos/betagouv/preuve-covoiturage,closed,"Integration of the ""operator_class"" attribute into the openData dataset",Open Data Automation AMELIORATION,"### Subject _We have been asked to add the ""operator_class"" attribute to the openData dataset. Since it is recorded in the CGU as being shared as open data, a column must be created to integrate it and comply with the CGU in force. _We will then need to bring the open data documentation up to date: https://www.data.gouv.fr/fr/datasets/trajets-realises-en-covoiturage-registre-de-preuve-de-covoiturage/ Who usually does this?",1.0,"Integration of the ""operator_class"" attribute into the openData dataset - ### Subject _We have been asked to add the ""operator_class"" attribute to the openData dataset. Since it is recorded in the CGU as being shared as open data, a column must be created to integrate it and comply with the CGU in force. _We will then need to bring the open data documentation up to date: https://www.data.gouv.fr/fr/datasets/trajets-realises-en-covoiturage-registre-de-preuve-de-covoiturage/ Who usually does this?",1,intégration de l attribut operator class dans le jeu de données opendata sujet il nous est demandé de rajouter l attribut operator class dans le jeu de données opendata celui ci étant inscrit dans les cgu comme étant partagé en opendata il faut créer une colonne pour l intégrer et se conformer aux cgu en place img width alt capture d’écran à src il faudra ensuite mettre la doc open data à jour qui fait cela d habitude ,1 34133,6291811969.0,IssuesEvent,2017-07-20 02:27:40,rapid7/nexpose-client-python,https://api.github.com/repos/rapid7/nexpose-client-python,closed,Typo in RequestSiteConfig Docstring,documentation,"The docstring for the RequestSiteConfig method call has some typos. ```python """""" Get the configuration of the specified site. This function will return a single **SiteConfigesponse** XML object (API 1.1). """""" ``` The class **SiteConfigesponse** doesn't exist and appears to be a typo. https://github.com/rapid7/nexpose-client-python/blob/master/nexpose/nexpose.py#L382 ## Your Environment * Nexpose-client-python version: 0.1.0 ",1.0,"Typo in RequestSiteConfig Docstring - The docstring for the RequestSiteConfig method call has some typos. ```python """""" Get the configuration of the specified site. This function will return a single **SiteConfigesponse** XML object (API 1.1). 
"""""" ``` The class **SiteConfigesponse** doesn't exist and appears to be a typo. https://github.com/rapid7/nexpose-client-python/blob/master/nexpose/nexpose.py#L382 ## Your Environment * Nexpose-client-python version: 0.1.0 ",0,typo in requestsiteconfig docstring the docstring for the requestsiteconfig method call has some typos python get the configuration of the specified site this function will return a single siteconfigesponse xml object api the class siteconfigesponse doesn t exist and appears to be a typo your environment nexpose client python version ,0 2100,11393219945.0,IssuesEvent,2020-01-30 05:50:40,home-assistant/home-assistant,https://api.github.com/repos/home-assistant/home-assistant,opened,Find related entity/device IDs should also look in automation conditions,integration: automation to do,"The automation integration currently does not find related entity/device IDs that are referenced in conditions. We should add this. Support for finding related entity/device IDs to automations was introduced in 0.105 via #31293. Implementation should update the extraction helpers in `automation/__init__.py` and use the condition extraction helpers defined at the bottom of `helpers/condition.py`. ",1.0,"Find related entity/device IDs should also look in automation conditions - The automation integration currently does not find related entity/device IDs that are referenced in conditions. We should add this. Support for finding related entity/device IDs to automations was introduced in 0.105 via #31293. Implementation should update the extraction helpers in `automation/__init__.py` and use the condition extraction helpers defined at the bottom of `helpers/condition.py`. ",1,find related entity device ids should also look in automation conditions the automation integration currently does not find related entity device ids that are referenced in conditions we should add this support for finding related entity device ids to automations was introduced in via implementation should update the extraction helpers in automation init py and use the condition extraction helpers defined at the bottom of helpers condition py ,1 60052,17023322075.0,IssuesEvent,2021-07-03 01:25:27,tomhughes/trac-tickets,https://api.github.com/repos/tomhughes/trac-tickets,closed,GPX upload: failed import files in db?,Component: website Priority: minor Resolution: wontfix Type: defect,"**[Submitted to the original trac issue database at 12.45am, Friday, 14th November 2008]** Lots of people on the forum has said ""My trace just disappeared"", and I got no email. May be it would be a good idea to keep the references to failed imports+the failed file, and changing the status to ""PARSE ERROR""? This is a lets let Silverstone do it post! (not really, but it's fun to confuse!)",1.0,"GPX upload: failed import files in db? - **[Submitted to the original trac issue database at 12.45am, Friday, 14th November 2008]** Lots of people on the forum has said ""My trace just disappeared"", and I got no email. May be it would be a good idea to keep the references to failed imports+the failed file, and changing the status to ""PARSE ERROR""? This is a lets let Silverstone do it post! 
(not really, but it's fun to confuse!)",0,gpx upload failed import files in db lots of people on the forum has said my trace just disappeared and i got no email may be it would be a good idea to keep the references to failed imports the failed file and changing the status to parse error this is a lets let silverstone do it post not really but it s fun to confuse ,0 3941,15014667312.0,IssuesEvent,2021-02-01 07:02:43,MISP/MISP,https://api.github.com/repos/MISP/MISP,closed,"MISP Automation , not working properly. event wise data is not getting downloaded.",T: support automation,"Hello, I am trying to automate the process of suricata rules export . I am trying this API format : https://[misp url]/events/nids/[format]/download/[eventid]/[frame]/[tags]/[from]/[to]/[last] my final API would be, let say if I want to export just for event 6: https://[misp url]/events/nids/suricata/download/6 the above event wise api is not working for any specific event id, it is exporting all the rules from all events. even when I am trying to export all the suricata rules with the api: https://[misp url]/events/nids/suricata/download it is leaving my eventa 6, 4, 1207.. to download the suricata rule for. means it is not completed. though these evens contains IDS published attributes. please let me have a solution here.",1.0,"MISP Automation , not working properly. event wise data is not getting downloaded. - Hello, I am trying to automate the process of suricata rules export . I am trying this API format : https://[misp url]/events/nids/[format]/download/[eventid]/[frame]/[tags]/[from]/[to]/[last] my final API would be, let say if I want to export just for event 6: https://[misp url]/events/nids/suricata/download/6 the above event wise api is not working for any specific event id, it is exporting all the rules from all events. even when I am trying to export all the suricata rules with the api: https://[misp url]/events/nids/suricata/download it is leaving my eventa 6, 4, 1207.. to download the suricata rule for. means it is not completed. though these evens contains IDS published attributes. please let me have a solution here.",1,misp automation not working properly event wise data is not getting downloaded hello i am trying to automate the process of suricata rules export i am trying this api format https events nids download my final api would be let say if i want to export just for event https events nids suricata download the above event wise api is not working for any specific event id it is exporting all the rules from all events even when i am trying to export all the suricata rules with the api https events nids suricata download it is leaving my eventa to download the suricata rule for means it is not completed though these evens contains ids published attributes please let me have a solution here ,1 7454,24906829947.0,IssuesEvent,2022-10-29 11:05:35,home-assistant/core,https://api.github.com/repos/home-assistant/core,closed,"(Recent) Odd behaviours in automations with no ""From"" state",integration: automation stale,"### The problem I believe this is a recent problem. I had an automation which constantly showed the ""Save"" button, even after saving it. 
I determined that this was being caused by an entity state trigger with no From state (To state was ""on""). After adding a From state ""Off"", I was able to save the automation, and subsequently the Save button stopped popping up each time. I noticed some other oddities: the automation was sometimes missing out steps, and one of the entities was showing incorrectly in the traces. Not sure if this was related to the above. ### What version of Home Assistant Core has the issue? 2022.9.1 ### What was the last working version of Home Assistant Core? unsure, but I believe this to be a recent bug ### What type of installation are you running? Home Assistant OS ### Integration causing the issue _No response_ ### Link to integration documentation on our website _No response_ ### Diagnostics information _No response_ ### Example YAML snippet ```yaml working: - platform: state entity_id: - binary_sensor.studio_chilly to: ""on"" from: ""off"" not working: - platform: state entity_id: - binary_sensor.studio_chilly to: ""on"" ``` ### Anything in the logs that might be useful for us? _No response_ ### Additional information The From field is marked as optional in the front end. This documentation advises leaving this empty as it will also trigger on change from ""Unavailable""",1.0,"(Recent) Odd behaviours in automations with no ""From"" state - ### The problem I believe this is a recent problem. I had an automation which constantly showed the ""Save"" button, even after saving it. I determined that this was being caused by an entity state trigger with no From state (To state was ""on""). After adding a From state ""Off"", I was able to save the automation, and subsequently the Save button stopped popping up each time. I noticed some other oddities: the automation was sometimes missing out steps, and one of the entities was showing incorrectly in the traces. Not sure if this was related to the above. ### What version of Home Assistant Core has the issue? 2022.9.1 ### What was the last working version of Home Assistant Core? unsure, but I believe this to be a recent bug ### What type of installation are you running? Home Assistant OS ### Integration causing the issue _No response_ ### Link to integration documentation on our website _No response_ ### Diagnostics information _No response_ ### Example YAML snippet ```yaml working: - platform: state entity_id: - binary_sensor.studio_chilly to: ""on"" from: ""off"" not working: - platform: state entity_id: - binary_sensor.studio_chilly to: ""on"" ``` ### Anything in the logs that might be useful for us? _No response_ ### Additional information The From field is marked as optional in the front end. 
This documentation advises leaving this empty as it will also trigger on change from ""Unavailable""",1, recent odd behaviours in automations with no from state the problem i believe this is a recent problem i had an automation which constantly showed the save button even after saving it i determined that this was being caused by an entity state trigger with no from state to state was on after adding a frome state off i was able to save the animation and subsequently the save button stopped popping up each time i noticed some other oddities the animation was sometimes missing out steps and one of the entities was showing incorrectly in the traces not sure if this was related to the above what version of home assistant core has the issue what was the last working version of home assistant core unsure but i believe this to be a recent bug what type of installation are you running home assistant os integration causing the issue no response link to integration documentation on our website no response diagnostics information no response example yaml snippet yaml working platform state entity id binary sensor studio chilly to on from off not working platform state entity id binary sensor studio chilly to on anything in the logs that might be useful for us no response additional information the from field is marked as optional in the front end this documentation advises leaving this empty as it will also trigger on change from unavailable ,1 779,8100750093.0,IssuesEvent,2018-08-12 03:30:08,johnnyflowers/schoolcalendars,https://api.github.com/repos/johnnyflowers/schoolcalendars,closed,Checksum PDFs,automation,Each calendar should have an associated checksum kept in the database. The spider should check that found calendars' checksums don't match any that we already have.,1.0,Checksum PDFs - Each calendar should have an associated checksum kept in the database. The spider should check that found calendars' checksums don't match any that we already have.,1,checksum pdfs each calendar should have an associated checksum kept in the database the spider should check that found calendars checksums don t match any that we already have ,1 8160,26341156290.0,IssuesEvent,2023-01-10 17:46:50,awslabs/aws-lambda-powertools-typescript,https://api.github.com/repos/awslabs/aws-lambda-powertools-typescript,closed,Maintenance: integrate utility with CI/CD merge process,area/automation area/parameters status/completed,"## Description of the feature request **Problem statement** Every time a PR is merged a number of checks and actions are run on the utilities to make sure that the changes introduced are compatible with the existing code (this is somewhat redundant as these same checks also run in the PR itself - but better be safe than sorry). Additionally, the workflow that runs on merge also builds the documentation and API docs and publishes it. **Summary of the feature** This unit of work tracks the activities and changes needed for the new utility to be part of the merge workflow. 
**Code examples** N/A **Benefits for you and the wider AWS community** N/A **Describe alternatives you've considered** N/A **Additional context** N/A ### Related issues, RFCs #846 ",1.0,"Maintenance: integrate utility with CI/CD merge process - ## Description of the feature request **Problem statement** Every time a PR is merged a number of checks and actions are run on the utilities to make sure that the changes introduced are compatible with the existing code (this is somewhat redundant as these same checks also run in the PR itself - but better be safe than sorry). Additionally, the workflow that runs on merge also builds the documentation and API docs and publishes it. **Summary of the feature** This unit of work tracks the activities and changes needed for the new utility to be part of the merge workflow. **Code examples** N/A **Benefits for you and the wider AWS community** N/A **Describe alternatives you've considered** N/A **Additional context** N/A ### Related issues, RFCs #846 ",1,maintenance integrate utility with ci cd merge process description of the feature request problem statement every time a pr is merged a number of checks and actions are run on the utilities to make sure that the changes introduced are compatible with the existing code this is somewhat redundant as these same checks also run in the pr itself but better be safe than sorry additionally the workflow that runs on merge also builds the documentation and api docs and publishes it summary of the feature this unit of work tracks the activities and changes needed for the new utility to be part of the merge workflow code examples n a benefits for you and the wider aws community n a describe alternatives you ve considered n a additional context n a related issues rfcs ,1 78938,22549623301.0,IssuesEvent,2022-06-27 03:09:41,rust-lang/rust,https://api.github.com/repos/rust-lang/rust,closed,move `download-ci-llvm` logic from bootstrap.py to rustbuild,C-cleanup A-LLVM A-rustbuild,"As discussed a bit in the comments of #77756 , it'd be great if the logic handling LLVM setup / download was integrated with rustbuild directly. The main blocker being that rustbuild would need to depend on an http client library. Currently, the python script does system calls [to `curl` or to Powershell's `WebClient`](https://github.com/rust-lang/rust/blob/7f587168102498a488abf608a86c7fdfa62fb7bb/src/bootstrap/bootstrap.py#L81-L98) depending on the platform.",1.0,"move `download-ci-llvm` logic from bootstrap.py to rustbuild - As discussed a bit in the comments of #77756 , it'd be great if the logic handling LLVM setup / download was integrated with rustbuild directly. The main blocker being that rustbuild would need to depend on an http client library. 
Currently, the python script does system calls [to `curl` or to Powershell's `WebClient`](https://github.com/rust-lang/rust/blob/7f587168102498a488abf608a86c7fdfa62fb7bb/src/bootstrap/bootstrap.py#L81-L98) depending on the platform.",0,move download ci llvm logic from bootstrap py to rustbuild as discussed a bit in the comments of it d be great if the logic handling llvm setup download was integrated with rustbuild directly the main blocker being that rustbuild would need to depend on an http client library currently the python script does system calls depending on the platform ,0 2396,11865061587.0,IssuesEvent,2020-03-25 23:14:08,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,closed,Preserve exit code when docker-compose exists with an error,automation,"Invoking apm-integration testing doesn't preserve the exit code from docker-compose. **Demonstration** ``` scripts/compose.py start master ``` While running the above command, I kill `docker` which results in the following error: ``` ERROR: dial unix docker.raw.sock: connect: connection refused ``` However, the exit code is 0 indicating that everything went fine. ![image](https://user-images.githubusercontent.com/209966/77426404-b2e3f680-6dd4-11ea-9055-8bfc738bf641.png) ",1.0,"Preserve exit code when docker-compose exists with an error - Invoking apm-integration testing doesn't preserve the exit code from docker-compose. **Demonstration** ``` scripts/compose.py start master ``` While running the above command, I kill `docker` which results in the following error: ``` ERROR: dial unix docker.raw.sock: connect: connection refused ``` However, the exit code is 0 indicating that everything went fine. ![image](https://user-images.githubusercontent.com/209966/77426404-b2e3f680-6dd4-11ea-9055-8bfc738bf641.png) ",1,preserve exit code when docker compose exists with an error invoking apm integration testing doesn t preserve the exit code from docker compose demonstration scripts compose py start master while running the above command i kill docker which results in the following error error dial unix docker raw sock connect connection refused however the exit code is indicating that everything went fine ,1 578077,17143646249.0,IssuesEvent,2021-07-13 12:29:20,GoogleCloudPlatform/python-docs-samples,https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples,opened,storage.cloud-client.public_access_prevention_test: test_get_public_access_prevention failed,flakybot: issue priority: p1 type: bug,"This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: b99df8d36109e4fe3e397bfd2cbacac06960340c buildURL: [Build Status](https://source.cloud.google.com/results/invocations/d11f9a83-4e62-49ef-b85e-d3db176a3e31), [Sponge](http://sponge2/d11f9a83-4e62-49ef-b85e-d3db176a3e31) status: failed
Test output
Traceback (most recent call last):
  File ""/workspace/storage/cloud-client/conftest.py"", line 33, in bucket
    while bucket is None or bucket.exists():
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/cloud/storage/bucket.py"", line 804, in exists
    client._get_resource(
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/cloud/storage/client.py"", line 368, in _get_resource
    return self._connection.api_request(
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/cloud/storage/_http.py"", line 78, in api_request
    return call()
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/retry.py"", line 285, in retry_wrapped_func
    return retry_target(
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/retry.py"", line 188, in retry_target
    return target()
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/cloud/_http.py"", line 473, in api_request
    response = self._make_request(
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/cloud/_http.py"", line 337, in _make_request
    return self._do_request(
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/cloud/_http.py"", line 375, in _do_request
    return self.http.request(
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/auth/transport/requests.py"", line 476, in request
    self.credentials.before_request(auth_request, method, url, request_headers)
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/auth/credentials.py"", line 133, in before_request
    self.refresh(request)
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/oauth2/service_account.py"", line 407, in refresh
    access_token, expiry, _ = _client.jwt_grant(
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/oauth2/_client.py"", line 193, in jwt_grant
    response_data = _token_endpoint_request(request, token_uri, body)
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/oauth2/_client.py"", line 165, in _token_endpoint_request
    _handle_error_response(response_data)
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/oauth2/_client.py"", line 60, in _handle_error_response
    raise exceptions.RefreshError(error_details, response_data)
google.auth.exceptions.RefreshError: ('invalid_grant: Invalid JWT Signature.', {'error': 'invalid_grant', 'error_description': 'Invalid JWT Signature.'})
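The frames above show that this flaky run never reaches Cloud Storage itself: the `bucket` fixture in `conftest.py` polls `bucket.exists()`, and the request dies while google-auth exchanges the service-account JWT for an access token. A minimal sketch of a fixture that fails fast on that non-retryable credential error follows; the fixture body and bucket naming are assumptions for illustration, not code from the sample repo — only the failure mode is taken from the log above.

```python
# Hypothetical sketch: abort immediately on a credential failure instead of
# letting the exists() polling loop retry a key that can never work.
import uuid

import pytest
from google.auth.exceptions import RefreshError
from google.cloud import storage


@pytest.fixture
def bucket():
    """Yield a throwaway bucket, aborting fast on credential failures.

    'invalid_grant: Invalid JWT Signature.' means the service-account key
    itself is rejected, so retrying bucket.exists() cannot succeed.
    """
    client = storage.Client()
    name = f"test-bucket-{uuid.uuid4().hex}"  # illustrative naming scheme
    try:
        b = client.create_bucket(name)
    except RefreshError as exc:
        pytest.fail(f"service-account credentials are unusable: {exc}")
    yield b
    b.delete(force=True)
```

Separating `RefreshError` from the transient HTTP errors the fixture is actually polling for keeps a bad key from burning the fixture's full retry budget.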
",1.0,"storage.cloud-client.public_access_prevention_test: test_get_public_access_prevention failed - This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: b99df8d36109e4fe3e397bfd2cbacac06960340c buildURL: [Build Status](https://source.cloud.google.com/results/invocations/d11f9a83-4e62-49ef-b85e-d3db176a3e31), [Sponge](http://sponge2/d11f9a83-4e62-49ef-b85e-d3db176a3e31) status: failed
Test output
Traceback (most recent call last):
  File ""/workspace/storage/cloud-client/conftest.py"", line 33, in bucket
    while bucket is None or bucket.exists():
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/cloud/storage/bucket.py"", line 804, in exists
    client._get_resource(
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/cloud/storage/client.py"", line 368, in _get_resource
    return self._connection.api_request(
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/cloud/storage/_http.py"", line 78, in api_request
    return call()
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/retry.py"", line 285, in retry_wrapped_func
    return retry_target(
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/retry.py"", line 188, in retry_target
    return target()
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/cloud/_http.py"", line 473, in api_request
    response = self._make_request(
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/cloud/_http.py"", line 337, in _make_request
    return self._do_request(
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/cloud/_http.py"", line 375, in _do_request
    return self.http.request(
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/auth/transport/requests.py"", line 476, in request
    self.credentials.before_request(auth_request, method, url, request_headers)
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/auth/credentials.py"", line 133, in before_request
    self.refresh(request)
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/oauth2/service_account.py"", line 407, in refresh
    access_token, expiry, _ = _client.jwt_grant(
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/oauth2/_client.py"", line 193, in jwt_grant
    response_data = _token_endpoint_request(request, token_uri, body)
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/oauth2/_client.py"", line 165, in _token_endpoint_request
    _handle_error_response(response_data)
  File ""/workspace/storage/cloud-client/.nox/py-3-9/lib/python3.9/site-packages/google/oauth2/_client.py"", line 60, in _handle_error_response
    raise exceptions.RefreshError(error_details, response_data)
google.auth.exceptions.RefreshError: ('invalid_grant: Invalid JWT Signature.', {'error': 'invalid_grant', 'error_description': 'Invalid JWT Signature.'})
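As the frames from `google/oauth2/service_account.py` and `google/oauth2/_client.py` indicate, the `invalid_grant: Invalid JWT Signature.` response comes back from the token endpoint while the credentials object signs its JWT assertion — before any storage RPC is issued — and the usual cause is a rotated or deleted service-account key. A hedged sketch of where that key enters the client; the key path is a placeholder (a typical CI job would resolve it via GOOGLE_APPLICATION_CREDENTIALS instead):

```python
# Placeholder key path, shown only to make the failing hop visible.
from google.cloud import storage
from google.oauth2 import service_account

creds = service_account.Credentials.from_service_account_file(
    "/path/to/service-account.json",  # placeholder
    scopes=["https://www.googleapis.com/auth/devstorage.read_write"],
)
# refresh() signs a JWT assertion with this key and posts it to the token
# endpoint (jwt_grant in google/oauth2/_client.py); a stale key is answered
# with 'invalid_grant: Invalid JWT Signature.'
client = storage.Client(credentials=creds, project=creds.project_id)
```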
",0,storage cloud client public access prevention test test get public access prevention failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output traceback most recent call last file workspace storage cloud client conftest py line in bucket while bucket is none or bucket exists file workspace storage cloud client nox py lib site packages google cloud storage bucket py line in exists client get resource file workspace storage cloud client nox py lib site packages google cloud storage client py line in get resource return self connection api request file workspace storage cloud client nox py lib site packages google cloud storage http py line in api request return call file workspace storage cloud client nox py lib site packages google api core retry py line in retry wrapped func return retry target file workspace storage cloud client nox py lib site packages google api core retry py line in retry target return target file workspace storage cloud client nox py lib site packages google cloud http py line in api request response self make request file workspace storage cloud client nox py lib site packages google cloud http py line in make request return self do request file workspace storage cloud client nox py lib site packages google cloud http py line in do request return self http request file workspace storage cloud client nox py lib site packages google auth transport requests py line in request self credentials before request auth request method url request headers file workspace storage cloud client nox py lib site packages google auth credentials py line in before request self refresh request file workspace storage cloud client nox py lib site packages google service account py line in refresh access token expiry client jwt grant file workspace storage cloud client nox py lib site packages google client py line in jwt grant response data token endpoint request request token uri body file workspace storage cloud client nox py lib site packages google client py line in token endpoint request handle error response response data file workspace storage cloud client nox py lib site packages google client py line in handle error response raise exceptions refresherror error details response data google auth exceptions refresherror invalid grant invalid jwt signature error invalid grant error description invalid jwt signature ,0 1052,9303952260.0,IssuesEvent,2019-03-24 21:20:29,scrum-gang/jobhub-web,https://api.github.com/repos/scrum-gang/jobhub-web,closed,Implement Selenium Testing for Home Page,automation,"### Describe why the project needs this task Selenium is a technology that automates browser functions, such as clicking buttons or scrolling down a page, which is useful in testing UI and user interactions with various application functionalities. Testing the UI will ensure interactions with various page elements are as expected, in the closest thing to a real user test possible. ### Describe the solution to the problem Tests will be implemented for: - Signing in - Navigating to different pages ",1.0,"Implement Selenium Testing for Home Page - ### Describe why the project needs this task Selenium is a technology that automates browser functions, such as clicking buttons or scrolling down a page, which is useful in testing UI and user interactions with various application functionalities. 
Testing the UI will ensure interactions with various page elements are as expected, in the closest thing to a real user test possible. ### Describe the solution to the problem Tests will be implemented for: - Signing in - Navigating to different pages ",1,implement selenium testing for home page describe why the project needs this task selenium is a technology that automates browser functions such as clicking buttons or scrolling down a page which is useful in testing ui and user interactions with various application functionalities testing the ui will ensure interactions with various page elements are as expected in the closest thing to a real user test possible describe the solution to the problem tests will be implemented for signing in navigating to different pages ,1 12940,3295847264.0,IssuesEvent,2015-11-01 10:26:00,ledgersmb/LedgerSMB,https://api.github.com/repos/ledgersmb/LedgerSMB,closed,Test loading of preconfigured 'CoA' files,enhancement testing,"As part of our database testing routines, a test should be introduced to check that all provided CoA files can be succesfully loaded into LedgerSMB. Point in case: we shipped spanish CoA files for many releases which couldn't be loaded at all.",1.0,"Test loading of preconfigured 'CoA' files - As part of our database testing routines, a test should be introduced to check that all provided CoA files can be succesfully loaded into LedgerSMB. Point in case: we shipped spanish CoA files for many releases which couldn't be loaded at all.",0,test loading of preconfigured coa files as part of our database testing routines a test should be introduced to check that all provided coa files can be succesfully loaded into ledgersmb point in case we shipped spanish coa files for many releases which couldn t be loaded at all ,0 3874,14856172638.0,IssuesEvent,2021-01-18 13:48:18,rudiments-dev/hardcore,https://api.github.com/repos/rudiments-dev/hardcore,opened,"Инфраструктура для демо, деплой через GH Actions",automation domain registry example app,"- [ ] Собрать демо в облаке AWS Free Tier - [ ] Автоматизировать деплой domain-registry для демо",1.0,"Инфраструктура для демо, деплой через GH Actions - - [ ] Собрать демо в облаке AWS Free Tier - [ ] Автоматизировать деплой domain-registry для демо",1,инфраструктура для демо деплой через gh actions собрать демо в облаке aws free tier автоматизировать деплой domain registry для демо,1 1664,10552291010.0,IssuesEvent,2019-10-03 14:54:11,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,closed,Increase UI test wait time constant,eng:automation,"Per @sv-ohorvath and @npark-mozilla a few flaky UI test issues may be caused by waitingtime value being too low. We should prob increase it from 15 to 45 seconds for those rare test runs in which a device may be temporarily running low on resources https://console.firebase.google.com/u/2/project/moz-fenix/testlab/histories/bh.66b7091e15d53d45/matrices/7400905977103619111/executions/bs.fda44e88a405c81c/test-cases",1.0,"Increase UI test wait time constant - Per @sv-ohorvath and @npark-mozilla a few flaky UI test issues may be caused by waitingtime value being too low. 
We should prob increase it from 15 to 45 seconds for those rare test runs in which a device may be temporarily running low on resources https://console.firebase.google.com/u/2/project/moz-fenix/testlab/histories/bh.66b7091e15d53d45/matrices/7400905977103619111/executions/bs.fda44e88a405c81c/test-cases",1,increase ui test wait time constant per sv ohorvath and npark mozilla a few flaky ui test issues may be caused by waitingtime value being too low we should prob increase it from to seconds for those rare test runs in which a device may be temporarily running low on resources ,1 64209,8718621017.0,IssuesEvent,2018-12-07 21:05:00,smiths/MEASURE,https://api.github.com/repos/smiths/MEASURE,closed,Close Vena for rubric entry,Documentation,"- Rubric and Course Report Entry will close Friday December 7, 2018 at 4:00 PM EST - Maria extended the deadline by one week",1.0,"Close Vena for rubric entry - - Rubric and Course Report Entry will close Friday December 7, 2018 at 4:00 PM EST - Maria extended the deadline by one week",0,close vena for rubric entry rubric and course report entry will close friday december at pm est maria extended the deadline by one week,0 4088,15370711806.0,IssuesEvent,2021-03-02 09:08:49,mozilla-mobile/firefox-ios,https://api.github.com/repos/mozilla-mobile/firefox-ios,closed,FXIOS-1532 ⁃ [UITests] Update test after WKWebview change to keep them green,eng:automation,"After this commit db8fd299959cd420a7b8044f00bc4d58c5667e0e UI tests started to fail not detecting the elements in the WebView. As in that commit we need to update also the [evaluateJavaScript](https://github.com/mozilla-mobile/firefox-ios/blob/main/UITests/Global.swift#L194) to `evaluateJavascriptInDefaultContentWorld`. With that change tests will work again ┆Issue is synchronized with this [Jira Task](https://jira.mozilla.com/browse/FXIOS-1532) ",1.0,"FXIOS-1532 ⁃ [UITests] Update test after WKWebview change to keep them green - After this commit db8fd299959cd420a7b8044f00bc4d58c5667e0e UI tests started to fail not detecting the elements in the WebView. As in that commit we need to update also the [evaluateJavaScript](https://github.com/mozilla-mobile/firefox-ios/blob/main/UITests/Global.swift#L194) to `evaluateJavascriptInDefaultContentWorld`. With that change tests will work again ┆Issue is synchronized with this [Jira Task](https://jira.mozilla.com/browse/FXIOS-1532) ",1,fxios ⁃ update test after wkwebview change to keep them green after this commit ui tests started to fail not detecting the elements in the webview as in that commit we need to update also the to evaluatejavascriptindefaultcontentworld with that change tests will work again ┆issue is synchronized with this ,1 3944,15019081773.0,IssuesEvent,2021-02-01 13:05:56,keptn/keptn,https://api.github.com/repos/keptn/keptn,closed,Fix coverage reporting,automation type:chore,"Our code coverage reporting on PRs often shows wrong numbers. In addition, coverage reporting on GitHub README page seems broken: ![image](https://user-images.githubusercontent.com/56065213/104918064-823d0100-5994-11eb-98cf-9e34d717b92c.png) Our coverage should be way higher, but it seems that it always dips down to 22% for no reason: https://codecov.io/gh/keptn/keptn ![image](https://user-images.githubusercontent.com/56065213/104918133-9b45b200-5994-11eb-86d5-934dd7faf80a.png) # Definition of Done - [ ] Coverage Reporting on PR is correct - [ ] Coverage Reportingo n master is correct ",1.0,"Fix coverage reporting - Our code coverage reporting on PRs often shows wrong numbers. 
In addition, coverage reporting on GitHub README page seems broken: ![image](https://user-images.githubusercontent.com/56065213/104918064-823d0100-5994-11eb-98cf-9e34d717b92c.png) Our coverage should be way higher, but it seems that it always dips down to 22% for no reason: https://codecov.io/gh/keptn/keptn ![image](https://user-images.githubusercontent.com/56065213/104918133-9b45b200-5994-11eb-86d5-934dd7faf80a.png) # Definition of Done - [ ] Coverage Reporting on PR is correct - [ ] Coverage Reportingo n master is correct ",1,fix coverage reporting our code coverage reporting on prs often shows wrong numbers in addition coverage reporting on github readme page seems broken our coverage should be way higher but it seems that it always dips down to for no reason definition of done coverage reporting on pr is correct coverage reportingo n master is correct ,1 4417,16506311230.0,IssuesEvent,2021-05-25 19:48:09,inboundnow/inbound-pro,https://api.github.com/repos/inboundnow/inbound-pro,closed,[speed][automation] Improvements that will affect site loading speed,Automation UX Enhancement,"As of now automation component loads all rules from the CPT and then loops through them each, loading their meta to detect their status. This happens globally. If the rule was enabled/disabled at the wp_posts table level, leveraging the post_status field, then we could ignore disabled rules, preventing unnecessary overhead. https://github.com/inboundnow/inbound-pro/blob/master/core/automation/classes/class.definitions.loader.php#L288 As it stands now. Site load time will increase unnecessarily as user keeps adding rules and Inbound Now supports additional triggers. Also need a way to to disable/custom-select tracking of action hooks. Needs to be a setting included in the $inbound_settings global variable so no additional loading has to occur. ",1.0,"[speed][automation] Improvements that will affect site loading speed - As of now automation component loads all rules from the CPT and then loops through them each, loading their meta to detect their status. This happens globally. If the rule was enabled/disabled at the wp_posts table level, leveraging the post_status field, then we could ignore disabled rules, preventing unnecessary overhead. https://github.com/inboundnow/inbound-pro/blob/master/core/automation/classes/class.definitions.loader.php#L288 As it stands now. Site load time will increase unnecessarily as user keeps adding rules and Inbound Now supports additional triggers. Also need a way to to disable/custom-select tracking of action hooks. Needs to be a setting included in the $inbound_settings global variable so no additional loading has to occur. 
",1, improvements that will affect site loading speed as of now automation component loads all rules from the cpt and then loops through them each loading their meta to detect their status this happens globally if the rule was enabled disabled at the wp posts table level leveraging the post status field then we could ignore disabled rules preventing unnecessary overhead as it stands now site load time will increase unnecessarily as user keeps adding rules and inbound now supports additional triggers also need a way to to disable custom select tracking of action hooks needs to be a setting included in the inbound settings global variable so no additional loading has to occur ,1 796343,28107475641.0,IssuesEvent,2023-03-31 02:49:02,ansible-collections/azure,https://api.github.com/repos/ansible-collections/azure,closed,Could not find member 'template' on object of type DeploymentContent,medium_priority work in,"##### SUMMARY When doing ansible deployment using `azure_rm_deployment`, the push is errored out with - ""Message: The request content was invalid and could not be deserialized: 'Could not find member 'template' on object of type 'DeploymentContent'. Path 'template', line 1, position 12.'."" ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ``` ""stderr_lines"": [ ""/var/lib/awx/venv/ansible/lib/python3.6/site-packages/OpenSSL/crypto.py:8: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release."", "" from cryptography import utils, x509"", ""Traceback (most recent call last):"", "" File \""/var/lib/awx/.ansible/tmp/ansible-tmp-1680162048.550992-25028-70112161155416/AnsiballZ_azure_rm_deployment.py\"", line 102, in "", "" _ansiballz_main()"", "" File \""/var/lib/awx/.ansible/tmp/ansible-tmp-1680162048.550992-25028-70112161155416/AnsiballZ_azure_rm_deployment.py\"", line 94, in _ansiballz_main"", "" invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)"", "" File \""/var/lib/awx/.ansible/tmp/ansible-tmp-1680162048.550992-25028-70112161155416/AnsiballZ_azure_rm_deployment.py\"", line 40, in invoke_module"", "" runpy.run_module(mod_name='ansible.modules.azure_rm_deployment', init_globals=None, run_name='__main__', alter_sys=True)"", "" File \""/usr/lib64/python3.6/runpy.py\"", line 205, in run_module"", "" return _run_module_code(code, init_globals, run_name, mod_spec)"", "" File \""/usr/lib64/python3.6/runpy.py\"", line 96, in _run_module_code"", "" mod_name, mod_spec, pkg_name, script_name)"", "" File \""/usr/lib64/python3.6/runpy.py\"", line 85, in _run_code"", "" exec(code, run_globals)"", "" File \""/tmp/ansible_azure_rm_deployment_payload__leee341/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 699, in "", "" File \""/tmp/ansible_azure_rm_deployment_payload__leee341/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 695, in main"", "" File \""/tmp/ansible_azure_rm_deployment_payload__leee341/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 470, in __init__"", "" File \""/tmp/ansible_azure_rm_deployment_payload__leee341/ansible_azure_rm_deployment_payload.zip/ansible/module_utils/azure_rm_common.py\"", line 349, in __init__"", "" File \""/tmp/ansible_azure_rm_deployment_payload__leee341/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 477, in exec_module"", "" File 
\""/tmp/ansible_azure_rm_deployment_payload__leee341/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 552, in deploy_template"", "" File \""/usr/lib/python3.6/site-packages/azure/core/tracing/decorator.py\"", line 78, in wrapper_use_tracer"", "" return func(*args, **kwargs)"", "" File \""/var/lib/awx/venv/ansible/lib/python3.6/site-packages/azure/mgmt/resource/resources/v2019_10_01/operations/_operations.py\"", line 6882, in begin_create_or_update"", "" **kwargs"", "" File \""/var/lib/awx/venv/ansible/lib/python3.6/site-packages/azure/mgmt/resource/resources/v2019_10_01/operations/_operations.py\"", line 6816, in _create_or_update_initial"", "" raise HttpResponseError(response=response, error_format=ARMErrorFormat)"", ""azure.core.exceptions.HttpResponseError: (InvalidRequestContent) The request content was invalid and could not be deserialized: 'Could not find member 'template' on object of type 'DeploymentContent'. Path 'template', line 1, position 12.'."", ""Code: InvalidRequestContent"", ""Message: The request content was invalid and could not be deserialized: 'Could not find member 'template' on object of type 'DeploymentContent'. Path 'template', line 1, position 12.'."" ], ``` ##### ANSIBLE VERSION ``` ansible 2.9.19 config file = /etc/ansible/ansible.cfg configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.6/site-packages/ansible executable location = /usr/bin/ansible python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] ``` ##### COLLECTION VERSION ``` azure-appconfiguration 1.1.1 azure-batch 12.0.0 azure-cli 2.39.0 azure-cli-core 2.39.0 azure-cli-nspkg 3.0.2 azure-cli-telemetry 1.0.7 azure-common 1.1.11 azure-core 1.24.2 azure-cosmos 3.2.0 azure-data-tables 12.4.0 azure-datalake-store 0.0.52 azure-graphrbac 0.60.0 azure-identity 1.10.0 azure-keyvault 1.1.0 azure-keyvault-administration 4.0.0b3 azure-keyvault-keys 4.5.1 azure-loganalytics 0.1.1 azure-mgmt-advisor 9.0.0 azure-mgmt-apimanagement 3.0.0 azure-mgmt-appconfiguration 2.1.0 azure-mgmt-applicationinsights 1.0.0 azure-mgmt-authorization 0.61.0 azure-mgmt-automation 1.0.0 azure-mgmt-batch 16.2.0 azure-mgmt-batchai 7.0.0b1 azure-mgmt-billing 6.0.0 azure-mgmt-botservice 2.0.0b3 azure-mgmt-cdn 12.0.0 azure-mgmt-cognitiveservices 13.2.0 azure-mgmt-compute 27.1.0 azure-mgmt-consumption 2.0.0 azure-mgmt-containerinstance 9.1.0 azure-mgmt-containerregistry 8.2.0 azure-mgmt-containerservice 20.2.0 azure-mgmt-core 1.3.2 azure-mgmt-cosmosdb 7.0.0b6 azure-mgmt-databoxedge 1.0.0 azure-mgmt-datalake-analytics 0.2.1 azure-mgmt-datalake-nspkg 3.0.1 azure-mgmt-datalake-store 0.5.0 azure-mgmt-datamigration 10.0.0 azure-mgmt-deploymentmanager 0.2.0 azure-mgmt-devtestlabs 4.0.0 azure-mgmt-dns 8.0.0 azure-mgmt-eventgrid 10.2.0b2 azure-mgmt-eventhub 10.1.0 azure-mgmt-extendedlocation 1.0.0b2 azure-mgmt-hdinsight 9.0.0 azure-mgmt-imagebuilder 1.1.0 azure-mgmt-iotcentral 10.0.0b1 azure-mgmt-iothub 2.2.0 azure-mgmt-iothubprovisioningservices 1.1.0 azure-mgmt-keyvault 9.3.0 azure-mgmt-kusto 0.3.0 azure-mgmt-loganalytics 13.0.0b4 azure-mgmt-managedservices 1.0.0 azure-mgmt-managementgroups 1.0.0 azure-mgmt-maps 2.0.0 azure-mgmt-marketplaceordering 1.1.0 azure-mgmt-media 9.0.0 azure-mgmt-monitor 3.0.0 azure-mgmt-msi 6.0.1 azure-mgmt-netapp 8.0.0 azure-mgmt-network 20.0.0 azure-mgmt-nspkg 3.0.2 azure-mgmt-policyinsights 1.1.0b2 azure-mgmt-privatedns 1.0.0 
azure-mgmt-rdbms 10.2.0b1 azure-mgmt-recoveryservices 2.0.0 azure-mgmt-recoveryservicesbackup 5.0.0 azure-mgmt-redhatopenshift 1.1.0 azure-mgmt-redis 13.1.0 azure-mgmt-relay 0.1.0 azure-mgmt-reservations 2.0.0 azure-mgmt-resource 21.1.0b1 azure-mgmt-search 8.0.0 azure-mgmt-security 2.0.0b1 azure-mgmt-servicebus 7.1.0 azure-mgmt-servicefabric 1.0.0 azure-mgmt-servicefabricmanagedclusters 1.0.0 azure-mgmt-servicelinker 1.0.0 azure-mgmt-signalr 1.0.0b2 azure-mgmt-sql 4.0.0b2 azure-mgmt-sqlvirtualmachine 1.0.0b3 azure-mgmt-storage 20.0.0 azure-mgmt-synapse 2.1.0b2 azure-mgmt-trafficmanager 1.0.0 azure-mgmt-web 7.0.0 azure-multiapi-storage 0.9.0 azure-nspkg 3.0.2 azure-storage 0.35.1 azure-storage-common 1.4.2 azure-synapse-accesscontrol 0.5.0 azure-synapse-artifacts 0.13.0 azure-synapse-managedprivateendpoints 0.3.0 azure-synapse-spark 0.2.0 msrest 0.7.1 msrestazure 0.6.4 ``` ##### CONFIGURATION ``` (ansible) bash-4.4$ ansible-config dump --only-changed /usr/lib/python3.6/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release. from cryptography.exceptions import InvalidSignature ``` ##### OS / ENVIRONMENT ``` (ansible) bash-4.4$ cat /etc/*release* CentOS Linux release 8.2.2004 (Core) Derived from Red Hat Enterprise Linux 8.2 (Source) NAME=""CentOS Linux"" VERSION=""8 (Core)"" ID=""centos"" ID_LIKE=""rhel fedora"" VERSION_ID=""8"" PLATFORM_ID=""platform:el8"" PRETTY_NAME=""CentOS Linux 8 (Core)"" ANSI_COLOR=""0;31"" CPE_NAME=""cpe:/o:centos:centos:8"" HOME_URL=""https://www.centos.org/"" BUG_REPORT_URL=""https://bugs.centos.org/"" CENTOS_MANTISBT_PROJECT=""CentOS-8"" CENTOS_MANTISBT_PROJECT_VERSION=""8"" REDHAT_SUPPORT_PRODUCT=""centos"" REDHAT_SUPPORT_PRODUCT_VERSION=""8"" CentOS Linux release 8.2.2004 (Core) CentOS Linux release 8.2.2004 (Core) cpe:/o:centos:centos:8 ``` ##### STEPS TO REPRODUCE * The problem came up all of a sudden while it used to be working and it's errored out when adding new vnet * Related tasks ``` - name: Deploy Base Network Components to Azure azure_rm_deployment: name: ""{{ item.1.env_code }}-network-base-deployment"" resource_group: ""{{ item.1.env_code }}{{ rg_suffix }}"" template: ""{{ lookup('template', 'templates/rendered/' + item.1.env_code + rg_suffix + '_network-base-deployment.json' ) }}"" location: ""{{ item.0.region_name }}"" # subscription_id: ""{{ subscription_id_production if item.1.production == true else subscription_id_pre_prod}}"" with_subelements: - ""{{ regions }}"" - environments async: 3600 poll: 0 register: base_deploy tags: - base - deploy_task - name: Wait for base VNET Deploy async_status: jid: ""{{ item.ansible_job_id }}"" register: job_result until: job_result.finished retries: 300 tags: - base - deploy_task with_items: ""{{ base_deploy.results }}"" ``` ##### EXPECTED RESULTS * Expecting Async poll would be successful in return ##### ACTUAL RESULTS * Got ` ""Message: The request content was invalid and could not be deserialized: 'Could not find member 'template' on object of type 'DeploymentContent'. 
Path 'template', line 1, position 12.'.""` ``` ""cmd"": ""/var/lib/awx/.ansible/tmp/ansible-tmp-1680144009.9892657-44592-5293828901974/AnsiballZ_azure_rm_deployment.py"", ""data"": """", ""stderr"": ""/var/lib/awx/venv/ansible/lib/python3.6/site-packages/OpenSSL/crypto.py:8: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.\n from cryptography import utils, x509\nTraceback (most recent call last):\n File \""/var/lib/awx/.ansible/tmp/ansible-tmp-1680144009.9892657-44592-5293828901974/AnsiballZ_azure_rm_deployment.py\"", line 102, in \n _ansiballz_main()\n File \""/var/lib/awx/.ansible/tmp/ansible-tmp-1680144009.9892657-44592-5293828901974/AnsiballZ_azure_rm_deployment.py\"", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \""/var/lib/awx/.ansible/tmp/ansible-tmp-1680144009.9892657-44592-5293828901974/AnsiballZ_azure_rm_deployment.py\"", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.azure_rm_deployment', init_globals=None, run_name='__main__', alter_sys=True)\n File \""/usr/lib64/python3.6/runpy.py\"", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \""/usr/lib64/python3.6/runpy.py\"", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \""/usr/lib64/python3.6/runpy.py\"", line 85, in _run_code\n exec(code, run_globals)\n File \""/tmp/ansible_azure_rm_deployment_payload_yq3z7221/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 699, in \n File \""/tmp/ansible_azure_rm_deployment_payload_yq3z7221/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 695, in main\n File \""/tmp/ansible_azure_rm_deployment_payload_yq3z7221/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 470, in __init__\n File \""/tmp/ansible_azure_rm_deployment_payload_yq3z7221/ansible_azure_rm_deployment_payload.zip/ansible/module_utils/azure_rm_common.py\"", line 349, in __init__\n File \""/tmp/ansible_azure_rm_deployment_payload_yq3z7221/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 477, in exec_module\n File \""/tmp/ansible_azure_rm_deployment_payload_yq3z7221/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 552, in deploy_template\n File \""/usr/lib/python3.6/site-packages/azure/core/tracing/decorator.py\"", line 78, in wrapper_use_tracer\n return func(*args, **kwargs)\n File \""/var/lib/awx/venv/ansible/lib/python3.6/site-packages/azure/mgmt/resource/resources/v2019_05_01/operations/_operations.py\"", line 3861, in begin_create_or_update\n **kwargs\n File \""/var/lib/awx/venv/ansible/lib/python3.6/site-packages/azure/mgmt/resource/resources/v2019_05_01/operations/_operations.py\"", line 3795, in _create_or_update_initial\n raise HttpResponseError(response=response, error_format=ARMErrorFormat)\nazure.core.exceptions.HttpResponseError: (InvalidRequestContent) The request content was invalid and could not be deserialized: 'Could not find member 'template' on object of type 'DeploymentContent'. Path 'template', line 1, position 12.'.\nCode: InvalidRequestContent\nMessage: The request content was invalid and could not be deserialized: 'Could not find member 'template' on object of type 'DeploymentContent'. 
Path 'template', line 1, position 12.'.\n"", ""msg"": ""Traceback (most recent call last):\n File \""/tmp/ansible_async_wrapper_payload_391mug5z/ansible_async_wrapper_payload.zip/ansible/modules/utilities/logic/async_wrapper.py\"", line 166, in _run_module\n File \""/tmp/ansible_async_wrapper_payload_391mug5z/ansible_async_wrapper_payload.zip/ansible/modules/utilities/logic/async_wrapper.py\"", line 94, in _filter_non_json_lines\nValueError: No start of json char found\n"", ``` ",1.0,"Could not find member 'template' on object of type DeploymentContent - ##### SUMMARY When doing ansible deployment using `azure_rm_deployment`, the push is errored out with - ""Message: The request content was invalid and could not be deserialized: 'Could not find member 'template' on object of type 'DeploymentContent'. Path 'template', line 1, position 12.'."" ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ``` ""stderr_lines"": [ ""/var/lib/awx/venv/ansible/lib/python3.6/site-packages/OpenSSL/crypto.py:8: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release."", "" from cryptography import utils, x509"", ""Traceback (most recent call last):"", "" File \""/var/lib/awx/.ansible/tmp/ansible-tmp-1680162048.550992-25028-70112161155416/AnsiballZ_azure_rm_deployment.py\"", line 102, in "", "" _ansiballz_main()"", "" File \""/var/lib/awx/.ansible/tmp/ansible-tmp-1680162048.550992-25028-70112161155416/AnsiballZ_azure_rm_deployment.py\"", line 94, in _ansiballz_main"", "" invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)"", "" File \""/var/lib/awx/.ansible/tmp/ansible-tmp-1680162048.550992-25028-70112161155416/AnsiballZ_azure_rm_deployment.py\"", line 40, in invoke_module"", "" runpy.run_module(mod_name='ansible.modules.azure_rm_deployment', init_globals=None, run_name='__main__', alter_sys=True)"", "" File \""/usr/lib64/python3.6/runpy.py\"", line 205, in run_module"", "" return _run_module_code(code, init_globals, run_name, mod_spec)"", "" File \""/usr/lib64/python3.6/runpy.py\"", line 96, in _run_module_code"", "" mod_name, mod_spec, pkg_name, script_name)"", "" File \""/usr/lib64/python3.6/runpy.py\"", line 85, in _run_code"", "" exec(code, run_globals)"", "" File \""/tmp/ansible_azure_rm_deployment_payload__leee341/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 699, in "", "" File \""/tmp/ansible_azure_rm_deployment_payload__leee341/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 695, in main"", "" File \""/tmp/ansible_azure_rm_deployment_payload__leee341/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 470, in __init__"", "" File \""/tmp/ansible_azure_rm_deployment_payload__leee341/ansible_azure_rm_deployment_payload.zip/ansible/module_utils/azure_rm_common.py\"", line 349, in __init__"", "" File \""/tmp/ansible_azure_rm_deployment_payload__leee341/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 477, in exec_module"", "" File \""/tmp/ansible_azure_rm_deployment_payload__leee341/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 552, in deploy_template"", "" File \""/usr/lib/python3.6/site-packages/azure/core/tracing/decorator.py\"", line 78, in wrapper_use_tracer"", "" return func(*args, **kwargs)"", "" File 
\""/var/lib/awx/venv/ansible/lib/python3.6/site-packages/azure/mgmt/resource/resources/v2019_10_01/operations/_operations.py\"", line 6882, in begin_create_or_update"", "" **kwargs"", "" File \""/var/lib/awx/venv/ansible/lib/python3.6/site-packages/azure/mgmt/resource/resources/v2019_10_01/operations/_operations.py\"", line 6816, in _create_or_update_initial"", "" raise HttpResponseError(response=response, error_format=ARMErrorFormat)"", ""azure.core.exceptions.HttpResponseError: (InvalidRequestContent) The request content was invalid and could not be deserialized: 'Could not find member 'template' on object of type 'DeploymentContent'. Path 'template', line 1, position 12.'."", ""Code: InvalidRequestContent"", ""Message: The request content was invalid and could not be deserialized: 'Could not find member 'template' on object of type 'DeploymentContent'. Path 'template', line 1, position 12.'."" ], ``` ##### ANSIBLE VERSION ``` ansible 2.9.19 config file = /etc/ansible/ansible.cfg configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.6/site-packages/ansible executable location = /usr/bin/ansible python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] ``` ##### COLLECTION VERSION ``` azure-appconfiguration 1.1.1 azure-batch 12.0.0 azure-cli 2.39.0 azure-cli-core 2.39.0 azure-cli-nspkg 3.0.2 azure-cli-telemetry 1.0.7 azure-common 1.1.11 azure-core 1.24.2 azure-cosmos 3.2.0 azure-data-tables 12.4.0 azure-datalake-store 0.0.52 azure-graphrbac 0.60.0 azure-identity 1.10.0 azure-keyvault 1.1.0 azure-keyvault-administration 4.0.0b3 azure-keyvault-keys 4.5.1 azure-loganalytics 0.1.1 azure-mgmt-advisor 9.0.0 azure-mgmt-apimanagement 3.0.0 azure-mgmt-appconfiguration 2.1.0 azure-mgmt-applicationinsights 1.0.0 azure-mgmt-authorization 0.61.0 azure-mgmt-automation 1.0.0 azure-mgmt-batch 16.2.0 azure-mgmt-batchai 7.0.0b1 azure-mgmt-billing 6.0.0 azure-mgmt-botservice 2.0.0b3 azure-mgmt-cdn 12.0.0 azure-mgmt-cognitiveservices 13.2.0 azure-mgmt-compute 27.1.0 azure-mgmt-consumption 2.0.0 azure-mgmt-containerinstance 9.1.0 azure-mgmt-containerregistry 8.2.0 azure-mgmt-containerservice 20.2.0 azure-mgmt-core 1.3.2 azure-mgmt-cosmosdb 7.0.0b6 azure-mgmt-databoxedge 1.0.0 azure-mgmt-datalake-analytics 0.2.1 azure-mgmt-datalake-nspkg 3.0.1 azure-mgmt-datalake-store 0.5.0 azure-mgmt-datamigration 10.0.0 azure-mgmt-deploymentmanager 0.2.0 azure-mgmt-devtestlabs 4.0.0 azure-mgmt-dns 8.0.0 azure-mgmt-eventgrid 10.2.0b2 azure-mgmt-eventhub 10.1.0 azure-mgmt-extendedlocation 1.0.0b2 azure-mgmt-hdinsight 9.0.0 azure-mgmt-imagebuilder 1.1.0 azure-mgmt-iotcentral 10.0.0b1 azure-mgmt-iothub 2.2.0 azure-mgmt-iothubprovisioningservices 1.1.0 azure-mgmt-keyvault 9.3.0 azure-mgmt-kusto 0.3.0 azure-mgmt-loganalytics 13.0.0b4 azure-mgmt-managedservices 1.0.0 azure-mgmt-managementgroups 1.0.0 azure-mgmt-maps 2.0.0 azure-mgmt-marketplaceordering 1.1.0 azure-mgmt-media 9.0.0 azure-mgmt-monitor 3.0.0 azure-mgmt-msi 6.0.1 azure-mgmt-netapp 8.0.0 azure-mgmt-network 20.0.0 azure-mgmt-nspkg 3.0.2 azure-mgmt-policyinsights 1.1.0b2 azure-mgmt-privatedns 1.0.0 azure-mgmt-rdbms 10.2.0b1 azure-mgmt-recoveryservices 2.0.0 azure-mgmt-recoveryservicesbackup 5.0.0 azure-mgmt-redhatopenshift 1.1.0 azure-mgmt-redis 13.1.0 azure-mgmt-relay 0.1.0 azure-mgmt-reservations 2.0.0 azure-mgmt-resource 21.1.0b1 azure-mgmt-search 8.0.0 azure-mgmt-security 2.0.0b1 azure-mgmt-servicebus 7.1.0 
azure-mgmt-servicefabric 1.0.0 azure-mgmt-servicefabricmanagedclusters 1.0.0 azure-mgmt-servicelinker 1.0.0 azure-mgmt-signalr 1.0.0b2 azure-mgmt-sql 4.0.0b2 azure-mgmt-sqlvirtualmachine 1.0.0b3 azure-mgmt-storage 20.0.0 azure-mgmt-synapse 2.1.0b2 azure-mgmt-trafficmanager 1.0.0 azure-mgmt-web 7.0.0 azure-multiapi-storage 0.9.0 azure-nspkg 3.0.2 azure-storage 0.35.1 azure-storage-common 1.4.2 azure-synapse-accesscontrol 0.5.0 azure-synapse-artifacts 0.13.0 azure-synapse-managedprivateendpoints 0.3.0 azure-synapse-spark 0.2.0 msrest 0.7.1 msrestazure 0.6.4 ``` ##### CONFIGURATION ``` (ansible) bash-4.4$ ansible-config dump --only-changed /usr/lib/python3.6/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release. from cryptography.exceptions import InvalidSignature ``` ##### OS / ENVIRONMENT ``` (ansible) bash-4.4$ cat /etc/*release* CentOS Linux release 8.2.2004 (Core) Derived from Red Hat Enterprise Linux 8.2 (Source) NAME=""CentOS Linux"" VERSION=""8 (Core)"" ID=""centos"" ID_LIKE=""rhel fedora"" VERSION_ID=""8"" PLATFORM_ID=""platform:el8"" PRETTY_NAME=""CentOS Linux 8 (Core)"" ANSI_COLOR=""0;31"" CPE_NAME=""cpe:/o:centos:centos:8"" HOME_URL=""https://www.centos.org/"" BUG_REPORT_URL=""https://bugs.centos.org/"" CENTOS_MANTISBT_PROJECT=""CentOS-8"" CENTOS_MANTISBT_PROJECT_VERSION=""8"" REDHAT_SUPPORT_PRODUCT=""centos"" REDHAT_SUPPORT_PRODUCT_VERSION=""8"" CentOS Linux release 8.2.2004 (Core) CentOS Linux release 8.2.2004 (Core) cpe:/o:centos:centos:8 ``` ##### STEPS TO REPRODUCE * The problem came up all of a sudden while it used to be working and it's errored out when adding new vnet * Related tasks ``` - name: Deploy Base Network Components to Azure azure_rm_deployment: name: ""{{ item.1.env_code }}-network-base-deployment"" resource_group: ""{{ item.1.env_code }}{{ rg_suffix }}"" template: ""{{ lookup('template', 'templates/rendered/' + item.1.env_code + rg_suffix + '_network-base-deployment.json' ) }}"" location: ""{{ item.0.region_name }}"" # subscription_id: ""{{ subscription_id_production if item.1.production == true else subscription_id_pre_prod}}"" with_subelements: - ""{{ regions }}"" - environments async: 3600 poll: 0 register: base_deploy tags: - base - deploy_task - name: Wait for base VNET Deploy async_status: jid: ""{{ item.ansible_job_id }}"" register: job_result until: job_result.finished retries: 300 tags: - base - deploy_task with_items: ""{{ base_deploy.results }}"" ``` ##### EXPECTED RESULTS * Expecting Async poll would be successful in return ##### ACTUAL RESULTS * Got ` ""Message: The request content was invalid and could not be deserialized: 'Could not find member 'template' on object of type 'DeploymentContent'. Path 'template', line 1, position 12.'.""` ``` ""cmd"": ""/var/lib/awx/.ansible/tmp/ansible-tmp-1680144009.9892657-44592-5293828901974/AnsiballZ_azure_rm_deployment.py"", ""data"": """", ""stderr"": ""/var/lib/awx/venv/ansible/lib/python3.6/site-packages/OpenSSL/crypto.py:8: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. 
Therefore, support for it is deprecated in cryptography and will be removed in a future release.\n from cryptography import utils, x509\nTraceback (most recent call last):\n File \""/var/lib/awx/.ansible/tmp/ansible-tmp-1680144009.9892657-44592-5293828901974/AnsiballZ_azure_rm_deployment.py\"", line 102, in \n _ansiballz_main()\n File \""/var/lib/awx/.ansible/tmp/ansible-tmp-1680144009.9892657-44592-5293828901974/AnsiballZ_azure_rm_deployment.py\"", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \""/var/lib/awx/.ansible/tmp/ansible-tmp-1680144009.9892657-44592-5293828901974/AnsiballZ_azure_rm_deployment.py\"", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.azure_rm_deployment', init_globals=None, run_name='__main__', alter_sys=True)\n File \""/usr/lib64/python3.6/runpy.py\"", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \""/usr/lib64/python3.6/runpy.py\"", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \""/usr/lib64/python3.6/runpy.py\"", line 85, in _run_code\n exec(code, run_globals)\n File \""/tmp/ansible_azure_rm_deployment_payload_yq3z7221/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 699, in \n File \""/tmp/ansible_azure_rm_deployment_payload_yq3z7221/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 695, in main\n File \""/tmp/ansible_azure_rm_deployment_payload_yq3z7221/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 470, in __init__\n File \""/tmp/ansible_azure_rm_deployment_payload_yq3z7221/ansible_azure_rm_deployment_payload.zip/ansible/module_utils/azure_rm_common.py\"", line 349, in __init__\n File \""/tmp/ansible_azure_rm_deployment_payload_yq3z7221/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 477, in exec_module\n File \""/tmp/ansible_azure_rm_deployment_payload_yq3z7221/ansible_azure_rm_deployment_payload.zip/ansible/modules/azure_rm_deployment.py\"", line 552, in deploy_template\n File \""/usr/lib/python3.6/site-packages/azure/core/tracing/decorator.py\"", line 78, in wrapper_use_tracer\n return func(*args, **kwargs)\n File \""/var/lib/awx/venv/ansible/lib/python3.6/site-packages/azure/mgmt/resource/resources/v2019_05_01/operations/_operations.py\"", line 3861, in begin_create_or_update\n **kwargs\n File \""/var/lib/awx/venv/ansible/lib/python3.6/site-packages/azure/mgmt/resource/resources/v2019_05_01/operations/_operations.py\"", line 3795, in _create_or_update_initial\n raise HttpResponseError(response=response, error_format=ARMErrorFormat)\nazure.core.exceptions.HttpResponseError: (InvalidRequestContent) The request content was invalid and could not be deserialized: 'Could not find member 'template' on object of type 'DeploymentContent'. Path 'template', line 1, position 12.'.\nCode: InvalidRequestContent\nMessage: The request content was invalid and could not be deserialized: 'Could not find member 'template' on object of type 'DeploymentContent'. 
Path 'template', line 1, position 12.'.\n"", ""msg"": ""Traceback (most recent call last):\n File \""/tmp/ansible_async_wrapper_payload_391mug5z/ansible_async_wrapper_payload.zip/ansible/modules/utilities/logic/async_wrapper.py\"", line 166, in _run_module\n File \""/tmp/ansible_async_wrapper_payload_391mug5z/ansible_async_wrapper_payload.zip/ansible/modules/utilities/logic/async_wrapper.py\"", line 94, in _filter_non_json_lines\nValueError: No start of json char found\n"", ``` ",0,could not find member template on object of type deploymentcontent summary when doing ansible deployment using azure rm deployment the push is errored out with message the request content was invalid and could not be deserialized could not find member template on object of type deploymentcontent path template line position issue type bug report component name stderr lines var lib awx venv ansible lib site packages openssl crypto py cryptographydeprecationwarning python is no longer supported by the python core team therefore support for it is deprecated in cryptography and will be removed in a future release from cryptography import utils traceback most recent call last file var lib awx ansible tmp ansible tmp ansiballz azure rm deployment py line in ansiballz main file var lib awx ansible tmp ansible tmp ansiballz azure rm deployment py line in ansiballz main invoke module zipped mod temp path ansiballz params file var lib awx ansible tmp ansible tmp ansiballz azure rm deployment py line in invoke module runpy run module mod name ansible modules azure rm deployment init globals none run name main alter sys true file usr runpy py line in run module return run module code code init globals run name mod spec file usr runpy py line in run module code mod name mod spec pkg name script name file usr runpy py line in run code exec code run globals file tmp ansible azure rm deployment payload ansible azure rm deployment payload zip ansible modules azure rm deployment py line in file tmp ansible azure rm deployment payload ansible azure rm deployment payload zip ansible modules azure rm deployment py line in main file tmp ansible azure rm deployment payload ansible azure rm deployment payload zip ansible modules azure rm deployment py line in init file tmp ansible azure rm deployment payload ansible azure rm deployment payload zip ansible module utils azure rm common py line in init file tmp ansible azure rm deployment payload ansible azure rm deployment payload zip ansible modules azure rm deployment py line in exec module file tmp ansible azure rm deployment payload ansible azure rm deployment payload zip ansible modules azure rm deployment py line in deploy template file usr lib site packages azure core tracing decorator py line in wrapper use tracer return func args kwargs file var lib awx venv ansible lib site packages azure mgmt resource resources operations operations py line in begin create or update kwargs file var lib awx venv ansible lib site packages azure mgmt resource resources operations operations py line in create or update initial raise httpresponseerror response response error format armerrorformat azure core exceptions httpresponseerror invalidrequestcontent the request content was invalid and could not be deserialized could not find member template on object of type deploymentcontent path template line position code invalidrequestcontent message the request content was invalid and could not be deserialized could not find member template on object of type deploymentcontent path template line position 
ansible version ansible config file etc ansible ansible cfg configured module search path ansible python module location usr lib site packages ansible executable location usr bin ansible python version default apr collection version between the quotes for example ansible galaxy collection list community general azure appconfiguration azure batch azure cli azure cli core azure cli nspkg azure cli telemetry azure common azure core azure cosmos azure data tables azure datalake store azure graphrbac azure identity azure keyvault azure keyvault administration azure keyvault keys azure loganalytics azure mgmt advisor azure mgmt apimanagement azure mgmt appconfiguration azure mgmt applicationinsights azure mgmt authorization azure mgmt automation azure mgmt batch azure mgmt batchai azure mgmt billing azure mgmt botservice azure mgmt cdn azure mgmt cognitiveservices azure mgmt compute azure mgmt consumption azure mgmt containerinstance azure mgmt containerregistry azure mgmt containerservice azure mgmt core azure mgmt cosmosdb azure mgmt databoxedge azure mgmt datalake analytics azure mgmt datalake nspkg azure mgmt datalake store azure mgmt datamigration azure mgmt deploymentmanager azure mgmt devtestlabs azure mgmt dns azure mgmt eventgrid azure mgmt eventhub azure mgmt extendedlocation azure mgmt hdinsight azure mgmt imagebuilder azure mgmt iotcentral azure mgmt iothub azure mgmt iothubprovisioningservices azure mgmt keyvault azure mgmt kusto azure mgmt loganalytics azure mgmt managedservices azure mgmt managementgroups azure mgmt maps azure mgmt marketplaceordering azure mgmt media azure mgmt monitor azure mgmt msi azure mgmt netapp azure mgmt network azure mgmt nspkg azure mgmt policyinsights azure mgmt privatedns azure mgmt rdbms azure mgmt recoveryservices azure mgmt recoveryservicesbackup azure mgmt redhatopenshift azure mgmt redis azure mgmt relay azure mgmt reservations azure mgmt resource azure mgmt search azure mgmt security azure mgmt servicebus azure mgmt servicefabric azure mgmt servicefabricmanagedclusters azure mgmt servicelinker azure mgmt signalr azure mgmt sql azure mgmt sqlvirtualmachine azure mgmt storage azure mgmt synapse azure mgmt trafficmanager azure mgmt web azure multiapi storage azure nspkg azure storage azure storage common azure synapse accesscontrol azure synapse artifacts azure synapse managedprivateendpoints azure synapse spark msrest msrestazure configuration ansible bash ansible config dump only changed usr lib site packages ansible parsing vault init py cryptographydeprecationwarning python is no longer supported by the python core team therefore support for it is deprecated in cryptography and will be removed in a future release from cryptography exceptions import invalidsignature os environment ansible bash cat etc release centos linux release core derived from red hat enterprise linux source name centos linux version core id centos id like rhel fedora version id platform id platform pretty name centos linux core ansi color cpe name cpe o centos centos home url bug report url centos mantisbt project centos centos mantisbt project version redhat support product centos redhat support product version centos linux release core centos linux release core cpe o centos centos steps to reproduce the problem came up all of a sudden while it used to be working and it s errored out when adding new vnet related tasks name deploy base network components to azure azure rm deployment name item env code network base deployment resource group item env code rg suffix template 
lookup template templates rendered item env code rg suffix network base deployment json location item region name subscription id subscription id production if item production true else subscription id pre prod with subelements regions environments async poll register base deploy tags base deploy task name wait for base vnet deploy async status jid item ansible job id register job result until job result finished retries tags base deploy task with items base deploy results expected results expecting async poll would be successful in return actual results got message the request content was invalid and could not be deserialized could not find member template on object of type deploymentcontent path template line position cmd var lib awx ansible tmp ansible tmp ansiballz azure rm deployment py data stderr var lib awx venv ansible lib site packages openssl crypto py cryptographydeprecationwarning python is no longer supported by the python core team therefore support for it is deprecated in cryptography and will be removed in a future release n from cryptography import utils ntraceback most recent call last n file var lib awx ansible tmp ansible tmp ansiballz azure rm deployment py line in n ansiballz main n file var lib awx ansible tmp ansible tmp ansiballz azure rm deployment py line in ansiballz main n invoke module zipped mod temp path ansiballz params n file var lib awx ansible tmp ansible tmp ansiballz azure rm deployment py line in invoke module n runpy run module mod name ansible modules azure rm deployment init globals none run name main alter sys true n file usr runpy py line in run module n return run module code code init globals run name mod spec n file usr runpy py line in run module code n mod name mod spec pkg name script name n file usr runpy py line in run code n exec code run globals n file tmp ansible azure rm deployment payload ansible azure rm deployment payload zip ansible modules azure rm deployment py line in n file tmp ansible azure rm deployment payload ansible azure rm deployment payload zip ansible modules azure rm deployment py line in main n file tmp ansible azure rm deployment payload ansible azure rm deployment payload zip ansible modules azure rm deployment py line in init n file tmp ansible azure rm deployment payload ansible azure rm deployment payload zip ansible module utils azure rm common py line in init n file tmp ansible azure rm deployment payload ansible azure rm deployment payload zip ansible modules azure rm deployment py line in exec module n file tmp ansible azure rm deployment payload ansible azure rm deployment payload zip ansible modules azure rm deployment py line in deploy template n file usr lib site packages azure core tracing decorator py line in wrapper use tracer n return func args kwargs n file var lib awx venv ansible lib site packages azure mgmt resource resources operations operations py line in begin create or update n kwargs n file var lib awx venv ansible lib site packages azure mgmt resource resources operations operations py line in create or update initial n raise httpresponseerror response response error format armerrorformat nazure core exceptions httpresponseerror invalidrequestcontent the request content was invalid and could not be deserialized could not find member template on object of type deploymentcontent path template line position ncode invalidrequestcontent nmessage the request content was invalid and could not be deserialized could not find member template on object of type deploymentcontent path template line 
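The InvalidRequestContent trace above means ARM could not deserialize a top-level 'template' member on DeploymentContent, which usually indicates the template reached the API in the wrong shape. A common cause (an assumption here, not confirmed in this issue) is handing the rendered template over as a raw JSON string instead of a parsed object. A minimal Python sketch of the shape the underlying SDK expects, with placeholder names throughout:

```python
# Hypothetical sketch, not the azure_rm_deployment implementation: deploy an ARM
# template with the template parsed into a dict first. Assumes a track-2
# azure-mgmt-resource and azure-identity; <...> values are placeholders.
import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import (
    Deployment,
    DeploymentMode,
    DeploymentProperties,
)

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# json.load() turns the rendered template into an object, so the request body
# nests 'template' under 'properties' the way DeploymentContent expects.
with open("template.json") as fh:  # placeholder path to the rendered template
    template_body = json.load(fh)

poller = client.deployments.begin_create_or_update(
    "<resource-group>",
    "<deployment-name>",
    Deployment(
        properties=DeploymentProperties(
            mode=DeploymentMode.INCREMENTAL,
            template=template_body,
        )
    ),
)
print(poller.result().properties.provisioning_state)
```

On the playbook side, the workaround often suggested for this symptom is piping the lookup through Ansible's `from_json` filter so the module receives a dict rather than a string; whether that applies to this particular report is not confirmed here.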
14,2735070109.0,IssuesEvent,2015-04-18 02:03:07,backdrop-ops/backdropcms.org,https://api.github.com/repos/backdrop-ops/backdropcms.org,opened,Allow user logins to BackdropCMS.org,github-automation type - enhancement,"Right now we only have a handful of user accounts on BackdropCMS.org. We should allow login to the site so that users can manage the new project and project release nodes. In consideration of login, we should provide at least the following: - Spam prevention to keep out automated account creation (any suggestions here)? - A sign-on with GitHub or OAuth authentication with GitHub to validate the connection. Eventually we'll want to make a user profile page, but for starters we'll probably just include a description + basic user fields.",1.0,"Allow user logins to BackdropCMS.org",1,allow user logins to backdropcms org,1 190911,22173384831.0,IssuesEvent,2022-06-06 05:09:55,Satheesh575555/linux-4.19.72,https://api.github.com/repos/Satheesh575555/linux-4.19.72,reopened,WS-2021-0596 (High) detected in linuxlinux-4.19.236,security vulnerability,"## WS-2021-0596 - High Severity Vulnerability
Vulnerable Library - linuxlinux-4.19.236

The Linux Kernel

Library home page: https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux

Found in HEAD commit: ca82789c9f44a15d0b5166020b5c08fc8685cb69

Found in base branch: master
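The report pairs a HEAD commit with a base branch. That pairing can be cross-checked locally; a minimal sketch, assuming a clone of the repository and the `git` CLI on PATH (the directory name is a placeholder):

```python
# Minimal sketch: check that the reported HEAD commit is reachable from the
# base branch. `git merge-base --is-ancestor` exits 0 when it is.
import subprocess

COMMIT = "ca82789c9f44a15d0b5166020b5c08fc8685cb69"  # "Found in HEAD commit" above
BRANCH = "master"                                    # "Found in base branch" above

result = subprocess.run(
    ["git", "merge-base", "--is-ancestor", COMMIT, BRANCH],
    cwd="linux-4.19.72",  # placeholder path to the local clone
)
verdict = "is" if result.returncode == 0 else "is NOT"
print(f"{COMMIT[:12]} {verdict} an ancestor of {BRANCH}")
```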

Vulnerable Source Files (1)

/drivers/net/tun.c

Vulnerability Details

Linux Kernel before 5.15.12: avoid double free in tun_free_netdev.

Publish Date: 2021-12-30

URL: WS-2021-0596

CVSS 3 Score Details (7.8)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: Low
  - User Interaction: None
- Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High
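These metrics spell out the CVSS v3.1 vector CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H. As a sanity check, a minimal sketch of the v3.1 base-score arithmetic for an unchanged-scope vector reproduces the 7.8 reported above (the constants are the published specification values; the rounding helper is a simplification of the spec's Roundup):

```python
# Minimal CVSS v3.1 base-score check for the metric values listed above.
import math

av, ac, pr, ui = 0.55, 0.77, 0.62, 0.85  # Local / Low / Low (scope unchanged) / None
c = i = a = 0.56                         # Confidentiality, Integrity, Availability: High

iss = 1 - (1 - c) * (1 - i) * (1 - a)    # impact sub-score
impact = 6.42 * iss                      # scope-unchanged form
exploitability = 8.22 * av * ac * pr * ui

def roundup(x: float) -> float:
    """Smallest value with one decimal place that is >= x."""
    return math.ceil(x * 10) / 10

print(roundup(min(impact + exploitability, 10.0)))  # -> 7.8
```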

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://osv.dev/vulnerability/GSD-2021-1002847
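The origin above points at an OSV record. The same advisory can be fetched from the OSV REST API; a minimal sketch, assuming the `requests` package and that the ID from the URL is retrievable through the API:

```python
# Minimal sketch: pull the advisory referenced in the Origin link from OSV.
import requests

resp = requests.get("https://api.osv.dev/v1/vulns/GSD-2021-1002847", timeout=10)
resp.raise_for_status()
advisory = resp.json()

print(advisory.get("id"))
print(advisory.get("summary", "<no summary>"))
# Affected ranges list the introduced/fixed events per package.
for affected in advisory.get("affected", []):
    for rng in affected.get("ranges", []):
        print(rng.get("type"), rng.get("events"))
```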

Release Date: 2021-12-30

Fix Resolution: v5.15.2
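Whether a given kernel is covered by this fix resolution reduces to a version comparison. A minimal sketch, assuming plain vX.Y.Z tags (pre-release suffixes such as -rc2 are deliberately out of scope); the versions are the ones named in this report:

```python
# Minimal sketch: compare the flagged library version against the fix resolution.
def parse(tag: str) -> tuple[int, ...]:
    """Turn 'v5.15.2' or '4.19.236' into a comparable tuple of ints."""
    return tuple(int(part) for part in tag.lstrip("v").split("."))

FIXED = "v5.15.2"      # Fix Resolution above
library = "4.19.236"   # from linuxlinux-4.19.236, the flagged library

if parse(library) < parse(FIXED):
    print(f"{library} is below {FIXED}: upgrade suggested")
else:
    print(f"{library} is at or above {FIXED}")
```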

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"WS-2021-0596 (High) detected in linuxlinux-4.19.236",0,ws high detected in linuxlinux,0 9994,31017570705.0,IssuesEvent,2023-08-10 00:39:30,figuren-theater/ft-platform-collection,https://api.github.com/repos/figuren-theater/ft-platform-collection,opened,Establish quality standards,automation,"```[tasklist]
### Repository Standards
- [ ] Has nice [README.md](https://github.com/figuren-theater/new-ft-module/blob/main/README.md)
- [ ] Add [`.github/workflows/ft-issue-gardening.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/ft-issue-gardening.yml) file (if not exists)
- [ ] Add [`.github/workflows/release-drafter.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/release-drafter.yml) file
- [ ] Add [`.github/workflows/prerelease-changelog.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/prerelease-changelog.yml) file
- [ ] Add [`.editorconfig`](https://github.com/figuren-theater/new-ft-module/blob/main/.editorconfig) file
- [ ] Add [`.phpcs.xml`](https://github.com/figuren-theater/new-ft-module/blob/main/.phpcs.xml) file
- [ ] Check that `.phpcs.xml` file is not present in `.gitignore`
- [ ] Add [`CHANGELOG.md`](https://github.com/figuren-theater/new-ft-module/blob/main/CHANGELOG.md) file with an *Unreleased-Heading*
- [ ] Add [`phpstan.neon`](https://github.com/figuren-theater/new-ft-module/blob/main/phpstan.neon) file
- [ ] Run `composer require --dev figuren-theater/code-quality`
- [ ] Run `composer normalize`
- [ ] Run `vendor/bin/phpstan analyze .`
- [ ] Run `vendor/bin/phpcs .`
- [ ] Fix all errors ;)
- [ ] commit, PR & merge all (additional) changes !
- [ ] Has branch protection enabled ?
- [ ] Add [`.github/workflows/build-test-measure.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/build-test-measure.yml) file
- [ ] Enable repo for required **Build, test & measure** status checks via [Repo Settings](/settings/actions)
- [ ] Add **Build, test & measure** badge to the [code-quality](https://github.com/figuren-theater/code-quality) README
- [ ] Submit repo to [packagist.org](https://packagist.org/packages/figuren-theater/)
- [ ] Remove explicit `repositories` entry from [ft-platform](https://github.com/figuren-theater/ft-platform)s `composer.json`
- [ ] Update `README.md` and fix **LICENSE-Link**, to see all workflows running
- [ ] Publish the new drafted Release as Prerelease to trigger auto-updating versions in CHANGELOG.md and plugin.php
``` ",1.0,"Establish quality standards",1,establish quality standards,1 392492,26942477123.0,IssuesEvent,2023-02-08 04:13:15,arduino-libraries/LiquidCrystal,https://api.github.com/repos/arduino-libraries/LiquidCrystal,closed,Update redirecting link in HelloWorld example,type: enhancement topic: documentation,"The comments at https://github.com/arduino-libraries/LiquidCrystal/blob/master/examples/HelloWorld/HelloWorld.ino#L39 have a broken link to https://www.arduino.cc/en/Tutorial/LibraryExamples/HelloWorld See discussion at https://forum.arduino.cc/t/lcd-example-in-arduino/1086684 which suggests using the tutorial at https://docs.arduino.cc/learn/electronics/lcd-displays#hello-world-example instead. (The code there also has the same broken link) ",1.0,"Update redirecting link in HelloWorld example",0,update redirecting link in helloworld example,0 133767,12553200247.0,IssuesEvent,2020-06-06 21:01:09,PANDATD/pandassistant,https://api.github.com/repos/PANDATD/pandassistant,closed,still not support voice ,bug documentation good first issue help wanted,"hey, **_voice commands_** are still not supported. _We are implementing them in a future release, as soon as possible._ There are also some minor bugs!
_We will fix them as soon as possible._ ",1.0,"still not support voice - hey still not support **_voice commonds_** _we are implementing in future as per as possiable_ and minor bugs are there ! 
",1.0,"Periodic Test Failure: Several Asset Pipeline tests are consistently failing in Periodic test suites - **Describe the bug** Several Asset Pipeline tests are consistently failing in Periodic test suites ``` AssetPipelineTests.Periodic.periodic - SubID_NoChange_MeshChanged[linux-windows_editor-AutomatedTesting] - SubID_WarningReported_AssetRemoved[linux-windows_editor-AutomatedTesting] APTests.Batch_Periodic.periodic - test_TwoAssetsWithSameProductName_ShouldProcessAfterRename[linux-AutomatedTesting] - test_InvalidServerAddress_Warning_Logs[linux-AutomatedTesting] - test_validateDirectPreloadDependency_Found[windows-AutomatedTesting] APTests.Gui_Periodic.periodic - test_ProcessAssets_ReprocessDeletedCache[linux-AutomatedTesting] - test_APStop_TimesOut[windows-AutomatedTesting] APTests.Gui_2_Periodic.periodic - test_AllSupportedPlatforms_DeleteCachedAssets_AssetsReprocessed[linux-AutomatedTesting] - test_AllSupportedPlatforms_ModifyAssetInfo_AssetsReprocessed[linux-AutomatedTesting] - test_WindowsMacPlatforms_GUIFastScanEnabled_GameLauncherWorksWithAP[windows-AutomatedTesting] APTests.AssetRelocator_Periodic.periodic ERRORS - test_WindowsMacPlatforms_RelocatorLeaveEmptyFolders_WithAndWithoutConfirm[linux-C21968381_a-True-False-True-True-True-relocate-AutomatedTesting] + ALL - test_WindowsMacPlatforms_DeleteWithParameters_DeleteSuccess[windows-C21968355-False-True-True-expected_queries0-unexpected_queries0-AutomatedTesting] + ALL FAILURES - test_WindowsMacPlatforms_MoveCommand_CommandResult[linux-C19462747-AutomatedTesting] + ALL (PASSES ON WINDOWS) AssetProcessorCacheServerTests.Periodic.periodic - test_AssetCacheServer_LocalWorkUnaffected[linux-AutomatedTesting] ``` **Failed Jenkins Job Information:** https://jenkins.build.o3de.org/blue/organizations/jenkins/O3DE_periodic-incremental-daily/detail/development/249/pipeline/1923 **Attachments** Windows: [log.txt](https://github.com/o3de/o3de/files/10701931/log.txt) Linux: [log(1).txt](https://github.com/o3de/o3de/files/10701934/log.1.txt) **Additional context** 1. Affected tests will be skipped if failing on both Windows/Linux or skipif'd for tests only failing on Linux after speaking with @LesaelR . 
",1,periodic test failure several asset pipeline tests are consistently failing in periodic test suites describe the bug several asset pipeline tests are consistently failing in periodic test suites assetpipelinetests periodic periodic subid nochange meshchanged subid warningreported assetremoved aptests batch periodic periodic test twoassetswithsameproductname shouldprocessafterrename test invalidserveraddress warning logs test validatedirectpreloaddependency found aptests gui periodic periodic test processassets reprocessdeletedcache test apstop timesout aptests gui periodic periodic test allsupportedplatforms deletecachedassets assetsreprocessed test allsupportedplatforms modifyassetinfo assetsreprocessed test windowsmacplatforms guifastscanenabled gamelauncherworkswithap aptests assetrelocator periodic periodic errors test windowsmacplatforms relocatorleaveemptyfolders withandwithoutconfirm all test windowsmacplatforms deletewithparameters deletesuccess all failures test windowsmacplatforms movecommand commandresult all passes on windows assetprocessorcacheservertests periodic periodic test assetcacheserver localworkunaffected failed jenkins job information attachments windows linux additional context affected tests will be skipped if failing on both windows linux or skipif d for tests only failing on linux after speaking with lesaelr ,1 47431,19650296301.0,IssuesEvent,2022-01-10 05:50:31,hashicorp/terraform-provider-azurerm,https://api.github.com/repos/hashicorp/terraform-provider-azurerm,closed,"Support for enabling ""Allow public access from any Azure service..."" in azurerm_postgresql_flexible_server resource",enhancement good first issue service/postgresql," ### Community Note * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave ""+1"" or ""me too"" comments, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment ### Description I'd like to request support for the firewall option ""Allow public access from any Azure service within Azure to this server"" in Azure Database for PostgreSQL flexible server. 
### New or Affected Resource(s) * azurerm_postgresql_flexible_server ### Potential Terraform Configuration ```hcl resource ""azurerm_postgresql_flexible_server"" ""example"" { name = ""example-psqlflexibleserver"" resource_group_name = azurerm_resource_group.example.name location = azurerm_resource_group.example.location version = ""13"" administrator_login = ""psqladminun"" administrator_password = ""H@Sh1CoR3!"" storage_mb = 32768 sku_name = ""GP_Standard_D4s_v3"" public_azure_access_enabled = true } ``` ### References * https://docs.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-firewall-rules#connect-from-azure ",1.0,"Support for enabling ""Allow public access from any Azure service..."" in azurerm_postgresql_flexible_server resource - ### Community Note * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave ""+1"" or ""me too"" comments, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment ### Description I'd like to request support for the firewall option ""Allow public access from any Azure service within Azure to this server"" in Azure Database for PostgreSQL flexible server. ### New or Affected Resource(s) * azurerm_postgresql_flexible_server ### Potential Terraform Configuration ```hcl resource ""azurerm_postgresql_flexible_server"" ""example"" { name = ""example-psqlflexibleserver"" resource_group_name = azurerm_resource_group.example.name location = azurerm_resource_group.example.location version = ""13"" administrator_login = ""psqladminun"" administrator_password = ""H@Sh1CoR3!"" storage_mb = 32768 sku_name = ""GP_Standard_D4s_v3"" public_azure_access_enabled = true } ``` ### References * https://docs.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-firewall-rules#connect-from-azure ",0,support for enabling allow public access from any azure service in azurerm postgresql flexible server resource community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description i d like to request support for the firewall option allow public access from any azure service within azure to this server in azure database for postgresql flexible server new or affected resource s azurerm postgresql flexible server potential terraform configuration hcl resource azurerm postgresql flexible server example name example psqlflexibleserver resource group name azurerm resource group example name location azurerm resource group example location version administrator login psqladminun administrator password h storage mb sku name gp standard public azure access enabled true references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor blog posts or documentation for example ,0 289489,31933042873.0,IssuesEvent,2023-09-19 08:42:46,Trinadh465/linux-4.1.15_CVE-2023-4128,https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-4128,opened,CVE-2017-1000363 (High) 
detected in linux-stable-rtv4.1.33,Mend: dependency security vulnerability,"## CVE-2017-1000363 - High Severity Vulnerability
Vulnerable Library - linux-stable-rtv4.1.33

Julia Cartwright's fork of linux-stable-rt.git

Library home page: https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git

Found in HEAD commit: 0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8

Found in base branch: master

Vulnerable Source Files (1)

/drivers/char/lp.c

Vulnerability Details

Linux drivers/char/lp.c Out-of-Bounds Write. Due to a missing bounds check, and the fact that parport_ptr integer is static, a 'secure boot' kernel command line adversary (can happen due to bootloader vulns, e.g. Google Nexus 6's CVE-2016-10277, where due to a vulnerability the adversary has partial control over the command line) can overflow the parport_nr array in the following code, by appending many (>LP_NO) 'lp=none' arguments to the command line.

Publish Date: 2017-07-17

URL: CVE-2017-1000363

CVSS 3 Score Details (7.8)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://www.linuxkernelcves.com/cves/CVE-2017-1000363

Release Date: 2017-07-13

Fix Resolution: v4.12-rc2,v3.16.46,v3.18.55,v3.2.91,v4.1.41,v4.11.3,v4.4.70,v4.9.30

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2017-1000363 (High) detected in linux-stable-rtv4.1.33 - ## CVE-2017-1000363 - High Severity Vulnerability
Vulnerable Library - linux-stable-rtv4.1.33

Julia Cartwright's fork of linux-stable-rt.git

Library home page: https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git

Found in HEAD commit: 0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8

Found in base branch: master

Vulnerable Source Files (1)

/drivers/char/lp.c

Vulnerability Details

Linux drivers/char/lp.c Out-of-Bounds Write. Due to a missing bounds check, and the fact that parport_ptr integer is static, a 'secure boot' kernel command line adversary (can happen due to bootloader vulns, e.g. Google Nexus 6's CVE-2016-10277, where due to a vulnerability the adversary has partial control over the command line) can overflow the parport_nr array in the following code, by appending many (>LP_NO) 'lp=none' arguments to the command line.

Publish Date: 2017-07-17

URL: CVE-2017-1000363

CVSS 3 Score Details (7.8)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: Low
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://www.linuxkernelcves.com/cves/CVE-2017-1000363

Release Date: 2017-07-13

Fix Resolution: v4.12-rc2, v3.16.46, v3.18.55, v3.2.91, v4.1.41, v4.11.3, v4.4.70, v4.9.30

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in linux stable cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files drivers char lp c vulnerability details linux drivers char lp c out of bounds write due to a missing bounds check and the fact that parport ptr integer is static a secure boot kernel command line adversary can happen due to bootloader vulns e g google nexus s cve where due to a vulnerability the adversary has partial control over the command line can overflow the parport nr array in the following code by appending many lp no lp none arguments to the command line publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0 8451,7297735515.0,IssuesEvent,2018-02-26 15:04:56,istio/istio,https://api.github.com/repos/istio/istio,closed,Vulnerabilities found in proxy image,area/security kind/enhancement,"Istio proxy (debug) [image 0.2.9](https://anchore.io/image/dockerhub/istio%2Fproxy_debug:0.2.9) has known vulnerabilities (>70 medium and >70 low priority vulnerabilities). Result of anchore.io run on 0.2.9 can be found [here](https://anchore.io/image/dockerhub/f0994e4220bc54041dc937ebfe744ef6f44aa112b1973c084ef6e912c2efb09c?repo=istio%2Fproxy_debug&tag=0.2.9#policy) and [here](https://anchore.io/image/dockerhub/f0994e4220bc54041dc937ebfe744ef6f44aa112b1973c084ef6e912c2efb09c?repo=istio%2Fproxy_debug&tag=0.2.9#security)",True,"Vulnerabilities found in proxy image - Istio proxy (debug) [image 0.2.9](https://anchore.io/image/dockerhub/istio%2Fproxy_debug:0.2.9) has known vulnerabilities (>70 medium and >70 low priority vulnerabilities). Result of anchore.io run on 0.2.9 can be found [here](https://anchore.io/image/dockerhub/f0994e4220bc54041dc937ebfe744ef6f44aa112b1973c084ef6e912c2efb09c?repo=istio%2Fproxy_debug&tag=0.2.9#policy) and [here](https://anchore.io/image/dockerhub/f0994e4220bc54041dc937ebfe744ef6f44aa112b1973c084ef6e912c2efb09c?repo=istio%2Fproxy_debug&tag=0.2.9#security)",0,vulnerabilities found in proxy image istio proxy debug has known vulnerabilities medium and low priority vulnerabilities result of anchore io run on can be found and ,0 12348,4434856960.0,IssuesEvent,2016-08-18 05:40:15,eclipse/che,https://api.github.com/repos/eclipse/che,closed,"Move ""Subversion"" menu after to ""Git"" in project's explorer right-click menu",kind/task status/code-review team/enterprise,"Move ""Subversion"" menu after to ""Git"" in project's explorer right-click menu ""Subversion"" must not be at the top of the menu - it should be the next entry after ""Git"". **Che version:** 4.5.0",1.0,"Move ""Subversion"" menu after to ""Git"" in project's explorer right-click menu - Move ""Subversion"" menu after to ""Git"" in project's explorer right-click menu ""Subversion"" must not be at the top of the menu - it should be the next entry after ""Git"". 
**Che version:** 4.5.0",0,move subversion menu after to git in project s explorer right click menu move subversion menu after to git in project s explorer right click menu img width alt screen shot at src subversion must not be at the top of the menu it should be the next entry after git che version ,0 289034,31931109679.0,IssuesEvent,2023-09-19 07:29:33,Trinadh465/linux-4.1.15_CVE-2023-4128,https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-4128,opened,CVE-2021-39685 (High) detected in multiple libraries,Mend: dependency security vulnerability,"## CVE-2021-39685 - High Severity Vulnerability
Vulnerable Libraries - linuxlinux-4.6, linuxlinux-4.6, linuxlinux-4.6

Vulnerability Details

In various setup methods of the USB gadget subsystem, there is a possible out-of-bounds write due to an incorrect flag check. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android kernel. Android ID: A-210292376. References: Upstream kernel.
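
The advisory's one-line summary doesn't include the offending code, so below is a hedged C sketch of the general bug class it describes: an endpoint-0 setup path where a host-controlled length is used without being validated against the staging buffer. The struct, names, and the clamping fix are illustrative assumptions, not the upstream gadget drivers.

```c
/*
 * Illustrative sketch of the bug class: a gadget setup() handler that
 * sizes a response from the host-controlled wLength field. Names and
 * structure are assumptions for illustration only.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define EP0_BUFSIZ 256           /* preallocated endpoint-0 buffer size */

struct ctrlrequest {
    uint16_t wLength;            /* length requested by the (hostile) host */
};

static uint8_t ep0_buf[EP0_BUFSIZ];

/* Vulnerable shape: trusts wLength, so a request larger than EP0_BUFSIZ
 * (or than the source data) triggers an out-of-bounds access. */
static int setup_vulnerable(const struct ctrlrequest *ctrl,
                            const uint8_t *src, size_t src_len)
{
    size_t len = ctrl->wLength;          /* never clamped */
    (void)src_len;
    memcpy(ep0_buf, src, len);           /* possible OOB read and write */
    return (int)len;
}

/* Fixed shape: clamp the host-supplied length to the data available
 * and to the buffer the response is staged in. */
static int setup_fixed(const struct ctrlrequest *ctrl,
                       const uint8_t *src, size_t src_len)
{
    size_t len = ctrl->wLength;
    if (len > src_len)
        len = src_len;
    if (len > sizeof(ep0_buf))
        len = sizeof(ep0_buf);
    memcpy(ep0_buf, src, len);
    return (int)len;
}

int main(void)
{
    const uint8_t report[4] = { 1, 2, 3, 4 };
    struct ctrlrequest ctrl = { .wLength = 0xFFFF };  /* oversized request */

    (void)setup_vulnerable;  /* shown for contrast only */
    printf("copied %d bytes\n", setup_fixed(&ctrl, report, sizeof(report)));
    return 0;
}
```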

Publish Date: 2022-03-16

URL: CVE-2021-39685

CVSS 3 Score Details (7.8)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: Low
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://www.linuxkernelcves.com/cves/CVE-2021-39685

Release Date: 2022-03-16

Fix Resolution: v4.4.295, v4.9.293, v4.14.258, v4.19.221, v5.4.165, v5.10.85, v5.15.8, v5.16-rc5

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2021-39685 (High) detected in multiple libraries - ## CVE-2021-39685 - High Severity Vulnerability
Vulnerable Libraries - linuxlinux-4.6, linuxlinux-4.6, linuxlinux-4.6

Vulnerability Details

In various setup methods of the USB gadget subsystem, there is a possible out-of-bounds write due to an incorrect flag check. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android kernel. Android ID: A-210292376. References: Upstream kernel.

Publish Date: 2022-03-16

URL: CVE-2021-39685

CVSS 3 Score Details (7.8)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: Low
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://www.linuxkernelcves.com/cves/CVE-2021-39685

Release Date: 2022-03-16

Fix Resolution: v4.4.295, v4.9.293, v4.14.258, v4.19.221, v5.4.165, v5.10.85, v5.15.8, v5.16-rc5

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries linuxlinux linuxlinux linuxlinux vulnerability details in various setup methods of the usb gadget subsystem there is a possible out of bounds write due to an incorrect flag check this could lead to local escalation of privilege with no additional execution privileges needed user interaction is not needed for exploitation product androidversions android kernelandroid id a upstream kernel publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0 244653,20682296183.0,IssuesEvent,2022-03-10 14:56:12,elastic/kibana,https://api.github.com/repos/elastic/kibana,closed,Failing test: Jest Tests.x-pack/plugins/canvas/public/routes/workpad/hooks - useWorkpad redirects on alias match,Team:Presentation failed-test,"A test failed on a tracked branch ``` Error: Timed out in waitFor after 1000ms. at waitFor (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/@testing-library/react-hooks/lib/core/asyncUtils.js:70:13) at Object. (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/x-pack/plugins/canvas/public/routes/workpad/hooks/use_workpad.test.tsx:120:7) at _callCircusTest (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-circus/build/run.js:212:5) at _runTest (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-circus/build/run.js:149:3) at _runTestsForDescribeBlock (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-circus/build/run.js:63:9) at _runTestsForDescribeBlock (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-circus/build/run.js:57:9) at run (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-circus/build/run.js:25:3) at runAndTransformResultsToJestFormat (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapterInit.js:176:21) at jestAdapter (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:109:19) at runTestInternal (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-runner/build/runTest.js:380:16) at runTest (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-runner/build/runTest.js:472:34) at Object.worker (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-runner/build/testWorker.js:133:12) ``` First failure: [CI Build - 7.16](https://buildkite.com/elastic/kibana-hourly/builds/2241#97d065ee-5ef9-4b80-bf0c-18447d4a8bbc) ",1.0,"Failing test: Jest 
Tests.x-pack/plugins/canvas/public/routes/workpad/hooks - useWorkpad redirects on alias match - A test failed on a tracked branch ``` Error: Timed out in waitFor after 1000ms. at waitFor (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/@testing-library/react-hooks/lib/core/asyncUtils.js:70:13) at Object. (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/x-pack/plugins/canvas/public/routes/workpad/hooks/use_workpad.test.tsx:120:7) at _callCircusTest (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-circus/build/run.js:212:5) at _runTest (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-circus/build/run.js:149:3) at _runTestsForDescribeBlock (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-circus/build/run.js:63:9) at _runTestsForDescribeBlock (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-circus/build/run.js:57:9) at run (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-circus/build/run.js:25:3) at runAndTransformResultsToJestFormat (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapterInit.js:176:21) at jestAdapter (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:109:19) at runTestInternal (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-runner/build/runTest.js:380:16) at runTest (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-runner/build/runTest.js:472:34) at Object.worker (/opt/local-ssd/buildkite/builds/kb-c2-16-e9a1dca4c007e491/elastic/kibana-hourly/kibana/node_modules/jest-runner/build/testWorker.js:133:12) ``` First failure: [CI Build - 7.16](https://buildkite.com/elastic/kibana-hourly/builds/2241#97d065ee-5ef9-4b80-bf0c-18447d4a8bbc) ",0,failing test jest tests x pack plugins canvas public routes workpad hooks useworkpad redirects on alias match a test failed on a tracked branch error timed out in waitfor after at waitfor opt local ssd buildkite builds kb elastic kibana hourly kibana node modules testing library react hooks lib core asyncutils js at object opt local ssd buildkite builds kb elastic kibana hourly kibana x pack plugins canvas public routes workpad hooks use workpad test tsx at callcircustest opt local ssd buildkite builds kb elastic kibana hourly kibana node modules jest circus build run js at runtest opt local ssd buildkite builds kb elastic kibana hourly kibana node modules jest circus build run js at runtestsfordescribeblock opt local ssd buildkite builds kb elastic kibana hourly kibana node modules jest circus build run js at runtestsfordescribeblock opt local ssd buildkite builds kb elastic kibana hourly kibana node modules jest circus build run js at run opt local ssd buildkite builds kb elastic kibana hourly kibana node modules jest circus build run js at runandtransformresultstojestformat opt local ssd buildkite builds kb elastic kibana hourly kibana node modules jest circus build legacy code todo rewrite jestadapterinit js at jestadapter opt local ssd buildkite builds kb 
elastic kibana hourly kibana node modules jest circus build legacy code todo rewrite jestadapter js at runtestinternal opt local ssd buildkite builds kb elastic kibana hourly kibana node modules jest runner build runtest js at runtest opt local ssd buildkite builds kb elastic kibana hourly kibana node modules jest runner build runtest js at object worker opt local ssd buildkite builds kb elastic kibana hourly kibana node modules jest runner build testworker js first failure ,0 268030,23340171739.0,IssuesEvent,2022-08-09 13:27:17,woocommerce/woocommerce-blocks,https://api.github.com/repos/woocommerce/woocommerce-blocks,reopened,[Flaky Test] should show only products that match the filter,type: flaky test," **Flaky test detected. This is an auto-generated issue by GitHub Actions. Please do NOT edit this manually.** ## Test title should show only products that match the filter ## Test path `tests/e2e/specs/shopper/filter-products-by-price.test.ts` ## Errors Test passed after 3 failed attempts on trunk.
Test passed after 1 failed attempt on poc/6838-create-patterns.
```
● Filter Products by Price Block › with All Products Block › should show only products that match the filter

TimeoutError: waiting for selector `.wc-block-grid__products:not(.is-loading-products)` failed: timeout 30000ms exceeded
  at new WaitTask (../../node_modules/puppeteer/src/common/DOMWorld.ts:876:28)
  at DOMWorld._waitForSelectorInPage (../../node_modules/puppeteer/src/common/DOMWorld.ts:710:22)
  at Object.internalHandler.waitFor (../../node_modules/puppeteer/src/common/QueryHandler.ts:83:23)
  at DOMWorld.waitForSelector (../../node_modules/puppeteer/src/common/DOMWorld.ts:551:32)
  at Frame.waitForSelector (../../node_modules/puppeteer/src/common/FrameManager.ts:1310:47)
  at Page.waitForSelector (../../node_modules/puppeteer/src/common/Page.ts:3303:35)
  at waitForAllProductsBlockLoaded (../../tests/e2e/utils.js:439:13)
  at Object.<anonymous> (../../tests/e2e/specs/shopper/filter-products-by-price.test.ts:100:10)
  at runMicrotasks (<anonymous>)
```
",1.0,"[Flaky Test] should show only products that match the filter - **Flaky test detected. This is an auto-generated issue by GitHub Actions. Please do NOT edit this manually.** ## Test title should show only products that match the filter ## Test path `tests/e2e/specs/shopper/filter-products-by-price.test.ts` ## Errors Test passed after 3 failed attempts on trunk.
Test passed after 1 failed attempt on poc/6838-create-patterns.
```
● Filter Products by Price Block › with All Products Block › should show only products that match the filter

TimeoutError: waiting for selector `.wc-block-grid__products:not(.is-loading-products)` failed: timeout 30000ms exceeded
  at new WaitTask (../../node_modules/puppeteer/src/common/DOMWorld.ts:876:28)
  at DOMWorld._waitForSelectorInPage (../../node_modules/puppeteer/src/common/DOMWorld.ts:710:22)
  at Object.internalHandler.waitFor (../../node_modules/puppeteer/src/common/QueryHandler.ts:83:23)
  at DOMWorld.waitForSelector (../../node_modules/puppeteer/src/common/DOMWorld.ts:551:32)
  at Frame.waitForSelector (../../node_modules/puppeteer/src/common/FrameManager.ts:1310:47)
  at Page.waitForSelector (../../node_modules/puppeteer/src/common/Page.ts:3303:35)
  at waitForAllProductsBlockLoaded (../../tests/e2e/utils.js:439:13)
  at Object.<anonymous> (../../tests/e2e/specs/shopper/filter-products-by-price.test.ts:100:10)
  at runMicrotasks (<anonymous>)
```
",0, should show only products that match the filter flaky test detected this is an auto generated issue by github actions please do not edit this manually test title should show only products that match the filter test path tests specs shopper filter products by price test ts errors test passed after failed attempts on test passed after failed attempt on a href ● filter products by price block › with all products block › should show only products that match the filter timeouterror waiting for selector wc block grid products not is loading products failed timeout exceeded at new waittask node modules puppeteer src common domworld ts at domworld waitforselectorinpage node modules puppeteer src common domworld ts at object internalhandler waitfor node modules puppeteer src common queryhandler ts at domworld waitforselector node modules puppeteer src common domworld ts at frame waitforselector node modules puppeteer src common framemanager ts at page waitforselector node modules puppeteer src common page ts at waitforallproductsblockloaded tests utils js at object tests specs shopper filter products by price test ts at runmicrotasks ,0 1368,9990555818.0,IssuesEvent,2019-07-11 09:05:18,mozilla-mobile/reference-browser,https://api.github.com/repos/mozilla-mobile/reference-browser,closed,As a developer I want the Reference Browser always using the latest version of GeckoView,:stop_sign: blocked 🤖 automation 🦎 GeckoView,Maybe @pocmo can attach a task list to this one?,1.0,As a developer I want the Reference Browser always using the latest version of GeckoView - Maybe @pocmo can attach a task list to this one?,1,as a developer i want the reference browser always using the latest version of geckoview maybe pocmo can attach a task list to this one ,1 4291,15977805297.0,IssuesEvent,2021-04-17 06:58:38,timsneath/win32,https://api.github.com/repos/timsneath/win32,closed,Automate struct_sizes.cpp creation,automation,"Use win32struct.json to generate this file, so we can be sure we're not missing any structs.",1.0,"Automate struct_sizes.cpp creation - Use win32struct.json to generate this file, so we can be sure we're not missing any structs.",1,automate struct sizes cpp creation use json to generate this file so we can be sure we re not missing any structs ,1 36796,8137211997.0,IssuesEvent,2018-08-20 10:58:04,scalecube/scalecube-services,https://api.github.com/repos/scalecube/scalecube-services,closed,Cluster test MembershipProtocolTest.testInitialPhaseOk fails on Travis build,cluster defect,"cluster test faild: https://travis-ci.org/scalecube/scalecube/builds/190739190 Results : Failed tests: MembershipProtocolTest.testInitialPhaseOk:48->assertTrusted:490 Expected 3 trusted members [172.17.0.3:4801, 172.17.0.3:4802, 172.17.0.3:4803], but actual: [172.17.0.3:4802, 172.17.0.3:4803] expected:<3> but was:<2> ",1.0,"Cluster test MembershipProtocolTest.testInitialPhaseOk fails on Travis build - cluster test faild: https://travis-ci.org/scalecube/scalecube/builds/190739190 Results : Failed tests: MembershipProtocolTest.testInitialPhaseOk:48->assertTrusted:490 Expected 3 trusted members [172.17.0.3:4801, 172.17.0.3:4802, 172.17.0.3:4803], but actual: [172.17.0.3:4802, 172.17.0.3:4803] expected:<3> but was:<2> ",0,cluster test membershipprotocoltest testinitialphaseok fails on travis build cluster test faild results failed tests membershipprotocoltest testinitialphaseok asserttrusted expected trusted members but actual expected but was ,0 41585,5344508811.0,IssuesEvent,2017-02-17 
14:43:20,SIB-Colombia/ipt,https://api.github.com/repos/SIB-Colombia/ipt,closed,Hide footer-bottom for non-sib IPT styles,design,"on footer-bottom is the sib logo and address but for the non-sib IPT styles (green, yellow...) this is not correct since it is meant to other institutions ",1.0,"Hide footer-bottom for non-sib IPT styles - on footer-bottom is the sib logo and address but for the non-sib IPT styles (green, yellow...) this is not correct since it is meant to other institutions ",0,hide footer bottom for non sib ipt styles on footer bottom is the sib logo and address but for the non sib ipt styles green yellow this is not correct since it is meant to other institutions ,0 8977,27295125972.0,IssuesEvent,2023-02-23 19:39:39,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,Cypress Test -Change Authorization scope from 'Outh2 Client Credential' to 'Kong API Key with ACL',automation aps-demo,"- [x] Check manually If 'Outh2 Client Credential' to 'Kong API Key with ACL' works correctly - [x] Prepare Automation test to change authorization scope 1. Change Authorization profile from Kong ACL-API to Client Credential 1.1 Authenticates api owner 1.2 Activates the namespace 1.3 Create an authorization profile 1.4 Deactivate the service for Test environment 1.5 Update the authorization scope from Kong ACL-API to Client Credential 1.6 applies authorization plugin to service published to Kong Gateway 1.7 activate the service for Test environment 2.Developer creates an access request for Client ID/Secret authenticator 2.1 Developer logs in 2.2 Creates an application 2.3 Creates an access request 3. Access manager approves developer access request for Client ID/Secret authenticator 3.1 Access Manager logs in 3.2 Access Manager approves developer access request 3.3 approves an access request 4. Make an API request using Client ID, Secret, and Access Token 4.1 Get access token using client ID and secret; make API request",1.0,"Cypress Test -Change Authorization scope from 'Outh2 Client Credential' to 'Kong API Key with ACL' - - [x] Check manually If 'Outh2 Client Credential' to 'Kong API Key with ACL' works correctly - [x] Prepare Automation test to change authorization scope 1. Change Authorization profile from Kong ACL-API to Client Credential 1.1 Authenticates api owner 1.2 Activates the namespace 1.3 Create an authorization profile 1.4 Deactivate the service for Test environment 1.5 Update the authorization scope from Kong ACL-API to Client Credential 1.6 applies authorization plugin to service published to Kong Gateway 1.7 activate the service for Test environment 2.Developer creates an access request for Client ID/Secret authenticator 2.1 Developer logs in 2.2 Creates an application 2.3 Creates an access request 3. Access manager approves developer access request for Client ID/Secret authenticator 3.1 Access Manager logs in 3.2 Access Manager approves developer access request 3.3 approves an access request 4. 
Make an API request using Client ID, Secret, and Access Token 4.1 Get access token using client ID and secret; make API request",1,cypress test change authorization scope from client credential to kong api key with acl check manually if client credential to kong api key with acl works correctly prepare automation test to change authorization scope change authorization profile from kong acl api to client credential authenticates api owner activates the namespace create an authorization profile deactivate the service for test environment update the authorization scope from kong acl api to client credential applies authorization plugin to service published to kong gateway activate the service for test environment developer creates an access request for client id secret authenticator developer logs in creates an application creates an access request access manager approves developer access request for client id secret authenticator access manager logs in access manager approves developer access request approves an access request make an api request using client id secret and access token get access token using client id and secret make api request,1 8606,27171981223.0,IssuesEvent,2023-02-17 20:20:53,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,OneDrive for Business - approot (special folder) naming,type:bug automation:Closed,"Hi all, We have an application that integrates seamlessly with OneDrive Personal accounts, and have started making changes to cater for OneDrive for Business accounts as well. Syncing works as expected however we are experiencing an issue with Business accounts where the first call to **** creates a folder using some sort of UUID sequence instead of the application name i.e. ""Apps/12345-6789-1234-abcd-efghjklmxyz"" instead of ""Apps/My App Name"". The same call works as expected with OneDrive Personal accounts. The URL generated for the request above is https://mydomain.sharepoint.com/_api/v2.0/me/drive/special/approot/children which looks correct. The app settings on ActiveDirectory seem correct i.e. the app name is ""My app name"". Any ideas of why this might be happening? Many thanks in advance. Rog ",1.0,"OneDrive for Business - approot (special folder) naming - Hi all, We have an application that integrates seamlessly with OneDrive Personal accounts, and have started making changes to cater for OneDrive for Business accounts as well. Syncing works as expected however we are experiencing an issue with Business accounts where the first call to **** creates a folder using some sort of UUID sequence instead of the application name i.e. ""Apps/12345-6789-1234-abcd-efghjklmxyz"" instead of ""Apps/My App Name"". The same call works as expected with OneDrive Personal accounts. The URL generated for the request above is https://mydomain.sharepoint.com/_api/v2.0/me/drive/special/approot/children which looks correct. The app settings on ActiveDirectory seem correct i.e. the app name is ""My app name"". Any ideas of why this might be happening? Many thanks in advance. 
Rog ",1,onedrive for business approot special folder naming hi all we have an application that integrates seamlessly with onedrive personal accounts and have started making changes to cater for onedrive for business accounts as well syncing works as expected however we are experiencing an issue with business accounts where the first call to creates a folder using some sort of uuid sequence instead of the application name i e apps abcd efghjklmxyz instead of apps my app name the same call works as expected with onedrive personal accounts the url generated for the request above is which looks correct the app settings on activedirectory seem correct i e the app name is my app name any ideas of why this might be happening many thanks in advance rog ,1 659,7735843897.0,IssuesEvent,2018-05-27 19:32:15,DoESLiverpool/somebody-should,https://api.github.com/repos/DoESLiverpool/somebody-should,closed,Remove LaserSensor page from the wiki,System: Automation System: Status Page,"There's this page: https://github.com/DoESLiverpool/wiki/wiki/LasersSensor which I believe was ported over from the original wiki, and is no longer operative due to the fact we now have automatic cooling systems and don't have to rely on dustbins full of ice-cubes and water. If this page gets deleted, it should also be removed from: https://github.com/DoESLiverpool/wiki/wiki/Equipment Contents for archiving (due to its quaint use of pachube/cosm/xively) are: > I've built a temperature and current monitoring system for the two laser cutters. The project is based on an Arduino board, two current sensors, one temperature sensor and an LCD display. Through an electronic circuit which the sensors are connected to, the Arduino board calculates the values of current and temperature (which is shown on the LCD) and eventually the data is sent to Xively. > The purposes of this project are basically two: > > 1 - Keeping the water cooling temperature within certain values, in particular under 35C, otherwise the laser cutters reliability might be seriously compromised. The highest efficiency is obtained with a temperature of 15C. > The display shows the temperature and changes backlit color due to temperature change. > Specifically when the temp is lower than 25C it remains blue, when it's between 25C and 35C it gradually fades from blue to red and when it's higher than 35C the red backlit flashes showing ""Alarm!"" on the LCD (just press the unique button on the circuit to silence the Alarm message). > > 2 - Remotely checking (via [""Xively feed 757339326""](https://xively.com/feeds/757339326)) the current values of both laser cutters , measured in Ampere. There are not particular indication about it, except for some current peaks measured by the Arduino input when the board is powered on/off. They are nothing to worry about and they don't affect the board safety at all. > Laser current values usually go from 0.10A when idle up to 6A when the big laser is running at maximum power. > > In order to work properly, the current sensor needs to be connected to the line wire (the brown one) which sticks out from the multiplug adaptor. Since we have just one adaptor with that characteristic, the two laser cutters current could be monitored separately. > > The board and the Arduino must be powered on with the dedicated USB cable and connected to the network via ethernet. Currently, the ethernet cable being used by the Arduino, was originally dedicated to the laser cutter computer, so an another should be used. 
(it must be long enough because the circuit is an inconvenient position). > > View the feed now at [""xively.com/feeds/757339326""](https://xively.com/feeds/757339326) > ",1.0,"Remove LaserSensor page from the wiki - There's this page: https://github.com/DoESLiverpool/wiki/wiki/LasersSensor which I believe was ported over from the original wiki, and is no longer operative due to the fact we now have automatic cooling systems and don't have to rely on dustbins full of ice-cubes and water. If this page gets deleted, it should also be removed from: https://github.com/DoESLiverpool/wiki/wiki/Equipment Contents for archiving (due to its quaint use of pachube/cosm/xively) are: > I've built a temperature and current monitoring system for the two laser cutters. The project is based on an Arduino board, two current sensors, one temperature sensor and an LCD display. Through an electronic circuit which the sensors are connected to, the Arduino board calculates the values of current and temperature (which is shown on the LCD) and eventually the data is sent to Xively. > The purposes of this project are basically two: > > 1 - Keeping the water cooling temperature within certain values, in particular under 35C, otherwise the laser cutters reliability might be seriously compromised. The highest efficiency is obtained with a temperature of 15C. > The display shows the temperature and changes backlit color due to temperature change. > Specifically when the temp is lower than 25C it remains blue, when it's between 25C and 35C it gradually fades from blue to red and when it's higher than 35C the red backlit flashes showing ""Alarm!"" on the LCD (just press the unique button on the circuit to silence the Alarm message). > > 2 - Remotely checking (via [""Xively feed 757339326""](https://xively.com/feeds/757339326)) the current values of both laser cutters , measured in Ampere. There are not particular indication about it, except for some current peaks measured by the Arduino input when the board is powered on/off. They are nothing to worry about and they don't affect the board safety at all. > Laser current values usually go from 0.10A when idle up to 6A when the big laser is running at maximum power. > > In order to work properly, the current sensor needs to be connected to the line wire (the brown one) which sticks out from the multiplug adaptor. Since we have just one adaptor with that characteristic, the two laser cutters current could be monitored separately. > > The board and the Arduino must be powered on with the dedicated USB cable and connected to the network via ethernet. Currently, the ethernet cable being used by the Arduino, was originally dedicated to the laser cutter computer, so an another should be used. (it must be long enough because the circuit is an inconvenient position). 
> > View the feed now at [""xively.com/feeds/757339326""](https://xively.com/feeds/757339326) > ",1,remove lasersensor page from the wiki there s this page which i believe was ported over from the original wiki and is no longer operative due to the fact we now have automatic cooling systems and don t have to rely on dustbins full of ice cubes and water if this page gets deleted it should also be removed from contents for archiving due to its quaint use of pachube cosm xively are i ve built a temperature and current monitoring system for the two laser cutters the project is based on an arduino board two current sensors one temperature sensor and an lcd display through an electronic circuit which the sensors are connected to the arduino board calculates the values of current and temperature which is shown on the lcd and eventually the data is sent to xively the purposes of this project are basically two keeping the water cooling temperature within certain values in particular under otherwise the laser cutters reliability might be seriously compromised the highest efficiency is obtained with a temperature of the display shows the temperature and changes backlit color due to temperature change specifically when the temp is lower than it remains blue when it s between and it gradually fades from blue to red and when it s higher than the red backlit flashes showing alarm on the lcd just press the unique button on the circuit to silence the alarm message remotely checking via the current values of both laser cutters measured in ampere there are not particular indication about it except for some current peaks measured by the arduino input when the board is powered on off they are nothing to worry about and they don t affect the board safety at all laser current values usually go from when idle up to when the big laser is running at maximum power in order to work properly the current sensor needs to be connected to the line wire the brown one which sticks out from the multiplug adaptor since we have just one adaptor with that characteristic the two laser cutters current could be monitored separately the board and the arduino must be powered on with the dedicated usb cable and connected to the network via ethernet currently the ethernet cable being used by the arduino was originally dedicated to the laser cutter computer so an another should be used it must be long enough because the circuit is an inconvenient position view the feed now at ,1 29438,14116048930.0,IssuesEvent,2020-11-08 00:29:29,matrix-org/synapse,https://api.github.com/repos/matrix-org/synapse,closed,Summary of performance impact of running on resource constrained devices such as SBCs,docs performance,"### Description I've been running my homeserver on a cubietruck at home now for some time and am often replying to statements like ""you need loads of ram to join large rooms"" with ""it works fine for me"". I thought it might be useful to curate a summary of the issues you're likely to run into to help as a scaling-down guide, maybe highlight these for development work or end up as documentation. ### Performance Issues #### Presence This is the main reason people have a poor matrix experience on resource constrained homeservers. Element web will frequently be saying the server is offline while the python process will be pegged at 100% cpu. 
This feature is used to tell when other users are active (have a client app in the foreground) and therefore more likely to respond, but requires a lot of network activity to maintain even when nobody is talking in a room. ![Screenshot_2020-10-01_19-29-46](https://user-images.githubusercontent.com/71895/94848963-a47a3580-041c-11eb-8b6e-acb772b4259e.png) While synapse does have some performance issues with presence #3971, the fundamental problem is that this is an easy feature to implement for a centralised service at nearly no overhead, but federation makes it combinatorial #8055. There is also a client-side config option which disables the UI and idle tracking [enable_presence_by_hs_url] to blacklist the largest instances but I didn't notice much difference, so I recommend disabling the feature entirely at the server level as well. [enable_presence_by_hs_url]: https://github.com/vector-im/element-web/blob/v1.7.8/config.sample.json#L45 #### Joining Joining a ""large"", federated room will initially fail with the below message in Element web, but waiting a while (10-60mins) and trying again will succeed without any issue. What counts as ""large"" is not message history, user count, connections to homeservers or even a simple count of the state events, it is instead how long the state resolution algorithm takes. However, each of those numbers are reasonable proxies, so we can use them as estimates since user count is one of the few things you see before joining. ![Screenshot_2020-10-02_17-15-06](https://user-images.githubusercontent.com/71895/94945781-18771500-04d3-11eb-8419-83c2da73a341.png) This is #1211 and will also hopefully be mitigated by peeking matrix-org/matrix-doc#2753 so at least you don't need to wait for a join to complete before finding out if it's the kind of room you want. Note that you should first disable presence, otherwise it'll just make the situation worse #3120. There is a lot of database interaction too, so make sure you've [migrated your data](https://github.com/matrix-org/synapse/blob/master/docs/postgres.md) from the default sqlite to postgresql. Personally, I recommend patience - once the initial join is complete there's rarely any issues with actually interacting with the room, but if you like you can just block ""large"" rooms entirely. #### Sessions Anything that requires modifying the device list #7721 will take a while to propagate, again taking the client ""Offline"" until it's complete. This includes signing in and out, editing the public name and verifying e2ee. The main mitigation I recommend is to keep long-running sessions open e.g. by using Firefox SSB ""Use this site in App mode"" or Chromium PWA ""Install Element"". ### Recommended configuration Put the below in a new file at /etc/matrix-synapse/conf.d/sbc.yaml to override the defaults in homeserver.yaml. ``` # Set to false to disable presence tracking on this homeserver. use_presence: false # When this is enabled, the room ""complexity"" will be checked before a user # joins a new remote room. If it is above the complexity limit, the server will # disallow joining, or will instantly leave. limit_remote_rooms: # Uncomment to enable room complexity checking. 
#enabled: true complexity: 3.0 # Database configuration database: name: psycopg2 args: user: matrix-synapse # Generate a long, secure one with a password manager password: hunter2 database: matrix-synapse host: localhost cp_min: 5 cp_max: 10 ``` Currently the complexity is measured by [current_state_events / 500](https://github.com/matrix-org/synapse/blob/v1.20.1/synapse/storage/databases/main/events_worker.py#L986). You can find join times and your most complex rooms like this: ``` admin@freedombox:~$ zgrep '/client/r0/join/' /var/log/matrix-synapse/homeserver.log* | awk '{print $18, $25}' | sort --human-numeric-sort 182.088sec/0.003sec /_matrix/client/r0/join/%23decentralizedweb-general%3Amatrix.org admin@freedombox:~$ sudo --user postgres psql matrix-synapse --command 'select canonical_alias, joined_members, current_state_events from room_stats_state natural join room_stats_current where canonical_alias is not null order by current_state_events desc fetch first 5 rows only' canonical_alias | joined_members | current_state_events -------------------------------+----------------+---------------------- #_oftc_#debian:matrix.org | 871 | 52355 #matrix:matrix.org | 6379 | 10684 #irc:matrix.org | 461 | 3751 #decentralizedweb-general:matrix.org | 997 | 1509 #whatsapp:maunium.net | 554 | 854 ``` ### Version information - **Homeserver**: freedombox.emorrp1.name - **Version**: 1.19.1 - **Install method**: debian buster-backports via [freedombox](https://freedombox.org/) with postgresql and ldap - **Platform**: 2x1GHz armhf 2GiB ram [Single Board Computers](https://wiki.debian.org/CheapServerBoxHardware), SSD. It seems that once you get up to about 4x1.5GHz arm64 4GiB these issues are no longer a problem.",True,"Summary of performance impact of running on resource constrained devices such as SBCs - ### Description I've been running my homeserver on a cubietruck at home now for some time and am often replying to statements like ""you need loads of ram to join large rooms"" with ""it works fine for me"". I thought it might be useful to curate a summary of the issues you're likely to run into to help as a scaling-down guide, maybe highlight these for development work or end up as documentation. ### Performance Issues #### Presence This is the main reason people have a poor matrix experience on resource constrained homeservers. Element web will frequently be saying the server is offline while the python process will be pegged at 100% cpu. This feature is used to tell when other users are active (have a client app in the foreground) and therefore more likely to respond, but requires a lot of network activity to maintain even when nobody is talking in a room. ![Screenshot_2020-10-01_19-29-46](https://user-images.githubusercontent.com/71895/94848963-a47a3580-041c-11eb-8b6e-acb772b4259e.png) While synapse does have some performance issues with presence #3971, the fundamental problem is that this is an easy feature to implement for a centralised service at nearly no overhead, but federation makes it combinatorial #8055. There is also a client-side config option which disables the UI and idle tracking [enable_presence_by_hs_url] to blacklist the largest instances but I didn't notice much difference, so I recommend disabling the feature entirely at the server level as well. 
[enable_presence_by_hs_url]: https://github.com/vector-im/element-web/blob/v1.7.8/config.sample.json#L45 #### Joining Joining a ""large"", federated room will initially fail with the below message in Element web, but waiting a while (10-60mins) and trying again will succeed without any issue. What counts as ""large"" is not message history, user count, connections to homeservers or even a simple count of the state events, it is instead how long the state resolution algorithm takes. However, each of those numbers are reasonable proxies, so we can use them as estimates since user count is one of the few things you see before joining. ![Screenshot_2020-10-02_17-15-06](https://user-images.githubusercontent.com/71895/94945781-18771500-04d3-11eb-8419-83c2da73a341.png) This is #1211 and will also hopefully be mitigated by peeking matrix-org/matrix-doc#2753 so at least you don't need to wait for a join to complete before finding out if it's the kind of room you want. Note that you should first disable presence, otherwise it'll just make the situation worse #3120. There is a lot of database interaction too, so make sure you've [migrated your data](https://github.com/matrix-org/synapse/blob/master/docs/postgres.md) from the default sqlite to postgresql. Personally, I recommend patience - once the initial join is complete there's rarely any issues with actually interacting with the room, but if you like you can just block ""large"" rooms entirely. #### Sessions Anything that requires modifying the device list #7721 will take a while to propagate, again taking the client ""Offline"" until it's complete. This includes signing in and out, editing the public name and verifying e2ee. The main mitigation I recommend is to keep long-running sessions open e.g. by using Firefox SSB ""Use this site in App mode"" or Chromium PWA ""Install Element"". ### Recommended configuration Put the below in a new file at /etc/matrix-synapse/conf.d/sbc.yaml to override the defaults in homeserver.yaml. ``` # Set to false to disable presence tracking on this homeserver. use_presence: false # When this is enabled, the room ""complexity"" will be checked before a user # joins a new remote room. If it is above the complexity limit, the server will # disallow joining, or will instantly leave. limit_remote_rooms: # Uncomment to enable room complexity checking. #enabled: true complexity: 3.0 # Database configuration database: name: psycopg2 args: user: matrix-synapse # Generate a long, secure one with a password manager password: hunter2 database: matrix-synapse host: localhost cp_min: 5 cp_max: 10 ``` Currently the complexity is measured by [current_state_events / 500](https://github.com/matrix-org/synapse/blob/v1.20.1/synapse/storage/databases/main/events_worker.py#L986). 
You can find join times and your most complex rooms like this: ``` admin@freedombox:~$ zgrep '/client/r0/join/' /var/log/matrix-synapse/homeserver.log* | awk '{print $18, $25}' | sort --human-numeric-sort 182.088sec/0.003sec /_matrix/client/r0/join/%23decentralizedweb-general%3Amatrix.org admin@freedombox:~$ sudo --user postgres psql matrix-synapse --command 'select canonical_alias, joined_members, current_state_events from room_stats_state natural join room_stats_current where canonical_alias is not null order by current_state_events desc fetch first 5 rows only' canonical_alias | joined_members | current_state_events -------------------------------+----------------+---------------------- #_oftc_#debian:matrix.org | 871 | 52355 #matrix:matrix.org | 6379 | 10684 #irc:matrix.org | 461 | 3751 #decentralizedweb-general:matrix.org | 997 | 1509 #whatsapp:maunium.net | 554 | 854 ``` ### Version information - **Homeserver**: freedombox.emorrp1.name - **Version**: 1.19.1 - **Install method**: debian buster-backports via [freedombox](https://freedombox.org/) with postgresql and ldap - **Platform**: 2x1GHz armhf 2GiB ram [Single Board Computers](https://wiki.debian.org/CheapServerBoxHardware), SSD. It seems that once you get up to about 4x1.5GHz arm64 4GiB these issues are no longer a problem.",0,summary of performance impact of running on resource constrained devices such as sbcs description i ve been running my homeserver on a cubietruck at home now for some time and am often replying to statements like you need loads of ram to join large rooms with it works fine for me i thought it might be useful to curate a summary of the issues you re likely to run into to help as a scaling down guide maybe highlight these for development work or end up as documentation performance issues presence this is the main reason people have a poor matrix experience on resource constrained homeservers element web will frequently be saying the server is offline while the python process will be pegged at cpu this feature is used to tell when other users are active have a client app in the foreground and therefore more likely to respond but requires a lot of network activity to maintain even when nobody is talking in a room while synapse does have some performance issues with presence the fundamental problem is that this is an easy feature to implement for a centralised service at nearly no overhead but federation makes it combinatorial there is also a client side config option which disables the ui and idle tracking to blacklist the largest instances but i didn t notice much difference so i recommend disabling the feature entirely at the server level as well joining joining a large federated room will initially fail with the below message in element web but waiting a while and trying again will succeed without any issue what counts as large is not message history user count connections to homeservers or even a simple count of the state events it is instead how long the state resolution algorithm takes however each of those numbers are reasonable proxies so we can use them as estimates since user count is one of the few things you see before joining this is and will also hopefully be mitigated by peeking matrix org matrix doc so at least you don t need to wait for a join to complete before finding out if it s the kind of room you want note that you should first disable presence otherwise it ll just make the situation worse there is a lot of database interaction too so make sure you ve from the default sqlite to postgresql 
personally i recommend patience once the initial join is complete there s rarely any issues with actually interacting with the room but if you like you can just block large rooms entirely sessions anything that requires modifying the device list will take a while to propagate again taking the client offline until it s complete this includes signing in and out editing the public name and verifying the main mitigation i recommend is to keep long running sessions open e g by using firefox ssb use this site in app mode or chromium pwa install element recommended configuration put the below in a new file at etc matrix synapse conf d sbc yaml to override the defaults in homeserver yaml set to false to disable presence tracking on this homeserver use presence false when this is enabled the room complexity will be checked before a user joins a new remote room if it is above the complexity limit the server will disallow joining or will instantly leave limit remote rooms uncomment to enable room complexity checking enabled true complexity database configuration database name args user matrix synapse generate a long secure one with a password manager password database matrix synapse host localhost cp min cp max currently the complexity is measured by you can find join times and your most complex rooms like this admin freedombox zgrep client join var log matrix synapse homeserver log awk print sort human numeric sort matrix client join general org admin freedombox sudo user postgres psql matrix synapse command select canonical alias joined members current state events from room stats state natural join room stats current where canonical alias is not null order by current state events desc fetch first rows only canonical alias joined members current state events oftc debian matrix org matrix matrix org irc matrix org decentralizedweb general matrix org whatsapp maunium net version information homeserver freedombox name version install method debian buster backports via with postgresql and ldap platform armhf ram ssd it seems that once you get up to about these issues are no longer a problem ,0 3234,13219463740.0,IssuesEvent,2020-08-17 10:31:37,carpentries/amy,https://api.github.com/repos/carpentries/amy,closed,Ask for slug Email Automation action,component: email automation type: new feature,Add `AskForSlugAction` to AMY according to documented values.,1.0,Ask for slug Email Automation action - Add `AskForSlugAction` to AMY according to documented values.,1,ask for slug email automation action add askforslugaction to amy according to documented values ,1 420015,28213744399.0,IssuesEvent,2023-04-05 07:22:10,RatInk/chat-csbe,https://api.github.com/repos/RatInk/chat-csbe,opened,Der CI/CD Prozess sind Grafisch dargestellt,documentation,Herr Michel möchte eine Ordentliche grafik in der Dokumentation haben welche die Continiues Intergration und die Continues Delivery / Deployment Pipelines darstellt. Hierbei handelt sich um ein konzept so ist es unrelevant wann man damit anfängt.,1.0,Der CI/CD Prozess sind Grafisch dargestellt - Herr Michel möchte eine Ordentliche grafik in der Dokumentation haben welche die Continiues Intergration und die Continues Delivery / Deployment Pipelines darstellt. 
Hierbei handelt sich um ein konzept so ist es unrelevant wann man damit anfängt.,0,der ci cd prozess sind grafisch dargestellt herr michel möchte eine ordentliche grafik in der dokumentation haben welche die continiues intergration und die continues delivery deployment pipelines darstellt hierbei handelt sich um ein konzept so ist es unrelevant wann man damit anfängt ,0 109028,9359244915.0,IssuesEvent,2019-04-02 06:11:05,kyma-project/test-infra,https://api.github.com/repos/kyma-project/test-infra,closed,Run periodic GKE integration tests with Knative enabled,area/ci quality/testability," **Description** **Reasons** **Attachments** ",1.0,"Run periodic GKE integration tests with Knative enabled - **Description** **Reasons** **Attachments** ",0,run periodic gke integration tests with knative enabled thank you for your contribution before you submit the issue search open and closed issues for duplicates read the contributing guidelines description reasons attachments ,0 33009,27143831312.0,IssuesEvent,2023-02-16 18:17:23,WordPress/performance,https://api.github.com/repos/WordPress/performance,opened,Implement admin pointer to indicate to the user they need to activate the new standalone plugins,Infrastructure Creating standalone plugins,"## Feature Description Follow-up to #652: For sites where the conditions in #652 apply, an admin pointer should also show up to ensure the user sees that they need to do something on the PL settings screen (specifically, install/activate the standalone plugins for the modules that they used to have active). ### Requirements * TODO. ",1.0,"Implement admin pointer to indicate to the user they need to activate the new standalone plugins - ## Feature Description Follow-up to #652: For sites where the conditions in #652 apply, an admin pointer should also show up to ensure the user sees that they need to do something on the PL settings screen (specifically, install/activate the standalone plugins for the modules that they used to have active). ### Requirements * TODO. 
",0,implement admin pointer to indicate to the user they need to activate the new standalone plugins feature description follow up to for sites where the conditions in apply an admin pointer should also show up to ensure the user sees that they need to do something on the pl settings screen specifically install activate the standalone plugins for the modules that they used to have active requirements todo ,0 7009,24118224457.0,IssuesEvent,2022-09-20 16:18:44,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,opened,Cypress Test - Verify the activity failure in UI if any activity is failed,automation,"- [ ] Find the scenarios to list an activity as failed status - [ ] Create Cypress Test to verify the failed activity in UI and API response",1.0,"Cypress Test - Verify the activity failure in UI if any activity is failed - - [ ] Find the scenarios to list an activity as failed status - [ ] Create Cypress Test to verify the failed activity in UI and API response",1,cypress test verify the activity failure in ui if any activity is failed find the scenarios to list an activity as failed status create cypress test to verify the failed activity in ui and api response,1 37187,8289347701.0,IssuesEvent,2018-09-19 14:28:34,cakephp/cakephp,https://api.github.com/repos/cakephp/cakephp,closed,table names with schema,Defect On hold,"This is a (multiple allowed): * [x] bug * [ ] enhancement * [x] feature-discussion (RFC) * CakePHP Version: 5.6.4 * Platform and Target: Linux SMP Debian 4.9, 10.1-MariaDB ### What you did When changing following line in function describe() ( Database/Schema/Collection.php:93 ): list($config['schema'], $name) = explode('.', $name); to list($config['schema'], $tblName) = explode('.', $name); table names with ""schema-dot-table"" notation will work as expected, p.ex: >>> $tbl = 'information_schema.TABLES'; >>> $tbls = \Cake\ORM\TableRegistry::get( $tbl, [ 'table' => $tbl ] ); >>> echo $tbls->find()->count(); so I wonder if overwriting variable $name in this place when setting schema name, is it a feature or bug? ",1.0,"table names with schema - This is a (multiple allowed): * [x] bug * [ ] enhancement * [x] feature-discussion (RFC) * CakePHP Version: 5.6.4 * Platform and Target: Linux SMP Debian 4.9, 10.1-MariaDB ### What you did When changing following line in function describe() ( Database/Schema/Collection.php:93 ): list($config['schema'], $name) = explode('.', $name); to list($config['schema'], $tblName) = explode('.', $name); table names with ""schema-dot-table"" notation will work as expected, p.ex: >>> $tbl = 'information_schema.TABLES'; >>> $tbls = \Cake\ORM\TableRegistry::get( $tbl, [ 'table' => $tbl ] ); >>> echo $tbls->find()->count(); so I wonder if overwriting variable $name in this place when setting schema name, is it a feature or bug? 
",0,table names with schema this is a multiple allowed bug enhancement feature discussion rfc cakephp version platform and target linux smp debian mariadb what you did when changing following line in function describe database schema collection php list config name explode name to list config tblname explode name table names with schema dot table notation will work as expected p ex tbl information schema tables tbls cake orm tableregistry get tbl echo tbls find count so i wonder if overwriting variable name in this place when setting schema name is it a feature or bug ,0 9803,30536039833.0,IssuesEvent,2023-07-19 17:25:34,rancher/qa-tasks,https://api.github.com/repos/rancher/qa-tasks,opened,Document branching strategy for automation PRs,[zube]: QA Next up area/automation-framework,"We need to document the strategy for automation PRs after we branch - Which issues/PRs should be backported to a previous release line? - Which branch should PRs default to? ",1.0,"Document branching strategy for automation PRs - We need to document the strategy for automation PRs after we branch - Which issues/PRs should be backported to a previous release line? - Which branch should PRs default to? ",1,document branching strategy for automation prs we need to document the strategy for automation prs after we branch which issues prs should be backported to a previous release line which branch should prs default to ,1 1401,10040438096.0,IssuesEvent,2019-07-18 19:57:17,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,closed,Include Firefox Sharp Sans through our build process,eng:automation,"As specified in #1307 we want to use *Firefox Sharp Sans* throughout the application. This font however is not something we can include in our open source project: > we can ship the font to end users as part of our application, but we cannot include it next to the publicly available fenix source code in this GitHub repository This ticket is about building something into our build automation that can fetch the font files from a locked down server and include them in the final build. Since our build process is mostly transparent and publicly visible, the process of including the fonts must be either hidden or not reproducible by a third party. ",1.0,"Include Firefox Sharp Sans through our build process - As specified in #1307 we want to use *Firefox Sharp Sans* throughout the application. This font however is not something we can include in our open source project: > we can ship the font to end users as part of our application, but we cannot include it next to the publicly available fenix source code in this GitHub repository This ticket is about building something into our build automation that can fetch the font files from a locked down server and include them in the final build. Since our build process is mostly transparent and publicly visible, the process of including the fonts must be either hidden or not reproducible by a third party. 
",1,include firefox sharp sans through our build process as specified in we want to use firefox sharp sans throughout the application this font however is not something we can include in our open source project we can ship the font to end users as part of our application but we cannot include it next to the publicly available fenix source code in this github repository this ticket is about building something into our build automation that can fetch the font files from a locked down server and include them in the final build since our build process is mostly transparent and publicly visible the process of including the fonts must be either hidden or not reproducible by a third party ,1 5291,19013502853.0,IssuesEvent,2021-11-23 11:57:23,Azure/PSRule.Rules.Azure,https://api.github.com/repos/Azure/PSRule.Rules.Azure,closed,Automation accounts should use managed identities for authentication,rule: automation-account,"# Rule request ## Suggested rule change As recently brought up in: https://azure.microsoft.com/en-us/updates/azure-automation-managed-identities-ga/, Managed identites for Automation accounts are GA and should be used. ## Applies to the following The rule applies to the following: - Resource type: **Microsoft.Automation/automationAccounts** ## Additional context [Template reference](https://docs.microsoft.com/en-us/azure/templates/microsoft.automation/automationaccounts?tabs=bicep)",1.0,"Automation accounts should use managed identities for authentication - # Rule request ## Suggested rule change As recently brought up in: https://azure.microsoft.com/en-us/updates/azure-automation-managed-identities-ga/, Managed identites for Automation accounts are GA and should be used. ## Applies to the following The rule applies to the following: - Resource type: **Microsoft.Automation/automationAccounts** ## Additional context [Template reference](https://docs.microsoft.com/en-us/azure/templates/microsoft.automation/automationaccounts?tabs=bicep)",1,automation accounts should use managed identities for authentication rule request suggested rule change as recently brought up in managed identites for automation accounts are ga and should be used applies to the following the rule applies to the following resource type microsoft automation automationaccounts additional context ,1 4987,18174209522.0,IssuesEvent,2021-09-27 23:48:12,hugsy/gef,https://api.github.com/repos/hugsy/gef,closed,CI Improvement,enhancement new feature automation," - [x] #730 - [x] ~~Changelog (https://github.com/marketplace/actions/generate-changelog)~~ (done locally with `scripts/new-release.py`) - [ ] #729",1.0,"CI Improvement - - [x] #730 - [x] ~~Changelog (https://github.com/marketplace/actions/generate-changelog)~~ (done locally with `scripts/new-release.py`) - [ ] #729",1,ci improvement changelog done locally with scripts new release py ,1 82166,23697052594.0,IssuesEvent,2022-08-29 15:27:59,lanl/BEE,https://api.github.com/repos/lanl/BEE,closed,build: Builder should allow flexible container archive directory,builder on-release,"Right now, builder assumes `bee_workdir` is the base for container archive tarballs. Since these containers can be >20GB, we would like to be able to control where the container archive exists separately from the bee_workdir. Suggested resolution is, on the first launch of `bee_interface.py`, check for config location of container_archive, if not exist, set up defuault and write to config. User can go back in and change later if desired. 
Note to self, don't forget documentation!",1.0,"build: Builder should allow flexible container archive directory - Right now, builder assumes `bee_workdir` is the base for container archive tarballs. Since these containers can be >20GB, we would like to be able to control where the container archive exists separately from the bee_workdir. Suggested resolution is, on the first launch of `bee_interface.py`, check for config location of container_archive, if not exist, set up defuault and write to config. User can go back in and change later if desired. Note to self, don't forget documentation!",0,build builder should allow flexible container archive directory right now builder assumes bee workdir is the base for container archive tarballs since these containers can be we would like to be able to control where the container archive exists separately from the bee workdir suggested resolution is on the first launch of bee interface py check for config location of container archive if not exist set up defuault and write to config user can go back in and change later if desired note to self don t forget documentation ,0 2237,11627540230.0,IssuesEvent,2020-02-27 16:41:55,nf-core/tools,https://api.github.com/repos/nf-core/tools,closed,Improve Branch Protection GitHub Actions,automation,"Instead of just a simple failure when PR are coming to `master`, I would like to have a comment posted saying something like: > Hi, > You recently made this PR into the `master` branch > It is not a patch, so it should be made into the `dev` branch > You can change that by clicking on EDIT and then changing the base branch to `dev` > Thanks again for your contribution cf #518",1.0,"Improve Branch Protection GitHub Actions - Instead of just a simple failure when PR are coming to `master`, I would like to have a comment posted saying something like: > Hi, > You recently made this PR into the `master` branch > It is not a patch, so it should be made into the `dev` branch > You can change that by clicking on EDIT and then changing the base branch to `dev` > Thanks again for your contribution cf #518",1,improve branch protection github actions instead of just a simple failure when pr are coming to master i would like to have a comment posted saying something like hi you recently made this pr into the master branch it is not a patch so it should be made into the dev branch you can change that by clicking on edit and then changing the base branch to dev thanks again for your contribution cf ,1 359856,10681796761.0,IssuesEvent,2019-10-22 02:26:11,zdnscloud/singlecloud,https://api.github.com/repos/zdnscloud/singlecloud,closed,"重启一个ceph存储节点后, ceph-osd会报错.",bug fixed master priority: Medium,"k logs ceph-osd-10.0.0.131-vdd-8288q -n zlcoud Error from server (NotFound): namespaces ""zlcoud"" not found d3$ k logs ceph-osd-10.0.0.131-vdd-8288q -n zcloud + set -e + [[ -z 2813a0cc-f080-11e9-ae21-52540053eeee ]] + [[ -z /dev/vdd ]] + [[ ! -e /dev/vdd ]] + [[ ! -e ceph ]] + CLUSTER=ceph + [[ 1 -eq 1 ]] + CEPH_DISK_CLI_OPTS+=(--bluestore) + [[ ! 
-f /var/lib/ceph/bootstrap-osd/ceph.keyring ]] + ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring exported keyring for client.bootstrap-osd + Osd_Args='--setuser ceph --setgroup ceph --default-log-to-file false --ms-learn-addr-from-peer=false' + osd_start ++ get_id ++ ceph-volume lvm list --format json +++ /etc/ceph/ceph_getid /tmp/lvm.json ++ ID=1 ++ echo 1 + id=1 + [[ -z 1 ]] ++ get_id ++ ceph-volume lvm list --format json +++ /etc/ceph/ceph_getid /tmp/lvm.json ++ ID=1 ++ echo 1 + id=1 + [[ -n 1 ]] + osd_activate 1 + ID=1 + ceph-osd --fsid 2813a0cc-f080-11e9-ae21-52540053eeee --setuser ceph --setgroup ceph --default-log-to-file false --ms-learn-addr-from-peer=false --id 1 --cluster ceph '--crush-location=root=default host=10.0.0.131' --foreground 2019-10-17 03:14:06.213 7fac7ecc5e00 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-1/keyring: (2) No such file or directory 2019-10-17 03:14:06.213 7fac7ecc5e00 -1 AuthRegistry(0x56191e5f2a38) no keyring found at /var/lib/ceph/osd/ceph-1/keyring, disabling cephx 2019-10-17 03:14:06.213 7fac7ecc5e00 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-1/keyring: (2) No such file or directory 2019-10-17 03:14:06.213 7fac7ecc5e00 -1 AuthRegistry(0x7fffc11dba38) no keyring found at /var/lib/ceph/osd/ceph-1/keyring, disabling cephx failed to fetch mon config (--no-mon-config to skip)",1.0,"重启一个ceph存储节点后, ceph-osd会报错. - k logs ceph-osd-10.0.0.131-vdd-8288q -n zlcoud Error from server (NotFound): namespaces ""zlcoud"" not found d3$ k logs ceph-osd-10.0.0.131-vdd-8288q -n zcloud + set -e + [[ -z 2813a0cc-f080-11e9-ae21-52540053eeee ]] + [[ -z /dev/vdd ]] + [[ ! -e /dev/vdd ]] + [[ ! -e ceph ]] + CLUSTER=ceph + [[ 1 -eq 1 ]] + CEPH_DISK_CLI_OPTS+=(--bluestore) + [[ ! 
-f /var/lib/ceph/bootstrap-osd/ceph.keyring ]] + ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring exported keyring for client.bootstrap-osd + Osd_Args='--setuser ceph --setgroup ceph --default-log-to-file false --ms-learn-addr-from-peer=false' + osd_start ++ get_id ++ ceph-volume lvm list --format json +++ /etc/ceph/ceph_getid /tmp/lvm.json ++ ID=1 ++ echo 1 + id=1 + [[ -z 1 ]] ++ get_id ++ ceph-volume lvm list --format json +++ /etc/ceph/ceph_getid /tmp/lvm.json ++ ID=1 ++ echo 1 + id=1 + [[ -n 1 ]] + osd_activate 1 + ID=1 + ceph-osd --fsid 2813a0cc-f080-11e9-ae21-52540053eeee --setuser ceph --setgroup ceph --default-log-to-file false --ms-learn-addr-from-peer=false --id 1 --cluster ceph '--crush-location=root=default host=10.0.0.131' --foreground 2019-10-17 03:14:06.213 7fac7ecc5e00 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-1/keyring: (2) No such file or directory 2019-10-17 03:14:06.213 7fac7ecc5e00 -1 AuthRegistry(0x56191e5f2a38) no keyring found at /var/lib/ceph/osd/ceph-1/keyring, disabling cephx 2019-10-17 03:14:06.213 7fac7ecc5e00 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-1/keyring: (2) No such file or directory 2019-10-17 03:14:06.213 7fac7ecc5e00 -1 AuthRegistry(0x7fffc11dba38) no keyring found at /var/lib/ceph/osd/ceph-1/keyring, disabling cephx failed to fetch mon config (--no-mon-config to skip)",0,重启一个ceph存储节点后 ceph osd会报错 k logs ceph osd vdd n zlcoud error from server notfound namespaces zlcoud not found k logs ceph osd vdd n zcloud set e cluster ceph ceph disk cli opts bluestore ceph auth get client bootstrap osd o var lib ceph bootstrap osd ceph keyring exported keyring for client bootstrap osd osd args setuser ceph setgroup ceph default log to file false ms learn addr from peer false osd start get id ceph volume lvm list format json etc ceph ceph getid tmp lvm json id echo id get id ceph volume lvm list format json etc ceph ceph getid tmp lvm json id echo id osd activate id ceph osd fsid setuser ceph setgroup ceph default log to file false ms learn addr from peer false id cluster ceph crush location root default host foreground auth unable to find a keyring on var lib ceph osd ceph keyring no such file or directory authregistry no keyring found at var lib ceph osd ceph keyring disabling cephx auth unable to find a keyring on var lib ceph osd ceph keyring no such file or directory authregistry no keyring found at var lib ceph osd ceph keyring disabling cephx failed to fetch mon config no mon config to skip ,0 9414,28241791880.0,IssuesEvent,2023-04-06 07:47:19,Azure/azure-sdk-tools,https://api.github.com/repos/Azure/azure-sdk-tools,closed,Support typespec naming in SDK generation pipeline rather than cadl,SDK Automation,typespec core libraries completed renaming and some emitters are done too. It needs to support that in SDK generation pipeline.,1.0,Support typespec naming in SDK generation pipeline rather than cadl - typespec core libraries completed renaming and some emitters are done too. 
It needs to support that in SDK generation pipeline.,1,support typespec naming in sdk generation pipeline rather than cadl typespec core libraries completed renaming and some emitters are done too it needs to support that in sdk generation pipeline ,1 225142,24814596820.0,IssuesEvent,2022-10-25 12:13:15,Baneeishaque/Android-Common-Utils16,https://api.github.com/repos/Baneeishaque/Android-Common-Utils16,closed,CVE-2021-35517 (High) detected in commons-compress-1.20.jar - autoclosed,security vulnerability,"## CVE-2021-35517 - High Severity Vulnerability
Vulnerable Library - commons-compress-1.20.jar

Apache Commons Compress software defines an API for working with compression and archive formats. These include: bzip2, gzip, pack200, lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4, Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.

Library home page: https://commons.apache.org/proper/commons-compress/

Path to dependency file: /tests16/build.gradle

Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.20/b8df472b31e1f17c232d2ad78ceb1c84e00c641b/commons-compress-1.20.jar

Dependency Hierarchy:
- lint-gradle-30.0.3.jar (Root Library)
  - sdklib-30.0.3.jar
    - :x: **commons-compress-1.20.jar** (Vulnerable Library)

Found in HEAD commit: 13e2d25e25b49ea823e48b46e94b7f19c267ce01

Found in base branch: master

Vulnerability Details

When reading a specially crafted TAR archive, Compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs. This could be used to mount a denial of service attack against services that use Compress' tar package.
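The affected code is the Java commons-compress library, but the defensive pattern generalises; below is a hedged Python sketch of one mitigation, bounding the sizes an untrusted TAR archive may consume, with caps chosen arbitrarily for illustration:

```python
# Sketch of a size-budget guard for untrusted TAR input. The caps are
# illustrative assumptions, not values taken from the advisory, and for
# compressed streams you would additionally cap the bytes actually read.
import tarfile

MAX_ENTRY_BYTES = 50 * 1024 * 1024    # per-member budget
MAX_TOTAL_BYTES = 200 * 1024 * 1024   # whole-archive budget

def checked_members(path: str):
    total = 0
    with tarfile.open(path, mode="r:*") as tf:
        for member in tf:
            if member.size > MAX_ENTRY_BYTES:
                raise ValueError(f"entry too large: {member.name}")
            total += member.size
            if total > MAX_TOTAL_BYTES:
                raise ValueError("archive exceeds total size budget")
            yield member.name, member.size
```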

Publish Date: 2021-07-13

URL: CVE-2021-35517

CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://commons.apache.org/proper/commons-compress/security-reports.html

Release Date: 2021-07-13

Fix Resolution: org.apache.commons:commons-compress:1.21
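A small stdlib Python helper can locate cached copies that still need this upgrade; the Gradle cache path mirrors the one reported above and is an assumption, not a guaranteed layout:

```python
# Walk a Gradle cache and flag commons-compress jars below the fixed 1.21.
import os
import re

FIXED = (1, 21)
JAR = re.compile(r"commons-compress-(\d+)\.(\d+)(?:\.\d+)?\.jar$")

def vulnerable_jars(cache_root: str):
    for dirpath, _dirs, files in os.walk(cache_root):
        for fname in files:
            m = JAR.match(fname)
            if m and (int(m.group(1)), int(m.group(2))) < FIXED:
                yield os.path.join(dirpath, fname)

if __name__ == "__main__":
    for jar in vulnerable_jars(os.path.expanduser("~/.gradle/caches")):
        print("upgrade to 1.21+:", jar)
```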

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2021-35517 (High) detected in commons-compress-1.20.jar - autoclosed - ## CVE-2021-35517 - High Severity Vulnerability
Vulnerable Library - commons-compress-1.20.jar

Apache Commons Compress software defines an API for working with compression and archive formats. These include: bzip2, gzip, pack200, lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4, Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.

Library home page: https://commons.apache.org/proper/commons-compress/

Path to dependency file: /tests16/build.gradle

Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.20/b8df472b31e1f17c232d2ad78ceb1c84e00c641b/commons-compress-1.20.jar

Dependency Hierarchy:
- lint-gradle-30.0.3.jar (Root Library)
  - sdklib-30.0.3.jar
    - :x: **commons-compress-1.20.jar** (Vulnerable Library)

Found in HEAD commit: 13e2d25e25b49ea823e48b46e94b7f19c267ce01

Found in base branch: master

Vulnerability Details

When reading a specially crafted TAR archive, Compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs. This could be used to mount a denial of service attack against services that use Compress' tar package.

Publish Date: 2021-07-13

URL: CVE-2021-35517

CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://commons.apache.org/proper/commons-compress/security-reports.html

Release Date: 2021-07-13

Fix Resolution: org.apache.commons:commons-compress:1.21

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in commons compress jar autoclosed cve high severity vulnerability vulnerable library commons compress jar apache commons compress software defines an api for working with compression and archive formats these include gzip lzma xz snappy traditional unix compress deflate brotli zstandard and ar cpio jar tar zip dump arj library home page a href path to dependency file build gradle path to vulnerable library home wss scanner gradle caches modules files org apache commons commons compress commons compress jar home wss scanner gradle caches modules files org apache commons commons compress commons compress jar dependency hierarchy lint gradle jar root library sdklib jar x commons compress jar vulnerable library found in head commit a href found in base branch master vulnerability details when reading a specially crafted tar archive compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs this could be used to mount a denial of service attack against services that use compress tar package publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache commons commons compress step up your open source security game with mend ,0 10197,31882454979.0,IssuesEvent,2023-09-16 14:41:47,xtermjs/xterm.js,https://api.github.com/repos/xtermjs/xterm.js,closed,Website publish automation isn't working,type/automation,"https://dev.azure.com/xtermjs/xterm.js/_build/results?buildId=2809&view=logs&j=8d802004-fbbb-5f17-b73e-f23de0c1dec8&t=7a29959a-6c04-5e83-f8f0-319180787bbf ``` error: src refspec update-4.2.0 does not match any error: failed to push some refs to 'https://github.com/xtermjs/xtermjs.org' fatal: The current branch update-4.3.0 has no upstream branch. To push the current branch and set the remote as upstream, use git push --set-upstream origin update-4.3.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 173 100 93 100 80 642 552 --:--:-- --:--:-- --:--:-- 645 { ""message"": ""Bad credentials"", ""documentation_url"": ""https://developer.github.com/v3"" } ```",1.0,"Website publish automation isn't working - https://dev.azure.com/xtermjs/xterm.js/_build/results?buildId=2809&view=logs&j=8d802004-fbbb-5f17-b73e-f23de0c1dec8&t=7a29959a-6c04-5e83-f8f0-319180787bbf ``` error: src refspec update-4.2.0 does not match any error: failed to push some refs to 'https://github.com/xtermjs/xtermjs.org' fatal: The current branch update-4.3.0 has no upstream branch. 
To push the current branch and set the remote as upstream, use git push --set-upstream origin update-4.3.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 173 100 93 100 80 642 552 --:--:-- --:--:-- --:--:-- 645 { ""message"": ""Bad credentials"", ""documentation_url"": ""https://developer.github.com/v3"" } ```",1,website publish automation isn t working error src refspec update does not match any error failed to push some refs to fatal the current branch update has no upstream branch to push the current branch and set the remote as upstream use git push set upstream origin update total received xferd average speed time time time current dload upload total spent left speed message bad credentials documentation url ,1 656,7719494964.0,IssuesEvent,2018-05-23 19:36:42,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,"publish script is missing a """,automation cxp doc-bug triaged,"$automationAccountName = ""AutomationAccount"" #here in front of the AutomationAccount :) $runbookName = ""Sample_TestRunbook"" $RGName = ""ResourceGroup"" Publish-AzureRmAutomationRunbook -AutomationAccountName $automationAccountName ` -Name $runbookName -ResourceGroupName $RGName --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 30d56a39-fe02-4960-eba8-f4ac82bae9d3 * Version Independent ID: 2d726714-d427-38e9-14f4-3596de958db7 * Content: [Creating or importing a runbook in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-creating-importing-runbook) * Content Source: [articles/automation/automation-creating-importing-runbook.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-creating-importing-runbook.md) * Service: **automation** * Product: **unspecified** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1.0,"publish script is missing a "" - $automationAccountName = ""AutomationAccount"" #here in front of the AutomationAccount :) $runbookName = ""Sample_TestRunbook"" $RGName = ""ResourceGroup"" Publish-AzureRmAutomationRunbook -AutomationAccountName $automationAccountName ` -Name $runbookName -ResourceGroupName $RGName --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 30d56a39-fe02-4960-eba8-f4ac82bae9d3 * Version Independent ID: 2d726714-d427-38e9-14f4-3596de958db7 * Content: [Creating or importing a runbook in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-creating-importing-runbook) * Content Source: [articles/automation/automation-creating-importing-runbook.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-creating-importing-runbook.md) * Service: **automation** * Product: **unspecified** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1,publish script is missing a automationaccountname automationaccount here in front of the automationaccount runbookname sample testrunbook rgname resourcegroup publish azurermautomationrunbook automationaccountname automationaccountname name runbookname resourcegroupname rgname document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation product unspecified github login georgewallace microsoft alias gwallace ,1 823868,31071514233.0,IssuesEvent,2023-08-12 01:45:49,ryonakano/reco,https://api.github.com/repos/ryonakano/reco,closed,Add recording indication,Priority: Medium,"Originally from https://github.com/ryonakano/reco/issues/175#issuecomment-1280885151: > If I can just pick your brain for a quick feature request, it would be nice to have some sort of indication of signal, maybe just a little red light when there is speech, something... because I just made a recording without my microphone plugged in and didn't even notice until I saw the file size :)",1.0,"Add recording indication - Originally from https://github.com/ryonakano/reco/issues/175#issuecomment-1280885151: > If I can just pick your brain for a quick feature request, it would be nice to have some sort of indication of signal, maybe just a little red light when there is speech, something... because I just made a recording without my microphone plugged in and didn't even notice until I saw the file size :)",0,add recording indication originally from if i can just pick your brain for a quick feature request it would be nice to have some sort of indication of signal maybe just a little red light when there is speech something because i just made a recording without my microphone plugged in and didn t even notice until i saw the file size ,0 1989,11219762727.0,IssuesEvent,2020-01-07 14:32:56,wazuh/wazuh-qa,https://api.github.com/repos/wazuh/wazuh-qa,closed,Syscheck automated tests: Check syscheck alert for moving a folder with a file in it,automation component/fim,"## Description Update `test_basic_usage` to implement a simple test that will move a folder with files inside and check if syscheck generates alerts. During the test a new folder within the monitored one will be created and several files will be created inside. Then, that new folder will be moved to another location and the corresponding events (`added` and `deleted`) must be detected. The following cases must be checked: 1. Move the subdirectory to another monitored directory (for example: from `testdir1/subdir`r to `/testdir2`) 2. Move the subdirectory to a non-monitored directory (for example: from `testdir1/subdir` to `/`) 3. 
Move a subdirectory of a non-monitored directory to a monitored one (for example: from `/subdir` to `/testdir1`) ## Expected output The files in the folder must generate `deleted` events when moving outside of a monitored directory and `added` events in the new directory if this one is also being monitored. For example: when moving `/testdir1/subdir` to `/testdir2/subdir` a `deleted` event for each file inside `/testdir1/subdir` must be detected on `/testdir1` while `added` events should be detected on `/testdir2/subdir`. ## Subtasks **Unix** - [x] Create a test that moves a subdirectory with files inside in a monitored path to another monitored folder (for example: from `testdir1/subdir` to `/testdir2`) - [x] Create a test that moves the subdirectory to a non-monitored path (for example: from `testdir1/subdir` to `/`) - [x] Create a test that moves a subdirectory of a non-monitored path to a monitored one (for example: from `/subdir` to `/testdir1`) **Windows** - [x] Create a test that moves a subdirectory with files inside in a monitored path to another monitored folder (for example: from `testdir1/subdir` to `/testdir2`) - [x] Create a test that moves the subdirectory to a non-monitored path (for example: from `testdir1/subdir` to `/`) - [x] Create a test that moves a subdirectory of a non-monitored path to a monitored one (for example: from `/subdir` to `/testdir1`) **MacOS** - [x] Create a test that moves a subdirectory with files inside in a monitored path to another monitored folder (for example: from `testdir1/subdir` to `/testdir2`) - [x] Create a test that moves the subdirectory to a non-monitored path (for example: from `testdir1/subdir` to `/`) - [x] Create a test that moves a subdirectory of a non-monitored path to a monitored one (for example: from `/subdir` to `/testdir1`) ",1.0,"Syscheck automated tests: Check syscheck alert for moving a folder with a file in it - ## Description Update `test_basic_usage` to implement a simple test that will move a folder with files inside and check if syscheck generates alerts. During the test a new folder within the monitored one will be created and several files will be created inside. Then, that new folder will be moved to another location and the corresponding events (`added` and `deleted`) must be detected. The following cases must be checked: 1. Move the subdirectory to another monitored directory (for example: from `testdir1/subdir`r to `/testdir2`) 2. Move the subdirectory to a non-monitored directory (for example: from `testdir1/subdir` to `/`) 3. Move a subdirectory of a non-monitored directory to a monitored one (for example: from `/subdir` to `/testdir1`) ## Expected output The files in the folder must generate `deleted` events when moving outside of a monitored directory and `added` events in the new directory if this one is also being monitored. For example: when moving `/testdir1/subdir` to `/testdir2/subdir` a `deleted` event for each file inside `/testdir1/subdir` must be detected on `/testdir1` while `added` events should be detected on `/testdir2/subdir`. 
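A pytest-style sketch of case 1 follows; `start_monitor` and `collect_fim_events` are hypothetical stand-ins for the test harness, not the actual wazuh-qa API:

```python
# Move a monitored subdirectory with files into another monitored directory
# and (with a real FIM harness) expect 'deleted' + 'added' events per file.
import shutil
from pathlib import Path

def test_move_monitored_subdir(tmp_path: Path):
    src = tmp_path / "testdir1" / "subdir"
    dst = tmp_path / "testdir2"
    src.mkdir(parents=True)
    dst.mkdir()
    files = [src / f"regular{i}" for i in range(3)]
    for f in files:
        f.write_text("content")

    # start_monitor([tmp_path / "testdir1", tmp_path / "testdir2"])  # hypothetical
    shutil.move(str(src), str(dst / "subdir"))

    # events = collect_fim_events(timeout=10)                        # hypothetical
    # for f in files:
    #     assert ("deleted", str(f)) in events
    #     assert ("added", str(dst / "subdir" / f.name)) in events
```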
## Subtasks **Unix** - [x] Create a test that moves a subdirectory with files inside in a monitored path to another monitored folder (for example: from `testdir1/subdir` to `/testdir2`) - [x] Create a test that moves the subdirectory to a non-monitored path (for example: from `testdir1/subdir` to `/`) - [x] Create a test that moves a subdirectory of a non-monitored path to a monitored one (for example: from `/subdir` to `/testdir1`) **Windows** - [x] Create a test that moves a subdirectory with files inside in a monitored path to another monitored folder (for example: from `testdir1/subdir` to `/testdir2`) - [x] Create a test that moves the subdirectory to a non-monitored path (for example: from `testdir1/subdir` to `/`) - [x] Create a test that moves a subdirectory of a non-monitored path to a monitored one (for example: from `/subdir` to `/testdir1`) **MacOS** - [x] Create a test that moves a subdirectory with files inside in a monitored path to another monitored folder (for example: from `testdir1/subdir` to `/testdir2`) - [x] Create a test that moves the subdirectory to a non-monitored path (for example: from `testdir1/subdir` to `/`) - [x] Create a test that moves a subdirectory of a non-monitored path to a monitored one (for example: from `/subdir` to `/testdir1`) ",1,syscheck automated tests check syscheck alert for moving a folder with a file in it description update test basic usage to implement a simple test that will move a folder with files inside and check if syscheck generates alerts during the test a new folder within the monitored one will be created and several files will be created inside then that new folder will be moved to another location and the corresponding events added and deleted must be detected the following cases must be checked move the subdirectory to another monitored directory for example from subdir r to move the subdirectory to a non monitored directory for example from subdir to move a subdirectory of a non monitored directory to a monitored one for example from subdir to expected output the files in the folder must generate deleted events when moving outside of a monitored directory and added events in the new directory if this one is also being monitored for example when moving subdir to subdir a deleted event for each file inside subdir must be detected on while added events should be detected on subdir subtasks unix create a test that moves a subdirectory with files inside in a monitored path to another monitored folder for example from subdir to create a test that moves the subdirectory to a non monitored path for example from subdir to create a test that moves a subdirectory of a non monitored path to a monitored one for example from subdir to windows create a test that moves a subdirectory with files inside in a monitored path to another monitored folder for example from subdir to create a test that moves the subdirectory to a non monitored path for example from subdir to create a test that moves a subdirectory of a non monitored path to a monitored one for example from subdir to macos create a test that moves a subdirectory with files inside in a monitored path to another monitored folder for example from subdir to create a test that moves the subdirectory to a non monitored path for example from subdir to create a test that moves a subdirectory of a non monitored path to a monitored one for example from subdir to ,1 10749,7300701735.0,IssuesEvent,2018-02-27 01:01:34,dotnet/coreclr,https://api.github.com/repos/dotnet/coreclr,opened,[Perf] 
Investigate LinqBenchmarks Regression between release/2.0.0 and release/2.1,area-Benchmarks tenet-performance,Investigate the full suite of Linq benchmarks as we saw regressions across many tests as large as 5%,True,[Perf] Investigate LinqBenchmarks Regression between release/2.0.0 and release/2.1 - Investigate the full suite of Linq benchmarks as we saw regressions across many tests as large as 5%,0, investigate linqbenchmarks regression between release and release investigate the full suite of linq benchmarks as we saw regressions across many tests as large as ,0 167826,6347365253.0,IssuesEvent,2017-07-28 06:46:13,BinPar/PRM,https://api.github.com/repos/BinPar/PRM,closed,PRM UNI PRO: PROBLEMAS A LA HORA DE ESCRIBIR COMENTARIOS,Priority: High,"Noemí reporta errores a la hora de escribir un comentario: _""Ahora cuando intento escribir un comentario en algún título dentro de la ficha del profesor, el comentario no se guarda al darle a enter. Se me queda con el cursor dentro del comentario y para poder salir de la casilla de comentarios tengo que darle al + como para crear un nuevo comentario.""_ ¿Está relacionado con la posibilidad de usar RETURN para crear una nueva línea dentro del comentario? @CristianBinpar @minigoBinpar @franciscorrr ",1.0,"PRM UNI PRO: PROBLEMAS A LA HORA DE ESCRIBIR COMENTARIOS - Noemí reporta errores a la hora de escribir un comentario: _""Ahora cuando intento escribir un comentario en algún título dentro de la ficha del profesor, el comentario no se guarda al darle a enter. Se me queda con el cursor dentro del comentario y para poder salir de la casilla de comentarios tengo que darle al + como para crear un nuevo comentario.""_ ¿Está relacionado con la posibilidad de usar RETURN para crear una nueva línea dentro del comentario? @CristianBinpar @minigoBinpar @franciscorrr ",0,prm uni pro problemas a la hora de escribir comentarios noemí reporta errores a la hora de escribir un comentario ahora cuando intento escribir un comentario en algún título dentro de la ficha del profesor el comentario no se guarda al darle a enter se me queda con el cursor dentro del comentario y para poder salir de la casilla de comentarios tengo que darle al como para crear un nuevo comentario ¿está relacionado con la posibilidad de usar return para crear una nueva línea dentro del comentario cristianbinpar minigobinpar franciscorrr ,0 4863,17840263068.0,IssuesEvent,2021-09-03 09:09:26,pypa/pip,https://api.github.com/repos/pypa/pip,closed,pip and Azure Pipelines,C: automation S: needs discussion type: maintenance,"/cc @pypa/pip-committers @brcrista #5785 has become a little long. Plus, it feels weird to me to have a discussion about general things on a PR. :) ",1.0,"pip and Azure Pipelines - /cc @pypa/pip-committers @brcrista #5785 has become a little long. Plus, it feels weird to me to have a discussion about general things on a PR. :) ",1,pip and azure pipelines cc pypa pip committers brcrista has become a little long plus it feels weird to me to have a discussion about general things on a pr ,1 8663,27172056220.0,IssuesEvent,2023-02-17 20:24:58,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,"Cannot create permissions with ""owner"" role.",automation:Closed,"I am unable to create permissions with ""owner"" role. I am using one drive 2.0 [invite](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/api/driveitem_invite) API to create permissions. 
As specified [here](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/resources/permission#roles-enumeration), ""sp.owner"" role can be specified while creating permissions. But API errors out with following error: {""error"":{""code"":""invalidRequest"",""message"":""Invalid value for role""}} Request: Url: /_api/v2.0/drive/items/{item_id}/invite Body: { ""recipients"": [ { ""email"": ""email@kdk.com"", } ], ""message"": ""Here's the file that we're collaborating on."", ""requireSignIn"": True, ""sendInvitation"": False, ""roles"": [""sp.owner""], } Method: Post Response: {""error"":{""code"":""invalidRequest"",""message"":""Invalid value for role""}} For other roles read and write its working fine. We are using client credentials workflow with full permissions given to our app.",1.0,"Cannot create permissions with ""owner"" role. - I am unable to create permissions with ""owner"" role. I am using one drive 2.0 [invite](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/api/driveitem_invite) API to create permissions. As specified [here](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/resources/permission#roles-enumeration), ""sp.owner"" role can be specified while creating permissions. But API errors out with following error: {""error"":{""code"":""invalidRequest"",""message"":""Invalid value for role""}} Request: Url: /_api/v2.0/drive/items/{item_id}/invite Body: { ""recipients"": [ { ""email"": ""email@kdk.com"", } ], ""message"": ""Here's the file that we're collaborating on."", ""requireSignIn"": True, ""sendInvitation"": False, ""roles"": [""sp.owner""], } Method: Post Response: {""error"":{""code"":""invalidRequest"",""message"":""Invalid value for role""}} For other roles read and write its working fine. We are using client credentials workflow with full permissions given to our app.",1,cannot create permissions with owner role i am unable to create permissions with owner role i am using one drive api to create permissions as specified sp owner role can be specified while creating permissions but api errors out with following error error code invalidrequest message invalid value for role request url api drive items item id invite body recipients email email kdk com message here s the file that we re collaborating on requiresignin true sendinvitation false roles method post response error code invalidrequest message invalid value for role for other roles read and write its working fine we are using client credentials workflow with full permissions given to our app ,1 88687,25487594314.0,IssuesEvent,2022-11-26 16:14:42,DynamoRIO/dynamorio,https://api.github.com/repos/DynamoRIO/dynamorio,opened,GA Windows package and CI builds broken by zlib errors,Component-Build OpSys-Windows,"The weekly package build is broken and also the CI builds apparently by a GS Windows image update which enabled zlib which was not there before. Xref #5766 The comment here has some details and analysis: https://github.com/DynamoRIO/dynamorio/pull/5766#issuecomment-1328029559 The broken package build from this week: https://github.com/DynamoRIO/dynamorio/actions/runs/3551887463/jobs/5966427587 Dr. Memory's package build is also broken: https://github.com/DynamoRIO/drmemory/actions/runs/3551911682/jobs/5966473881 ",1.0,"GA Windows package and CI builds broken by zlib errors - The weekly package build is broken and also the CI builds apparently by a GS Windows image update which enabled zlib which was not there before. 
Xref #5766 The comment here has some details and analysis: https://github.com/DynamoRIO/dynamorio/pull/5766#issuecomment-1328029559 The broken package build from this week: https://github.com/DynamoRIO/dynamorio/actions/runs/3551887463/jobs/5966427587 Dr. Memory's package build is also broken: https://github.com/DynamoRIO/drmemory/actions/runs/3551911682/jobs/5966473881 ",0,ga windows package and ci builds broken by zlib errors the weekly package build is broken and also the ci builds apparently by a gs windows image update which enabled zlib which was not there before xref the comment here has some details and analysis the broken package build from this week dr memory s package build is also broken ,0 4716,17347747511.0,IssuesEvent,2021-07-29 03:05:55,JacobLinCool/BA,https://api.github.com/repos/JacobLinCool/BA,closed,Automation (2021/29/7 10:55:26 AM),automation,"**Updated.** (2021/29/7 11:00:20 AM) ## 登入: 完成 ``` [2021/29/7 10:55:28 AM] 開始執行帳號登入程序 [2021/29/7 10:55:36 AM] 正在檢測登入狀態 [2021/29/7 10:55:39 AM] 登入狀態: 未登入 [2021/29/7 10:55:42 AM] 嘗試登入中 [2021/29/7 10:55:52 AM] 已嘗試登入,重新檢測登入狀態 [2021/29/7 10:55:52 AM] 正在檢測登入狀態 [2021/29/7 10:55:55 AM] 登入狀態: 已登入 [2021/29/7 10:55:55 AM] 帳號登入程序已完成 ``` ## 簽到: 完成 ``` [2021/29/7 10:55:55 AM] 開始執行自動簽到程序 [2021/29/7 10:55:55 AM] 正在檢測簽到狀態 [2021/29/7 10:55:59 AM] 簽到狀態: 已簽到 [2021/29/7 10:56:00 AM] 自動簽到程序已完成 [2021/29/7 10:56:00 AM] 開始執行自動觀看雙倍簽到獎勵廣告程序 [2021/29/7 10:56:00 AM] 正在檢測雙倍簽到獎勵狀態 [2021/29/7 10:56:05 AM] 雙倍簽到獎勵狀態: 已獲得雙倍簽到獎勵 [2021/29/7 10:56:05 AM] 自動觀看雙倍簽到獎勵廣告程序已完成 ``` ## 答題: 完成 ``` [2021/29/7 10:56:06 AM] 開始執行動畫瘋自動答題程序 [2021/29/7 10:56:06 AM] 正在檢測答題狀態 [2021/29/7 10:56:12 AM] 今日已經答過題目了 [2021/29/7 10:56:13 AM] 動畫瘋自動答題程序已完成 ``` ## 抽獎: 執行中 ``` [2021/29/7 10:56:13 AM] 開始執行福利社自動抽抽樂程序 [2021/29/7 10:56:13 AM] 正在尋找抽抽樂 [2021/29/7 10:56:17 AM] 找到 8 個抽抽樂 [2021/29/7 10:56:17 AM] 1: 一個打四個!綠聯 充電器 GaN快充版 3C1A [2021/29/7 10:56:17 AM] 2: 《信星科技》一指雙用,五指操控,飛智黃蜂搖桿抽抽樂! [2021/29/7 10:56:17 AM] 3: 又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 [2021/29/7 10:56:17 AM] 4: 你的明眸護眼法寶 - Awesome LED觸控式可調雙光源螢幕掛燈 [2021/29/7 10:56:17 AM] 5: 熱銷百萬!2021最新款 「TruEgos Super 2NC雙工抗噪真無線」-限時抽抽樂! [2021/29/7 10:56:17 AM] 6: XPG競爆你的電競生活,好禮大方送,附送 MANA 電競口香糖 8/11 上市前搶先嚐! [2021/29/7 10:56:17 AM] 7: GoKids玩樂小子|深入絕地:暗黑世界傳說-跨越16年的經典RPG遊戲,史詩繁中再版! [2021/29/7 10:56:17 AM] 8: EPOS |Sennheiser 最強電競耳機─王者回歸,GSP 602抽起來 [2021/29/7 10:56:17 AM] 正在嘗試執行第 1 個抽抽樂: 一個打四個!綠聯 充電器 GaN快充版 3C1A [2021/29/7 10:56:21 AM] 第 1 個抽抽樂(一個打四個!綠聯 充電器 GaN快充版 3C1A)的廣告免費次數已用完 [2021/29/7 10:56:21 AM] 正在嘗試執行第 2 個抽抽樂: 《信星科技》一指雙用,五指操控,飛智黃蜂搖桿抽抽樂! [2021/29/7 10:56:24 AM] 第 2 個抽抽樂(《信星科技》一指雙用,五指操控,飛智黃蜂搖桿抽抽樂!)的廣告免費次數已用完 [2021/29/7 10:56:24 AM] 正在嘗試執行第 3 個抽抽樂: 又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 [2021/29/7 10:56:27 AM] 第 3 個抽抽樂(又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎)的廣告免費次數已用完 [2021/29/7 10:56:27 AM] 正在嘗試執行第 4 個抽抽樂: 你的明眸護眼法寶 - Awesome LED觸控式可調雙光源螢幕掛燈 [2021/29/7 10:56:29 AM] 第 4 個抽抽樂(你的明眸護眼法寶 - Awesome LED觸控式可調雙光源螢幕掛燈)的廣告免費次數已用完 [2021/29/7 10:56:29 AM] 正在嘗試執行第 5 個抽抽樂: 熱銷百萬!2021最新款 「TruEgos Super 2NC雙工抗噪真無線」-限時抽抽樂! [2021/29/7 10:56:32 AM] 第 5 個抽抽樂(熱銷百萬!2021最新款 「TruEgos Super 2NC雙工抗噪真無線」-限時抽抽樂!)的廣告免費次數已用完 [2021/29/7 10:56:32 AM] 正在嘗試執行第 6 個抽抽樂: XPG競爆你的電競生活,好禮大方送,附送 MANA 電競口香糖 8/11 上市前搶先嚐! [2021/29/7 10:56:35 AM] 第 6 個抽抽樂(XPG競爆你的電競生活,好禮大方送,附送 MANA 電競口香糖 8/11 上市前搶先嚐!)的廣告免費次數已用完 [2021/29/7 10:56:35 AM] 正在嘗試執行第 7 個抽抽樂: GoKids玩樂小子|深入絕地:暗黑世界傳說-跨越16年的經典RPG遊戲,史詩繁中再版! 
[2021/29/7 10:56:37 AM] 第 7 個抽抽樂(GoKids玩樂小子|深入絕地:暗黑世界傳說-跨越16年的經典RPG遊戲,史詩繁中再版!)的廣告免費次數已用完 [2021/29/7 10:56:37 AM] 正在嘗試執行第 8 個抽抽樂: EPOS |Sennheiser 最強電競耳機─王者回歸,GSP 602抽起來 [2021/29/7 10:56:40 AM] 正在執行第 1 次抽獎,可能需要多達 1 分鐘 [2021/29/7 10:56:46 AM] 正在觀看廣告 [2021/29/7 10:57:30 AM] 未進入結算頁面,重試中 [2021/29/7 10:57:32 AM] 正在執行第 2 次抽獎,可能需要多達 1 分鐘 [2021/29/7 10:57:38 AM] 正在觀看廣告 [2021/29/7 10:58:22 AM] 正在確認結算頁面 [2021/29/7 10:58:26 AM] 已完成一次抽抽樂:EPOS |Sennheiser 最強電競耳機─王者回歸,GSP 602抽起來 [2021/29/7 10:58:29 AM] 正在執行第 3 次抽獎,可能需要多達 1 分鐘 [2021/29/7 10:58:35 AM] 正在觀看廣告 [2021/29/7 10:59:19 AM] 未進入結算頁面,重試中 [2021/29/7 10:59:21 AM] 正在執行第 4 次抽獎,可能需要多達 1 分鐘 [2021/29/7 10:59:27 AM] 正在觀看廣告 [2021/29/7 11:00:11 AM] 未進入結算頁面,重試中 [2021/29/7 11:00:13 AM] 正在執行第 5 次抽獎,可能需要多達 1 分鐘 [2021/29/7 11:00:20 AM] 正在觀看廣告 ``` ",1.0,"Automation (2021/29/7 10:55:26 AM) - **Updated.** (2021/29/7 11:00:20 AM) ## 登入: 完成 ``` [2021/29/7 10:55:28 AM] 開始執行帳號登入程序 [2021/29/7 10:55:36 AM] 正在檢測登入狀態 [2021/29/7 10:55:39 AM] 登入狀態: 未登入 [2021/29/7 10:55:42 AM] 嘗試登入中 [2021/29/7 10:55:52 AM] 已嘗試登入,重新檢測登入狀態 [2021/29/7 10:55:52 AM] 正在檢測登入狀態 [2021/29/7 10:55:55 AM] 登入狀態: 已登入 [2021/29/7 10:55:55 AM] 帳號登入程序已完成 ``` ## 簽到: 完成 ``` [2021/29/7 10:55:55 AM] 開始執行自動簽到程序 [2021/29/7 10:55:55 AM] 正在檢測簽到狀態 [2021/29/7 10:55:59 AM] 簽到狀態: 已簽到 [2021/29/7 10:56:00 AM] 自動簽到程序已完成 [2021/29/7 10:56:00 AM] 開始執行自動觀看雙倍簽到獎勵廣告程序 [2021/29/7 10:56:00 AM] 正在檢測雙倍簽到獎勵狀態 [2021/29/7 10:56:05 AM] 雙倍簽到獎勵狀態: 已獲得雙倍簽到獎勵 [2021/29/7 10:56:05 AM] 自動觀看雙倍簽到獎勵廣告程序已完成 ``` ## 答題: 完成 ``` [2021/29/7 10:56:06 AM] 開始執行動畫瘋自動答題程序 [2021/29/7 10:56:06 AM] 正在檢測答題狀態 [2021/29/7 10:56:12 AM] 今日已經答過題目了 [2021/29/7 10:56:13 AM] 動畫瘋自動答題程序已完成 ``` ## 抽獎: 執行中 ``` [2021/29/7 10:56:13 AM] 開始執行福利社自動抽抽樂程序 [2021/29/7 10:56:13 AM] 正在尋找抽抽樂 [2021/29/7 10:56:17 AM] 找到 8 個抽抽樂 [2021/29/7 10:56:17 AM] 1: 一個打四個!綠聯 充電器 GaN快充版 3C1A [2021/29/7 10:56:17 AM] 2: 《信星科技》一指雙用,五指操控,飛智黃蜂搖桿抽抽樂! [2021/29/7 10:56:17 AM] 3: 又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 [2021/29/7 10:56:17 AM] 4: 你的明眸護眼法寶 - Awesome LED觸控式可調雙光源螢幕掛燈 [2021/29/7 10:56:17 AM] 5: 熱銷百萬!2021最新款 「TruEgos Super 2NC雙工抗噪真無線」-限時抽抽樂! [2021/29/7 10:56:17 AM] 6: XPG競爆你的電競生活,好禮大方送,附送 MANA 電競口香糖 8/11 上市前搶先嚐! [2021/29/7 10:56:17 AM] 7: GoKids玩樂小子|深入絕地:暗黑世界傳說-跨越16年的經典RPG遊戲,史詩繁中再版! [2021/29/7 10:56:17 AM] 8: EPOS |Sennheiser 最強電競耳機─王者回歸,GSP 602抽起來 [2021/29/7 10:56:17 AM] 正在嘗試執行第 1 個抽抽樂: 一個打四個!綠聯 充電器 GaN快充版 3C1A [2021/29/7 10:56:21 AM] 第 1 個抽抽樂(一個打四個!綠聯 充電器 GaN快充版 3C1A)的廣告免費次數已用完 [2021/29/7 10:56:21 AM] 正在嘗試執行第 2 個抽抽樂: 《信星科技》一指雙用,五指操控,飛智黃蜂搖桿抽抽樂! [2021/29/7 10:56:24 AM] 第 2 個抽抽樂(《信星科技》一指雙用,五指操控,飛智黃蜂搖桿抽抽樂!)的廣告免費次數已用完 [2021/29/7 10:56:24 AM] 正在嘗試執行第 3 個抽抽樂: 又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎 [2021/29/7 10:56:27 AM] 第 3 個抽抽樂(又到了白色鍵盤的季節!irocks K71M RGB 機械鍵盤抽獎)的廣告免費次數已用完 [2021/29/7 10:56:27 AM] 正在嘗試執行第 4 個抽抽樂: 你的明眸護眼法寶 - Awesome LED觸控式可調雙光源螢幕掛燈 [2021/29/7 10:56:29 AM] 第 4 個抽抽樂(你的明眸護眼法寶 - Awesome LED觸控式可調雙光源螢幕掛燈)的廣告免費次數已用完 [2021/29/7 10:56:29 AM] 正在嘗試執行第 5 個抽抽樂: 熱銷百萬!2021最新款 「TruEgos Super 2NC雙工抗噪真無線」-限時抽抽樂! [2021/29/7 10:56:32 AM] 第 5 個抽抽樂(熱銷百萬!2021最新款 「TruEgos Super 2NC雙工抗噪真無線」-限時抽抽樂!)的廣告免費次數已用完 [2021/29/7 10:56:32 AM] 正在嘗試執行第 6 個抽抽樂: XPG競爆你的電競生活,好禮大方送,附送 MANA 電競口香糖 8/11 上市前搶先嚐! [2021/29/7 10:56:35 AM] 第 6 個抽抽樂(XPG競爆你的電競生活,好禮大方送,附送 MANA 電競口香糖 8/11 上市前搶先嚐!)的廣告免費次數已用完 [2021/29/7 10:56:35 AM] 正在嘗試執行第 7 個抽抽樂: GoKids玩樂小子|深入絕地:暗黑世界傳說-跨越16年的經典RPG遊戲,史詩繁中再版! 
[2021/29/7 10:56:37 AM] 第 7 個抽抽樂(GoKids玩樂小子|深入絕地:暗黑世界傳說-跨越16年的經典RPG遊戲,史詩繁中再版!)的廣告免費次數已用完 [2021/29/7 10:56:37 AM] 正在嘗試執行第 8 個抽抽樂: EPOS |Sennheiser 最強電競耳機─王者回歸,GSP 602抽起來 [2021/29/7 10:56:40 AM] 正在執行第 1 次抽獎,可能需要多達 1 分鐘 [2021/29/7 10:56:46 AM] 正在觀看廣告 [2021/29/7 10:57:30 AM] 未進入結算頁面,重試中 [2021/29/7 10:57:32 AM] 正在執行第 2 次抽獎,可能需要多達 1 分鐘 [2021/29/7 10:57:38 AM] 正在觀看廣告 [2021/29/7 10:58:22 AM] 正在確認結算頁面 [2021/29/7 10:58:26 AM] 已完成一次抽抽樂:EPOS |Sennheiser 最強電競耳機─王者回歸,GSP 602抽起來 [2021/29/7 10:58:29 AM] 正在執行第 3 次抽獎,可能需要多達 1 分鐘 [2021/29/7 10:58:35 AM] 正在觀看廣告 [2021/29/7 10:59:19 AM] 未進入結算頁面,重試中 [2021/29/7 10:59:21 AM] 正在執行第 4 次抽獎,可能需要多達 1 分鐘 [2021/29/7 10:59:27 AM] 正在觀看廣告 [2021/29/7 11:00:11 AM] 未進入結算頁面,重試中 [2021/29/7 11:00:13 AM] 正在執行第 5 次抽獎,可能需要多達 1 分鐘 [2021/29/7 11:00:20 AM] 正在觀看廣告 ``` ",1,automation am updated am 登入 完成 開始執行帳號登入程序 正在檢測登入狀態 登入狀態 未登入 嘗試登入中 已嘗試登入,重新檢測登入狀態 正在檢測登入狀態 登入狀態 已登入 帳號登入程序已完成 簽到 完成 開始執行自動簽到程序 正在檢測簽到狀態 簽到狀態 已簽到 自動簽到程序已完成 開始執行自動觀看雙倍簽到獎勵廣告程序 正在檢測雙倍簽到獎勵狀態 雙倍簽到獎勵狀態 已獲得雙倍簽到獎勵 自動觀看雙倍簽到獎勵廣告程序已完成 答題 完成 開始執行動畫瘋自動答題程序 正在檢測答題狀態 今日已經答過題目了 動畫瘋自動答題程序已完成 抽獎 執行中 開始執行福利社自動抽抽樂程序 正在尋找抽抽樂 找到 個抽抽樂 一個打四個!綠聯 充電器 gan快充版 《信星科技》一指雙用,五指操控,飛智黃蜂搖桿抽抽樂! 又到了白色鍵盤的季節!irocks rgb 機械鍵盤抽獎 你的明眸護眼法寶 awesome led觸控式可調雙光源螢幕掛燈 熱銷百萬! 「truegos super 」 限時抽抽樂! xpg競爆你的電競生活,好禮大方送,附送 mana 電競口香糖 上市前搶先嚐! gokids玩樂小子|深入絕地:暗黑世界傳說 ,史詩繁中再版! epos |sennheiser 最強電競耳機─王者回歸,gsp 正在嘗試執行第 個抽抽樂: 一個打四個!綠聯 充電器 gan快充版 第 個抽抽樂(一個打四個!綠聯 充電器 gan快充版 )的廣告免費次數已用完 正在嘗試執行第 個抽抽樂: 《信星科技》一指雙用,五指操控,飛智黃蜂搖桿抽抽樂! 第 個抽抽樂(《信星科技》一指雙用,五指操控,飛智黃蜂搖桿抽抽樂!)的廣告免費次數已用完 正在嘗試執行第 個抽抽樂: 又到了白色鍵盤的季節!irocks rgb 機械鍵盤抽獎 第 個抽抽樂(又到了白色鍵盤的季節!irocks rgb 機械鍵盤抽獎)的廣告免費次數已用完 正在嘗試執行第 個抽抽樂: 你的明眸護眼法寶 awesome led觸控式可調雙光源螢幕掛燈 第 個抽抽樂(你的明眸護眼法寶 awesome led觸控式可調雙光源螢幕掛燈)的廣告免費次數已用完 正在嘗試執行第 個抽抽樂: 熱銷百萬! 「truegos super 」 限時抽抽樂! 第 個抽抽樂(熱銷百萬! 「truegos super 」 限時抽抽樂!)的廣告免費次數已用完 正在嘗試執行第 個抽抽樂: xpg競爆你的電競生活,好禮大方送,附送 mana 電競口香糖 上市前搶先嚐! 第 個抽抽樂(xpg競爆你的電競生活,好禮大方送,附送 mana 電競口香糖 上市前搶先嚐!)的廣告免費次數已用完 正在嘗試執行第 個抽抽樂: gokids玩樂小子|深入絕地:暗黑世界傳說 ,史詩繁中再版! 第 個抽抽樂(gokids玩樂小子|深入絕地:暗黑世界傳說 ,史詩繁中再版!)的廣告免費次數已用完 正在嘗試執行第 個抽抽樂: epos |sennheiser 最強電競耳機─王者回歸,gsp 正在執行第 次抽獎,可能需要多達 分鐘 正在觀看廣告 未進入結算頁面,重試中 正在執行第 次抽獎,可能需要多達 分鐘 正在觀看廣告 正在確認結算頁面 已完成一次抽抽樂:epos |sennheiser 最強電競耳機─王者回歸,gsp 正在執行第 次抽獎,可能需要多達 分鐘 正在觀看廣告 未進入結算頁面,重試中 正在執行第 次抽獎,可能需要多達 分鐘 正在觀看廣告 未進入結算頁面,重試中 正在執行第 次抽獎,可能需要多達 分鐘 正在觀看廣告 ,1 3430,13764329485.0,IssuesEvent,2020-10-07 11:55:06,submariner-io/submariner,https://api.github.com/repos/submariner-io/submariner,closed,Add Gitlint to repos without Shipyard,automation cncf," **What would you like to be added**: Commit message linting with Gitlint, for repos without the shared Shipyard infra. **Why is this needed**: We should run commit linting everywhere. We currently run Gitlint through Shipyard shared infra, so it can be run locally but not in repos without Shipyard. As I recall, there wasn't a good GHA option, but should re-check.",1.0,"Add Gitlint to repos without Shipyard - **What would you like to be added**: Commit message linting with Gitlint, for repos without the shared Shipyard infra. **Why is this needed**: We should run commit linting everywhere. We currently run Gitlint through Shipyard shared infra, so it can be run locally but not in repos without Shipyard. 
As I recall, there wasn't a good GHA option, but should re-check.",1,add gitlint to repos without shipyard what would you like to be added commit message linting with gitlint for repos without the shared shipyard infra why is this needed we should run commit linting everywhere we currently run gitlint through shipyard shared infra so it can be run locally but not in repos without shipyard as i recall there wasn t a good gha option but should re check ,1 164,4322231433.0,IssuesEvent,2016-07-25 13:27:12,MISP/MISP,https://api.github.com/repos/MISP/MISP,closed,Remove the default defined salt,automation enhancement Security usability,"Change from a fixed default defined salt to one that is randomly generated at install time. This will avoid admins locking themselves out, and cases were whole communities need password reset because the salt was not changed at install time.",1.0,"Remove the default defined salt - Change from a fixed default defined salt to one that is randomly generated at install time. This will avoid admins locking themselves out, and cases were whole communities need password reset because the salt was not changed at install time.",1,remove the default defined salt change from a fixed default defined salt to one that is randomly generated at install time this will avoid admins locking themselves out and cases were whole communities need password reset because the salt was not changed at install time ,1 291371,8924009596.0,IssuesEvent,2019-01-21 17:11:42,poanetwork/blockscout,https://api.github.com/repos/poanetwork/blockscout,closed,Use eth_getCode to obtain the contract code for new smart contracts,enhancement in progress priority: high,"In our current implementation, we obtain the contract creation code from internal transactions which is causing a few issues. 1. On Ethereum Mainnet, it's a very slow process to look back at all previous internal transactions to obtain the contract creation code. 2. Ganache does not support internal transactions. ## Solution Using `eth_getCode` for transactions with a `created_contract_address_hash` will obtain the contract creation code immediately in the realtime fetcher. For the catchup fetcher we'll need to create a new fetcher to search for transactions that have a `created_contract_address_hash` but does not have it's `contract_code` completed. ",1.0,"Use eth_getCode to obtain the contract code for new smart contracts - In our current implementation, we obtain the contract creation code from internal transactions which is causing a few issues. 1. On Ethereum Mainnet, it's a very slow process to look back at all previous internal transactions to obtain the contract creation code. 2. Ganache does not support internal transactions. ## Solution Using `eth_getCode` for transactions with a `created_contract_address_hash` will obtain the contract creation code immediately in the realtime fetcher. For the catchup fetcher we'll need to create a new fetcher to search for transactions that have a `created_contract_address_hash` but does not have it's `contract_code` completed. 
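A minimal sketch of that realtime path, issuing the standard `eth_getCode` JSON-RPC call directly; the node URL is a placeholder assumption:

```python
# Fetch contract bytecode directly once created_contract_address_hash is
# known, instead of replaying internal transactions.
import json
from urllib.request import Request, urlopen

def get_code(node_url: str, address: str, block: str = "latest") -> str:
    payload = {"jsonrpc": "2.0", "method": "eth_getCode",
               "params": [address, block], "id": 1}
    req = Request(node_url, data=json.dumps(payload).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)["result"]  # "0x" means no code at the address

if __name__ == "__main__":
    print(get_code("http://localhost:8545", "0x" + "00" * 20))
```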
",0,use eth getcode to obtain the contract code for new smart contracts in our current implementation we obtain the contract creation code from internal transactions which is causing a few issues on ethereum mainnet it s a very slow process to look back at all previous internal transactions to obtain the contract creation code ganache does not support internal transactions solution using eth getcode for transactions with a created contract address hash will obtain the contract creation code immediately in the realtime fetcher for the catchup fetcher we ll need to create a new fetcher to search for transactions that have a created contract address hash but does not have it s contract code completed ,0 4678,17197819653.0,IssuesEvent,2021-07-16 20:26:02,elastic/e2e-testing,https://api.github.com/repos/elastic/e2e-testing,opened,Unabling Filesystem metrics with cenos agent results with no filesystem datastream using E2E tests,Team:Automation Team:Integrations,"1. Install cenOS agent. 2. Go to Policies and policy for that agent 3. Edit system-1 integration 4. Under ""Collect metrics from System instances"" unable ""System filesystem metrics"" and save Integration 5. Wait a min or so 6. Go to the ""Data Streams"" there is no system.filesystem metrics there I wait for at least 10-20 min. it runs fine with debian",1.0,"Unabling Filesystem metrics with cenos agent results with no filesystem datastream using E2E tests - 1. Install cenOS agent. 2. Go to Policies and policy for that agent 3. Edit system-1 integration 4. Under ""Collect metrics from System instances"" unable ""System filesystem metrics"" and save Integration 5. Wait a min or so 6. Go to the ""Data Streams"" there is no system.filesystem metrics there I wait for at least 10-20 min. it runs fine with debian",1,unabling filesystem metrics with cenos agent results with no filesystem datastream using tests install cenos agent go to policies and policy for that agent edit system integration under collect metrics from system instances unable system filesystem metrics and save integration wait a min or so go to the data streams there is no system filesystem metrics there i wait for at least min it runs fine with debian,1 4790,17516374957.0,IssuesEvent,2021-08-11 07:05:16,elastic/e2e-testing,https://api.github.com/repos/elastic/e2e-testing,opened,Increase the number of retained builds,Team:Automation size:S triaged area:ci impact:high requested-by:Automation,"The project only keeps the last 20 builds, and we'd need to retain a bigger number: 100? Because the main pipeline is a used as a helper, used as downstream for multiple sources (Beats PR/merges, nightly builds), in one day we loose the entire history.",2.0,"Increase the number of retained builds - The project only keeps the last 20 builds, and we'd need to retain a bigger number: 100? Because the main pipeline is a used as a helper, used as downstream for multiple sources (Beats PR/merges, nightly builds), in one day we loose the entire history.",1,increase the number of retained builds the project only keeps the last builds and we d need to retain a bigger number because the main pipeline is a used as a helper used as downstream for multiple sources beats pr merges nightly builds in one day we loose the entire history ,1 153,4167137692.0,IssuesEvent,2016-06-20 08:18:54,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,Key pressing doesn't work in ASP Test,!IMPORTANT! 
AREA: client SYSTEM: automations TYPE: bug,`act.press` doesn't work in `ASPxGridViewDemos/Accessibility/KeyboardSupport` test. I've found that the issue is contained in #551. ,1.0,Key pressing doesn't work in ASP Test - `act.press` doesn't work in `ASPxGridViewDemos/Accessibility/KeyboardSupport` test. I've found that the issue is contained in #551. ,1,key pressing doesn t work in asp test act press doesn t work in aspxgridviewdemos accessibility keyboardsupport test i ve found that the issue is contained in ,1 7123,24287938870.0,IssuesEvent,2022-09-29 01:15:24,AdamXweb/awesome-aussie,https://api.github.com/repos/AdamXweb/awesome-aussie,opened,[ADDITION] Immediation,Awaiting Review Added to Airtable Automation from Airtable,"### Category ### Software to be added Immediation ### Supporting Material URL: https://www.immediation.com/ Description: Immediation is the world’s most secure and specialized digital legal environment for dispute resolution, justice and legal practice. Advanced, integrated tools and a panel of 100+ experts ensure scale, efficiency, access and control. Size: HQ: Melbourne LinkedIn: https://www.linkedin.com/company/immediation-digital-legal-environments/ #### See Record on Airtable: https://airtable.com/app0Ox7pXdrBUIn23/tblYbuZoILuVA0X3L/recyUU8a3rHybPdDH",1.0,"[ADDITION] Immediation - ### Category ### Software to be added Immediation ### Supporting Material URL: https://www.immediation.com/ Description: Immediation is the world’s most secure and specialized digital legal environment for dispute resolution, justice and legal practice. Advanced, integrated tools and a panel of 100+ experts ensure scale, efficiency, access and control. Size: HQ: Melbourne LinkedIn: https://www.linkedin.com/company/immediation-digital-legal-environments/ #### See Record on Airtable: https://airtable.com/app0Ox7pXdrBUIn23/tblYbuZoILuVA0X3L/recyUU8a3rHybPdDH",1, immediation category software to be added immediation supporting material url description immediation is the world’s most secure and specialized digital legal environment for dispute resolution justice and legal practice advanced integrated tools and a panel of experts ensure scale efficiency access and control size hq melbourne linkedin see record on airtable ,1 1594,10409982419.0,IssuesEvent,2019-09-13 10:07:11,mozilla-mobile/android-components,https://api.github.com/repos/mozilla-mobile/android-components,opened,UI test task blocking,🤖 automation,"We are currently unable to land PRs since the UI test task seems to hang on every run. https://tools.taskcluster.net/groups/X0-rdLpDRUO9HC-PHWN3LA/tasks/Vok715NBTVqOhsWPf8tybQ/runs/0/logs/public%2Flogs%2Flive.log",1.0,"UI test task blocking - We are currently unable to land PRs since the UI test task seems to hang on every run. https://tools.taskcluster.net/groups/X0-rdLpDRUO9HC-PHWN3LA/tasks/Vok715NBTVqOhsWPf8tybQ/runs/0/logs/public%2Flogs%2Flive.log",1,ui test task blocking we are currently unable to land prs since the ui test task seems to hang on every run ,1 3498,13853701105.0,IssuesEvent,2020-10-15 08:30:37,exercism/exercism,https://api.github.com/repos/exercism/exercism,opened,Moving from Travis to GitHub Actions,area/automation,"Over the last few months we've been transferring all our CI from Travis to GitHub Actions (GHA). We've found that GHA are easier to work with, more reliable, and much much faster. 
Based on our success with GHA and increasing intermittent failures on Travis, we have now decided to try and remove Travis from Exercism's org altogether and shift everything to GHA. For most CI checks this should be a transposing from Travis' syntax to GHA syntax, and hopefully quite straightforward (see this PR for an example). ",1.0,"Moving from Travis to GitHub Actions - Over the last few months we've been transferring all our CI from Travis to GitHub Actions (GHA). We've found that GHA are easier to work with, more reliable, and much much faster. Based on our success with GHA and increasing intermittent failures on Travis, we have now decided to try and remove Travis from Exercism's org altogether and shift everything to GHA. For most CI checks this should be a transposing from Travis' syntax to GHA syntax, and hopefully quite straightforward (see this PR for an example). ",1,moving from travis to github actions over the last few months we ve been transferring all our ci from travis to github actions gha we ve found that gha are easier to work with more reliable and much much faster based on our success with gha and increasing intermittent failures on travis we have now decided to try and remove travis from exercism s org altogether and shift everything to gha for most ci checks this should be a transposing from travis syntax to gha syntax and hopefully quite straightforward see this pr for an example ,1 170430,26958715418.0,IssuesEvent,2023-02-08 16:35:28,carbon-design-system/carbon-design-kit,https://api.github.com/repos/carbon-design-system/carbon-design-kit,closed,[Sketch] Fluid inputs: Multi-select,kit: sketch role: design :pencil2:,"Provide Skecth tooling updates for [Fluid Multi-select](https://github.com/carbon-design-system/carbon/issues/12124) across all four themes in v11. ```[tasklist] ### Themes - [x] White theme - [x] Gray 10 theme - [x] Gray 90 theme - [x] Gray 100 theme ```",1.0,"[Sketch] Fluid inputs: Multi-select - Provide Skecth tooling updates for [Fluid Multi-select](https://github.com/carbon-design-system/carbon/issues/12124) across all four themes in v11. ```[tasklist] ### Themes - [x] White theme - [x] Gray 10 theme - [x] Gray 90 theme - [x] Gray 100 theme ```",0, fluid inputs multi select provide skecth tooling updates for across all four themes in themes white theme gray theme gray theme gray theme ,0 897,8657219327.0,IssuesEvent,2018-11-27 20:40:27,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Missing Information - New Users,assigned-to-author automation/svc doc-enhancement triaged,"In the text, it says ""You should also have the Credential asset that's mentioned in the prerequisites."" Unfortunately, the prerequisites don't mention this - they just say you should have an Automation Account, with permissions that are required. This means new users (which this page is aimed at) now need to go digging to try and find out what this means, and how to make it all work. The steps are not sufficiently self-contained to be able to just follow them, as there's no link to the right place to get all the information needed to get this example working quickly. --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 038d927f-2bcc-c62d-b3c3-f194513bced6 * Version Independent ID: 41adf2c5-3ab7-7387-e541-89e34aa6a6b1 * Content: [My first PowerShell runbook in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-first-runbook-textual-powershell#prerequisites) * Content Source: [articles/automation/automation-first-runbook-textual-powershell.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-first-runbook-textual-powershell.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1.0,"Missing Information - New Users - In the text, it says ""You should also have the Credential asset that's mentioned in the prerequisites."" Unfortunately, the prerequisites don't mention this - they just say you should have an Automation Account, with permissions that are required. This means new users (which this page is aimed at) now need to go digging to try and find out what this means, and how to make it all work. The steps are not sufficiently self-contained to be able to just follow them, as there's no link to the right place to get all the information needed to get this example working quickly. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 038d927f-2bcc-c62d-b3c3-f194513bced6 * Version Independent ID: 41adf2c5-3ab7-7387-e541-89e34aa6a6b1 * Content: [My first PowerShell runbook in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-first-runbook-textual-powershell#prerequisites) * Content Source: [articles/automation/automation-first-runbook-textual-powershell.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-first-runbook-textual-powershell.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1,missing information new users in the text it says you should also have the credential asset that s mentioned in the prerequisites unfortunately the prerequisites don t mention this they just say you should have an automation account with permissions that are required this means new users which this page is aimed at now need to go digging to try and find out what this means and how to make it all work the steps are not sufficiently self contained to be able to just follow them as there s no link to the right place to get all the information needed to get this example working quickly document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login georgewallace microsoft alias gwallace ,1 2685,12452064107.0,IssuesEvent,2020-05-27 11:37:22,GoodDollar/GoodDAPP,https://api.github.com/repos/GoodDollar/GoodDAPP,closed,"(BUG) The user can send money after login in deleted wallet, using localstorage data ",Important automation bug mvp,"Steps to reproduce: 1) create a new wallet 2) save all localstorage 3) claim 1 GD$ 4) delete wallet using option from menu 5) logging via saved localstorage from remote wallet 6) send some money using payment link (ex, 0.25$) 7) login in existing wallet and withdraw the payment from payment link 9) go to ""deleted"" wallet, check that money was withdrawn (1-0.25=0.75) video: https://www.screencast.com/t/Rzr62YEn ![deleted_wallet.png] 
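The Credential asset that record 897 says the article never explains is consumed from inside the runbook. A minimal sketch of the Python-runbook equivalent, assuming a hypothetical asset named "MyCredential" (the `automationassets` module is only available inside the Azure Automation sandbox, so this will not run locally):

```python
# Sketch: reading an Azure Automation Credential asset from a Python runbook.
import automationassets

# "MyCredential" is a placeholder asset name created under
# Shared Resources > Credentials in the Automation account.
cred = automationassets.get_automation_credential("MyCredential")
username = cred["username"]
password = cred["password"]
```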
(https://images.zenhubusercontent.com/5eb529c8c90bb26b8aaf9d9d/62c74cb1-57c1-41d7-96a5-aa6c60b9e225)",1.0,"(BUG) The user can send money after login in deleted wallet, using localstorage data - Steps to reproduce: 1) create a new wallet 2) save all localstorage 3) claim 1 GD$ 4) delete wallet using option from menu 5) logging via saved localstorage from remote wallet 6) send some money using payment link (ex, 0.25$) 7) login in existing wallet and withdraw the payment from payment link 9) go to ""deleted"" wallet, check that money was withdrawn (1-0.25=0.75) video: https://www.screencast.com/t/Rzr62YEn ![deleted_wallet.png] (https://images.zenhubusercontent.com/5eb529c8c90bb26b8aaf9d9d/62c74cb1-57c1-41d7-96a5-aa6c60b9e225)",1, bug the user can send money after login in deleted wallet using localstorage data steps to reproduce create a new wallet save all localstorage claim gd delete wallet using option from menu logging via saved localstorage from remote wallet send some money using payment link ex login in existing wallet and withdraw the payment from payment link go to deleted wallet check that money was withdrawn video ,1 75193,15394278315.0,IssuesEvent,2021-03-03 17:42:30,jgeraigery/FHIR,https://api.github.com/repos/jgeraigery/FHIR,opened,CVE-2020-0470 (Medium) detected in libaomandroid-11.0.0_r18,security vulnerability,"## CVE-2020-0470 - Medium Severity Vulnerability
Vulnerable Library - libaomandroid-11.0.0_r18
Bug: 139309277
Library home page: https://android.googlesource.com/platform/external/libaom
Found in HEAD commit: 8e7083c384ed5860b5dd7d933217a2758b900556
Found in base branch: main
Vulnerable Source Files (1): r.h
Vulnerability Details
In extend_frame_highbd of restoration.c, there is a possible out of bounds write due to a heap buffer overflow. This could lead to remote information disclosure with no additional execution privileges needed. User interaction is needed for exploitation. Product: Android. Versions: Android-11, Android-10. Android ID: A-166268541
Publish Date: 2020-12-14
URL: CVE-2020-0470
CVSS 3 Score Details (5.5)
Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None
For more information on CVSS3 Scores, click here.
",True,"CVE-2020-0470 (Medium) detected in libaomandroid-11.0.0_r18 - ## CVE-2020-0470 - Medium Severity Vulnerability
Vulnerable Library - libaomandroid-11.0.0_r18
Bug: 139309277
Library home page: https://android.googlesource.com/platform/external/libaom
Found in HEAD commit: 8e7083c384ed5860b5dd7d933217a2758b900556
Found in base branch: main
Vulnerable Source Files (1): r.h
Vulnerability Details
In extend_frame_highbd of restoration.c, there is a possible out of bounds write due to a heap buffer overflow. This could lead to remote information disclosure with no additional execution privileges needed. User interaction is needed for exploitation. Product: Android. Versions: Android-11, Android-10. Android ID: A-166268541
Publish Date: 2020-12-14
URL: CVE-2020-0470
CVSS 3 Score Details (5.5)
Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None
For more information on CVSS3 Scores, click here.
",0,cve medium detected in libaomandroid cve medium severity vulnerability vulnerable library libaomandroid bug library home page found in head commit found in base branch main vulnerable source files r h vulnerability details in extend frame highbd of restoration c there is a possible out of bounds write due to a heap buffer overflow this could lead to remote information disclosure with no additional execution privileges needed user interaction is needed for exploitation product androidversions android android id a publish date url cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click here ,0 3895,14923309875.0,IssuesEvent,2021-01-23 18:23:50,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,opened,Support program level singletons,area/automation-api area/core area/multi-language-components kind/enhancement,"It would be nice if there was a piece of program level state that could be used to keep track of resources shared by the program. Common examples are: - stack references: a stack ref is uniquely identified in the program (the slug is in the URN). Trying to create two instances of the same stack ref will fail. This is arguably an issue with impl of stack refs as resources, but program level singletons would still help here. The workaround currently is to create a stack reference cache that you must pass around your program. - IAM policies: @leezen brought up the desire to create a program-level policy to alleviate the deprecation of AWS lambda policies. Instead of creating a duplicate policy per function, we should just be able to create a shared base role. The intuition here is often to use some sort of module level package level global state. 
Unfortunately, Automation API (and multi-language components to some degree) mean that the lifetime of the module can outlive the lifetime of the program, meaning that it isn't safe to store state like this globally any longer. For something like go, it would be fairly easy to hang a cache off of the `ctx`. C# has a stack base class that this could be attached to. Python and nodejs programs that exist in ""open coding"" form might be more challenging or require more creativity. Although, those two runtimes utilize much more global state and probably need some redesign anyway. ",1,support program level singletons it would be nice if there was a piece of program level state that could be used to keep track of resources shared by the program common examples are stack references a stack ref is uniquely identified in the program the slug is in the urn trying to create two instances of the same stack ref will fail this is arguably an issue with impl of stack refs as resources but program level singletons would still help here the workaround currently is to create a stack reference cache that you must pass around your program iam policies leezen brought up the desire to create a program level policy to alleviate the deprecation of aws lambda policies instead of creating a duplicate policy per function we should just be able to create a shared base role the intuition here is often to use some sort of module level package level global state unfortunately automation api and multi language components to some degree mean that the lifetime of the module can outlive the lifetime of the program meaning that it isn t safe to store state like this globally any longer for something like go it would be fairly easy to hang a cache off of the ctx c has a stack base class that this could be attached to python and nodejs programs that exist in open coding form might be more challenging or require more creativity although those two runtimes utilize much more global state and probably need some redesign anyway ,1 450201,31885261807.0,IssuesEvent,2023-09-16 21:47:16,aquasecurity/tracee,https://api.github.com/repos/aquasecurity/tracee,opened,Fix Gob Output Doc,kind/documentation,"Please provide an example of using `gob` as an output? The documentation for using gob shows the output as still being `json`. I assume the use case for `gob` is golang IPC and RPC mechanisms, but that would be better as part of the documentation. https://aquasecurity.github.io/tracee/v0.17/docs/outputs/output-formats/#gob",1.0,"Fix Gob Output Doc - Please provide an example of using `gob` as an output? The documentation for using gob shows the output as still being `json`. I assume the use case for `gob` is golang IPC and RPC mechanisms, but that would be better as part of the documentation. 
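The "stack reference cache that you must pass around your program" workaround from the pulumi record above can be sketched with the Pulumi Python SDK. A sketch only, not the project's actual implementation; scoping the cache to an object you pass around (rather than module-level global state) is exactly what keeps it safe when the module outlives the program:

```python
# Sketch of a program-scoped StackReference cache (Pulumi Python SDK).
import pulumi

class StackRefCache:
    def __init__(self) -> None:
        self._refs: dict[str, pulumi.StackReference] = {}

    def get(self, name: str) -> pulumi.StackReference:
        # Creating the same StackReference twice fails (the slug is in
        # the URN), so reuse the cached instance instead.
        if name not in self._refs:
            self._refs[name] = pulumi.StackReference(name)
        return self._refs[name]

refs = StackRefCache()  # pass this around instead of using a global
```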
https://aquasecurity.github.io/tracee/v0.17/docs/outputs/output-formats/#gob",0,fix gob output doc please provide an example of using gob as an output the documentation for using gob shows the output as still being json i assume the use case for gob is golang ipc and rpc mechanisms but that would be better as part of the documentation ,0 6714,23773796267.0,IssuesEvent,2022-09-01 18:49:12,pnp/powershell,https://api.github.com/repos/pnp/powershell,closed,[BUG] Get-PnPAzureADUser fails in Azure Automation when using -SELECT with AdditionalProperties ,bug azure-automation in review,"### Notice Get-PnPAzureADUser when running inside Azure Automation Account (runbook) will throw an error ""An item with the same key has already been added."", it works fine when -SELECT is not present This is the same as #1821 but that was closed without solving. ### Reporting an Issue This command gives and error when I'm selecting AdditionalProperties for the -SELECT parameter For example (NOK): - Get-PnPAzureADUser -Select ""Department"" - Get-PnPAzureADUser -Select ""Country"" It works fine with the standard properties (like ""UserPrincipalName"",""OfficeLocation"") Example (OK): Get-PnPAzureADUser -Filter ""AccountEnabled eq true"" -Select ""UserPrincipalName"",""OfficeLocation"" ### Expected behavior Should not give an error. ### Actual behavior error message: ""An item with the same key has already been added."" ### Cause I noticed that the ALL AdditionalProperties are already loaded in the runbook using the command without -SELECT: - Get-PnPAzureADUser -> This is not the case when running in local powershell script. ### Steps to reproduce behavior Run in Azure Automatation runbook ### What is the version of the Cmdlet module you are running? 1.10.0 ### Which operating system/environment are you running PnP PowerShell on? - [ x] Windows - [ ] Linux - [ ] MacOS - [ ] Azure Cloud Shell - [ ] Azure Functions - [ x] Other : Azure automation (PS runtime version 5) ",1.0,"[BUG] Get-PnPAzureADUser fails in Azure Automation when using -SELECT with AdditionalProperties - ### Notice Get-PnPAzureADUser when running inside Azure Automation Account (runbook) will throw an error ""An item with the same key has already been added."", it works fine when -SELECT is not present This is the same as #1821 but that was closed without solving. ### Reporting an Issue This command gives and error when I'm selecting AdditionalProperties for the -SELECT parameter For example (NOK): - Get-PnPAzureADUser -Select ""Department"" - Get-PnPAzureADUser -Select ""Country"" It works fine with the standard properties (like ""UserPrincipalName"",""OfficeLocation"") Example (OK): Get-PnPAzureADUser -Filter ""AccountEnabled eq true"" -Select ""UserPrincipalName"",""OfficeLocation"" ### Expected behavior Should not give an error. ### Actual behavior error message: ""An item with the same key has already been added."" ### Cause I noticed that the ALL AdditionalProperties are already loaded in the runbook using the command without -SELECT: - Get-PnPAzureADUser -> This is not the case when running in local powershell script. ### Steps to reproduce behavior Run in Azure Automatation runbook ### What is the version of the Cmdlet module you are running? 1.10.0 ### Which operating system/environment are you running PnP PowerShell on? 
- [ x] Windows - [ ] Linux - [ ] MacOS - [ ] Azure Cloud Shell - [ ] Azure Functions - [ x] Other : Azure automation (PS runtime version 5) ",1, get pnpazureaduser fails in azure automation when using select with additionalproperties notice get pnpazureaduser when running inside azure automation account runbook will throw an error an item with the same key has already been added it works fine when select is not present this is the same as but that was closed without solving reporting an issue this command gives and error when i m selecting additionalproperties for the select parameter for example nok get pnpazureaduser select department get pnpazureaduser select country it works fine with the standard properties like userprincipalname officelocation example ok get pnpazureaduser filter accountenabled eq true select userprincipalname officelocation expected behavior should not give an error actual behavior error message an item with the same key has already been added cause i noticed that the all additionalproperties are already loaded in the runbook using the command without select get pnpazureaduser this is not the case when running in local powershell script steps to reproduce behavior run in azure automatation runbook what is the version of the cmdlet module you are running which operating system environment are you running pnp powershell on windows linux macos azure cloud shell azure functions other azure automation ps runtime version ,1 3802,14621399510.0,IssuesEvent,2020-12-22 21:34:54,newrelic/docs-website,https://api.github.com/repos/newrelic/docs-website,closed,Migration script doesn't account for navigation sort order,automation bug catch-all eng launch-blocker sp:3,"### Description Currently the migration script isn't automatically setting sort order for categories and pages. ### Example: Categories ![category_sort_order_issues](https://user-images.githubusercontent.com/55203603/101224564-d342a100-3643-11eb-977e-06bfaa724c27.png) ### Example: Pages ![doc_sort_order_issues](https://user-images.githubusercontent.com/55203603/101224999-2b2dd780-3645-11eb-9353-769dcf18027f.png) ### Expected behavior Preserve the sort order from the current docs site as part of the migration script. ### Additional Notes - Expose the sort weight in the API and leverage that as part of the script. - Scope: This is only for the migration script. - We need to account for release notes nav that uses release date and time to sort items in the nav ### Related to #381 # MMF 9 Scope When migration script is run we want arrange things (both .mdx content files AND the sub-directories) in the nav menus in the order that is set in Drupal. In the Drupal migration API, this order will be exposed via the `order` field. We don't currently have any API resources for taxonomy terms, but we could create them if needed to get the `order` value. This applies to migration only. Post migration, content contributors can set the order of the nav items by directly editing the .yml files. ## Acceptance criteria - [ ] As a content contributor, .mdx files and directories/categories in the nav files mirror the current site",1.0,"Migration script doesn't account for navigation sort order - ### Description Currently the migration script isn't automatically setting sort order for categories and pages. 
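Record 6714's failing cmdlet ultimately maps to a Microsoft Graph `$select` query over users. A hedged Python sketch of that underlying query, not the cmdlet's actual code path (token acquisition is omitted and the property list is illustrative):

```python
# Sketch: the Graph-level equivalent of
# Get-PnPAzureADUser -Filter "AccountEnabled eq true" -Select "Department".
import requests

def get_users_departments(token: str) -> list[dict]:
    url = "https://graph.microsoft.com/v1.0/users"
    params = {
        "$select": "userPrincipalName,department",
        "$filter": "accountEnabled eq true",
    }
    headers = {"Authorization": f"Bearer {token}"}
    resp = requests.get(url, params=params, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()["value"]
```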
### Example: Categories ![category_sort_order_issues](https://user-images.githubusercontent.com/55203603/101224564-d342a100-3643-11eb-977e-06bfaa724c27.png) ### Example: Pages ![doc_sort_order_issues](https://user-images.githubusercontent.com/55203603/101224999-2b2dd780-3645-11eb-9353-769dcf18027f.png) ### Expected behavior Preserve the sort order from the current docs site as part of the migration script. ### Additional Notes - Expose the sort weight in the API and leverage that as part of the script. - Scope: This is only for the migration script. - We need to account for release notes nav that uses release date and time to sort items in the nav ### Related to #381 # MMF 9 Scope When migration script is run we want arrange things (both .mdx content files AND the sub-directories) in the nav menus in the order that is set in Drupal. In the Drupal migration API, this order will be exposed via the `order` field. We don't currently have any API resources for taxonomy terms, but we could create them if needed to get the `order` value. This applies to migration only. Post migration, content contributors can set the order of the nav items by directly editing the .yml files. ## Acceptance criteria - [ ] As a content contributor, .mdx files and directories/categories in the nav files mirror the current site",1,migration script doesn t account for navigation sort order description currently the migration script isn t automatically setting sort order for categories and pages example categories example pages expected behavior preserve the sort order from the current docs site as part of the migration script additional notes expose the sort weight in the api and leverage that as part of the script scope this is only for the migration script we need to account for release notes nav that uses release date and time to sort items in the nav related to mmf scope when migration script is run we want arrange things both mdx content files and the sub directories in the nav menus in the order that is set in drupal in the drupal migration api this order will be exposed via the order field we don t currently have any api resources for taxonomy terms but we could create them if needed to get the order value this applies to migration only post migration content contributors can set the order of the nav items by directly editing the yml files acceptance criteria as a content contributor mdx files and directories categories in the nav files mirror the current site,1 8438,26966339162.0,IssuesEvent,2023-02-08 22:47:14,influxdata/ui,https://api.github.com/repos/influxdata/ui,closed,Quality checks on payload passed to wasm.,kind/bug team/automation,"We are seeing an error pop up in prod, coming from the wasm-bindgen generated javascript code. It's coming from an error in the message string being passed to the wasm. This is likely an error in the payload being sent from the UI. ### Error: ``` function passStringToWasm0(arg, malloc, realloc) { if (realloc === undefined) { const buf = cachedTextEncoder.encode(arg); const ptr = malloc(buf.length); getUint8Memory0().subarray(ptr, ptr + buf.length).set(buf); WASM_VECTOR_LEN = buf.length; return ptr; } let len = arg.length; ``` ^^ that last line (length check) is what errors. ### When seen: At least seen when navigating around the tasks part of the application, but may also occur elsewhere. ### possible solution: Doing additional payload checks prior to any message passing? ",1.0,"Quality checks on payload passed to wasm. 
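Preserving the Drupal sort order during the migration described in record 3802 reduces to sorting each nav level by the `order` value the API exposes. A minimal sketch under that assumption (the item shape and field names are taken from the issue text, not from the real migration script):

```python
# Sketch: recursively sort nav categories/pages by the exposed `order`
# field, falling back to title order for items without an explicit value.
def sort_nav(items: list[dict]) -> list[dict]:
    ordered = sorted(
        items,
        key=lambda i: (i.get("order", float("inf")), i.get("title", "")),
    )
    for item in ordered:
        if "children" in item:
            item["children"] = sort_nav(item["children"])
    return ordered
```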
- We are seeing an error pop up in prod, coming from the wasm-bindgen generated javascript code. It's coming from an error in the message string being passed to the wasm. This is likely an error in the payload being sent from the UI. ### Error: ``` function passStringToWasm0(arg, malloc, realloc) { if (realloc === undefined) { const buf = cachedTextEncoder.encode(arg); const ptr = malloc(buf.length); getUint8Memory0().subarray(ptr, ptr + buf.length).set(buf); WASM_VECTOR_LEN = buf.length; return ptr; } let len = arg.length; ``` ^^ that last line (length check) is what errors. ### When seen: At least seen when navigating around the tasks part of the application, but may also occur elsewhere. ### possible solution: Doing additional payload checks prior to any message passing? ",1,quality checks on payload passed to wasm we are seeing an error pop up in prod coming from the wasm bindgen generated javascript code it s coming from an error in the message string being passed to the wasm this is likely an error in the payload being sent from the ui error img width alt screen shot at pm src function arg malloc realloc if realloc undefined const buf cachedtextencoder encode arg const ptr malloc buf length subarray ptr ptr buf length set buf wasm vector len buf length return ptr let len arg length that last line length check is what errors when seen at least seen when navigating around the tasks part of the application but may also occur elsewhere possible solution doing additional payload checks prior to any message passing ,1 56973,11697262900.0,IssuesEvent,2020-03-06 11:25:50,fac19/week1-hjrv,https://api.github.com/repos/fac19/week1-hjrv,closed,Thumbs up,Code review compliment,"Overall a good looking website, great use of colours, and of course awesome profile photos :rofl: As you already know there are some layout issues as you're making it responsive for desktop, would be great to see how it looks when it's done! ![](https://media.giphy.com/media/kBZBlLVlfECvOQAVno/giphy.gif)",1.0,"Thumbs up - Overall a good looking website, great use of colours, and of course awesome profile photos :rofl: As you already know there are some layout issues as you're making it responsive for desktop, would be great to see how it looks when it's done! ![](https://media.giphy.com/media/kBZBlLVlfECvOQAVno/giphy.gif)",0,thumbs up overall a good looking website great use of colours and of course awesome profile photos rofl as you already know there are some layout issues as you re making it responsive for desktop would be great to see how it looks when it s done ,0 22407,15168884985.0,IssuesEvent,2021-02-12 20:10:06,algorand/indexer,https://api.github.com/repos/algorand/indexer,closed,cleanup/improve misc/validate_accounting.py,Infrastructure,"## Summary Clean up dead code in validate_accounting.py, particularly remove old method that digs into `algod` sqlite3 db and just rely on newer API mode. Add request parallelism to both Indexer and algod API queries. Default to ~4 threads? ## Scope/Requirements Code contained in misc/validate_accounting.py ## Urgency/Relative Priority Would be useful _today_ for release testing.",1.0,"cleanup/improve misc/validate_accounting.py - ## Summary Clean up dead code in validate_accounting.py, particularly remove old method that digs into `algod` sqlite3 db and just rely on newer API mode. Add request parallelism to both Indexer and algod API queries. Default to ~4 threads? 
## Scope/Requirements Code contained in misc/validate_accounting.py ## Urgency/Relative Priority Would be useful _today_ for release testing.",0,cleanup improve misc validate accounting py summary clean up dead code in validate accounting py particularly remove old method that digs into algod db and just rely on newer api mode add request parallelism to both indexer and algod api queries default to threads scope requirements code contained in misc validate accounting py urgency relative priority would be useful today for release testing ,0 1316,9904042583.0,IssuesEvent,2019-06-27 08:18:09,PrestaShop/PrestaShop,https://api.github.com/repos/PrestaShop/PrestaShop,closed,Can't enable or disable some modules permissions,1.7.4.4 1.7.5.0 Bug Fixed Major Permissions QA_automation," **Describe the bug** Can't enable or disable some modules permissions **To Reproduce** Steps to reproduce the behavior: 1. Go to Advanced parameters > Team > Permissions 2. Try to enable/disable checkboxes below ![capture d ecran_680](https://user-images.githubusercontent.com/13449658/49165068-a8286700-f330-11e8-8d0c-0b9a3bd77134.png) 3. See the error ![capture d ecran_681](https://user-images.githubusercontent.com/13449658/49165143-d017ca80-f330-11e8-8ccc-a06b2a99b98d.png) **Additionnal information** PrestaShop version: N/A PHP version: N/A ",1.0,"Can't enable or disable some modules permissions - **Describe the bug** Can't enable or disable some modules permissions **To Reproduce** Steps to reproduce the behavior: 1. Go to Advanced parameters > Team > Permissions 2. Try to enable/disable checkboxes below ![capture d ecran_680](https://user-images.githubusercontent.com/13449658/49165068-a8286700-f330-11e8-8d0c-0b9a3bd77134.png) 3. See the error ![capture d ecran_681](https://user-images.githubusercontent.com/13449658/49165143-d017ca80-f330-11e8-8ccc-a06b2a99b98d.png) **Additionnal information** PrestaShop version: N/A PHP version: N/A ",1,can t enable or disable some modules permissions do not disclose security issues here contact security prestashop com instead describe the bug can t enable or disable some modules permissions to reproduce steps to reproduce the behavior go to advanced parameters team permissions try to enable disable checkboxes below see the error additionnal information prestashop version n a php version n a ,1 275267,20915353539.0,IssuesEvent,2022-03-24 12:59:36,r-lib/processx,https://api.github.com/repos/r-lib/processx,closed,kill-tree test failure on macOS ,bug documentation,"``` Failure (test-kill-tree.R:222:3): run cleanup any(grepl(btmp, cmd)) is not FALSE `actual`: TRUE `expected`: FALSE Backtrace: 1. base::tryCatch(...) at test-kill-tree.R:222:2 5. testthat::expect_false(any(grepl(btmp, cmd))) at test-kill-tree.R:225:4 ```",1.0,"kill-tree test failure on macOS - ``` Failure (test-kill-tree.R:222:3): run cleanup any(grepl(btmp, cmd)) is not FALSE `actual`: TRUE `expected`: FALSE Backtrace: 1. base::tryCatch(...) at test-kill-tree.R:222:2 5. 
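The request parallelism asked for in the validate_accounting.py record above is a standard thread-pool fan-out. A sketch with the suggested default of 4 threads; `fetch_one` is a hypothetical stand-in for the script's per-account Indexer/algod comparison:

```python
# Sketch: fan per-account API queries out over a small thread pool.
from concurrent.futures import ThreadPoolExecutor

def fetch_one(addr: str) -> dict:
    # Placeholder: in validate_accounting.py this would query both the
    # Indexer and algod APIs for `addr` and diff the results.
    return {"address": addr}

def fetch_all(addresses: list[str], threads: int = 4) -> list[dict]:
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return list(pool.map(fetch_one, addresses))
```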
testthat::expect_false(any(grepl(btmp, cmd))) at test-kill-tree.R:225:4 ```",0,kill tree test failure on macos failure test kill tree r run cleanup any grepl btmp cmd is not false actual true expected false backtrace base trycatch at test kill tree r testthat expect false any grepl btmp cmd at test kill tree r ,0 275262,23901330166.0,IssuesEvent,2022-09-08 19:04:22,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,opened,[DocDB] Shutdown race in AsyncClientInitialiser,kind/failing-test area/docdb priority/high status/awaiting-triage,"### Description Observed in an xcluster test, but I think it might be a core client issue. 
The client destructor stack looks similar to the one in #11348 https://detective-gcp.dev.yugabyte.com/stability/test?branch=master&build_type=all&class=TwoDCTestParams%2FTwoDCTest&fail_tag=tsan&name=TestAlterWhenProducerIsInaccessible%2F3&platform=linux https://jenkins.dev.yugabyte.com/job/github-yugabyte-db-centos-master-clang12-tsan/897/artifact/build/tsan-clang12-dynamic-ninja/yb-test-logs/tests-integration-tests__twodc-test/TwoDCTestParams__TwoDCTest_TestAlterWhenProducerIsInaccessible__3.log ``` WARNING: ThreadSanitizer: data race (pid=1083) Read of size 8 at 0x7b08000d2e68 by main thread (mutexes: write M539723148243371304): #0 std::__1::shared_ptr::shared_ptr(std::__1::shared_ptr const&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/memory:3120:18 (libyrpc.so+0x370d00) #1 std::__1::shared_ptr* std::__1::construct_at, std::__1::shared_ptr&, std::__1::shared_ptr*>(std::__1::shared_ptr*, std::__1::shared_ptr&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/__memory/base.h:105:38 (libyrpc.so+0x379bb8) #2 void std::__1::allocator_traits > >::construct, std::__1::shared_ptr&, void, void>(std::__1::allocator >&, std::__1::shared_ptr*, std::__1::shared_ptr&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/__memory/allocator_traits.h:296:9 (libyrpc.so+0x379b78) #3 void std::__1::__construct_range_forward >, boost::container::stable_vector_iterator*, false>, std::__1::shared_ptr*>(std::__1::allocator >&, boost::container::stable_vector_iterator*, false>, boost::container::stable_vector_iterator*, false>, std::__1::shared_ptr*&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/memory:1042:9 (libyrpc.so+0x379ab8) #4 std::__1::enable_if<__is_cpp17_forward_iterator*, false> >::value, void>::type std::__1::vector, std::__1::allocator > >::__construct_at_end*, false> >(boost::container::stable_vector_iterator*, false>, boost::container::stable_vector_iterator*, false>, unsigned long) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/vector:1077:5 (libyrpc.so+0x37929d) #5 std::__1::enable_if<(__is_cpp17_forward_iterator*, false> >::value) && (is_constructible, std::__1::iterator_traits*, false> >::reference>::value), void>::type std::__1::vector, std::__1::allocator > >::assign*, false> >(boost::container::stable_vector_iterator*, false>, boost::container::stable_vector_iterator*, false>) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/vector:1453:13 (libyrpc.so+0x370535) #6 yb::rpc::Rpcs::DoRequestAbortAll(yb::StronglyTypedBool) ${BUILD_ROOT}/../../src/yb/rpc/rpc.cc:311:13 (libyrpc.so+0x36e669) #7 yb::rpc::Rpcs::Shutdown() ${BUILD_ROOT}/../../src/yb/rpc/rpc.cc:328:19 (libyrpc.so+0x36e8b6) #8 yb::client::YBClient::Data::~Data() ${BUILD_ROOT}/../../src/yb/client/client-internal.cc:327:9 (libyb_client.so+0x9229c4) #9 std::__1::default_delete::operator()(yb::client::YBClient::Data*) const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/memory:1423:5 (libyb_client.so+0x8f9c4e) #10 std::__1::unique_ptr 
>::reset(yb::client::YBClient::Data*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/memory:1684:7 (libyb_client.so+0x8f9bbd) #11 std::__1::unique_ptr >::~unique_ptr() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/memory:1638:19 (libyb_client.so+0x8dae0b) #12 yb::client::YBClient::~YBClient() ${BUILD_ROOT}/../../src/yb/client/client.cc:550:1 (libyb_client.so+0x8c6a52) #13 yb::AtomicUniquePtr::~AtomicUniquePtr() ${BUILD_ROOT}/../../src/yb/util/atomic.h:358:5 (libyb_client.so+0x87833e) #14 yb::client::AsyncClientInitialiser::~AsyncClientInitialiser() ${BUILD_ROOT}/../../src/yb/client/async_initializer.cc:69:1 (libyb_client.so+0x87733f) #15 std::__1::default_delete::operator()(yb::client::AsyncClientInitialiser*) const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/memory:1423:5 (libcdc.so+0x1873ee) #16 std::__1::unique_ptr >::reset(yb::client::AsyncClientInitialiser*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/memory:1684:7 (libcdc.so+0x18735d) #17 std::__1::unique_ptr >::operator=(std::nullptr_t) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/memory:1642:5 (libcdc.so+0x1814eb) #18 yb::cdc::CDCServiceImpl::Shutdown() ${BUILD_ROOT}/../../ent/src/yb/cdc/cdc_service.cc:2853:31 (libcdc.so+0x15d131) #19 yb::rpc::ServicePoolImpl::StartShutdown() ${BUILD_ROOT}/../../src/yb/rpc/service_pool.cc:170:17 (libyrpc.so+0x3ce2c2) #20 yb::rpc::ServicePool::StartShutdown() ${BUILD_ROOT}/../../src/yb/rpc/service_pool.cc:478:10 (libyrpc.so+0x3ccf75) #21 yb::rpc::Messenger::UnregisterAllServices() ${BUILD_ROOT}/../../src/yb/rpc/messenger.cc:427:15 (libyrpc.so+0x2ee127) #22 yb::server::RpcServer::Shutdown() ${BUILD_ROOT}/../../src/yb/server/rpc_server.cc:182:17 (libserver_process.so+0x13adbc) #23 yb::server::RpcServerBase::Shutdown() ${BUILD_ROOT}/../../src/yb/server/server_base.cc:434:18 (libserver_process.so+0x143014) #24 yb::server::RpcAndWebServerBase::Shutdown() ${BUILD_ROOT}/../../src/yb/server/server_base.cc:665:18 (libserver_process.so+0x145449) #25 yb::tserver::TabletServer::Shutdown() ${BUILD_ROOT}/../../src/yb/tserver/tablet_server.cc:472:26 (libtserver.so+0x4be325) #26 yb::tserver::enterprise::TabletServer::Shutdown() ${BUILD_ROOT}/../../ent/src/yb/tserver/tablet_server_ent.cc:110:10 (libtserver.so+0x5ccf8e) #27 yb::tserver::MiniTabletServer::Shutdown() ${BUILD_ROOT}/../../src/yb/tserver/mini_tablet_server.cc:203:14 (libtserver_test_util.so+0x7d768) #28 yb::MiniCluster::Shutdown() ${BUILD_ROOT}/../../src/yb/integration-tests/mini_cluster.cc:470:20 (libintegration-tests.so+0x26b2c7) #29 yb::enterprise::TwoDCTestBase::TearDown() ${BUILD_ROOT}/../../ent/src/yb/integration-tests/twodc_test_base.cc:61:38 (libintegration-tests.so+0x2dfef1) #30 void testing::internal::HandleSehExceptionsInMethodIfSupported(testing::Test*, void (testing::Test::*)(), char const*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/src/gmock-1.8.0/googletest/src/gtest.cc:2402:10 (libgmock.so+0x658cf) #31 void testing::internal::HandleExceptionsInMethodIfSupported(testing::Test*, void (testing::Test::*)(), char const*) 
/opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/src/gmock-1.8.0/googletest/src/gtest.cc:2438:14 (libgmock.so+0x658cf) #32 testing::Test::Run() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/src/gmock-1.8.0/googletest/src/gtest.cc:2482:3 (libgmock.so+0x45a70) #33 testing::TestInfo::Run() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/src/gmock-1.8.0/googletest/src/gtest.cc:2656:11 (libgmock.so+0x46c9d) #34 testing::TestCase::Run() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/src/gmock-1.8.0/googletest/src/gtest.cc:2774:28 (libgmock.so+0x47946) #35 testing::internal::UnitTestImpl::RunAllTests() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/src/gmock-1.8.0/googletest/src/gtest.cc:4649:43 (libgmock.so+0x52346) #36 bool testing::internal::HandleSehExceptionsInMethodIfSupported(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/src/gmock-1.8.0/googletest/src/gtest.cc:2402:10 (libgmock.so+0x666df) #37 bool testing::internal::HandleExceptionsInMethodIfSupported(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/src/gmock-1.8.0/googletest/src/gtest.cc:2438:14 (libgmock.so+0x666df) #38 testing::UnitTest::Run() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/src/gmock-1.8.0/googletest/src/gtest.cc:4257:10 (libgmock.so+0x51a99) #39 RUN_ALL_TESTS() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/include/gtest/gtest.h:2233:46 (libyb_test_main.so+0x728b) #40 main ${BUILD_ROOT}/../../src/yb/util/test_main.cc:109:13 (libyb_test_main.so+0x6e20) Previous write of size 8 at 0x7b08000d2e68 by thread T66 (mutexes: write M496657390905668756): #0 std::__1::enable_if<(is_move_constructible::value) && (is_move_assignable::value), void>::type std::__1::swap(yb::rpc::RpcCommand*&, yb::rpc::RpcCommand*&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/type_traits:3953:9 (libmaster_rpc.so+0x337a2) #1 std::__1::shared_ptr::swap(std::__1::shared_ptr&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/memory:3319:5 (libmaster_rpc.so+0x338e0) #2 std::__1::enable_if<__compatible_with::value, std::__1::shared_ptr&>::type std::__1::shared_ptr::operator=(std::__1::shared_ptr&&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/memory:3278:34 (libmaster_rpc.so+0x223c5) #3 yb::master::GetLeaderMasterRpc::SendRpc() ${BUILD_ROOT}/../../src/yb/master/master_rpc.cc:196:15 (libmaster_rpc.so+0x21d1e) #4 yb::rpc::RpcRetrier::DoRetry(yb::rpc::RpcCommand*, yb::Status const&) ${BUILD_ROOT}/../../src/yb/rpc/rpc.cc:228:10 (libyrpc.so+0x36dd6c) #5 decltype(*(std::__1::forward(fp0)).*fp(std::__1::forward(fp1), std::__1::forward(fp1))) std::__1::__invoke(void (yb::rpc::RpcRetrier::*&)(yb::rpc::RpcCommand*, yb::Status const&), yb::rpc::RpcRetrier*&, 
yb::rpc::RpcCommand*&, yb::Status const&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/type_traits:3635:1 (libyrpc.so+0x3766e0) #6 std::__1::__bind_return >, std::__1::tuple, __is_valid_bind_return >, std::__1::tuple >::value>::type std::__1::__apply_functor >, 0ul, 1ul, 2ul, std::__1::tuple >(void (yb::rpc::RpcRetrier::*&)(yb::rpc::RpcCommand*, yb::Status const&), std::__1::tuple >&, std::__1::__tuple_indices<0ul, 1ul, 2ul>, std::__1::tuple&&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:2857:12 (libyrpc.so+0x37660f) #7 std::__1::__bind_return >, std::__1::tuple, __is_valid_bind_return >, std::__1::tuple >::value>::type std::__1::__bind const&>::operator()(yb::Status const&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:2890:20 (libyrpc.so+0x376571) #8 boost::detail::function::void_function_obj_invoker1 const&>, void, yb::Status const&>::invoke(boost::detail::function::function_buffer&, yb::Status const&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/include/boost/function/function_template.hpp:158:11 (libyrpc.so+0x3760f8) #9 boost::function1::operator()(yb::Status const&) const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/include/boost/function/function_template.hpp:763:14 (libyb-redis.so+0x1d8e99) #10 yb::rpc::DelayedTask::TimerHandler(ev::timer&, int) ${BUILD_ROOT}/../../src/yb/rpc/reactor.cc:900:5 (libyrpc.so+0x3474b5) #11 void ev::base::method_thunk(ev_loop*, ev_timer*, int) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/common/include/ev++.h:479:7 (libyrpc.so+0x362f1a) #12 ev_invoke_pending (libev.so.4+0x882b) #13 yb::rpc::Reactor::RunThread() ${BUILD_ROOT}/../../src/yb/rpc/reactor.cc:498:9 (libyrpc.so+0x3409e4) #14 decltype(*(std::__1::forward(fp0)).*fp()) std::__1::__invoke(void (yb::rpc::Reactor::*&)(), yb::rpc::Reactor*&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/type_traits:3635:1 (libyrpc.so+0x3554bc) #15 std::__1::__bind_return, std::__1::tuple<>, __is_valid_bind_return, std::__1::tuple<> >::value>::type std::__1::__apply_functor, 0ul, std::__1::tuple<> >(void (yb::rpc::Reactor::*&)(), std::__1::tuple&, std::__1::__tuple_indices<0ul>, std::__1::tuple<>&&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:2857:12 (libyrpc.so+0x357a79) #16 std::__1::__bind_return, std::__1::tuple<>, __is_valid_bind_return, std::__1::tuple<> >::value>::type std::__1::__bind::operator()<>() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:2890:20 (libyrpc.so+0x357a21) #17 decltype(std::__1::forward&>(fp)()) std::__1::__invoke&>(std::__1::__bind&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/type_traits:3694:1 (libyrpc.so+0x3579b1) #18 void std::__1::__invoke_void_return_wrapper::__call&>(std::__1::__bind&) 
/opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/__functional_base:348:9 (libyrpc.so+0x357941) #19 std::__1::__function::__alloc_func, std::__1::allocator >, void ()>::operator()() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:1558:16 (libyrpc.so+0x357901) #20 std::__1::__function::__func, std::__1::allocator >, void ()>::operator()() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:1732:12 (libyrpc.so+0x35658d) #21 std::__1::__function::__value_func::operator()() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:1885:16 (libmaster.so+0xd807a4) #22 std::__1::function::operator()() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220806014430-c2f02d2024-centos7-x86_64-clang12/installed/tsan/libcxx/include/c++/v1/functional:2560:12 (libmaster.so+0xd67669) #23 yb::Thread::SuperviseThread(void*) ${BUILD_ROOT}/../../src/yb/util/thread.cc:774:3 (libyb_util.so+0x6d4448) ```",1.0,"[DocDB] Shutdown race in AsyncClientInitialiser",0,,0 219919,17135412937.0,IssuesEvent,2021-07-13 00:57:41,cockroachdb/cockroach,https://api.github.com/repos/cockroachdb/cockroach,closed,roachtest: tpcc/multiregion/survive=region/chaos=true failed,C-test-failure O-roachtest O-robot branch-master,"roachtest.tpcc/multiregion/survive=region/chaos=true [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=3154228&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=3154228&tab=artifacts#/tpcc/multiregion/survive=region/chaos=true) on master @ [ba689f91a5bbd7737a8a229522048b2ba91b2ec0](https://github.com/cockroachdb/cockroach/commits/ba689f91a5bbd7737a8a229522048b2ba91b2ec0): ``` | | main.(*tpccChaosEventProcessor).checkUptime | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/drt.go:48 | | main.(*tpccChaosEventProcessor).listen.func1 | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/drt.go:265 | | runtime.goexit | | /usr/local/go/src/runtime/asm_amd64.s:1371 | Wraps: (4) expected 0 errors, found from 3763874.000000, to 3763875.000000 | Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.leafError Wraps: (3) secondary error attachment | error at from 2021-07-07T12:30:29Z, to 2021-07-07T12:35:19Z on metric workload_tpcc_payment_error_total{instance=""34.139.41.226:2120""}: expected 0 errors, found from 3752976.000000, to 3752977.000000 | (1) attached stack trace | -- stack trace: | | main.(*tpccChaosEventProcessor).checkMetrics | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/drt.go:190 | | [...repeated from below...]
| Wraps: (2) error at from 2021-07-07T12:30:29Z, to 2021-07-07T12:35:19Z on metric workload_tpcc_payment_error_total{instance=""34.139.41.226:2120""} | Wraps: (3) attached stack trace | -- stack trace: | | main.(*tpccChaosEventProcessor).checkUptime.func2 | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/drt.go:65 | | main.(*tpccChaosEventProcessor).checkMetrics | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/drt.go:189 | | main.(*tpccChaosEventProcessor).checkUptime | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/drt.go:48 | | main.(*tpccChaosEventProcessor).listen.func1 | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/drt.go:265 | | runtime.goexit | | /usr/local/go/src/runtime/asm_amd64.s:1371 | Wraps: (4) expected 0 errors, found from 3752976.000000, to 3752977.000000 | Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.leafError Wraps: (4) attached stack trace -- stack trace: | main.(*tpccChaosEventProcessor).checkMetrics | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/drt.go:190 | [...repeated from below...] Wraps: (5) error at from 2021-07-07T12:10:25Z, to 2021-07-07T12:15:15Z on metric workload_tpcc_newOrder_error_total{instance=""34.139.41.226:2110""} Wraps: (6) attached stack trace -- stack trace: | main.(*tpccChaosEventProcessor).checkUptime.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/drt.go:65 | main.(*tpccChaosEventProcessor).checkMetrics | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/drt.go:189 | main.(*tpccChaosEventProcessor).checkUptime | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/drt.go:48 | main.(*tpccChaosEventProcessor).listen.func1 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/drt.go:265 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1371 Wraps: (7) expected 0 errors, found from 1889934.000000, to 1889935.000000 Error types: (1) *secondary.withSecondaryError (2) *secondary.withSecondaryError (3) *secondary.withSecondaryError (4) *withstack.withStack (5) *errutil.withPrefix (6) *withstack.withStack (7) *errutil.leafError ```
Reproduce

To reproduce, try: ```bash # From https://go.crdb.dev/p/roachstress, perhaps edited lightly. caffeinate ./roachstress.sh tpcc/multiregion/survive=region/chaos=true ```
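The assertions failing above come from the chaos event processor checking that the TPC-C error counters stay flat around node restarts. As an illustration only (the real check lives in drt.go and is written in Go), here is a minimal Python sketch of that kind of Prometheus query, reusing the metric selector, instance, and time window reported above; the Prometheus address is a placeholder:

```python
# Minimal sketch of a "counter must not increase over a window" check against
# the Prometheus HTTP API. Illustrative only; not the roachtest implementation.
import json
import urllib.parse
import urllib.request

PROM_URL = "http://localhost:9090"  # placeholder Prometheus address

def counter_delta(selector: str, start: str, end: str, step: str = "30s") -> float:
    """Increase of a counter between two RFC3339 timestamps."""
    params = urllib.parse.urlencode(
        {"query": selector, "start": start, "end": end, "step": step}
    )
    with urllib.request.urlopen(f"{PROM_URL}/api/v1/query_range?{params}") as resp:
        series = json.load(resp)["data"]["result"]
    if not series:
        return 0.0
    values = series[0]["values"]  # [[timestamp, "value"], ...]
    return float(values[-1][1]) - float(values[0][1])

delta = counter_delta(
    'workload_tpcc_payment_error_total{instance="34.139.41.226:2120"}',
    start="2021-07-07T12:30:29Z",
    end="2021-07-07T12:35:19Z",
)
assert delta == 0, f"expected 0 errors, found {delta}"
```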

/cc @cockroachdb/multiregion [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*tpcc/multiregion/survive=region/chaos=true.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) ",2.0,"roachtest: tpcc/multiregion/survive=region/chaos=true failed",0,,0 229231,17536062973.0,IssuesEvent,2021-08-12 06:34:54,arcus-azure/arcus.security,https://api.github.com/repos/arcus-azure/arcus.security,closed,[Docs] Remove obsolete Azure Key Vault documentation from current documentation,invalid documentation azure-key-vault,"**Describe what the problem is** Apparently, we still have obsolete feature documentation on authentication with Azure Key Vault that uses old/obsolete features.
This was previously required before the introduction of the secret store but should now be removed from the current version feature docs. https://security.arcus-azure.net/features/auth/azure-key-vault This was first discovered by @pim-simons.",1.0,"[Docs] Remove obsolete Azure Key Vault documentation from current documentation",0,,0 617149,19344108764.0,IssuesEvent,2021-12-15 09:01:57,Code-Poets/sheetstorm,https://api.github.com/repos/Code-Poets/sheetstorm,closed,Update the look of the change password page,feature priority high UX,"There is no mockup for this page. Content order:
- `Old password`
- `New password`
- new password requirements in a small red font, or maybe shown while typing in the `New password` field
- `New password confirmation`
- space
- `Change password button`
Should be done:
- Put content into the main container
- Change the order of the contents as given above",1.0,"Update the look of the change password page",0,,0 128764,18070130064.0,IssuesEvent,2021-09-21 01:14:46,RG4421/terra-dev-site,https://api.github.com/repos/RG4421/terra-dev-site,opened,"CVE-2021-3803 (Medium) detected in nth-check-1.0.2.tgz, nth-check-2.0.0.tgz",security vulnerability,"## CVE-2021-3803 - Medium Severity Vulnerability
Vulnerable Libraries - nth-check-1.0.2.tgz, nth-check-2.0.0.tgz

nth-check-1.0.2.tgz

performant nth-check parser & compiler

Library home page: https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz

Path to dependency file: terra-dev-site/package.json

Path to vulnerable library: terra-dev-site/node_modules/nth-check/package.json

Dependency Hierarchy:
- html-webpack-plugin-3.2.0.tgz (Root Library)
  - pretty-error-2.1.2.tgz
    - renderkid-2.0.5.tgz
      - css-select-2.1.0.tgz
        - :x: **nth-check-1.0.2.tgz** (Vulnerable Library)

nth-check-2.0.0.tgz

Parses and compiles CSS nth-checks to highly optimized functions.

Library home page: https://registry.npmjs.org/nth-check/-/nth-check-2.0.0.tgz

Path to dependency file: terra-dev-site/package.json

Path to vulnerable library: terra-dev-site/node_modules/cheerio-select-tmp/node_modules/nth-check/package.json

Dependency Hierarchy:
- enzyme-3.11.0.tgz (Root Library)
  - cheerio-1.0.0-rc.5.tgz
    - cheerio-select-tmp-0.1.1.tgz
      - css-select-3.1.2.tgz
        - :x: **nth-check-2.0.0.tgz** (Vulnerable Library)
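Both hierarchies pull nth-check in transitively, so the vulnerable copies sit in nested node_modules directories. A hedged helper (a generic scan, not part of the scanner that produced this report) to enumerate every vendored nth-check copy and its version:

```python
# Sketch: list every nth-check copy under node_modules with its version.
# Run from the project root; anything below 2.0.1 matches this advisory.
import json
from pathlib import Path

for pkg_json in Path(".").glob("**/node_modules/nth-check/package.json"):
    version = json.loads(pkg_json.read_text(encoding="utf-8")).get("version")
    print(version, pkg_json)
```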

Found in base branch: master

Vulnerability Details

nth-check is vulnerable to Inefficient Regular Expression Complexity
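"Inefficient regular expression complexity" means a crafted input can trigger catastrophic backtracking (ReDoS). The sketch below reproduces the failure mode with a deliberately pathological pattern; it illustrates the vulnerability class and is not the actual regex inside nth-check:

```python
# Catastrophic backtracking demo: runtime roughly doubles with each added 'a'.
import re
import time

pattern = re.compile(r"^(a+)+$")  # nested quantifiers, the classic ReDoS shape

for n in (18, 20, 22, 24):
    payload = "a" * n + "!"  # trailing '!' forces the engine to try every split
    start = time.perf_counter()
    pattern.match(payload)
    print(f"n={n}: {time.perf_counter() - start:.2f}s")
```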

Publish Date: 2021-09-17

URL: CVE-2021-3803

CVSS 3 Score Details (5.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: N/A
  - Attack Complexity: N/A
  - Privileges Required: N/A
  - User Interaction: N/A
  - Scope: N/A
- Impact Metrics:
  - Confidentiality Impact: N/A
  - Integrity Impact: N/A
  - Availability Impact: N/A


Suggested Fix

Type: Upgrade version

Origin: https://github.com/fb55/nth-check/compare/v2.0.0...v2.0.1

Release Date: 2021-09-17

Fix Resolution: nth-check - v2.0.1

",True,"CVE-2021-3803 (Medium) detected in nth-check-1.0.2.tgz, nth-check-2.0.0.tgz - ## CVE-2021-3803 - Medium Severity Vulnerability
",0,cve medium detected in nth check tgz nth check tgz cve medium severity vulnerability vulnerable libraries nth check tgz nth check tgz nth check tgz performant nth check parser compiler library home page a href path to dependency file terra dev site package json path to vulnerable library terra dev site node modules nth check package json dependency hierarchy html webpack plugin tgz root library pretty error tgz renderkid tgz css select tgz x nth check tgz vulnerable library nth check tgz parses and compiles css nth checks to highly optimized functions library home page a href path to dependency file terra dev site package json path to vulnerable library terra dev site node modules cheerio select tmp node modules nth check package json dependency hierarchy enzyme tgz root library cheerio rc tgz cheerio select tmp tgz css select tgz x nth check tgz vulnerable library found in base branch master vulnerability details nth check is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution nth check isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree html webpack plugin pretty error renderkid css select nth check isminimumfixversionavailable true minimumfixversion nth check packagetype javascript node js packagename nth check packageversion packagefilepaths istransitivedependency true dependencytree enzyme cheerio rc cheerio select tmp css select nth check isminimumfixversionavailable true minimumfixversion nth check basebranches vulnerabilityidentifier cve vulnerabilitydetails nth check is vulnerable to inefficient regular expression complexity vulnerabilityurl ,0 129038,10560940379.0,IssuesEvent,2019-10-04 14:52:51,nyu-devops-fall19-suppliers/suppliers,https://api.github.com/repos/nyu-devops-fall19-suppliers/suppliers,opened,Write a test cases for query operation,testing,"**As a** TDD developer **I need** test cases for query operation **So that** the query operation can be tested **Assumptions:** * ... **Acceptance Criteria** ``` Given the test cases are implemented When the actual query operation is implemented Then the developed query operation can be tested ``` ",1.0,"Write a test cases for query operation - **As a** TDD developer **I need** test cases for query operation **So that** the query operation can be tested **Assumptions:** * ... 
196209,15585826842.0,IssuesEvent,2021-03-18 00:32:23,HalinaPP/travel-app,https://api.github.com/repos/HalinaPP/travel-app,closed,Specify where the panoramas are visible,documentation,Note in the instructions or in the video which of our countries have panoramas and how to view them,1.0,Specify where the panoramas are visible,0,,0 703150,24147924103.0,IssuesEvent,2022-09-21 20:37:48,rancher/rke2,https://api.github.com/repos/rancher/rke2,closed,Cannot run rancher on hardened system,kind/bug priority/critical-urgent," **Environmental Info:** RKE2 Version: v1.22.12+rke2r1, v1.23.9+rke2r1, and v1.24.3+rke2r1 Node(s) CPU architecture, OS, and Version: Any Cluster Configuration: 3 servers. Also repro'ed on just 1 server. **Describe the bug:** After bringing up a hardened rke2 cluster and running rancher, I cannot access the rancher UI. The rancher pods appear to all be running, and the UI appears to be accessible via ClusterIP, but it fails to open when using the specified hostname. I am using AWS in this testing and see that the TargetGroups set up to my LB are showing Unhealthy on 443 and 80 (though 9345 and 6443 are both showing Healthy). **Steps To Reproduce:** 1. Install rke2 using `profile: cis-1.6` in the config. Also include a `tls-san` using a valid hostname. 2. Ensure the nodes are all set up to recognize the hostname (for example, with AWS I register the nodes to four TargetGroups with ports: 6443, 9345, 80, and 443. Then attach those to a LoadBalancer. Then I create a route53 record pointing to the LoadBalancer DNS. I use the route53 record as the hostname in my tls-san in step 1 above). 3. Install rancher: ``` # Update helm, set up namespaces, and install cert-manager helm repo add rancher-latest https://releases.rancher.com/server-charts/latest && \ helm repo add jetstack https://charts.jetstack.io && \ helm repo update && \ kubectl create namespace cattle-system && \ kubectl create namespace cert-manager && \ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.8.0/cert-manager.crds.yaml && \ helm install \ cert-manager jetstack/cert-manager \ --namespace cert-manager \ --version v1.8.0 # Ensure cert-manager pods are running: kubectl get pods --namespace cert-manager # Install rancher. I've confirmed this happens with both 2.6.7-rc5 and 2.6.6: helm install rancher rancher-latest/rancher \ --namespace cattle-system \ --set hostname=my.redacted.hostname \ --set rancherImageTag=v2.6.7-rc5 \ --version=v2.6.7-rc5 ``` 4. The above steps will finish with an output that contains information to access the UI. Wait until all the pods are available, then access that link: ``` kubectl -n cattle-system rollout status deploy/rancher ``` **Expected behavior:** The Rancher UI should be accessible. **Actual behavior:** The Rancher UI is not accessible at all. 
**Additional context / logs:** I believe this is related to the changes from https://github.com/rancher/rke2/issues/2206, probably mixed with the PSPs and NetworkPolicies we have in a hardened setup. I think the fix should include updating those netpols or psps to account for this hostnetwork change. The rancher pods do NOT deploy with any specific hostnetwork setting as far as I can tell, and I can't see any errors in the logs: [rancher-pod-logs.log](https://github.com/rancher/rke2/files/9214592/rancher-pod-logs.log) [ingress-nginx.log](https://github.com/rancher/rke2/files/9214605/ingress-nginx.log) I'm happy to reproduce and gather any more information necessary.",1.0,"Cannot run rancher on hardened system",0,,0 220034,7349196394.0,IssuesEvent,2018-03-08 09:50:04,architecture-building-systems/CityEnergyAnalyst,https://api.github.com/repos/architecture-building-systems/CityEnergyAnalyst,closed,CEA Component in Grasshopper: read and apply the spatial reference information from terrain to shapefile.,Priority 1,"The spatial reference of the shapefiles is not correct. The best way to fix this, we believe, is for the CEA Component to read the spatial reference of the provided terrain.tif and apply that spatial reference to zone.shp, district.shp, and streets.shp. For the terrain we are using, please refer to the terrain file in the urban design folder.",1.0,"CEA Component in Grasshopper: read and apply the spatial reference information from terrain to shapefile.",0,,0
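One plausible implementation of what the CEA issue asks for, sketched with the GDAL/OGR Python bindings (an assumption; the Grasshopper component itself is not shown here): read the CRS from terrain.tif and write matching .prj sidecars for the shapefiles named in the issue. File paths are placeholders:

```python
# Sketch: copy the spatial reference of terrain.tif onto shapefiles via .prj files.
# Assumes the GDAL Python bindings (osgeo) are installed; paths are placeholders.
from pathlib import Path

from osgeo import gdal, osr

terrain = gdal.Open("terrain.tif")
srs = osr.SpatialReference(wkt=terrain.GetProjection())  # CRS of the terrain raster
srs.MorphToESRI()  # shapefile .prj sidecars use the ESRI WKT flavor

for shp in ("zone.shp", "district.shp", "streets.shp"):
    prj = Path(shp).with_suffix(".prj")
    prj.write_text(srs.ExportToWkt())
    print(f"wrote {prj}")
```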
8174,26368252231.0,IssuesEvent,2023-01-11 18:19:33,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,FAILED: Automated Tests(27),automation,"Stats: { ""suites"": 60, ""tests"": 465, ""passes"": 438, ""pending"": 0, ""failures"": 27, ""start"": ""2023-01-09T23:09:48.880Z"", ""end"": ""2023-01-09T23:57:24.176Z"", ""duration"": 1643165, ""testsRegistered"": 465, ""passPercent"": 94.19354838709677, ""pendingPercent"": 0, ""other"": 0, ""hasOther"": false, ""skipped"": 0, ""hasSkipped"": false } Failed Tests: ""Adds environment with Client ID/Secret authenticator to product"" ""Adds environment with JWT - Generated Key Pair authenticator to product"" ""Adds environment with JWT - JWKS URL authenticator to product"" ""Applies authorization plugin to service published to Kong Gateway"" ""Adds environment for invalid authorization profile to other"" ""Creates an access request"" ""Select scopes in Authorization Tab"" ""approves an access request"" ""Get access token using client ID and secret; make API request"" ""Creates an access request"" ""approves an access request"" ""Get access token using JWT key pair; make API request"" ""Creates an access request"" ""approves an access request"" ""Get access token using JWT key pair; make API request"" ""Regenrate credential client ID and Secret"" ""Make sure that the old client ID and Secret is disabled"" ""Verify that service is accessible with new client ID and Secret"" ""Update the authorization scope from Kong ACL-API to Client Credential"" ""applies authorization plugin to service published to Kong Gateway"" ""Verify that service is not accessible with existing Client ID - Secret credentials"" ""Raise request access"" ""Collect the credentials"" ""approves an access request"" ""Verify that API is accessible with the generated API Key"" ""Edit the created profile and verify the updated Issuer URL"" ""Creates an access request"" Run Link: https://github.com/bcgov/api-services-portal/actions/runs/3878416299",1.0,"FAILED: Automated Tests(27)",1,,1
3393,13647547605.0,IssuesEvent,2020-09-26 03:57:30,Python-World/python-mini-projects,https://api.github.com/repos/Python-World/python-mini-projects,closed,Create application to record video using mobile device,Automation Opencv good first issue help wanted python,"**problem statement** Create application to record video using mobile device",1.0,"Create application to record video using mobile device - **problem statement** Create application to record video using mobile device",1,create application to record video using mobile device problem statement create application to record video using mobile device,1 6408,23100606693.0,IssuesEvent,2022-07-27 02:04:32,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,closed,[IMPROVEMENT] Purging a volume before rebuilding starts,area/engine priority/1 require/automation-e2e require/doc area/stability kind/improvement backport-needed/1.2.5 area/replica backport-needed/1.3.1,"## Is your improvement request related to a feature? Please describe If we can do snapshot purge before rebuilding, we would eliminate 1x space used by the system snapshots for the case we mentioned in https://longhorn.io/docs/1.3.0/volumes-and-nodes/volume-size/#space-configuration-suggestions-for-volumes ## Describe the solution you'd like Before creating the system snapshot for the rebuilding replica, volume should do snapshot purge automatically ## Additional context Some users/customers are complaining about the extra space used by Longhorn volumes in the worst case, this improvement would help reduce the overhead.",1.0,"[IMPROVEMENT] Purging a volume before rebuilding starts - ## Is your improvement request related to a feature? Please describe If we can do snapshot purge before rebuilding, we would eliminate 1x space used by the system snapshots for the case we mentioned in https://longhorn.io/docs/1.3.0/volumes-and-nodes/volume-size/#space-configuration-suggestions-for-volumes ## Describe the solution you'd like Before creating the system snapshot for the rebuilding replica, volume should do snapshot purge automatically ## Additional context Some users/customers are complaining about the extra space used by Longhorn volumes in the worst case, this improvement would help reduce the overhead.",1, purging a volume before rebuilding starts is your improvement request related to a feature please describe if we can do snapshot purge before rebuilding we would eliminate space used by the system snapshots for the case we mentioned in describe the solution you d like before creating the system snapshot for the rebuilding replica volume should do snapshot purge automatically additional context some users customers are complaining about the extra space used by longhorn volumes in the worst case this improvement would help reduce the overhead ,1 4718,17349440742.0,IssuesEvent,2021-07-29 06:42:25,rancher-sandbox/cOS-toolkit,https://api.github.com/repos/rancher-sandbox/cOS-toolkit,closed,Add GCP disks to releases,automation enhancement release,"https://github.com/rancher-sandbox/cOS-toolkit/releases/tag/v0.6.0 We are missing the GCP disk images from the releases. This card is about adding the GCP disk image which is generated by the CI to be uploaded during releases",1.0,"Add GCP disks to releases - https://github.com/rancher-sandbox/cOS-toolkit/releases/tag/v0.6.0 We are missing the GCP disk images from the releases. 
This card is about adding the GCP disk image which is generated by the CI to be uploaded during releases",1,add gcp disks to releases we are missing the gcp disk images from the releases this card is about adding the gcp disk image which is generated by the ci to be uploaded during releases,1 2091,11360349981.0,IssuesEvent,2020-01-26 05:56:52,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,closed,a8n: Upgrading Guava in a Gradle project,automation,"From [RFC 36](https://docs.google.com/document/d/1DKWy2zC6_rDZzoPS7ASUpvnVDK3Zj6cf_I9Riq9KNu0/edit) Upgrading Guava in a Gradle project and rewriting call sites using Comby syntax",1.0,"a8n: Upgrading Guava in a Gradle project - From [RFC 36](https://docs.google.com/document/d/1DKWy2zC6_rDZzoPS7ASUpvnVDK3Zj6cf_I9Riq9KNu0/edit) Upgrading Guava in a Gradle project and rewriting call sites using Comby syntax",1, upgrading guava in a gradle project from upgrading guava in a gradle project and rewriting call sites using comby syntax,1 83804,16373176819.0,IssuesEvent,2021-05-15 15:09:38,joomla/joomla-cms,https://api.github.com/repos/joomla/joomla-cms,closed,[4][com_finder][ACL bypass] Smart Search reveals author names of unpublished/acl restricted articles.,No Code Attached Yet,"### Steps to reproduce the issue tested on Joomla 4.0-dev Create a menu link to Smart Search Visit that menu link and look in the Advanced Search -> Search by author dropdown - note what you see Create a new UNPUBLISHED Article. Enter an Author Alias you would recognise. NOTE THE ARTICLE IS UNPUBLISHED. Visit that menu link and look in the Advanced Search -> Search by author dropdown - note what you see ### Expected result I expect to NOT see the Author Alias of an unpublished item, if there are no published items with that same Author name. ### Actual result Smart search is leaking information and displaying the names of Authors of unpublished items, where there are zero published items by that author. ### Also repeat this with setting the article to an ACL level your public has no access to (like Special or Super Users). Repeat the test. You can now view the author name of items that are restricted to you by ACL. **This is probably a security issue then as its ACL not being applied correctly too.** @joomla/security",1.0,"[4][com_finder][ACL bypass] Smart Search reveals author names of unpublished/acl restricted articles. - ### Steps to reproduce the issue tested on Joomla 4.0-dev Create a menu link to Smart Search Visit that menu link and look in the Advanced Search -> Search by author dropdown - note what you see Create a new UNPUBLISHED Article. Enter an Author Alias you would recognise. NOTE THE ARTICLE IS UNPUBLISHED. Visit that menu link and look in the Advanced Search -> Search by author dropdown - note what you see ### Expected result I expect to NOT see the Author Alias of an unpublished item, if there are no published items with that same Author name. ### Actual result Smart search is leaking information and displaying the names of Authors of unpublished items, where there are zero published items by that author. ### Also repeat this with setting the article to an ACL level your public has no access to (like Special or Super Users). Repeat the test. You can now view the author name of items that are restricted to you by ACL. 
**This is probably a security issue then as its ACL not being applied correctly too.** @joomla/security",0, smart search reveals author names of unpublished acl restricted articles steps to reproduce the issue tested on joomla dev create a menu link to smart search visit that menu link and look in the advanced search search by author dropdown note what you see create a new unpublished article enter an author alias you would recognise note the article is unpublished visit that menu link and look in the advanced search search by author dropdown note what you see expected result i expect to not see the author alias of an unpublished item if there are no published items with that same author name actual result smart search is leaking information and displaying the names of authors of unpublished items where there are zero published items by that author also repeat this with setting the article to an acl level your public has no access to like special or super users repeat the test you can now view the author name of items that are restricted to you by acl this is probably a security issue then as its acl not being applied correctly too joomla security,0 9454,28366655799.0,IssuesEvent,2023-04-12 14:17:46,dylan-lang/opendylan,https://api.github.com/repos/dylan-lang/opendylan,closed,Create GitHub CI for compiler PRs,Automation,"GitHub CI should bootstrap the compiler when changes touch `sources/dfmc`, and some other files like `Makefile.in`. Such changes may include new tests which break when using the released compiler, so there also needs to be a way to turn off the `libraries-test-suite` CI for such PRs. If there's no obvious way to accomplish both of those goals easily, we can probably get away with running the bootstrap CI for all changes since the volume is very low. Bonus points for figuring out which PRs only require a 1-stage bootstrap and doing the appropriate build.",1.0,"Create GitHub CI for compiler PRs - GitHub CI should bootstrap the compiler when changes touch `sources/dfmc`, and some other files like `Makefile.in`. Such changes may include new tests which break when using the released compiler, so there also needs to be a way to turn off the `libraries-test-suite` CI for such PRs. If there's no obvious way to accomplish both of those goals easily, we can probably get away with running the bootstrap CI for all changes since the volume is very low. 
Bonus points for figuring out which PRs only require a 1-stage bootstrap and doing the appropriate build.",1,create github ci for compiler prs github ci should bootstrap the compiler when changes touch sources dfmc and some other files like makefile in such changes may include new tests which break when using the released compiler so there also needs to be a way to turn off the libraries test suite ci for such prs if there s no obvious way to accomplish both of those goals easily we can probably get away with running the bootstrap ci for all changes since the volume is very low bonus points for figuring out which prs only require a stage bootstrap and doing the appropriate build ,1 293305,22051967176.0,IssuesEvent,2022-05-30 09:26:47,timescale/docs,https://api.github.com/repos/timescale/docs,closed,[Docs RFC] Add information about working with logs in MST,documentation enhancement community,"# Describe change in content, appearance, or functionality Add information about easier ways to work with logs, for example, to integrate with Loggly [Relevant Slack message](https://timescaledb.slack.com/archives/C01R6ME0JCS/p1646057029634489?thread_ts=1645739535.156159&cid=C01R6ME0JCS) # Subject matter expert (SME) [If known, who is a good person to ask about this topic] # Deadline [When does this need to be addressed] # Any further info [Anything else you want to add, or further links] ",1.0,"[Docs RFC] Add information about working with logs in MST - # Describe change in content, appearance, or functionality Add information about easier ways to work with logs, for example, to integrate with Loggly [Relevant Slack message](https://timescaledb.slack.com/archives/C01R6ME0JCS/p1646057029634489?thread_ts=1645739535.156159&cid=C01R6ME0JCS) # Subject matter expert (SME) [If known, who is a good person to ask about this topic] # Deadline [When does this need to be addressed] # Any further info [Anything else you want to add, or further links] ",0, add information about working with logs in mst describe change in content appearance or functionality add information about easier ways to work with logs for example to integrate with loggly subject matter expert sme deadline any further info ,0 3916,14969208279.0,IssuesEvent,2021-01-27 17:50:14,submariner-io/submariner,https://api.github.com/repos/submariner-io/submariner,opened,"Enable additional Go linters, fix issues they flag",automation," **What would you like to be added**: Enable the commented golangci-lint linters in each repo's `.golangci.yml`, or rule them out as potentially useful and remove them. For example, from this repo: ``` linters: disable-all: true enable: - bodyclose - deadcode - depguard # - dupl - errcheck - exportloopref # - funlen # - gochecknoglobals # - gochecknoinits - gocritic - gocyclo - gofmt - goimports # - golint - gosec - gosimple - govet - ineffassign - interfacer - lll - maligned - misspell - nakedret - staticcheck - structcheck # - stylecheck # - testpackage - typecheck - unconvert - unparam - unused - varcheck - whitespace - wsl ``` https://github.com/submariner-io/submariner/blob/master/.golangci.yml **Why is this needed**: When doing the major set of linting refactoring that added golangcl-lint, I didn't want to continue working on enabling Go linters before taking a first pass at the many other types of linting I wanted to add. I included these commented out linters as a starting point for later, as they seemed to flag real, non-duplicate areas for improvement. 
Now that we have broad linting coverage, we should loop back and dig deeper into enabling Go linters and fixing the issues they flag.",1.0,"Enable additional Go linters, fix issues they flag - **What would you like to be added**: Enable the commented golangci-lint linters in each repo's `.golangci.yml`, or rule them out as potentially useful and remove them. For example, from this repo: ``` linters: disable-all: true enable: - bodyclose - deadcode - depguard # - dupl - errcheck - exportloopref # - funlen # - gochecknoglobals # - gochecknoinits - gocritic - gocyclo - gofmt - goimports # - golint - gosec - gosimple - govet - ineffassign - interfacer - lll - maligned - misspell - nakedret - staticcheck - structcheck # - stylecheck # - testpackage - typecheck - unconvert - unparam - unused - varcheck - whitespace - wsl ``` https://github.com/submariner-io/submariner/blob/master/.golangci.yml **Why is this needed**: When doing the major set of linting refactoring that added golangci-lint, I didn't want to continue working on enabling Go linters before taking a first pass at the many other types of linting I wanted to add. I included these commented out linters as a starting point for later, as they seemed to flag real, non-duplicate areas for improvement. Now that we have broad linting coverage, we should loop back and dig deeper into enabling Go linters and fixing the issues they flag.",1,enable additional go linters fix issues they flag what would you like to be added enable the commented golangci lint linters in each repo s golangci yml or rule them out as potentially useful and remove them for example from this repo linters disable all true enable bodyclose deadcode depguard dupl errcheck exportloopref funlen gochecknoglobals gochecknoinits gocritic gocyclo gofmt goimports golint gosec gosimple govet ineffassign interfacer lll maligned misspell nakedret staticcheck structcheck stylecheck testpackage typecheck unconvert unparam unused varcheck whitespace wsl why is this needed when doing the major set of linting refactoring that added golangci lint i didn t want to continue working on enabling go linters before taking a first pass at the many other types of linting i wanted to add i included these commented out linters as a starting point for later as they seemed to flag real non duplicate areas for improvement now that we have broad linting coverage we should loop back and dig deeper into enabling go linters and fixing the issues they flag ,1 6854,23979334885.0,IssuesEvent,2022-09-13 13:59:31,OrdinaNederland/Stichting-NUTwente,https://api.github.com/repos/OrdinaNederland/Stichting-NUTwente,closed,Automated tests are needed for changing Contactgegevens and Woonadres,enhancement continuïteit automation,"**Issue description:** As a NuTwente employee, I want it validated after every change that changing data on the website works. As a NuTwente employee, I do not want to spend time validating this on every push. **Solution** As a NuTwente employee, I want the testing of changing data on the website to be automated.
That way I know for sure that it works and I do not have to test this manually every time. DoD - [x] Automated testing of changing Contactgegevens on the mijn gastgezinnen page works - [x] Automated testing of changing Woonadres on the mijn gastgezinnen page works",1.0,"Automated tests are needed for changing Contactgegevens and Woonadres - **Issue description:** As a NuTwente employee, I want it validated after every change that changing data on the website works. As a NuTwente employee, I do not want to spend time validating this on every push. **Solution** As a NuTwente employee, I want the testing of changing data on the website to be automated. That way I know for sure that it works and I do not have to test this manually every time. DoD - [x] Automated testing of changing Contactgegevens on the mijn gastgezinnen page works - [x] Automated testing of changing Woonadres on the mijn gastgezinnen page works",1,automated tests are needed for changing contactgegevens and woonadres issue description as a nutwente employee i want it validated after every change that changing data on the website works as a nutwente employee i do not want to spend time validating this on every push solution as a nutwente employee i want the testing of changing data on the website to be automated that way i know for sure that it works and i do not have to test this manually every time dod automated testing of changing contactgegevens on the mijn gastgezinnen page works automated testing of changing woonadres on the mijn gastgezinnen page works,1 16155,10435746799.0,IssuesEvent,2019-09-17 17:59:31,Azure/azure-sdk-for-python,https://api.github.com/repos/Azure/azure-sdk-for-python,closed,What is the minimum version of the Service Bus Python library that supports the CorrelationId property? ,Client Service Bus customer-reported,"We didn’t see **correlation id** in the 'message' document https://docs.microsoft.com/en-us/python/api/azure-servicebus/azure.servicebus.message?view=azure-python. We would like to consult the SDK team on whether this feature is realized in the latest SDK. Thank you!!! - [ ] What is the minimum version of the Service Bus Python library that supports the CorrelationId property? ",1.0,"What is the minimum version of the Service Bus Python library that supports the CorrelationId property? - We didn’t see **correlation id** in the 'message' document https://docs.microsoft.com/en-us/python/api/azure-servicebus/azure.servicebus.message?view=azure-python. We would like to consult the SDK team on whether this feature is realized in the latest SDK. Thank you!!! - [ ] What is the minimum version of the Service Bus Python library that supports the CorrelationId property? 
",0,what the minimum version need of service bus python library which support the property of correlationid we didn’t see correlation id in message document we would like to consult sdk team whether this feature realized in latest sdk thank you what the minimum version need of service bus python library which support the property of correlationid ,0 67336,14861582505.0,IssuesEvent,2021-01-18 23:16:08,metnew-gr/dvnareal,https://api.github.com/repos/metnew-gr/dvnareal,opened,CVE-2018-14404 (High) detected in libxmljsv0.19.7,security vulnerability,"## CVE-2018-14404 - High Severity Vulnerability
Vulnerable Library - libxmljsv0.19.7

libxml bindings for v8 javascript engine

Library home page: https://github.com/libxmljs/libxmljs.git

Found in HEAD commit: 3ec818084b0703770c91de712552b03660c49963

Found in base branch: master

Vulnerable Source Files (1)

dvnareal/node_modules/libxmljs/vendor/libxml/xpath.c

Vulnerability Details

A NULL pointer dereference vulnerability exists in the xpath.c:xmlXPathCompOpEval() function of libxml2 through 2.9.8 when parsing an invalid XPath expression in the XPATH_OP_AND or XPATH_OP_OR case. Applications processing untrusted XSL format inputs with the use of the libxml2 library may be vulnerable to a denial of service attack due to a crash of the application.
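
For context, a minimal sketch of how this surfaces through the libxmljs bindings flagged above. `parseXml` and `find` are real libxmljs entry points, but the throw-versus-crash split in the comments paraphrases the advisory and is not verified against this exact build:

```typescript
// Sketch assuming libxmljs 0.19.x linked against the bundled libxml2.
import * as libxmljs from "libxmljs";

const doc = libxmljs.parseXml("<root><a>1</a></root>");

function safeFind(expr: string) {
  try {
    return doc.find(expr); // valid expressions like "//a" work normally
  } catch {
    return null; // on patched libxml2 (>= 2.9.9) bad XPath throws here
  }
}

safeFind("//a[");
// On libxml2 <= 2.9.8 an invalid AND/OR expression can instead hit the
// native NULL dereference and take down the whole process, so the
// try/catch is not a mitigation - upgrading the library is.
```
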

Publish Date: 2018-07-19

URL: CVE-2018-14404

CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High
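
The 7.5 can be reproduced from the metrics above with the standard CVSS v3.x base-score equations; the weights below are taken from the CVSS v3.1 specification (assuming v3.1 scoring, which the report does not state explicitly):

```typescript
// Worked check of the 7.5 base score for AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H.
const av = 0.85, ac = 0.77, pr = 0.85, ui = 0.85; // Network / Low / None / None
const c = 0.0, i = 0.0, a = 0.56;                 // None / None / High

const iss = 1 - (1 - c) * (1 - i) * (1 - a);      // 0.56
const impact = 6.42 * iss;                        // ~3.60 (scope unchanged)
const exploitability = 8.22 * av * ac * pr * ui;  // ~3.89

// CVSS "roundup": smallest one-decimal value >= the raw score.
const base = Math.ceil(Math.min(impact + exploitability, 10) * 10) / 10;
console.log(base); // 7.5
```
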

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/GNOME/libxml2/commit/a436374994c47b12d5de1b8b1d191a098fa23594

Release Date: 2018-07-19

Fix Resolution: nokogiri - 2.9.5, libxml2 - 2.9.9

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2018-14404 (High) detected in libxmljsv0.19.7 - ## CVE-2018-14404 - High Severity Vulnerability
Vulnerable Library - libxmljsv0.19.7

libxml bindings for v8 javascript engine

Library home page: https://github.com/libxmljs/libxmljs.git

Found in HEAD commit: 3ec818084b0703770c91de712552b03660c49963

Found in base branch: master

Vulnerable Source Files (1)

dvnareal/node_modules/libxmljs/vendor/libxml/xpath.c

Vulnerability Details

A NULL pointer dereference vulnerability exists in the xpath.c:xmlXPathCompOpEval() function of libxml2 through 2.9.8 when parsing an invalid XPath expression in the XPATH_OP_AND or XPATH_OP_OR case. Applications processing untrusted XSL format inputs with the use of the libxml2 library may be vulnerable to a denial of service attack due to a crash of the application.

Publish Date: 2018-07-19

URL: CVE-2018-14404

CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/GNOME/libxml2/commit/a436374994c47b12d5de1b8b1d191a098fa23594

Release Date: 2018-07-19

Fix Resolution: nokogiri - 2.9.5, libxml2 - 2.9.9

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in cve high severity vulnerability vulnerable library libxml bindings for javascript engine library home page a href found in head commit a href found in base branch master vulnerable source files dvnareal node modules libxmljs vendor libxml xpath c vulnerability details a null pointer dereference vulnerability exists in the xpath c xmlxpathcompopeval function of through when parsing an invalid xpath expression in the xpath op and or xpath op or case applications processing untrusted xsl format inputs with the use of the library may be vulnerable to a denial of service attack due to a crash of the application publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution nokogiri step up your open source security game with whitesource ,0 6264,6278924202.0,IssuesEvent,2017-07-18 15:15:38,dart-lang/site-webdev,https://api.github.com/repos/dart-lang/site-webdev,opened,Ensure that basic project setup and build process is beginner friendly,Infrastructure,"- README install instructions need updating #829 - `scripts/serve_local.sh` - Switch to using superstatic so as to avoid having to deal with firebase project access permissions (this is what we did for the Travis build process). - Setup for use of [browsersync](https://www.browsersync.io). Also see [this fb post](https://firebase.googleblog.com/2015/12/a-host-of-improvements_61.html). ",1.0,"Ensure that basic project setup and build process is beginner friendly - - README install instructions need updating #829 - `scripts/serve_local.sh` - Switch to using superstatic so as to avoid having to deal with firebase project access permissions (this is what we did for the Travis build process). - Setup for use of [browsersync](https://www.browsersync.io). Also see [this fb post](https://firebase.googleblog.com/2015/12/a-host-of-improvements_61.html). ",0,ensure that basic project setup and build process is beginner friendly readme install instructions need updating scripts serve local sh switch to using superstatic so as to avoid having to deal with firebase project access permissions this is what we did for the travis build process setup for use of also see ,0 2840,12692473919.0,IssuesEvent,2020-06-21 22:46:18,Capstone-SS-2020-PDX/Covid-19-MutualAid,https://api.github.com/repos/Capstone-SS-2020-PDX/Covid-19-MutualAid,closed,Add local.settings.py and move secrets out of git repo,Automation back end bug,"Several things going on: - Secrets currently published to github in settings.py - Need to keep secrets in new file: local.settings.py (not tracked in version control) Steps: 1. Create local.settings.py and add local.settings.py to .gitignore 2. import settings.py in local.settings.py 3. move secrets from settings.py to local.settings.py 4. Configure docker to run `manage.py --settings=local.settings.py` 4a. 
Bonus: Add Makefile hook to make local.settings.py ",1.0,"Add local.settings.py and move secrets out of git repo - Several things going on: - Secrets currently published to github in settings.py - Need to keep secrets in new file: local.settings.py (not tracked in version control) Steps: 1. Create local.settings.py and add local.settings.py to .gitignore 2. import settings.py in local.settings.py 3. move secrets from settings.py to local.settings.py 4. Configure docker to run `manage.py --settings=local.settings.py` 4a. Bonus: Add Makefile hook to make local.settings.py ",1,add local settings py and move secrets out of git repo several things going on secrets currently published to github in settings py need to keep secrets in new file local settings py not tracked in version control steps create local settings py and add local settings py to gitignore import settings py in local settings py move secrets from settings py to local settings py configure docker to run manage py settings local settings py bonus add makefile hook to make local settings py ,1 4512,16748500884.0,IssuesEvent,2021-06-11 18:55:34,newrelic/newrelic-observability-packs,https://api.github.com/repos/newrelic/newrelic-observability-packs,opened,Automation: Verify that referenced files exist,automation enhancement,"# Summary Add a job to the `validate-packs` workflow that parses the top level `config.yml` and checks to make sure that files referenced in the `icon` and `logo` fields exist. AC: - [ ] workflow `validate-packs` fails when the `icon` field does not reference an existing file - [ ] workflow `validate-packs` fails when the `logo` field does not reference an existing file - [ ] Ensure the output clearly directs the contributor to the problem",1.0,"Automation: Verify that referenced files exist - # Summary Add a job to the `validate-packs` workflow that parses the top level `config.yml` and checks to make sure that files referenced in the `icon` and `logo` fields exist. AC: - [ ] workflow `validate-packs` fails when the `icon` field does not reference an existing file - [ ] workflow `validate-packs` fails when the `logo` field does not reference an existing file - [ ] Ensure the output clearly directs the contributor to the problem",1,automation verify that referenced files exist summary add a job to the validate packs workflow that parses the top level config yml and checks to make sure that files referenced in the icon and logo fields exist ac workflow validate packs fails when the icon field does not reference an existing file workflow validate packs fails when the logo field does not reference an existing file ensure the output clearly directs the contributor to the problem,1 23552,10904629715.0,IssuesEvent,2019-11-20 09:09:10,NixOS/nixpkgs,https://api.github.com/repos/NixOS/nixpkgs,closed,Vulnerability roundup 76: libmad-0.15.1b: 1 advisory,1.severity: security,"[search](https://search.nix.gsc.io/?q=libmad&i=fosho&repos=nixos-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=libmad+in%3Apath&type=Code) * [ ] [CVE-2018-7263](https://nvd.nist.gov/vuln/detail/CVE-2018-7263) (nixos-19.09) Scanned versions: nixos-19.09: e34ac949d1b. 
May contain false positives.",True,"Vulnerability roundup 76: libmad-0.15.1b: 1 advisory - [search](https://search.nix.gsc.io/?q=libmad&i=fosho&repos=nixos-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=libmad+in%3Apath&type=Code) * [ ] [CVE-2018-7263](https://nvd.nist.gov/vuln/detail/CVE-2018-7263) (nixos-19.09) Scanned versions: nixos-19.09: e34ac949d1b. May contain false positives.",0,vulnerability roundup libmad advisory nixos scanned versions nixos may contain false positives ,0 1870,11009066150.0,IssuesEvent,2019-12-04 11:50:49,wazuh/wazuh-qa,https://api.github.com/repos/wazuh/wazuh-qa,opened,Syscheck automated tests: 'max_eps' option,automation component/fim,"|Working branch|Platform| |---|---| ||Linux/Windows/macOS| ## Description The ``max_eps`` option is used to limit the throughput of events from the FIM module, avoiding the flooding of the network and the manager. - Default value: 200 - Allowed values: Any integer from 1 to 100000 To test this option, the simplest way is to monitor a large number of files. That way, when creating the baseline for the first scan, each monitored file is sent as a `state` event to the manager. ## Subtasks - [ ] Check that the EPS limitation is respected during stressing Syscheck. - [ ] Create the test for several values of EPS (10, 100, 400, 1000, 10000...) ",1.0,"Syscheck automated tests: 'max_eps' option - |Working branch|Platform| |---|---| ||Linux/Windows/macOS| ## Description The ``max_eps`` option is used to limit the throughput of events from the FIM module, avoiding the flooding of the network and the manager. - Default value: 200 - Allowed values: Any integer from 1 to 100000 To test this option, the simplest way is to monitor a large number of files. That way, when creating the baseline for the first scan, each monitored file is sent as a `state` event to the manager. ## Subtasks - [ ] Check that the EPS limitation is respected during stressing Syscheck. - [ ] Create the test for several values of EPS (10, 100, 400, 1000, 10000...) ",1,syscheck automated tests max eps option working branch platform linux windows macos description the max eps option is used to limit the throughput of events from the fim module avoiding the flooding of the network and the manager default value allowed values any integer from to to test this option the simplest way is to monitor a large number of files that way when creating the baseline for the first scan each monitored file is sent as a state event to the manager subtasks check that the eps limitation is respected during stressing syscheck create the test for several values of eps ,1 8762,27172220530.0,IssuesEvent,2023-02-17 20:33:59,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,How to browse to sites in a different tenancy?,area:Picker Needs: Investigation automation:Closed,"#### Category - [x] Question - [ ] Documentation issue - [ ] Bug #### Expected or Desired Behaviour Our company (A) has a SharePoint site which has external members from another company (B). They are able to interact with our SharePoint as if they are internal members using their company (B) credentials. Our app requires these users to use the picker to select files from our SharePoint (A), not their SharePoint (B). #### Observed Behaviour The OneDrive picker loads and authenticates correctly with the users' company (B) credentials. There is no way for the users to browse to our SharePoint site. 
The ""Shared libraries"" list only shows sites/libraries within the company (B) tenancy. Workarounds such as setting the default library for the picker do not work because of bugs such as #1139 #1045 #862 Advice on how to proceed would be greatly appreciated! --- #### Document details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 55a64e9d-ae15-46a4-4a4d-3674972d9806 * Version Independent ID: 744d6f1a-4cde-b9aa-4003-209d1d4a527b * Content: [Open from OneDrive in JavaScript - OneDrive dev center](https://docs.microsoft.com/en-au/onedrive/developer/controls/file-pickers/js-v72/open-file?view=odsp-graph-online#feedback) * Content Source: [docs/controls/file-pickers/js-v72/open-file.md](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/controls/file-pickers/js-v72/open-file.md) * Product: **onedrive** * GitHub Login: @rgregg * Microsoft Alias: **rgregg**",1.0,"How to browse to sites in a different tenancy? - #### Category - [x] Question - [ ] Documentation issue - [ ] Bug #### Expected or Desired Behaviour Our company (A) has a SharePoint site which has external members from another company (B). They are able to interact with our SharePoint as if they are internal members using their company (B) credentials. Our app requires these users to use the picker to select files from our SharePoint (A), not their SharePoint (B). #### Observed Behaviour The OneDrive picker loads and authenticates correctly with the users' company (B) credentials. There is no way for the users to browse to our SharePoint site. The ""Shared libraries"" list only shows sites/libraries within the company (B) tenancy. Workarounds such as setting the default library for the picker do not work because of bugs such as #1139 #1045 #862 Advice on how to proceed would be greatly appreciated! --- #### Document details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 55a64e9d-ae15-46a4-4a4d-3674972d9806 * Version Independent ID: 744d6f1a-4cde-b9aa-4003-209d1d4a527b * Content: [Open from OneDrive in JavaScript - OneDrive dev center](https://docs.microsoft.com/en-au/onedrive/developer/controls/file-pickers/js-v72/open-file?view=odsp-graph-online#feedback) * Content Source: [docs/controls/file-pickers/js-v72/open-file.md](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/controls/file-pickers/js-v72/open-file.md) * Product: **onedrive** * GitHub Login: @rgregg * Microsoft Alias: **rgregg**",1,how to browse to sites in a different tenancy category question documentation issue bug expected or desired behaviour our company a has a sharepoint site which has external members from another company b they are able to interact with our sharepoint as if they are internal members using their company b credentials our app requires these users to use the picker to select files from our sharepoint a not their sharepoint b observed behaviour the onedrive picker loads and authenticates correctly with the users company b credentials there is no way for the users to browse to our sharepoint site the shared libraries list only shows sites libraries within the company b tenancy workarounds such as setting the default library for the picker do not work because of bugs such as advice on how to proceed would be greatly appreciated document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product onedrive github login rgregg microsoft alias rgregg ,1 649063,21217110089.0,IssuesEvent,2022-04-11 08:26:37,woocommerce/woocommerce,https://api.github.com/repos/woocommerce/woocommerce,closed,[Enhancement]: correct form redirect after save account details,type: enhancement type: good first issue priority: low plugin: woocommerce,"### Describe the solution you'd like Currently when user saving the ""Account details"" form, a redirection occurs to the user's dashboard page. For comparison, saving ""Billing address"" does't redirect to another page - the user stays in the form (and sees notices). My idea is improve the function `wp_safe_redirect`. ### Describe alternatives you've considered No ideas for an alternative ### Additional context _No response_",1.0,"[Enhancement]: correct form redirect after save account details - ### Describe the solution you'd like Currently when user saving the ""Account details"" form, a redirection occurs to the user's dashboard page. For comparison, saving ""Billing address"" does't redirect to another page - the user stays in the form (and sees notices). My idea is improve the function `wp_safe_redirect`. 
### Describe alternatives you've considered No ideas for an alternative ### Additional context _No response_",0, correct form redirect after save account details describe the solution you d like currently when user saving the account details form a redirection occurs to the user s dashboard page for comparison saving billing address does t redirect to another page the user stays in the form and sees notices my idea is improve the function wp safe redirect describe alternatives you ve considered no ideas for an alternative additional context no response ,0 8808,27172282262.0,IssuesEvent,2023-02-17 20:37:52,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,OneDrive file picker within MS Teams Tab,area:Picker Needs: Investigation automation:Closed,"#### Category - [ ] Question - [ ] Documentation issue - [x] Bug #### Expected or Desired Behavior I am working on an application that uses the OneDrive file picker (v7.2) which works fine when running the application in the browser. However, when integrating it in MS Teams, it doesn't work correctly (in the desktop client, the browser works fine again). Instead of a popup, a new browser tab (or if not browser is open a new browser window) opens. In the browser, the behavior is the following: - button click (on the application) triggers oneDrive.open(odOptions) - _popup_ opens with current application's url - after authenticating, the user is redirected to OneDrive in order to login - after login, the user is presented with the folders and files - the user can select a file - the success callback is called #### Observed Behavior In the MS Teams desktop client, this is the behavior: - button click (on the application) triggers oneDrive.open(odOptions) - _a new browser tab_ opens with current application's url - after authenticating, nothing happens I also checked another application (Trello) and even though they got some steps further, the success callback is never called: - button click (on the application) triggers oneDrive.open(odOptions) - _a new browser tab_ opens with current application's url - after authenticating, the user is redirected to OneDrive in order to login - after login, the user is presented with the folders and files - the user can select a file - an error is thrown ![4d2c7900-99d7-11ea-8252-5a8a70c81e2d](https://user-images.githubusercontent.com/1194366/82332754-16aa2a80-99e6-11ea-929b-7e5be8721390.png) #### Steps to Reproduce This happens for every application added to MS Teams as a website that uses the OneDrive file picker. #### Additional Context I think this is related to https://github.com/OneDrive/onedrive-api-docs/issues/650 but it never got resolved. Thank you. ",1.0,"OneDrive file picker within MS Teams Tab - #### Category - [ ] Question - [ ] Documentation issue - [x] Bug #### Expected or Desired Behavior I am working on an application that uses the OneDrive file picker (v7.2) which works fine when running the application in the browser. However, when integrating it in MS Teams, it doesn't work correctly (in the desktop client, the browser works fine again). Instead of a popup, a new browser tab (or if not browser is open a new browser window) opens. 
In the browser, the behavior is the following: - button click (on the application) triggers oneDrive.open(odOptions) - _popup_ opens with current application's url - after authenticating, the user is redirected to OneDrive in order to login - after login, the user is presented with the folders and files - the user can select a file - the success callback is called #### Observed Behavior In the MS Teams desktop client, this is the behavior: - button click (on the application) triggers oneDrive.open(odOptions) - _a new browser tab_ opens with current application's url - after authenticating, nothing happens I also checked another application (Trello) and even though they got some steps further, the success callback is never called: - button click (on the application) triggers oneDrive.open(odOptions) - _a new browser tab_ opens with current application's url - after authenticating, the user is redirected to OneDrive in order to login - after login, the user is presented with the folders and files - the user can select a file - an error is thrown ![4d2c7900-99d7-11ea-8252-5a8a70c81e2d](https://user-images.githubusercontent.com/1194366/82332754-16aa2a80-99e6-11ea-929b-7e5be8721390.png) #### Steps to Reproduce This happens for every application added to MS Teams as a website that uses the OneDrive file picker. #### Additional Context I think this is related to https://github.com/OneDrive/onedrive-api-docs/issues/650 but it never got resolved. Thank you. ",1,onedrive file picker within ms teams tab category question documentation issue bug expected or desired behavior i am working on an application that uses the onedrive file picker which works fine when running the application in the browser however when integrating it in ms teams it doesn t work correctly in the desktop client the browser works fine again instead of a popup a new browser tab or if not browser is open a new browser window opens in the browser the behavior is the following button click on the application triggers onedrive open odoptions popup opens with current application s url after authenticating the user is redirected to onedrive in order to login after login the user is presented with the folders and files the user can select a file the success callback is called observed behavior in the ms teams desktop client this is the behavior button click on the application triggers onedrive open odoptions a new browser tab opens with current application s url after authenticating nothing happens i also checked another application trello and even though they got some steps further the success callback is never called button click on the application triggers onedrive open odoptions a new browser tab opens with current application s url after authenticating the user is redirected to onedrive in order to login after login the user is presented with the folders and files the user can select a file an error is thrown steps to reproduce this happens for every application added to ms teams as a website that uses the onedrive file picker additional context i think this is related to but it never got resolved thank you ,1 8509,27041791310.0,IssuesEvent,2023-02-13 06:16:22,hackforla/website,https://api.github.com/repos/hackforla/website,closed,Google Apps Script: Fix data errors in spreadsheet used for wins page,Bug role: back end/devOps Size: Medium time sensitive automation Feature: Google Apps Scripts size: 1pt,"### Overview The data used to generate our wins page should be error free. 
For this issue, we will analyze and fix the data discrepancies between the responses and review sheets used to build the wins page. ### Action Items - [x] This issue was reopened because it failed the QA. Read through the comments, look at the PR, and then tackle this issue - [x] Obtain access to the Google Apps Script in the admin drive from the technical lead - [x] Become familiar with the workflow process in creating a wins entry - [x] In the admin drive, create a copy of the Wins-form (Responses) and Secret files in your own drive to recreate the workflow - [ ] Identify the reason for data discrepancy between the responses and review sheets - [ ] Fix the code logic so that the data is consistent amongst various sheets - [ ] Add an error catching mechanism so that error like this is not repeated again - [ ] From this [comment](https://github.com/hackforla/website/pull/2765#issuecomment-1036942313), fix the following problems: - [ ] There is something wrong with the Wins spreadsheet, where Wins form entry made on 11/5/2021 wasn't added to the Review sheet and so the true/false in the ""Display?"" and ""Homepage?"" are off. - [ ] Test the code so that on a new form submit, both the Review sheet and Responses sheet gets updated appropriately - [ ] See this [comment](https://github.com/hackforla/website/issues/2901#issuecomment-1328166222) - [ ] Demo the code to the technical lead and team For merge team: - [ ] Once approved, update the current production code - [ ] Release dependency on #2385 and #2505 - [ ] Update action items in the dependency issue if needed ### Resources/Instructions [Google Apps Script](https://developers.google.com/apps-script/overview) [_wins-data file](https://github.com/hackforla/website/blob/gh-pages/_data/external/_wins-data.json) [wins page JS](https://github.com/hackforla/website/blob/gh-pages/assets/js/wins.js) [Response sheet](https://docs.google.com/spreadsheets/d/1fXmYrmNtrgdzkM_odGIRbSwdaea3yS0cTLCJUzZ6Sc0/edit#gid=1301054877)",1.0,"Google Apps Script: Fix data errors in spreadsheet used for wins page - ### Overview The data used to generate our wins page should be error free. For this issue, we will analyze and fix the data discrepancies between the responses and review sheets used to build the wins page. ### Action Items - [x] This issue was reopened because it failed the QA. Read through the comments, look at the PR, and then tackle this issue - [x] Obtain access to the Google Apps Script in the admin drive from the technical lead - [x] Become familiar with the workflow process in creating a wins entry - [x] In the admin drive, create a copy of the Wins-form (Responses) and Secret files in your own drive to recreate the workflow - [ ] Identify the reason for data discrepancy between the responses and review sheets - [ ] Fix the code logic so that the data is consistent amongst various sheets - [ ] Add an error catching mechanism so that error like this is not repeated again - [ ] From this [comment](https://github.com/hackforla/website/pull/2765#issuecomment-1036942313), fix the following problems: - [ ] There is something wrong with the Wins spreadsheet, where Wins form entry made on 11/5/2021 wasn't added to the Review sheet and so the true/false in the ""Display?"" and ""Homepage?"" are off. 
- [ ] Test the code so that on a new form submit, both the Review sheet and Responses sheet gets updated appropriately - [ ] See this [comment](https://github.com/hackforla/website/issues/2901#issuecomment-1328166222) - [ ] Demo the code to the technical lead and team For merge team: - [ ] Once approved, update the current production code - [ ] Release dependency on #2385 and #2505 - [ ] Update action items in the dependency issue if needed ### Resources/Instructions [Google Apps Script](https://developers.google.com/apps-script/overview) [_wins-data file](https://github.com/hackforla/website/blob/gh-pages/_data/external/_wins-data.json) [wins page JS](https://github.com/hackforla/website/blob/gh-pages/assets/js/wins.js) [Response sheet](https://docs.google.com/spreadsheets/d/1fXmYrmNtrgdzkM_odGIRbSwdaea3yS0cTLCJUzZ6Sc0/edit#gid=1301054877)",1,google apps script fix data errors in spreadsheet used for wins page overview the data used to generate our wins page should be error free for this issue we will analyze and fix the data discrepancies between the responses and review sheets used to build the wins page action items this issue was reopened because it failed the qa read through the comments look at the pr and then tackle this issue obtain access to the google apps script in the admin drive from the technical lead become familiar with the workflow process in creating a wins entry in the admin drive create a copy of the wins form responses and secret files in your own drive to recreate the workflow identify the reason for data discrepancy between the responses and review sheets fix the code logic so that the data is consistent amongst various sheets add an error catching mechanism so that error like this is not repeated again from this fix the following problems there is something wrong with the wins spreadsheet where wins form entry made on wasn t added to the review sheet and so the true false in the display and homepage are off test the code so that on a new form submit both the review sheet and responses sheet gets updated appropriately see this demo the code to the technical lead and team for merge team once approved update the current production code release dependency on and update action items in the dependency issue if needed resources instructions ,1 9016,27363379830.0,IssuesEvent,2023-02-27 17:18:43,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,closed,[DocDB] Unable to establish replica leadership after cluster expansion,kind/bug area/docdb priority/medium qa_automation,"Jira Link: [DB-5658](https://yugabyte.atlassian.net/browse/DB-5658) ### Description ``` Failed to execute task {""platformVersion"":""2.17.2.0-b151"",""sleepAfterMasterRestartMillis"":180000,""sleepAfterTServerRestartMillis"":180000,""nodeExporterUser"":""prometheus"",""universeUUID"":""99a6846a-dbc2-425b-9a88-a7bac71c3697"",""enableYbc"":false,""installYbc"":false,""ybcInstalled"":false,""encryptionAtRestConfig"":{""encryptionAtRestEnabled"":false,""opType"":""UNDEFINED"",""type"":""DATA_KEY""},""communicationPorts"":{""masterHttpPort"":7000,""masterRpcPort"":7100,""tserverHttpPort"":9000,""tserverRpcPort"":9100,""ybControllerHttpPort"":14000,""y..., hit error: WaitForLeadersOnPreferredOnly(99a6846a-dbc2-425b-9a88-a7bac71c3697) did not complete.. ``` Steps to repro: 1. Create a YCQL table and set up PITR 2. Insert data into the table 3. Delete data from the table 4. 
Add 3 nodes ------> Fails Note: No issue is observed if any one of the above parameter is removed https://jenkins.dev.yugabyte.com/view/Test%20Jobs/job/itest-system-developer/3754/consoleFull [DB-5658]: https://yugabyte.atlassian.net/browse/DB-5658?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ",1.0,"[DocDB] Unable to establish replica leadership after cluster expansion - Jira Link: [DB-5658](https://yugabyte.atlassian.net/browse/DB-5658) ### Description ``` Failed to execute task {""platformVersion"":""2.17.2.0-b151"",""sleepAfterMasterRestartMillis"":180000,""sleepAfterTServerRestartMillis"":180000,""nodeExporterUser"":""prometheus"",""universeUUID"":""99a6846a-dbc2-425b-9a88-a7bac71c3697"",""enableYbc"":false,""installYbc"":false,""ybcInstalled"":false,""encryptionAtRestConfig"":{""encryptionAtRestEnabled"":false,""opType"":""UNDEFINED"",""type"":""DATA_KEY""},""communicationPorts"":{""masterHttpPort"":7000,""masterRpcPort"":7100,""tserverHttpPort"":9000,""tserverRpcPort"":9100,""ybControllerHttpPort"":14000,""y..., hit error: WaitForLeadersOnPreferredOnly(99a6846a-dbc2-425b-9a88-a7bac71c3697) did not complete.. ``` Steps to repro: 1. Create a YCQL table and set up PITR 2. Insert data into the table 3. Delete data from the table 4. Add 3 nodes ------> Fails Note: No issue is observed if any one of the above parameter is removed https://jenkins.dev.yugabyte.com/view/Test%20Jobs/job/itest-system-developer/3754/consoleFull [DB-5658]: https://yugabyte.atlassian.net/browse/DB-5658?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ",1, unable to establish replica leadership after cluster expansion jira link description failed to execute task platformversion sleepaftermasterrestartmillis sleepaftertserverrestartmillis nodeexporteruser prometheus universeuuid enableybc false installybc false ybcinstalled false encryptionatrestconfig encryptionatrestenabled false optype undefined type data key communicationports masterhttpport masterrpcport tserverhttpport tserverrpcport ybcontrollerhttpport y hit error waitforleadersonpreferredonly did not complete steps to repro create a ycql table and set up pitr insert data into the table delete data from the table add nodes fails note no issue is observed if any one of the above parameter is removed ,1 8755,27172211905.0,IssuesEvent,2023-02-17 20:33:29,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,EntryLocation not working or documentation incomplete,area:Picker Needs: Investigation automation:Closed,"#### Category - [ ] Question - [x] Documentation issue - [x] Bug #### Expected or Desired Behavior The OneDrive picker should redirect users to the location specified in the sitePath/listPath/itemPath params OR it should report a meaningful error when these are being used incorrectly. #### Observed Behavior The OneDrive picker loads to the user's personal OneDrive rather than the specified path. No error is reported when this occurs. However, an error is reported if the specified path doesn't pass validation. #### Steps to Reproduce 1. Create a html file with the following code: ``` ``` 2. Open the web page and click the button 3. Observe that opens (after sign in) to the user's Files and not the given paths 4. Modify the code above so the paths are e.g. `listPath: ""/teams//Shared%20Documents/""` 5. Reload the page and click the button 6. 
Observe that it reports an error: `Uncaught Error: uri /teams//Shared%20Documents/Forms/ does not match protocol(s): HTTPS` This issue has already been reported in #862 and #1045. The modification to the paths to remove the https://sharepoint.com was an attempt to follow the advice given here: https://github.com/OneDrive/onedrive-api-docs/issues/862#issuecomment-466609340 Thank you. ",1.0,"EntryLocation not working or documentation incomplete - #### Category - [ ] Question - [x] Documentation issue - [x] Bug #### Expected or Desired Behavior The OneDrive picker should redirect users to the location specified in the sitePath/listPath/itemPath params OR it should report a meaningful error when these are being used incorrectly. #### Observed Behavior The OneDrive picker loads to the user's personal OneDrive rather than the specified path. No error is reported when this occurs. However, an error is reported if the specified path doesn't pass validation. #### Steps to Reproduce 1. Create a html file with the following code: ``` ``` 2. Open the web page and click the button 3. Observe that opens (after sign in) to the user's Files and not the given paths 4. Modify the code above so the paths are e.g. `listPath: ""/teams//Shared%20Documents/""` 5. Reload the page and click the button 6. Observe that it reports an error: `Uncaught Error: uri /teams//Shared%20Documents/Forms/ does not match protocol(s): HTTPS` This issue has already been reported in #862 and #1045. The modification to the paths to remove the https://sharepoint.com was an attempt to follow the advice given here: https://github.com/OneDrive/onedrive-api-docs/issues/862#issuecomment-466609340 Thank you. ",1,entrylocation not working or documentation incomplete category question documentation issue bug expected or desired behavior the onedrive picker should redirect users to the location specified in the sitepath listpath itempath params or it should report a meaningful error when these are being used incorrectly observed behavior the onedrive picker loads to the user s personal onedrive rather than the specified path no error is reported when this occurs however an error is reported if the specified path doesn t pass validation steps to reproduce create a html file with the following code script type text javascript src function launchonedrivepicker var odoptions clientid openinnewwindow true action share multiselect false advanced navigation entrylocation disable false sharepoint sitepath listpath itempath sourcetypes sites or onedrive success function files console log success cancel function console log cancel error function error console log error onedrive open odoptions open from onedrive open the web page and click the button observe that opens after sign in to the user s files and not the given paths modify the code above so the paths are e g listpath teams shared reload the page and click the button observe that it reports an error uncaught error uri teams shared forms does not match protocol s https this issue has already been reported in and the modification to the paths to remove the was an attempt to follow the advice given here thank you ,1 162143,20165148801.0,IssuesEvent,2022-02-10 02:59:47,mycomplexsoul/delta,https://api.github.com/repos/mycomplexsoul/delta,closed,WS-2022-0008 (Medium) detected in node-forge-0.10.0.tgz - autoclosed,security vulnerability,"## WS-2022-0008 - Medium Severity Vulnerability
Vulnerable Library - node-forge-0.10.0.tgz

JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.

Library home page: https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz

Path to dependency file: /package.json

Path to vulnerable library: /node_modules/node-forge/package.json

Dependency Hierarchy: - build-angular-13.1.4.tgz (Root Library) - webpack-dev-server-4.6.0.tgz - selfsigned-1.10.14.tgz - :x: **node-forge-0.10.0.tgz** (Vulnerable Library)
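
A quick way to confirm a hierarchy like this locally is `npm ls node-forge`, which prints the same tree from the project's own lockfile; the vulnerable copy only goes away once the root library (here `build-angular`) accepts a range that no longer resolves to 0.10.0.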

Found in HEAD commit: 7640f94b9c1dad3cdd20e0221e98fb8e7ba1ec42

Found in base branch: master

Vulnerability Details

The forge.debug API had a potential prototype pollution issue if called with untrusted input. The API was only used for internal debug purposes in a safe way and never documented or advertised. It is suspected that uses of this API, if any exist, would likely not have used untrusted inputs in a vulnerable way.
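
For readers unfamiliar with the bug class, here is a minimal sketch of the prototype-pollution pattern in a debug-style key/value store. It is a hypothetical illustration only; the function name and shape are invented and are not node-forge's actual implementation:

```typescript
// Hypothetical sketch of the prototype-pollution pattern (not forge's
// actual code): a nested set() that trusts its key path lets '__proto__'
// reach Object.prototype, making the write visible on every object.
function debugSet(store: Record<string, any>, cat: string, name: string, value: unknown): void {
  const bucket = store[cat] ?? (store[cat] = {}); // cat === '__proto__' yields Object.prototype
  bucket[name] = value;
}

const store: Record<string, any> = {};
debugSet(store, '__proto__', 'polluted', true);
console.log(({} as any).polluted); // true: an unrelated object inherited the write

// A typical guard rejects the special keys outright:
// if (['__proto__', 'constructor', 'prototype'].includes(cat)) throw new Error('bad key');
```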

Publish Date: 2022-01-08

URL: WS-2022-0008

CVSS 3 Score Details (6.6)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High
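
Reconstructed from the metrics above, the corresponding vector string is `CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:U/C:H/I:H/A:H`, which evaluates to the quoted 6.6 base score under the CVSS 3.1 formula.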

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/advisories/GHSA-5rrq-pxf6-6jx5

Release Date: 2022-01-08

Fix Resolution (node-forge): 1.2.1

Direct dependency fix Resolution (@angular-devkit/build-angular): 13.2.0-next.2
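
Until the direct dependency ships that version, the transitive copy can also be pinned with npm's `overrides` field (npm 8.3+) in `package.json`, mapping `node-forge` to `^1.2.1`, or with the equivalent Yarn `resolutions` entry; this forces the nested copy without waiting on `@angular-devkit/build-angular` 13.2.0-next.2.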

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"WS-2022-0008 (Medium) detected in node-forge-0.10.0.tgz - autoclosed - ## WS-2022-0008 - Medium Severity Vulnerability
Vulnerable Library - node-forge-0.10.0.tgz

JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.

Library home page: https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz

Path to dependency file: /package.json

Path to vulnerable library: /node_modules/node-forge/package.json

Dependency Hierarchy: - build-angular-13.1.4.tgz (Root Library) - webpack-dev-server-4.6.0.tgz - selfsigned-1.10.14.tgz - :x: **node-forge-0.10.0.tgz** (Vulnerable Library)

Found in HEAD commit: 7640f94b9c1dad3cdd20e0221e98fb8e7ba1ec42

Found in base branch: master

Vulnerability Details

The forge.debug API had a potential prototype pollution issue if called with untrusted input. The API was only used for internal debug purposes in a safe way and never documented or advertised. It is suspected that uses of this API, if any exist, would likely not have used untrusted inputs in a vulnerable way.

Publish Date: 2022-01-08

URL: WS-2022-0008

CVSS 3 Score Details (6.6)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/advisories/GHSA-5rrq-pxf6-6jx5

Release Date: 2022-01-08

Fix Resolution (node-forge): 1.2.1

Direct dependency fix Resolution (@angular-devkit/build-angular): 13.2.0-next.2

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,ws medium detected in node forge tgz autoclosed ws medium severity vulnerability vulnerable library node forge tgz javascript implementations of network transports cryptography ciphers pki message digests and various utilities library home page a href path to dependency file package json path to vulnerable library node modules node forge package json dependency hierarchy build angular tgz root library webpack dev server tgz selfsigned tgz x node forge tgz vulnerable library found in head commit a href found in base branch master vulnerability details the forge debug api had a potential prototype pollution issue if called with untrusted input the api was only used for internal debug purposes in a safe way and never documented or advertised it is suspected that uses of this api if any exist would likely not have used untrusted inputs in a vulnerable way publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node forge direct dependency fix resolution angular devkit build angular next step up your open source security game with whitesource ,0 3433,2685848568.0,IssuesEvent,2015-03-30 07:09:26,go-rat/language-design,https://api.github.com/repos/go-rat/language-design,opened,Major language change: Blank identifier cannot be used in function/method call.,design work,"Blank identifier (`_`) in function/method call means ""type is not specified and can be deduced by parameters"" in current spec. This feature is deprecated by the new predeclared type `any`. ```go // Valid but not recommended. func F1(T generic, a, b, c T) {} // Valid and recommended. T := any func F2(a, b, c T) {} // Compile-time error. Blank identifier cannot be used as ""any type"" anymore. F1(_, 1, 2, 3) // Valid. F1(any, 1, 2, 3) F2(1, 2, 3) ```",1.0,"Major language change: Blank identifier cannot be used in function/method call. - Blank identifier (`_`) in function/method call means ""type is not specified and can be deduced by parameters"" in current spec. This feature is deprecated by the new predeclared type `any`. ```go // Valid but not recommended. func F1(T generic, a, b, c T) {} // Valid and recommended. T := any func F2(a, b, c T) {} // Compile-time error. Blank identifier cannot be used as ""any type"" anymore. F1(_, 1, 2, 3) // Valid. 
F1(any, 1, 2, 3) F2(1, 2, 3) ```",0,major language change blank identifier cannot be used in function method call blank identifier in function method call means type is not specified and can be deduced by parameters in current spec this feature is deprecated by the new predeclared type any go valid but not recommended func t generic a b c t valid and recommended t any func a b c t compile time error blank identifier cannot be used as any type anymore valid any ,0 2324,11770826651.0,IssuesEvent,2020-03-15 20:59:16,GoodDollar/GoodDAPP,https://api.github.com/repos/GoodDollar/GoodDAPP,closed,Fix test automation on master,automation,"Test automation fails on master - fix - suggest way automation/qa team follows build results and fix/ask dev team to solve",1.0,"Fix test automation on master - Test automation fails on master - fix - suggest way automation/qa team follows build results and fix/ask dev team to solve",1,fix test automation on master test automation fails on master fix suggest way automation qa team follows build results and fix ask dev team to solve,1 8618,27171997049.0,IssuesEvent,2023-02-17 20:21:43,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,item metadata returns a 'bundle' field - undocumented facet or bug?,automation:Closed,"I have come across an item that looks like a folder from the UI but when requesting metadata it does not have a folder facet, but instead returns a 'bundle' field containing 'childCount' information. Is this some undocumented facet that acts similarly to a folder? If so, what is the difference from a folder? Also, in case this is a feature and not a bug, is there a way to call view.delta on such an item? Because the typical way of requesting /drive/items/{itemId}/view.delta gives a 'Method Not Allowed' error.",1.0,"item metadata returns a 'bundle' field - undocumented facet or bug? - I have come across an item that looks like a folder from the UI but when requesting metadata it does not have a folder facet, but instead returns a 'bundle' field containing 'childCount' information. Is this some undocumented facet that acts similarly to a folder? If so, what is the difference from a folder? Also, in case this is a feature and not a bug, is there a way to call view.delta on such an item? Because the typical way of requesting /drive/items/{itemId}/view.delta gives a 'Method Not Allowed' error.",1,item metadata returns a bundle field undocumented facet or bug i have come across an item that looks like a folder from the ui but when requesting metadata it does not have a folder facet but instead returns a bundle field containing childcount information is this some undocumented facet that acts similarly to a folder if so what is the difference from a folder also in case this is a feature and not a bug is there a way to call view delta on such an item because the typical way of requesting drive items itemid view delta gives a method not allowed error ,1 888,8621552821.0,IssuesEvent,2018-11-20 17:38:37,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Unable to delete source control in Azure Automation - VsoGit Source Control,automation/svc cxp product-issue triaged,"There seems to be some bugs still with this in that for some reason i cannot delete an existing VsoGit source control. When selecting the and hitting delete. An error is returned and all this is said is 'An error has occurred'. So cannot determine what that error is? full error in the portal is: Delete source control failed. 
9:52 AM An error occurred while deleting the source control named 'AzureAutomationRunBooks'. Error details: {""Message"":""An error has occurred.""}. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 83c90e64-b615-711f-a53d-fc76606e2ecd * Version Independent ID: 2d164036-6886-4440-50f7-369f99f41cea * Content: [Source Control integration in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/source-control-integration) * Content Source: [articles/automation/source-control-integration.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/source-control-integration.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1.0,"Unable to delete source control in Azure Automation - VsoGit Source Control - There seems to be some bugs still with this in that for some reason i cannot delete an existing VsoGit source control. When selecting the and hitting delete. An error is returned and all this is said is 'An error has occurred'. So cannot determine what that error is? full error in the portal is: Delete source control failed. 9:52 AM An error occurred while deleting the source control named 'AzureAutomationRunBooks'. Error details: {""Message"":""An error has occurred.""}. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 83c90e64-b615-711f-a53d-fc76606e2ecd * Version Independent ID: 2d164036-6886-4440-50f7-369f99f41cea * Content: [Source Control integration in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/source-control-integration) * Content Source: [articles/automation/source-control-integration.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/source-control-integration.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1,unable to delete source control in azure automation vsogit source control there seems to be some bugs still with this in that for some reason i cannot delete an existing vsogit source control when selecting the and hitting delete an error is returned and all this is said is an error has occurred so cannot determine what that error is full error in the portal is delete source control failed am an error occurred while deleting the source control named azureautomationrunbooks error details message an error has occurred document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login georgewallace microsoft alias gwallace ,1 391534,11575399101.0,IssuesEvent,2020-02-21 09:41:19,wso2/product-apim,https://api.github.com/repos/wso2/product-apim,closed,Inconsistent typography,Priority/Normal Publisher Type/Bug Type/React-UI,"### Description: ![image](https://user-images.githubusercontent.com/20179540/75014166-b6215500-54ab-11ea-9ab9-d017e38ebd68.png) ### Steps to reproduce: ### Affected Product Version: ### Environment details (with versions): - OS: - Client: - Env (Docker/K8s): --- ### Optional Fields #### Related Issues: #### Suggested Labels: #### Suggested Assignees: ",1.0,"Inconsistent typography - ### Description: ![image](https://user-images.githubusercontent.com/20179540/75014166-b6215500-54ab-11ea-9ab9-d017e38ebd68.png) ### Steps to reproduce: ### Affected Product Version: ### Environment details (with versions): - OS: - Client: - Env 
(Docker/K8s): --- ### Optional Fields #### Related Issues: #### Suggested Labels: #### Suggested Assignees: ",0,inconsistent typography description steps to reproduce affected product version environment details with versions os client env docker optional fields related issues suggested labels suggested assignees ,0 175332,21300980060.0,IssuesEvent,2022-04-15 03:02:33,mihorsky/Ghost,https://api.github.com/repos/mihorsky/Ghost,opened,CVE-2022-0436 (High) detected in grunt-1.3.0.tgz,security vulnerability,"## CVE-2022-0436 - High Severity Vulnerability
Vulnerable Library - grunt-1.3.0.tgz

The JavaScript Task Runner

Library home page: https://registry.npmjs.org/grunt/-/grunt-1.3.0.tgz

Path to dependency file: /package.json

Path to vulnerable library: /node_modules/grunt/package.json

Dependency Hierarchy: - :x: **grunt-1.3.0.tgz** (Vulnerable Library)

Found in base branch: master

Vulnerability Details

Path Traversal in GitHub repository gruntjs/grunt prior to 1.5.2.
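
The advisory text above gives no proof of concept; the snippet below is only a generic sketch of the vulnerability class it names (untrusted `../` segments escaping an intended root), not grunt's actual code:

```typescript
import * as path from 'path';

// Generic illustration of the path-traversal class (hypothetical, not
// grunt's source): joining an untrusted segment without validation lets
// '../' sequences walk out of the intended root directory.
function unsafeResolve(root: string, userPath: string): string {
  return path.join(root, userPath); // unsafeResolve('/srv/app', '../../etc/passwd') -> '/etc/passwd'
}

// Typical mitigation: resolve first, then verify the result stays under root.
function safeResolve(root: string, userPath: string): string {
  const base = path.resolve(root);
  const resolved = path.resolve(base, userPath);
  if (resolved !== base && !resolved.startsWith(base + path.sep)) {
    throw new Error(`path escapes root: ${userPath}`);
  }
  return resolved;
}
```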

Publish Date: 2022-04-12

URL: CVE-2022-0436

CVSS 3 Score Details (7.1)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: None
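
Reconstructed from the metrics above, the corresponding vector string is `CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:N`, consistent with the quoted 7.1 base score: a local, low-complexity primitive that hits confidentiality and integrity but not availability.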

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0436

Release Date: 2022-04-12

Fix Resolution: grunt - 1.5.2
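
Because `grunt` is a direct dependency here, the remediation is a one-line bump of its range in `package.json` (for example to `^1.5.2`) followed by a reinstall; no override machinery is needed for a root-level library.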

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-0436 (High) detected in grunt-1.3.0.tgz - ## CVE-2022-0436 - High Severity Vulnerability
Vulnerable Library - grunt-1.3.0.tgz

The JavaScript Task Runner

Library home page: https://registry.npmjs.org/grunt/-/grunt-1.3.0.tgz

Path to dependency file: /package.json

Path to vulnerable library: /node_modules/grunt/package.json

Dependency Hierarchy: - :x: **grunt-1.3.0.tgz** (Vulnerable Library)

Found in base branch: master

Vulnerability Details

Path Traversal in GitHub repository gruntjs/grunt prior to 1.5.2.

Publish Date: 2022-04-12

URL: CVE-2022-0436

CVSS 3 Score Details (7.1)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0436

Release Date: 2022-04-12

Fix Resolution: grunt - 1.5.2

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in grunt tgz cve high severity vulnerability vulnerable library grunt tgz the javascript task runner library home page a href path to dependency file package json path to vulnerable library node modules grunt package json dependency hierarchy x grunt tgz vulnerable library found in base branch master vulnerability details path traversal in github repository gruntjs grunt prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution grunt step up your open source security game with whitesource ,0 37374,5114680815.0,IssuesEvent,2017-01-06 19:17:50,dotnet/corefx,https://api.github.com/repos/dotnet/corefx,closed,Test failure: System.Xml.Tests.AsyncReaderLateInitTests/ReadAfterInitializationWithUriOnAsyncReaderTrows,area-System.Xml test bug test-run-desktop,"Opened on behalf of @jiangzeng The test `System.Xml.Tests.AsyncReaderLateInitTests/ReadAfterInitializationWithUriOnAsyncReaderTrows` has failed. Assert.Throws() Failure\r Expected: typeof(System.Xml.XmlException)\r Actual: typeof(System.AggregateException): One or more errors occurred. Stack Trace: at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions) at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken) at System.Xml.XmlTextReaderImpl.FinishInitUriString() at System.Xml.XmlTextReaderImpl.Read() at System.Xml.Tests.AsyncReaderLateInitTests.<>c__DisplayClass8_0.b__0() in D:\A\_work\32\s\corefx\src\System.Private.Xml\tests\XmlReader\Tests\AsyncReaderLateInitTests.cs:line 86 Build : Master - 20161031.01 (Full Framework Tests) Failing configurations: - Windows.10.Amd64 - AnyCPU-Debug - AnyCPU-Release Details: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fdesktop~2Fcli~2F/build/20161031.01/workItem/System.Xml.RW.XmlReader.Tests/analysis/xunit/System.Xml.Tests.AsyncReaderLateInitTests~2FReadAfterInitializationWithUriOnAsyncReaderTrows",2.0,"Test failure: System.Xml.Tests.AsyncReaderLateInitTests/ReadAfterInitializationWithUriOnAsyncReaderTrows - Opened on behalf of @jiangzeng The test `System.Xml.Tests.AsyncReaderLateInitTests/ReadAfterInitializationWithUriOnAsyncReaderTrows` has failed. Assert.Throws() Failure\r Expected: typeof(System.Xml.XmlException)\r Actual: typeof(System.AggregateException): One or more errors occurred. 
Stack Trace: at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions) at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken) at System.Xml.XmlTextReaderImpl.FinishInitUriString() at System.Xml.XmlTextReaderImpl.Read() at System.Xml.Tests.AsyncReaderLateInitTests.<>c__DisplayClass8_0.b__0() in D:\A\_work\32\s\corefx\src\System.Private.Xml\tests\XmlReader\Tests\AsyncReaderLateInitTests.cs:line 86 Build : Master - 20161031.01 (Full Framework Tests) Failing configurations: - Windows.10.Amd64 - AnyCPU-Debug - AnyCPU-Release Details: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fdesktop~2Fcli~2F/build/20161031.01/workItem/System.Xml.RW.XmlReader.Tests/analysis/xunit/System.Xml.Tests.AsyncReaderLateInitTests~2FReadAfterInitializationWithUriOnAsyncReaderTrows",0,test failure system xml tests asyncreaderlateinittests readafterinitializationwithurionasyncreadertrows opened on behalf of jiangzeng the test system xml tests asyncreaderlateinittests readafterinitializationwithurionasyncreadertrows has failed assert throws failure r expected typeof system xml xmlexception r actual typeof system aggregateexception one or more errors occurred stack trace at system threading tasks task throwifexceptional boolean includetaskcanceledexceptions at system threading tasks task wait millisecondstimeout cancellationtoken cancellationtoken at system xml xmltextreaderimpl finishinituristring at system xml xmltextreaderimpl read at system xml tests asyncreaderlateinittests c b in d a work s corefx src system private xml tests xmlreader tests asyncreaderlateinittests cs line build master full framework tests failing configurations windows anycpu debug anycpu release details ,0 13904,23934519234.0,IssuesEvent,2022-09-11 02:44:27,jensenkhem/cmput401-hackathon,https://api.github.com/repos/jensenkhem/cmput401-hackathon,closed,Front-end website,Requirement,"- Specify driver or passenger - Driver: input information - Passenger: select driver to carpool with",1.0,"Front-end website - - Specify driver or passenger - Driver: input information - Passenger: select driver to carpool with",0,front end website specify driver or passenger driver input information passenger select driver to carpool with,0 262262,27879729074.0,IssuesEvent,2023-03-21 18:25:33,rstudio/rstudio-docker-products,https://api.github.com/repos/rstudio/rstudio-docker-products,opened,Rebuild images for supported product on a regular basis,enhancement needs refinement security,"Currently, once a new version of a product is released, we no longer build additional images for previous versions. Letting the previous versions of these images go stale could open them up to newly discovered vulnerabilities while they're still in use by customers. Unfortunately due to the way our workflows are currently structured, it is also very difficult for us to go back and make patches on previous versions. To fix this issue, we should modify our workflows to rebuild images with the latest security updates on a regular schedule to ensure customers are receiving images that are as secure as possible even if they aren't using the latest supported version.",True,"Rebuild images for supported product on a regular basis - Currently, once a new version of a product is released, we no longer build additional images for previous versions. 
Letting the previous versions of these images go stale could open them up to newly discovered vulnerabilities while they're still in use by customers. Unfortunately due to the way our workflows are currently structured, it is also very difficult for us to go back and make patches on previous versions. To fix this issue, we should modify our workflows to rebuild images with the latest security updates on a regular schedule to ensure customers are receiving images that are as secure as possible even if they aren't using the latest supported version.",0,rebuild images for supported product on a regular basis currently once a new version of a product is released we no longer build additional images for previous versions letting the previous versions of these images go stale could open them up to newly discovered vulnerabilities while they re still in use by customers unfortunately due to the way our workflows are currently structured it is also very difficult for us to go back and make patches on previous versions to fix this issue we should modify our workflows to rebuild images with the latest security updates on a regular schedule to ensure customers are receiving images that are as secure as possible even if they aren t using the latest supported version ,0 3859,14743491241.0,IssuesEvent,2021-01-07 13:59:51,zce/temp,https://api.github.com/repos/zce/temp,opened,自动化构建,automation comments course,"总的来说,开发行业的「自动化构建」就是把我们开发阶段写出来的源代码自动化的转换为生产环境中可以运行的代码或者程序。一般我们会把这样一个转换的过程称之为「自动化构建工作流」。它的作用就是让我们尽可能的脱离运行环境的问题,在开发阶段使用一些提高效率的语法、规范、标准。 自动化构建也是前端工程化中很重要的一部分,本课程中着… **Permalink**: https://blog.zce.me/courses/automation/",1.0,"自动化构建 - 总的来说,开发行业的「自动化构建」就是把我们开发阶段写出来的源代码自动化的转换为生产环境中可以运行的代码或者程序。一般我们会把这样一个转换的过程称之为「自动化构建工作流」。它的作用就是让我们尽可能的脱离运行环境的问题,在开发阶段使用一些提高效率的语法、规范、标准。 自动化构建也是前端工程化中很重要的一部分,本课程中着… **Permalink**: https://blog.zce.me/courses/automation/",1,自动化构建 总的来说,开发行业的「自动化构建」就是把我们开发阶段写出来的源代码自动化的转换为生产环境中可以运行的代码或者程序。一般我们会把这样一个转换的过程称之为「自动化构建工作流」。它的作用就是让我们尽可能的脱离运行环境的问题,在开发阶段使用一些提高效率的语法、规范、标准。 自动化构建也是前端工程化中很重要的一部分,本课程中着… permalink ,1 3618,14148526441.0,IssuesEvent,2020-11-10 22:44:06,BCDevOps/OpenShift4-RollOut,https://api.github.com/repos/BCDevOps/OpenShift4-RollOut,closed,Aporeto - Cluster Port-Forward/Proxy Issue - SPIKE,bug tech/automation tech/networking,"As a developer, it is sometimes desirable to access a service using `oc proxy` or `oc port-forward`. Currently, Aporeto shows this traffic as coming from the Kubelet (Cluster IP) and does not register it as a Processing Unit (PU). Allowing any Cluster IP to contact the desired service pod will remove zero trust and allow any pod on any unenforced pod to communicate with your service pod. Another solution is needed. Definition of Done: - [x] Policy in place at the cluster level to allow the use of `oc proxy` and `oc port-forward` Additional Info: I'm opening a ticket with PANW to see if we can get some assistance with this one. ",1.0,"Aporeto - Cluster Port-Forward/Proxy Issue - SPIKE - As a developer, it is sometimes desirable to access a service using `oc proxy` or `oc port-forward`. Currently, Aporeto shows this traffic as coming from the Kubelet (Cluster IP) and does not register it as a Processing Unit (PU). Allowing any Cluster IP to contact the desired service pod will remove zero trust and allow any pod on any unenforced pod to communicate with your service pod. Another solution is needed. 
Definition of Done: - [x] Policy in place at the cluster level to allow the use of `oc proxy` and `oc port-forward` Additional Info: I'm opening a ticket with PANW to see if we can get some assistance with this one. ",1,aporeto cluster port forward proxy issue spike as a developer it is sometimes desirable to access a service using oc proxy or oc port forward currently aporeto shows this traffic as coming from the kubelet cluster ip and does not register it as a processing unit pu allowing any cluster ip to contact the desired service pod will remove zero trust and allow any pod on any unenforced pod to communicate with your service pod another solution is needed definition of done policy in place at the cluster level to allow the use of oc proxy and oc port forward additional info i m opening a ticket with panw to see if we can get some assistance with this one ,1 224932,17204330748.0,IssuesEvent,2021-07-17 23:25:54,druxt/druxt-router,https://api.github.com/repos/druxt/druxt-router,opened,Better support for Metatags,documentation enhancement,"**Is your feature request related to a problem? Please describe.** The DruxtRouter module has an unresolved @TODO for better Metatags support. **Describe the solution you'd like** 1. Investigate standard options for Metatags data in Drupal. 2. Investigate decoupled options for Metatags data in Drupal. 3. Investigate standard options for Metatags data in Nuxt. **Describe alternatives you've considered** N/A **Additional context** Drupal: - https://www.drupal.org/project/metatag - https://www.drupal.org/project/decoupled_kit (has Metatag support) - https://www.drupal.org/project/jsonapi_hypermedia Nuxt: - https://nuxtjs.org/docs/2.x/features/meta-tags-seo - https://github.com/AlekseyPleshkov/nuxt-social-meta Druxt: - https://github.com/druxt/druxt-router/blob/develop/src/components/DruxtRouter.vue#L75-L77 ",1.0,"Better support for Metatags - **Is your feature request related to a problem? Please describe.** The DruxtRouter module has an unresolved @TODO for better Metatags support. **Describe the solution you'd like** 1. Investigate standard options for Metatags data in Drupal. 2. Investigate decoupled options for Metatags data in Drupal. 3. Investigate standard options for Metatags data in Nuxt. 
**Describe alternatives you've considered** N/A **Additional context** Drupal: - https://www.drupal.org/project/metatag - https://www.drupal.org/project/decoupled_kit (has Metatag support) - https://www.drupal.org/project/jsonapi_hypermedia Nuxt: - https://nuxtjs.org/docs/2.x/features/meta-tags-seo - https://github.com/AlekseyPleshkov/nuxt-social-meta Druxt: - https://github.com/druxt/druxt-router/blob/develop/src/components/DruxtRouter.vue#L75-L77 ",0,better support for metatags is your feature request related to a problem please describe the druxtrouter module has an unresolved todo for better metatags support describe the solution you d like investigate standard options for metatags data in drupal investigate decoupled options for metatags data in drupal investigate standard options for metatags data in nuxt describe alternatives you ve considered n a additional context drupal has metatag support nuxt druxt ,0 7087,24223622318.0,IssuesEvent,2022-09-26 12:54:04,flatcar/Flatcar,https://api.github.com/repos/flatcar/Flatcar,closed,[RFE] ci: Test dev container in new pipeline,kind/feature area/ci-automation,"## Current situation The existing CI pipeline uses something like `git -C flatcar-scripts checkout ""${REF}""; flatcar-scripts/jenkins/systemd-run-wrap.sh flatcar-scripts/jenkins/kola/dev-container.sh` to validate the dev container. The new pipeline under `scripts/ci-automation` doesn't have this yet. ## Impact We can't switch to the new pipeline for releases. ## Ideal future situation/Implementation options The prerequisite https://github.com/flatcar-linux/Flatcar/issues/623 is implemented. We have a test of the dev container in `vendor-testing/` or maybe separate from it because it uses `systemd-nspawn` and should not run inside Docker itself - we could however document for users how to use Docker instead of systemd-nspawn for the dev container and then it's ok to only test with Docker but as far as I know this is not straightforward due to the emerge requirements (see the hacks in the container SDK). ",1.0,"[RFE] ci: Test dev container in new pipeline - ## Current situation The existing CI pipeline uses something like `git -C flatcar-scripts checkout ""${REF}""; flatcar-scripts/jenkins/systemd-run-wrap.sh flatcar-scripts/jenkins/kola/dev-container.sh` to validate the dev container. The new pipeline under `scripts/ci-automation` doesn't have this yet. ## Impact We can't switch to the new pipeline for releases. ## Ideal future situation/Implementation options The prerequisite https://github.com/flatcar-linux/Flatcar/issues/623 is implemented. We have a test of the dev container in `vendor-testing/` or maybe separate from it because it uses `systemd-nspawn` and should not run inside Docker itself - we could however document for users how to use Docker instead of systemd-nspawn for the dev container and then it's ok to only test with Docker but as far as I know this is not straightforward due to the emerge requirements (see the hacks in the container SDK). 
",1, ci test dev container in new pipeline current situation the existing ci pipeline uses something like git c flatcar scripts checkout ref flatcar scripts jenkins systemd run wrap sh flatcar scripts jenkins kola dev container sh to validate the dev container the new pipeline under scripts ci automation doesn t have this yet impact we can t switch to the new pipeline for releases ideal future situation implementation options the prerequisite is implemented we have a test of the dev container in vendor testing or maybe separate from it because it uses systemd nspawn and should not run inside docker itself we could however document for users how to use docker instead of systemd nspawn for the dev container and then it s ok to only test with docker but as far as i know this is not straightforward due to the emerge requirements see the hacks in the container sdk ,1 54172,11201233980.0,IssuesEvent,2020-01-04 01:33:53,comphack/comp_hack,https://api.github.com/repos/comphack/comp_hack,closed,Channel/World servers stop communicating,bug code,"Remote, public, ReIMAGINE, Windows Server 2016 x64 4.5.0 Uruz running Cerberus_Uruz_Server_Files_11-25 datastore with a few fixes After running for an indeterminate amount of time, the world and channel servers seem to stop communicating with each other Channel CPU usage actually remains fairly active at around 14%, which is about what is used when there are 100 people actively playing. This is odd since all accounts are kicked. Channel log does not show anything after the mass kick, such as time or phase based spawns, so perhaps the process is hanging somewhere?",1.0,"Channel/World servers stop communicating - Remote, public, ReIMAGINE, Windows Server 2016 x64 4.5.0 Uruz running Cerberus_Uruz_Server_Files_11-25 datastore with a few fixes After running for an indeterminate amount of time, the world and channel servers seem to stop communicating with each other Channel CPU usage actually remains fairly active at around 14%, which is about what is used when there are 100 people actively playing. This is odd since all accounts are kicked. Channel log does not show anything after the mass kick, such as time or phase based spawns, so perhaps the process is hanging somewhere?",0,channel world servers stop communicating remote public reimagine windows server uruz running cerberus uruz server files datastore with a few fixes after running for an indeterminate amount of time the world and channel servers seem to stop communicating with each other channel cpu usage actually remains fairly active at around which is about what is used when there are people actively playing this is odd since all accounts are kicked channel log does not show anything after the mass kick such as time or phase based spawns so perhaps the process is hanging somewhere ,0 3637,14226995715.0,IssuesEvent,2020-11-18 00:14:38,cablelabs/transparent-security,https://api.github.com/repos/cablelabs/transparent-security,closed,Fix webhooks to Github for TPS CI jobs,automation bug,"Github stopped accepting user/pass auth for webhooks and now only support tokens, therefore, we need to reconfigure them on Jenkins. see https://developer.github.com/changes/2020-02-14-deprecating-password-auth/",1.0,"Fix webhooks to Github for TPS CI jobs - Github stopped accepting user/pass auth for webhooks and now only support tokens, therefore, we need to reconfigure them on Jenkins. 
see https://developer.github.com/changes/2020-02-14-deprecating-password-auth/",1,fix webhooks to github for tps ci jobs github stopped accepting user pass auth for webhooks and now only support tokens therefore we need to reconfigure them on jenkins see ,1 3590,14021539727.0,IssuesEvent,2020-10-29 21:27:51,rstudio/rstudio,https://api.github.com/repos/rstudio/rstudio,closed,C++ autoindent tests are broken,automation developer,"### System details RStudio Edition : N/A RStudio Version : 1.4.984 ### Steps to reproduce the problem 1. Clone the RStudio repo. 2. From `rstudio/src/gwt/tools`, run `sync-ace-commits` to clone our Ace editor fork. 3. Open the C++ autoindent tester: ``` src/gwt/test/autoindent_test_cpp.html ``` ### Describe the problem in detail JavaScript errors: ``` Uncaught RangeError: Invalid array length at CppCodeModel.getNextLineIndent (cpp_code_model.js:1523) at Mode.getNextLineIndent (c_cpp.js:176) at Editor.insert (ace.js:13710) at doIndentTest (autoindent_test_cpp.html:763) at autoindent_test_cpp.html:833 ``` as well as a number of failing tests, e.g.: ![image](https://user-images.githubusercontent.com/470418/97622402-7834e300-19e1-11eb-92cb-ab2e308e36b4.png) ### Describe the behavior you expected These tests should run and pass. I was able to get most of the tests running with https://github.com/rstudio/rstudio/commit/f07c3d116eb236010cf434abc06f8086394226ff, but not all. I've also hand-checked the scenarios that were failing and didn't find any regressions in functionality, so suspect that there's something wrong with the test setup (missing dependency or other runtime state?) that's causing it to fail. ",1.0,"C++ autoindent tests are broken - ### System details RStudio Edition : N/A RStudio Version : 1.4.984 ### Steps to reproduce the problem 1. Clone the RStudio repo. 2. From `rstudio/src/gwt/tools`, run `sync-ace-commits` to clone our Ace editor fork. 3. Open the C++ autoindent tester: ``` src/gwt/test/autoindent_test_cpp.html ``` ### Describe the problem in detail JavaScript errors: ``` Uncaught RangeError: Invalid array length at CppCodeModel.getNextLineIndent (cpp_code_model.js:1523) at Mode.getNextLineIndent (c_cpp.js:176) at Editor.insert (ace.js:13710) at doIndentTest (autoindent_test_cpp.html:763) at autoindent_test_cpp.html:833 ``` as well as a number of failing tests, e.g.: ![image](https://user-images.githubusercontent.com/470418/97622402-7834e300-19e1-11eb-92cb-ab2e308e36b4.png) ### Describe the behavior you expected These tests should run and pass. I was able to get most of the tests running with https://github.com/rstudio/rstudio/commit/f07c3d116eb236010cf434abc06f8086394226ff, but not all. I've also hand-checked the scenarios that were failing and didn't find any regressions in functionality, so suspect that there's something wrong with the test setup (missing dependency or other runtime state?) that's causing it to fail. 
",1,c autoindent tests are broken system details rstudio edition n a rstudio version steps to reproduce the problem clone the rstudio repo from rstudio src gwt tools run sync ace commits to clone our ace editor fork open the c autoindent tester src gwt test autoindent test cpp html describe the problem in detail javascript errors uncaught rangeerror invalid array length at cppcodemodel getnextlineindent cpp code model js at mode getnextlineindent c cpp js at editor insert ace js at doindenttest autoindent test cpp html at autoindent test cpp html as well as a number of failing tests e g describe the behavior you expected these tests should run and pass i was able to get most of the tests running with but not all i ve also hand checked the scenarios that were failing and didn t find any regressions in functionality so suspect that there s something wrong with the test setup missing dependency or other runtime state that s causing it to fail ,1 260,5060883002.0,IssuesEvent,2016-12-22 13:45:22,eclipse/smarthome,https://api.github.com/repos/eclipse/smarthome,closed,"CompositeTriggerHandler wrongly ""dispose"" child handlers",Automation,"Currently CompositeTriggerHandler does: ``` @Override public void dispose() { List children = moduleType.getChildren(); for (Trigger child : children) { TriggerHandler handler = moduleHandlerMap.get(child); handler.setRuleEngineCallback(null); } setRuleEngineCallback(null); super.dispose(); } ``` The idea behind ` handler.setRuleEngineCallback(null);` by now is to stop child handlers from notifying (trigger) the composite handler. Most of Handler implementations, however, wouldn't expect `ruleCallback=null` (they can even have isNull check and never bother to set the new null value). The right way to notify given handler to stop working/ notifying should be calling the `dispose()`. Proposal is where `handler.setRuleEngineCallback(null); ` to be replaced with `handler.dispose();` This wouldn't mean that all handlers would dispose all used resources (including ruleCallback=null) as expected (Someone could just forget..). Thats why additional check in the CompositeTriggerHandler.triggered(where child handlers notify/trigger) should be performed whether CompositeTriggerHandler is not already disposed.",1.0,"CompositeTriggerHandler wrongly ""dispose"" child handlers - Currently CompositeTriggerHandler does: ``` @Override public void dispose() { List children = moduleType.getChildren(); for (Trigger child : children) { TriggerHandler handler = moduleHandlerMap.get(child); handler.setRuleEngineCallback(null); } setRuleEngineCallback(null); super.dispose(); } ``` The idea behind ` handler.setRuleEngineCallback(null);` by now is to stop child handlers from notifying (trigger) the composite handler. Most of Handler implementations, however, wouldn't expect `ruleCallback=null` (they can even have isNull check and never bother to set the new null value). The right way to notify given handler to stop working/ notifying should be calling the `dispose()`. Proposal is where `handler.setRuleEngineCallback(null); ` to be replaced with `handler.dispose();` This wouldn't mean that all handlers would dispose all used resources (including ruleCallback=null) as expected (Someone could just forget..). 
Thats why additional check in the CompositeTriggerHandler.triggered(where child handlers notify/trigger) should be performed whether CompositeTriggerHandler is not already disposed.",1,compositetriggerhandler wrongly dispose child handlers currently compositetriggerhandler does override public void dispose list children moduletype getchildren for trigger child children triggerhandler handler modulehandlermap get child handler setruleenginecallback null setruleenginecallback null super dispose the idea behind handler setruleenginecallback null by now is to stop child handlers from notifying trigger the composite handler most of handler implementations however wouldn t expect rulecallback null they can even have isnull check and never bother to set the new null value the right way to notify given handler to stop working notifying should be calling the dispose proposal is where handler setruleenginecallback null to be replaced with handler dispose this wouldn t mean that all handlers would dispose all used resources including rulecallback null as expected someone could just forget thats why additional check in the compositetriggerhandler triggered where child handlers notify trigger should be performed whether compositetriggerhandler is not already disposed ,1 2122,11429360469.0,IssuesEvent,2020-02-04 07:44:18,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,closed,Campaign edit/close/delete buttons should be disabled while Campaign is processing,automation bug web,"We should disable the Edit/Close/Delete buttons on the Campaign details page while the Campaign is still processing and changesets are being created on the codehost. We already don't allow updating the campaign while it's processing and in https://github.com/sourcegraph/sourcegraph/pull/8240 I added the checks for Close/Delete on the backend side.",1.0,"Campaign edit/close/delete buttons should be disabled while Campaign is processing - We should disable the Edit/Close/Delete buttons on the Campaign details page while the Campaign is still processing and changesets are being created on the codehost. We already don't allow updating the campaign while it's processing and in https://github.com/sourcegraph/sourcegraph/pull/8240 I added the checks for Close/Delete on the backend side.",1,campaign edit close delete buttons should be disabled while campaign is processing we should disable the edit close delete buttons on the campaign details page while the campaign is still processing and changesets are being created on the codehost we already don t allow updating the campaign while it s processing and in i added the checks for close delete on the backend side ,1 16020,4005927401.0,IssuesEvent,2016-05-12 13:24:51,hapijs/good,https://api.github.com/repos/hapijs/good,closed,good stream recipes???,documentation example request,"One of the goals of good 7 is to reduce the need for dedicated good reporters and allow developers to use any stream module that suits a particular need (from several closed issues). It may be useful to assemble recipes for common problem solutions. That could be on the wiki or in the repo, similar to gulp's [recipes](https://github.com/gulpjs/gulp/tree/master/docs/recipes). Entries could range from whole reporter specs to details on how to use a specific stream to achieve a desired result. For example, when sending SafeJson to a file stream, `args: [ {}, { separator: ',' } ]` makes it much easier to convert the file into a parseable JSON array. (Wrap it in `[]` and remove the last `,`.) 
Same trick can get you `,\n` or `\n` if you like to eyeball the file. (I'm not a JS expert, so figuring that out required crawling through `monitor.js` to understand how `args` made it to good-squeeze plus some trial and error.) ",1.0,"good stream recipes??? - One of the goals of good 7 is to reduce the need for dedicated good reporters and allow developers to use any stream module that suits a particular need (from several closed issues). It may be useful to assemble recipes for common problem solutions. That could be on the wiki or in the repo, similar to gulp's [recipes](https://github.com/gulpjs/gulp/tree/master/docs/recipes). Entries could range from whole reporter specs to details on how to use a specific stream to achieve a desired result. For example, when sending SafeJson to a file stream, `args: [ {}, { separator: ',' } ]` makes it much easier to convert the file into a parseable JSON array. (Wrap it in `[]` and remove the last `,`.) Same trick can get you `,\n` or `\n` if you like to eyeball the file. (I'm not a JS expert, so figuring that out required crawling through `monitor.js` to understand how `args` made it to good-squeeze plus some trial and error.) ",0,good stream recipes one of the goals of good is to reduce the need for dedicated good reporters and allow developers to use any stream module that suits a particular need from several closed issues it may be useful to assemble recipes for common problem solutions that could be on the wiki or in the repo similar to gulp s entries could range from whole reporter specs to details on how to use a specific stream to achieve a desired result for example when sending safejson to a file stream args makes it much easier to convert the file into a parseable json array wrap it in and remove the last same trick can get you n or n if you like to eyeball the file i m not a js expert so figuring that out required crawling through monitor js to understand how args made it to good squeeze plus some trial and error ,0 133301,18853590906.0,IssuesEvent,2021-11-12 01:19:06,elastic/kibana,https://api.github.com/repos/elastic/kibana,closed,"Dashboard ""Create new"" button inconsistencies",discuss Team:Kibana-Design Feature:Dashboard triage_needed Team:Presentation,"There are 2 ""Create new"" buttons in dashboard and they have different behaviors. 1) ""Create new"" button in top nav. Clicking this opens create new visualization menu. 2) ""Create new"" button in add panel. Clicking this opens a drop down with only a single selection, ""visualize"". Clicking visualize then opens the create new visualization menu. These inconsistencies provide for an unpredictable user experience. Why is there a ""Create new"" button in the add panel while there is also a ""Create new"" button that is very prominent in the top nav? I would recommend removing the ""Create new"" from the add panel. The search bar is really squished and the extra horizontal space gained by removing this button will help alleviate this. Also, since the user made the choice to click ""add"", why is the most visible call to action then a button that has a different action? ",1.0,"Dashboard ""Create new"" button inconsistencies - There are 2 ""Create new"" buttons in dashboard and they have different behaviors. 1) ""Create new"" button in top nav. Clicking this opens create new visualization menu. 2) ""Create new"" button in add panel. Clicking this opens a drop down with only a single selection, ""visualize"". Clicking visualize then opens the create new visualization menu. 
These inconsistencies provide for an unpredictable user experience. Why is there a ""Create new"" button in the add panel while there is also a ""Create new"" button that is very prominent in the top nav? I would recommend removing the ""Create new"" from the add panel. The search bar is really squished and the extra horizontal space gained by removing this button will help alleviate this. Also, since the user made the choice to click ""add"", why is the most visible call to action then a button that has a different action? ",0,dashboard create new button inconsistencies there are create new buttons in dashboard and they have different behaviors create new button in top nav clicking this opens create new visualization menu img width alt screen shot at am src img width alt screen shot at am src create new button in add panel clicking this opens a drop down with only a single selection visualize clicking visualize then opens the create new visualization menu img width alt screen shot at am src these inconsistencies provide for an unpredictable user experience why is there a create new button in the add panel while there is also a create new button that is very prominent in the top nav i would recommend removing the create new from the add panel the search bar is really squished and the extra horizontal space gained by removing this button will help alleviate this also since the user made the choice to click add why is the most visible call to action then a button that has a different action ,0 117115,15055501626.0,IssuesEvent,2021-02-03 18:52:46,brave/brave-browser,https://api.github.com/repos/brave/brave-browser,closed,Fix the wording on the BR widget,OS/Desktop QA/Yes design feature/rewards priority/P2," ## Description Incorrect wordings on the widget in nightly. ## Steps to Reproduce 1. 2. 3. ## Actual result: ![image](https://user-images.githubusercontent.com/4369856/103805044-de546c80-5007-11eb-8c0c-fc4376b8a155.png) ## Expected result: 1. Change 'Get paid' to 'Earn rewards' 2. Change ""Brave Ads"" to ""Brave Private Ads"" 2. Change 'By clicking rewardsWidgetEarnAndGive, you agree to...' to 'By proceeding, you agree to...' ## Reproduces how often: ## Brave version (brave://version info) ## Version/Channel Information: - Can you reproduce this issue with the current release? - Can you reproduce this issue with the beta channel? - Can you reproduce this issue with the nightly channel? ## Other Additional Information: - Does the issue resolve itself when disabling Brave Shields? - Does the issue resolve itself when disabling Brave Rewards? - Is the issue reproducible on the latest version of Chrome? ## Miscellaneous Information: ",1.0,"Fix the wording on the BR widget - ## Description Incorrect wordings on the widget in nightly. ## Steps to Reproduce 1. 2. 3. ## Actual result: ![image](https://user-images.githubusercontent.com/4369856/103805044-de546c80-5007-11eb-8c0c-fc4376b8a155.png) ## Expected result: 1. Change 'Get paid' to 'Earn rewards' 2. Change ""Brave Ads"" to ""Brave Private Ads"" 2. Change 'By clicking rewardsWidgetEarnAndGive, you agree to...' to 'By proceeding, you agree to...' ## Reproduces how often: ## Brave version (brave://version info) ## Version/Channel Information: - Can you reproduce this issue with the current release? - Can you reproduce this issue with the beta channel? - Can you reproduce this issue with the nightly channel? ## Other Additional Information: - Does the issue resolve itself when disabling Brave Shields? 
- Does the issue resolve itself when disabling Brave Rewards? - Is the issue reproducible on the latest version of Chrome? ## Miscellaneous Information: ",0,fix the wording on the br widget have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description incorrect wordings on the widget in nightly steps to reproduce actual result expected result change get paid to earn rewards change brave ads to brave private ads change by clicking rewardswidgetearnandgive you agree to to by proceeding you agree to reproduces how often brave version brave version info version channel information can you reproduce this issue with the current release can you reproduce this issue with the beta channel can you reproduce this issue with the nightly channel other additional information does the issue resolve itself when disabling brave shields does the issue resolve itself when disabling brave rewards is the issue reproducible on the latest version of chrome miscellaneous information ,0 438931,30669776268.0,IssuesEvent,2023-07-25 21:17:48,RiotGames/developer-relations,https://api.github.com/repos/RiotGames/developer-relations,closed,Match documentation is missing statPerk fields,pending: acknowledged topic: documentation type: inconsistency api: match-v4,"The following fields are missing from the documentation of the `/lol/match/v4/matches/{matchId}` endpoint: - `statPerk0` - `statPerk1` - `statPerk2`",1.0,"Match documentation is missing statPerk fields - The following fields are missing from the documentation of the `/lol/match/v4/matches/{matchId}` endpoint: - `statPerk0` - `statPerk1` - `statPerk2`",0,match documentation is missing statperk fields the following fields are missing from the documentation of the lol match matches matchid endpoint ,0 7805,25714770251.0,IssuesEvent,2022-12-07 09:33:53,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,opened,"Missing Pu service comments/checks in GitHub project, when using the Automation API",kind/bug needs-triage area/automation-api,"### What happened? Following scenario: GitHub repo where our Pulumi GitHub App is connected to it. ## Prerequisite - In the `.github` folder is a reusable workflow called `pulumi.yaml` which get called by the parent workflow `build.yaml` ### 1. Run Writing a ""classic"" Pulumi program and creating PR. Everything work as expected. The Comment from the Pulumi service get add to the PR. Via the App and the github action itself! ![image](https://user-images.githubusercontent.com/38325136/206141030-632093c1-ffeb-41e4-9f0a-6c41a0e3d5c8.png) https://github.com/dirien/github-pulumi-action/pull/10 ### 2. Run via Automation API Now, I rewrite the ""classic"" Program to use the automation API. Also the workflow `pulumi.yaml` gets rewritten: The Pulumi GitHub Action gets removed and a simple `go run .` added. Env Variables for the service gets added too! 
No comments ![image](https://user-images.githubusercontent.com/38325136/206141590-7354f46c-acd5-4346-bef6-acb123e772e2.png) https://github.com/dirien/github-pulumi-action/pull/12 When I check the Pulumi Service, I see the successful preview: ![image](https://user-images.githubusercontent.com/38325136/206141761-b5ff5f68-d932-4e30-b764-03ddded6ac25.png) ### Steps to reproduce Here is the demo Repo: https://github.com/dirien/github-pulumi-action ### Expected Behavior Comments/checks get posted to the PR ### Actual Behavior No Comments/checks get posted to the PR ### Output of `pulumi about` - ### Additional context We made a first investigation with @blampe and he saw the demo in action too. ### Contributing Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already). ",1.0,"Missing Pu service comments/checks in GitHub project, when using the Automation API - ### What happened? Following scenario: a GitHub repo with our Pulumi GitHub App connected to it. ## Prerequisite - In the `.github` folder is a reusable workflow called `pulumi.yaml` which gets called by the parent workflow `build.yaml` ### 1. Run Writing a ""classic"" Pulumi program and creating PR. Everything works as expected. The comment from the Pulumi service gets added to the PR. Via the App and the GitHub Action itself! ![image](https://user-images.githubusercontent.com/38325136/206141030-632093c1-ffeb-41e4-9f0a-6c41a0e3d5c8.png) https://github.com/dirien/github-pulumi-action/pull/10 ### 2. Run via Automation API Now, I rewrite the ""classic"" program to use the Automation API. Also the workflow `pulumi.yaml` gets rewritten: the Pulumi GitHub Action gets removed and a simple `go run .` added. Env variables for the service get added too! No comments ![image](https://user-images.githubusercontent.com/38325136/206141590-7354f46c-acd5-4346-bef6-acb123e772e2.png) https://github.com/dirien/github-pulumi-action/pull/12 When I check the Pulumi Service, I see the successful preview: ![image](https://user-images.githubusercontent.com/38325136/206141761-b5ff5f68-d932-4e30-b764-03ddded6ac25.png) ### Steps to reproduce Here is the demo Repo: https://github.com/dirien/github-pulumi-action ### Expected Behavior Comments/checks get posted to the PR ### Actual Behavior No Comments/checks get posted to the PR ### Output of `pulumi about` - ### Additional context We made a first investigation with @blampe and he saw the demo in action too. ### Contributing Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already). 
",1,missing pu service comments checks in github project when using the automation api what happened following scenario github repo where our pulumi github app is connected to it prerequisite in the github folder is a reusable workflow called pulumi yaml which get called by the parent workflow build yaml run writing a classic pulumi program and creating pr everything work as expected the comment from the pulumi service get add to the pr via the app and the github action itself run via automation api now i rewrite the classic program to use the automation api also the workflow pulumi yaml gets rewritten the pulumi github action gets removed and a simple go run added env variables for the service gets added too no comments when i check the pulumi service i see the succesful preview steps to reproduce here is the demo repo expected behavior comments checks get posted to the pr actual behavior no comments checks get posted to the pr output of pulumi about additional context we made a first investigation with blampe and he has saw the demo in action too contributing vote on this issue by adding a 👍 reaction to contribute a fix for this issue leave a comment and link to your pull request if you ve opened one already ,1 1119,9534616155.0,IssuesEvent,2019-04-30 02:31:00,mattermost/mattermost-server,https://api.github.com/repos/mattermost/mattermost-server,closed,[Help Wanted] Add automated unit tests that confirm the CSS and HTML generated from Markdown test inputs,Area/Automation Difficulty/3:Hard Tech/ReactJS Up For Grabs,"If you're interested please comment here and come [join our ""Contributors"" community channel](https://pre-release.mattermost.com/core/channels/tickets) on our daily build server, where you can discuss questions with community members and the Mattermost core team. For technical advice or questions, please [join our ""Developers"" community channel](https://pre-release.mattermost.com/core/channels/developers). New contributors please see our [Developer's Guide](https://docs.mattermost.com/guides/developer.html), specifically for [machine setup](https://docs.mattermost.com/developer/developer-setup.html) and for [developer workflow](https://docs.mattermost.com/developer/developer-flow.html). ---- We are having too many regressions on changes to text processing. On pre-release right now we have line breaks inconsistently rendered because we merged a PR that didn't fail when CSS changes broke other parts of the system. We can't revert the PR because it changed the database. This ticket is to add unit tests that confirm the CSS and HTML generated from test inputs (https://github.com/mattermost/platform/blob/master/tests/test-markdown-lists.md) is consistent from release to release--or some other automated method to prevent regressions. Markdown rendering is critical, predictable functionality, regressions shouldn't be that hard to spot. Is there someone from the community to share their thoughts on how we can improve here? Also looking for help on this improvement. ",1.0,"[Help Wanted] Add automated unit tests that confirm the CSS and HTML generated from Markdown test inputs - If you're interested please comment here and come [join our ""Contributors"" community channel](https://pre-release.mattermost.com/core/channels/tickets) on our daily build server, where you can discuss questions with community members and the Mattermost core team. 
For technical advice or questions, please [join our ""Developers"" community channel](https://pre-release.mattermost.com/core/channels/developers). New contributors please see our [Developer's Guide](https://docs.mattermost.com/guides/developer.html), specifically for [machine setup](https://docs.mattermost.com/developer/developer-setup.html) and for [developer workflow](https://docs.mattermost.com/developer/developer-flow.html). ---- We are having too many regressions on changes to text processing. On pre-release right now we have line breaks inconsistently rendered because we merged a PR that didn't fail when CSS changes broke other parts of the system. We can't revert the PR because it changed the database. This ticket is to add unit tests that confirm the CSS and HTML generated from test inputs (https://github.com/mattermost/platform/blob/master/tests/test-markdown-lists.md) is consistent from release to release--or some other automated method to prevent regressions. Markdown rendering is critical, predictable functionality, regressions shouldn't be that hard to spot. Is there someone from the community to share their thoughts on how we can improve here? Also looking for help on this improvement. ",1, add automated unit tests that confirm the css and html generated from markdown test inputs if you re interested please comment here and come on our daily build server where you can discuss questions with community members and the mattermost core team for technical advice or questions please new contributors please see our specifically for and for we are having too many regressions on changes to text processing on pre release right now we have line breaks inconsistently rendered because we merged a pr that didn t fail when css changes broke other parts of the system we can t revert the pr because it changed the database this ticket is to add unit tests that confirm the css and html generated from test inputs is consistent from release to release or some other automated method to prevent regressions markdown rendering is critical predictable functionality regressions shouldn t be that hard to spot is there someone from the community to share their thoughts on how we can improve here also looking for help on this improvement ,1 3846,14706464927.0,IssuesEvent,2021-01-04 19:53:54,MinaProtocol/mina,https://api.github.com/repos/MinaProtocol/mina,opened,Integration Test Core: CI,acceptance-automation,"We need to get the integration test framework running in buildkite. For now, we should only enable the cheap, stable integration tests for every build (block-production, send-payment), but long term, we want to have the ability to run more expensive tests as nightlies.",1.0,"Integration Test Core: CI - We need to get the integration test framework running in buildkite. For now, we should only enable the cheap, stable integration tests for every build (block-production, send-payment), but long term, we want to have the ability to run more expensive tests as nightlies.",1,integration test core ci we need to get the integration test framework running in buildkite for now we should only enable the cheap stable integration tests for every build block production send payment but long term we want to have the ability to run more expensive tests as nightlies ,1 3098,13082055827.0,IssuesEvent,2020-08-01 13:14:09,FranMacedo/gestoremoto,https://api.github.com/repos/FranMacedo/gestoremoto,closed,Create script to properly renew certificates every 3 months,automation,"In /root/renew_certs.sh. 
This makes it so that certbot doesn't need native nginx on the host system. It uses a standalone web server to answer the ACME challenges instead. The script is as follows. @FranMacedo

```bash
#!/bin/bash
docker stop nginx_server
certbot certonly --standalone -d privado.observatorios-lisboa.pt
docker start nginx_server
```
",1.0,"Create script to properly renew certificates every 3 months - In /root/renew_certs.sh. This makes it so that certbot doesn't need native nginx on the host system. It uses a standalone web server to answer the ACME challenges instead. The script is as follows. @FranMacedo

```bash
#!/bin/bash
docker stop nginx_server
certbot certonly --standalone -d privado.observatorios-lisboa.pt
docker start nginx_server
```
",1,create script to properly renew certificates every months in root renew certs sh making it so certbot doesn t need native nginx on the host system it uses a standalone web server to send the challenges to the dns server the script is as follows franmacedo bin bash docker stop nginx server certbot certonly standalone d privado observatorios lisboa pt docker start nginx server,1 272621,29795057489.0,IssuesEvent,2023-06-16 01:07:34,billmcchesney1/pacbot,https://api.github.com/repos/billmcchesney1/pacbot,closed,CVE-2023-24998 (High) detected in tomcat-embed-core-8.5.32.jar - autoclosed,Mend: dependency security vulnerability,"## CVE-2023-24998 - High Severity Vulnerability
Vulnerable Library - tomcat-embed-core-8.5.32.jar

Core Tomcat implementation

Library home page: http://tomcat.apache.org/

Path to dependency file: /api/pacman-api-admin/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.32/tomcat-embed-core-8.5.32.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.32/tomcat-embed-core-8.5.32.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.32/tomcat-embed-core-8.5.32.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.32/tomcat-embed-core-8.5.32.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.32/tomcat-embed-core-8.5.32.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.32/tomcat-embed-core-8.5.32.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.32/tomcat-embed-core-8.5.32.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.32/tomcat-embed-core-8.5.32.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.32/tomcat-embed-core-8.5.32.jar

Dependency Hierarchy:
- spring-boot-starter-web-2.0.4.RELEASE.jar (Root Library)
  - spring-boot-starter-tomcat-2.0.4.RELEASE.jar
    - :x: **tomcat-embed-core-8.5.32.jar** (Vulnerable Library)
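
To trace which root library pulls in the vulnerable jar in a consuming project, the standard Maven dependency-tree filter can help (a hedged sketch, not part of the original report; it assumes a Maven build with the stock maven-dependency-plugin):

```
mvn dependency:tree -Dincludes=org.apache.tomcat.embed:tomcat-embed-core
```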

Found in HEAD commit: acf9a0620c1a37cee4f2896d71e1c3731c5c7b06

Found in base branch: master

Vulnerability Details

Apache Commons FileUpload before 1.5 does not limit the number of request parts to be processed resulting in the possibility of an attacker triggering a DoS with a malicious upload or series of uploads. Note that, like all of the file upload limits, the new configuration option (FileUploadBase#setFileCountMax) is not enabled by default and must be explicitly configured.
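
Because the mitigation is opt-in, a consuming application has to set the limit explicitly. A minimal sketch of what that looks like with commons-fileupload 1.5 (the servlet wiring and the limit of 10 are illustrative assumptions, not taken from this report):

```java
import java.util.List;
import javax.servlet.http.HttpServletRequest;
import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.FileUploadException;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;

public class BoundedUpload {
    public List<FileItem> parse(HttpServletRequest request) throws FileUploadException {
        ServletFileUpload upload = new ServletFileUpload(new DiskFileItemFactory());
        // The guard added for CVE-2023-24998: cap the number of parts per request.
        // It is disabled by default, so it must be configured explicitly.
        upload.setFileCountMax(10);
        return upload.parseRequest(request);
    }
}
```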

Publish Date: 2023-02-20

URL: CVE-2023-24998

CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High
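
For reference, these metrics correspond to the standard CVSS v3.1 vector string `CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H`, which evaluates to the 7.5 base score shown above (the vector is reconstructed here from the listed metrics; it is not part of the original report).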

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://tomcat.apache.org/security-10.html

Release Date: 2023-02-20

Fix Resolution: commons-fileupload:commons-fileupload:1.5;org.apache.tomcat:tomcat-coyote:8.5.85,9.0.71,10.1.5,11.0.0-M3;org.apache.tomcat.embed:tomcat-embed-core:8.5.85,9.0.71,10.1.5,11.0.0-M3;org.apache.tomcat:tomcat-util:8.5.85,9.0.71,10.1.5,11.0.0-M3;org.apache.tomcat:tomcat-catalina:8.5.85,9.0.71,10.1.5,11.0.0-M3
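
Where the vulnerable jar arrives transitively (as in the spring-boot-starter hierarchy above), one way to apply the fix resolution is to pin the patched version in the consuming pom.xml. A sketch under that assumption:

```xml
<!-- Pin the patched embed core from the fix resolution above (8.5.x line). -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.tomcat.embed</groupId>
      <artifactId>tomcat-embed-core</artifactId>
      <version>8.5.85</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```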

",True,"CVE-2023-24998 (High) detected in tomcat-embed-core-8.5.32.jar - autoclosed - ## CVE-2023-24998 - High Severity Vulnerability
Vulnerable Library - tomcat-embed-core-8.5.32.jar

Core Tomcat implementation

Library home page: http://tomcat.apache.org/

Path to dependency file: /api/pacman-api-admin/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.32/tomcat-embed-core-8.5.32.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.32/tomcat-embed-core-8.5.32.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.32/tomcat-embed-core-8.5.32.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.32/tomcat-embed-core-8.5.32.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.32/tomcat-embed-core-8.5.32.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.32/tomcat-embed-core-8.5.32.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.32/tomcat-embed-core-8.5.32.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.32/tomcat-embed-core-8.5.32.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.32/tomcat-embed-core-8.5.32.jar

Dependency Hierarchy:
- spring-boot-starter-web-2.0.4.RELEASE.jar (Root Library)
  - spring-boot-starter-tomcat-2.0.4.RELEASE.jar
    - :x: **tomcat-embed-core-8.5.32.jar** (Vulnerable Library)

Found in HEAD commit: acf9a0620c1a37cee4f2896d71e1c3731c5c7b06

Found in base branch: master

Vulnerability Details

Apache Commons FileUpload before 1.5 does not limit the number of request parts to be processed resulting in the possibility of an attacker triggering a DoS with a malicious upload or series of uploads. Note that, like all of the file upload limits, the new configuration option (FileUploadBase#setFileCountMax) is not enabled by default and must be explicitly configured.

Publish Date: 2023-02-20

URL: CVE-2023-24998

CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://tomcat.apache.org/security-10.html

Release Date: 2023-02-20

Fix Resolution: commons-fileupload:commons-fileupload:1.5;org.apache.tomcat:tomcat-coyote:8.5.85,9.0.71,10.1.5,11.0.0-M3;org.apache.tomcat.embed:tomcat-embed-core:8.5.85,9.0.71,10.1.5,11.0.0-M3;org.apache.tomcat:tomcat-util:8.5.85,9.0.71,10.1.5,11.0.0-M3;org.apache.tomcat:tomcat-catalina:8.5.85,9.0.71,10.1.5,11.0.0-M3

",0,cve high detected in tomcat embed core jar autoclosed cve high severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file api pacman api admin pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch master vulnerability details apache commons fileupload before does not limit the number of request parts to be processed resulting in the possibility of an attacker triggering a dos with a malicious upload or series of uploads note that like all of the file upload limits the new configuration option fileuploadbase setfilecountmax is not enabled by default and must be explicitly configured publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution commons fileupload commons fileupload org apache tomcat tomcat coyote org apache tomcat embed tomcat embed core org apache tomcat tomcat util org apache tomcat tomcat catalina ,0 288537,8848338835.0,IssuesEvent,2019-01-08 06:33:07,OctopusDeploy/Issues,https://api.github.com/repos/OctopusDeploy/Issues,closed,Workers - Azure Cloud Service storage authentication failure,area/cloud area/execution priority,"# Prerequisites - [x] I have verified the problem exists in the latest version - [x] I have searched [open](https://github.com/OctopusDeploy/Issues/issues) and [closed](https://github.com/OctopusDeploy/Issues/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aclosed) issues to make sure it isn't already reported - [x] I have written a descriptive issue title - [x] I have linked the original source of this report - [x] I have tagged the issue appropriately (area/*, kind/bug, tag/regression?) # The bug When attempting to deploy an Azure Cloud Service using the Default Worker Pool, using a classic storage account, and a CS target that passes a health check, we receive the following authentication error regarding storage accounts. 
NOTE: **We believe this has to do with the permissions around the worker account, still under investigation.** ### Log excerpt Example failure task log (for Octopus Internal Staff): https://deploy.octopushq.com/app#/projects/mark-siedle-s-test-project-pls-don-t-remove/releases/1.0.37/deployments/Deployments-43150 ``` 23:57:35 Error | ForbiddenError: The server failed to authenticate the request. Verify that the certificate is valid and is associated with this subscription. 23:57:35 Error | Hyak.Common.CloudException 23:57:35 Error | at Microsoft.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) 23:57:35 Error | at Microsoft.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccess(Task task) 23:57:35 Error | at Microsoft.WindowsAzure.Management.Storage.StorageAccountOperationsExtensions.GetKeys(IStorageAccountOperations operations, String accountName) 23:57:35 Error | at Calamari.Azure.Integration.AzurePackageUploader.GetStorageAccountPrimaryKey(SubscriptionCloudCredentials credentials, String storageAccountName, String serviceManagementEndpoint) 23:57:35 Error | at Calamari.Azure.Integration.AzurePackageUploader.Upload(SubscriptionCloudCredentials credentials, String storageAccountName, String packageFile, String uploadedFileName, String storageEndpointSuffix, String serviceManagementEndpoint) 23:57:35 Error | at Calamari.Azure.Deployment.Conventions.UploadAzureCloudServicePackageConvention.Install(RunningDeployment deployment) 23:57:35 Error | at Calamari.Deployment.ConventionProcessor.RunInstallConventions() 23:57:35 Error | at Calamari.Deployment.ConventionProcessor.RunConventions() 23:57:35 Error | at Calamari.Azure.Commands.DeployAzureCloudServiceCommand.Execute(String[] commandLineArguments) 23:57:35 Error | at Calamari.Program.Execute(String[] args) 23:57:36 Verbose | Process C:\Windows\system32\WindowsPowershell\v1.0\PowerShell.exe in D:\Octopus\Work\20181218235702-363433-2984 exited with code 100 ``` ## Workarounds There are no workarounds known at this time. ## Links Source: https://help.octopus.com/t/cloud-service-deployment-fails-with-authentication-error/21981 ",1.0,"Workers - Azure Cloud Service storage authentication failure - # Prerequisites - [x] I have verified the problem exists in the latest version - [x] I have searched [open](https://github.com/OctopusDeploy/Issues/issues) and [closed](https://github.com/OctopusDeploy/Issues/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aclosed) issues to make sure it isn't already reported - [x] I have written a descriptive issue title - [x] I have linked the original source of this report - [x] I have tagged the issue appropriately (area/*, kind/bug, tag/regression?) # The bug When attempting to deploy an Azure Cloud Service using the Default Worker Pool, using a classic storage account, and a CS target that passes a health check, we receive the following authentication error regarding storage accounts. NOTE: **We believe this has to do with the permissions around the worker account, still under investigation.** ### Log excerpt Example failure task log (for Octopus Internal Staff): https://deploy.octopushq.com/app#/projects/mark-siedle-s-test-project-pls-don-t-remove/releases/1.0.37/deployments/Deployments-43150 ``` 23:57:35 Error | ForbiddenError: The server failed to authenticate the request. Verify that the certificate is valid and is associated with this subscription. 
23:57:35 Error | Hyak.Common.CloudException 23:57:35 Error | at Microsoft.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) 23:57:35 Error | at Microsoft.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccess(Task task) 23:57:35 Error | at Microsoft.WindowsAzure.Management.Storage.StorageAccountOperationsExtensions.GetKeys(IStorageAccountOperations operations, String accountName) 23:57:35 Error | at Calamari.Azure.Integration.AzurePackageUploader.GetStorageAccountPrimaryKey(SubscriptionCloudCredentials credentials, String storageAccountName, String serviceManagementEndpoint) 23:57:35 Error | at Calamari.Azure.Integration.AzurePackageUploader.Upload(SubscriptionCloudCredentials credentials, String storageAccountName, String packageFile, String uploadedFileName, String storageEndpointSuffix, String serviceManagementEndpoint) 23:57:35 Error | at Calamari.Azure.Deployment.Conventions.UploadAzureCloudServicePackageConvention.Install(RunningDeployment deployment) 23:57:35 Error | at Calamari.Deployment.ConventionProcessor.RunInstallConventions() 23:57:35 Error | at Calamari.Deployment.ConventionProcessor.RunConventions() 23:57:35 Error | at Calamari.Azure.Commands.DeployAzureCloudServiceCommand.Execute(String[] commandLineArguments) 23:57:35 Error | at Calamari.Program.Execute(String[] args) 23:57:36 Verbose | Process C:\Windows\system32\WindowsPowershell\v1.0\PowerShell.exe in D:\Octopus\Work\20181218235702-363433-2984 exited with code 100 ``` ## Workarounds There are no workarounds known at this time. ## Links Source: https://help.octopus.com/t/cloud-service-deployment-fails-with-authentication-error/21981 ",0,workers azure cloud service storage authentication failure prerequisites i have verified the problem exists in the latest version i have searched and issues to make sure it isn t already reported i have written a descriptive issue title i have linked the original source of this report i have tagged the issue appropriately area kind bug tag regression the bug when attempting to deploy an azure cloud service using the default worker pool using a classic storage account and a cs target that passes a health check we receive the following authentication error regarding storage accounts note we believe this has to do with the permissions around the worker account still under investigation log excerpt example failure task log for octopus internal staff error forbiddenerror the server failed to authenticate the request verify that the certificate is valid and is associated with this subscription error hyak common cloudexception error at microsoft runtime compilerservices taskawaiter throwfornonsuccess task task error at microsoft runtime compilerservices taskawaiter handlenonsuccess task task error at microsoft windowsazure management storage storageaccountoperationsextensions getkeys istorageaccountoperations operations string accountname error at calamari azure integration azurepackageuploader getstorageaccountprimarykey subscriptioncloudcredentials credentials string storageaccountname string servicemanagementendpoint error at calamari azure integration azurepackageuploader upload subscriptioncloudcredentials credentials string storageaccountname string packagefile string uploadedfilename string storageendpointsuffix string servicemanagementendpoint error at calamari azure deployment conventions uploadazurecloudservicepackageconvention install runningdeployment deployment error at calamari deployment conventionprocessor runinstallconventions error at calamari deployment 
conventionprocessor runconventions error at calamari azure commands deployazurecloudservicecommand execute string commandlinearguments error at calamari program execute string args verbose process c windows windowspowershell powershell exe in d octopus work exited with code workarounds there are no workarounds known at this time links source ,0 628455,19986501978.0,IssuesEvent,2022-01-30 18:45:14,iluwatar/java-design-patterns,https://api.github.com/repos/iluwatar/java-design-patterns,closed,Event Aggregator pattern can be made more granular with emitting notification to specific observers,status: under construction epic: pattern type: enhancement info: good first issue resolution: fixed priority: normal,"The current implementation of the event -aggregator source code chooses to notify all the observers whenever an event source emits an event. This is can be made more granular with letting the subscriber subscribe with the aggregator for a specific type of event and then ONLY it should be notified in case , such an event is emitted. I would like to contribute a working version of these example, may be a PR if this issue is discussed.",1.0,"Event Aggregator pattern can be made more granular with emitting notification to specific observers - The current implementation of the event -aggregator source code chooses to notify all the observers whenever an event source emits an event. This is can be made more granular with letting the subscriber subscribe with the aggregator for a specific type of event and then ONLY it should be notified in case , such an event is emitted. I would like to contribute a working version of these example, may be a PR if this issue is discussed.",0,event aggregator pattern can be made more granular with emitting notification to specific observers the current implementation of the event aggregator source code chooses to notify all the observers whenever an event source emits an event this is can be made more granular with letting the subscriber subscribe with the aggregator for a specific type of event and then only it should be notified in case such an event is emitted i would like to contribute a working version of these example may be a pr if this issue is discussed ,0 37484,8303624770.0,IssuesEvent,2018-09-21 18:13:04,Chisel-Team/Chisel,https://api.github.com/repos/Chisel-Team/Chisel,closed,[1.12.2] Target a Block ,Unable To Reproduce bug-code,"If i want to Chisel a Block in the World and i want a specific type of block i put it in my chisel inventory and than it doesnt work! For example i want DENT cobble and dont want to click all the way through! Is this feature implemented?? grats ",1.0,"[1.12.2] Target a Block - If i want to Chisel a Block in the World and i want a specific type of block i put it in my chisel inventory and than it doesnt work! For example i want DENT cobble and dont want to click all the way through! Is this feature implemented?? grats ",0, target a block if i want to chisel a block in the world and i want a specific type of block i put it in my chisel inventory and than it doesnt work for example i want dent cobble and dont want to click all the way through is this feature implemented grats ,0 238745,18248729702.0,IssuesEvent,2021-10-01 22:58:49,rrousselGit/river_pod,https://api.github.com/repos/rrousselGit/river_pod,closed,Incorrect consumer documentation (probably outdated),documentation needs triage,"I think the consumer description in the starting docs is outdated. 
In [https://riverpod.dev/docs/concepts/reading ](https://riverpod.dev/docs/concepts/reading ) when explaining ConsumerWidget and Consumer it uses WidgetRef instead of ScopeReader. So the examples use ref.watch(provider) instead of watch(provider) and that no longer works.",1.0,"Incorrect consumer documentation (probably outdated) - I think the consumer description in the starting docs is outdated. In [https://riverpod.dev/docs/concepts/reading ](https://riverpod.dev/docs/concepts/reading ) when explaining ConsumerWidget and Consumer it uses WidgetRef instead of ScopeReader. So the examples use ref.watch(provider) instead of watch(provider) and that no longer works.",0,incorrect consumer documentation probably outdated i think the consumer description in the starting docs is outdated in when explaining consumerwidget and consumer it uses widgetref instead of scopereader so the examples use ref watch provider instead of watch provider and that no longer works ,0 5735,20908230559.0,IssuesEvent,2022-03-24 06:13:31,pingcap/tiflow,https://api.github.com/repos/pingcap/tiflow,closed,cdc emit row changed event to downstream kafka may block a long time.,type/bug severity/major found/automation area/ticdc affects-6.0,"### What did you do? 1. create a changefeed with kafka sink 2. stop the changefeed 3. prepare `go-tpc` workload 4. resume the changefeed 5. run `go-tpc` workload. Periodically use `kill -s STOP` to pause the Kafka process for around 40 ~ 50s, every few minutes. ### What did you expect to see? The whole CDC works properly. ### What did you see instead? Processor gets blocked for a long time. 
When the problem happen, kafka process is healthy, resumed by `kill -s CONT` ``` [2022/03/22 14:00:30.348 +08:00] [WARN] [client.go:263] [""etcd client outCh blocking too long, the etcdWorker may be stuck""] [duration=23.000155792s] [role=processor] [2022/03/22 14:00:31.348 +08:00] [WARN] [client.go:263] [""etcd client outCh blocking too long, the etcdWorker may be stuck""] [duration=23.9999115s] [role=processor] [2022/03/22 14:00:32.348 +08:00] [WARN] [client.go:263] [""etcd client outCh blocking too long, the etcdWorker may be stuck""] [duration=24.9999985s] [role=processor] [2022/03/22 14:00:33.349 +08:00] [WARN] [client.go:263] [""etcd client outCh blocking too long, the etcdWorker may be stuck""] [duration=26.000115375s] [role=processor] [2022/03/22 14:00:34.349 +08:00] [WARN] [client.go:263] [""etcd client outCh blocking too long, the etcdWorker may be stuck""] [duration=27.000068583s] [role=processor] [2022/03/22 14:00:35.349 +08:00] [WARN] [client.go:263] [""etcd client outCh blocking too long, the etcdWorker may be stuck""] [duration=28.000139292s] [role=processor] [2022/03/22 14:00:36.348 +08:00] [WARN] [client.go:263] [""etcd client outCh blocking too long, the etcdWorker may be stuck""] [duration=28.999832417s] [role=processor] [2022/03/22 14:00:37.349 +08:00] [WARN] [client.go:263] [""etcd client outCh blocking too long, the etcdWorker may be stuck""] [duration=29.99986875s] [role=processor] [2022/03/22 14:00:38.349 +08:00] [WARN] [client.go:263] [""etcd client outCh blocking too long, the etcdWorker may be stuck""] [duration=30.999821875s] [role=processor] ``` ### Versions of the cluster Upstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client): ```console master ``` Upstream TiKV version (execute `tikv-server --version`): ```console master ``` TiCDC version (execute `cdc version`): ```console master ``` [etcd_worker.log](https://github.com/pingcap/tiflow/files/8322983/etcd_worker.log) [goroutine2.txt](https://github.com/pingcap/tiflow/files/8322990/goroutine2.txt) ",1.0,"cdc emit row changed event to downstream kafka may block a long time. - ### What did you do? 1. create a changefeed with kafka sink 2. stop the changfeed 3. prepare `go-tpc` workload 4. resume the changefeed 5. run `go-tpc` workload periodically use `kill -s STOP` to pause the Kafka process for around 40 ~ 50s, each a few minutes. ### What did you expect to see? The whole CDC works properly. ### What did you see instead? Processor gets blocked for a long time. 
When the problem happen, kafka process is healthy, resumed by `kill -s CONT` ``` [2022/03/22 14:00:30.348 +08:00] [WARN] [client.go:263] [""etcd client outCh blocking too long, the etcdWorker may be stuck""] [duration=23.000155792s] [role=processor] [2022/03/22 14:00:31.348 +08:00] [WARN] [client.go:263] [""etcd client outCh blocking too long, the etcdWorker may be stuck""] [duration=23.9999115s] [role=processor] [2022/03/22 14:00:32.348 +08:00] [WARN] [client.go:263] [""etcd client outCh blocking too long, the etcdWorker may be stuck""] [duration=24.9999985s] [role=processor] [2022/03/22 14:00:33.349 +08:00] [WARN] [client.go:263] [""etcd client outCh blocking too long, the etcdWorker may be stuck""] [duration=26.000115375s] [role=processor] [2022/03/22 14:00:34.349 +08:00] [WARN] [client.go:263] [""etcd client outCh blocking too long, the etcdWorker may be stuck""] [duration=27.000068583s] [role=processor] [2022/03/22 14:00:35.349 +08:00] [WARN] [client.go:263] [""etcd client outCh blocking too long, the etcdWorker may be stuck""] [duration=28.000139292s] [role=processor] [2022/03/22 14:00:36.348 +08:00] [WARN] [client.go:263] [""etcd client outCh blocking too long, the etcdWorker may be stuck""] [duration=28.999832417s] [role=processor] [2022/03/22 14:00:37.349 +08:00] [WARN] [client.go:263] [""etcd client outCh blocking too long, the etcdWorker may be stuck""] [duration=29.99986875s] [role=processor] [2022/03/22 14:00:38.349 +08:00] [WARN] [client.go:263] [""etcd client outCh blocking too long, the etcdWorker may be stuck""] [duration=30.999821875s] [role=processor] ``` ### Versions of the cluster Upstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client): ```console master ``` Upstream TiKV version (execute `tikv-server --version`): ```console master ``` TiCDC version (execute `cdc version`): ```console master ``` [etcd_worker.log](https://github.com/pingcap/tiflow/files/8322983/etcd_worker.log) [goroutine2.txt](https://github.com/pingcap/tiflow/files/8322990/goroutine2.txt) ",1,cdc emit row changed event to downstream kafka may block a long time what did you do create a changefeed with kafka sink stop the changfeed prepare go tpc workload resume the changefeed run go tpc workload periodically use kill s stop to pause the kafka process for around each a few minutes what did you expect to see the whole cdc works properly what did you see instead processor gets blocked for a long time when the problem happen kafka process is healthy resumed by kill s cont versions of the cluster upstream tidb cluster version execute select tidb version in a mysql client console master upstream tikv version execute tikv server version console master ticdc version execute cdc version console master ,1 18620,24579489040.0,IssuesEvent,2022-10-13 14:38:47,GoogleCloudPlatform/fda-mystudies,https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies,closed, [Consent API] Data sharing Consent Artifacts are not getting created ,Bug Blocker P0 Process: Fixed Process: Tested QA Process: Tested dev,Data sharing Consent Artifacts are not getting created when participant has provided data sharing permission,3.0, [Consent API] Data sharing Consent Artifacts are not getting created - Data sharing Consent Artifacts are not getting created when participant has provided data sharing permission,0, data sharing consent artifacts are not getting created data sharing consent artifacts are not getting created when participant has provided data sharing permission,0 
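
The pingcap/tiflow report above pauses Kafka with `kill -s STOP`; a minimal sketch of that fault-injection loop follows. The `pgrep` pattern and the sleep intervals are assumptions matching the description (40~50s pauses, every few minutes), not a script from the report:

```bash
#!/bin/bash
# Freeze and resume the Kafka broker process in a loop to reproduce the blockage.
KAFKA_PID=$(pgrep -f kafka.Kafka | head -n 1)
while true; do
  kill -s STOP "$KAFKA_PID"   # pause the process without killing it
  sleep 45                    # hold the pause for roughly 40-50 seconds
  kill -s CONT "$KAFKA_PID"   # let Kafka resume
  sleep 300                   # wait a few minutes before the next pause
done
```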
339536,10255918803.0,IssuesEvent,2019-08-21 16:25:01,minio/minio-py,https://api.github.com/repos/minio/minio-py,closed,python client.stat_object raise 'NoSuchKey: message: The specified key does not exist.',community priority: medium working as intended,"when I use python: minio 4.0.0 stat_object bucket_name : image object_name : human/import/boteye_select_vertical/20190418/20190418220/5120_01201_1555642445656%2B5120_01201_1555642446657/2019071602_1555642442789_74_1555642442740.jpg %2B == '+' ## Current Behavior raise: NoSuchKey: message: The specified key does not exist. ## Steps to Reproduce (for bugs) 1. object_name include '+' 2. use stat_object ",1.0,"python client.stat_object raise 'NoSuchKey: message: The specified key does not exist.' - when I use python: minio 4.0.0 stat_object bucket_name : image object_name : human/import/boteye_select_vertical/20190418/20190418220/5120_01201_1555642445656%2B5120_01201_1555642446657/2019071602_1555642442789_74_1555642442740.jpg %2B == '+' ## Current Behavior raise: NoSuchKey: message: The specified key does not exist. ## Steps to Reproduce (for bugs) 1. object_name include '+' 2. use stat_object ",0,python client stat object raise nosuchkey message the specified key does not exist when i use python minio stat object bucket name image object name human import boteye select vertical jpg current behavior raise nosuchkey message the specified key does not exist steps to reproduce for bugs object name include use stat object ,0 4606,16992810437.0,IssuesEvent,2021-06-30 23:46:37,newrelic/newrelic-observability-packs,https://api.github.com/repos/newrelic/newrelic-observability-packs,closed,Automation: check for a Description field ,automation o11y question,"Justin confirmed `Description` field is required in the main config.yml; as of right now, that set of rules has already been schema stitched. We should add a check to ensure contributors know to add a description and if they don't, we communicate that in the PR workflow. 
### Acceptance Criteria - [ ] Add a check that ensures the description is present and is `<= 2000` characters and is `plain text` ",1.0,"Automation: check for a Description field - Justin confirmed `Description` field is required in the main config.yml; as of right now, that set of rules has already been schema stitched. We should add a check to ensure contributors know to add a description and if they don't, we communicate that in the PR workflow. 
,0,update cmakelists txt files to also build libraries currently we re building only executables for the various ros nodes we also need to build libraries to enable composition in a single process ,0 3328,13487070874.0,IssuesEvent,2020-09-11 10:23:01,FORTH-ICS-INSPIRE/artemis,https://api.github.com/repos/FORTH-ICS-INSPIRE/artemis,closed,Env variable to invoke intended process states recovery mechanism,automation backend database p/medium,"**Is your feature request related to a problem? Please describe.** Intended process state invocation should be deactivated by default and activated on user's request to avoid issues with auto-restart in heavyweight deployments. **Describe the solution you'd like** Deactivate by default and create appropriate env var.",1.0,"Env variable to invoke intended process states recovery mechanism - **Is your feature request related to a problem? Please describe.** Intended process state invocation should be deactivated by default and activated on user's request to avoid issues with auto-restart in heavyweight deployments. **Describe the solution you'd like** Deactivate by default and create appropriate env var.",1,env variable to invoke intended process states recovery mechanism is your feature request related to a problem please describe intended process state invocation should be deactivated by default and activated on user s request to avoid issues with auto restart in heavyweight deployments describe the solution you d like deactivate by default and create appropriate env var ,1 352837,10546443486.0,IssuesEvent,2019-10-02 21:29:52,ppy/osu,https://api.github.com/repos/ppy/osu,closed,Raw input broken for trackpads in macOS Catalina,framework fix required low priority macOS,"I played with TUIO trackpad on Mac and raw input doesn't seem to response to trackpad or TUIO input, only the mouse would work. MacOS 10.15 Catalina Dev Beta 1, osu! Lazer 2019.607 I'm not sure if it's the problem of the new system, as the old wine client works fine in the previous system. But it stops working on Catalina and I have to move to Lazer. Catalina has input monitor permission which has been opened for osu, otherwise raw input would completely stop working including mouse.",1.0,"Raw input broken for trackpads in macOS Catalina - I played with TUIO trackpad on Mac and raw input doesn't seem to response to trackpad or TUIO input, only the mouse would work. MacOS 10.15 Catalina Dev Beta 1, osu! Lazer 2019.607 I'm not sure if it's the problem of the new system, as the old wine client works fine in the previous system. But it stops working on Catalina and I have to move to Lazer. 
Catalina has input monitor permission which has been opened for osu, otherwise raw input would completely stop working including mouse.",0,raw input broken for trackpads in macos catalina i played with tuio trackpad on mac and raw input doesn t seem to response to trackpad or tuio input only the mouse would work macos catalina dev beta osu lazer i m not sure if it s the problem of the new system as the old wine client works fine in the previous system but it stops working on catalina and i have to move to lazer catalina has input monitor permission which has been opened for osu otherwise raw input would completely stop working including mouse ,0 318279,23711950227.0,IssuesEvent,2022-08-30 08:36:18,LuccaSA/lucca-front,https://api.github.com/repos/LuccaSA/lucca-front,closed,Missing NG stories,👥 Guilde Front 📖 Documentation changes,"A few NG stories need to be added: - [ ] User Avatar - [ ] User Display - [ ] Tooltips - [ ] Sidepanel - [ ] NG Animations - [ ] Numbers",1.0,"Missing NG stories - A few NG stories need to be added: - [ ] User Avatar - [ ] User Display - [ ] Tooltips - [ ] Sidepanel - [ ] NG Animations - [ ] Numbers",0,missing ng stories a few ng stories need to be added user avatar user display tooltips sidepanel ng animations numbers,0 2022,11272610534.0,IssuesEvent,2020-01-14 15:10:20,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,reopened,a8n: Upsert ChangesetEvents from Bitbucket Server webhooks,automation bitbucket,"This is the Automation/Sourcegraph side of [RFC 45 - Extend Bitbucket Server Plugin](https://docs.google.com/document/d/1I3Aq1WSUh42BP8KvKr6AlmuCfo8tXYtJu40WzdNT6go/edit). With the webhooks added in https://github.com/sourcegraph/bitbucket-server-plugin/pull/10 and their documentation added in https://github.com/sourcegraph/bitbucket-server-plugin/pull/11 we can add support for Bitbucket Server webhooks to Automation. What we need to do is the Bitbucket Server equivalent of the GitHub webhook implementation: https://github.com/sourcegraph/sourcegraph/pull/5913 ",1.0,"a8n: Upsert ChangesetEvents from Bitbucket Server webhooks - This is the Automation/Sourcegraph side of [RFC 45 - Extend Bitbucket Server Plugin](https://docs.google.com/document/d/1I3Aq1WSUh42BP8KvKr6AlmuCfo8tXYtJu40WzdNT6go/edit). With the webhooks added in https://github.com/sourcegraph/bitbucket-server-plugin/pull/10 and their documentation added in https://github.com/sourcegraph/bitbucket-server-plugin/pull/11 we can add support for Bitbucket Server webhooks to Automation. What we need to do is the Bitbucket Server equivalent of the GitHub webhook implementation: https://github.com/sourcegraph/sourcegraph/pull/5913 ",1, upsert changesetevents from bitbucket server webhooks this the automation sourcegraph side of with the webhooks added in and their documentation added in we can add support for bitbucket server webhooks to automation what we need to do is the bitbucket server equivalent of the github webhook implementation ,1 22059,3932782215.0,IssuesEvent,2016-04-25 16:51:25,cea-hpc/clustershell,https://api.github.com/repos/cea-hpc/clustershell,closed,Use nose instead unittest for testing,Tests WIP,"Use python-nose to enhance the test system. Break things! hm. rewrite, fix and run tests! URL: http://somethingaboutorange.com/mrl/projects/nose/0.11.2/",1.0,"Use nose instead unittest for testing - Use python-nose to enhance the test system. Break things! hm. rewrite, fix and run tests! URL: http://somethingaboutorange.com/mrl/projects/nose/0.11.2/",0,use nose instead unittest for testing use python nose to enhance test system break things hm rewrite fix and run tests url ,0 718,7880232372.0,IssuesEvent,2018-06-26 15:23:21,humphd/next,https://api.github.com/repos/humphd/next,opened,Deal with lock file changes polluting commits,automation,"Every time we `npm install` it updates `package-lock.json` and `npm-shrinkwrap.json` files throughout the tree. This is annoying, and people not familiar with how these files work will often mess it up, rebases and merges get tricky due to merge conflicts, etc. Let's see if we can improve this. 
We could try [disabling them](https://github.com/substack/tape/commit/df48bfae19d8ba4b48055dacac8b81912b8887f2). Since we do `npm install` in various places, we'll need to figure out how to pass that down, or have it get read from our root dir.",1.0,"Deal with lock file changes polluting commits - Every time we `npm install` it updates `package-lock.json` and `npm-shrinkwrap.json` files throughout the tree. This is annoying, and people not familiar with how these files work will often mess it up, rebases and merges get tricky due to merge conflicts, etc. Let's see if we can improve this. We could try [disabling them](https://github.com/substack/tape/commit/df48bfae19d8ba4b48055dacac8b81912b8887f2). Since we do `npm install` in various places, we'll need to figure out how to pass that down, or have it get read from our root dir.",1,deal with lock file changes polluting commits every time we npm install it updates package lock json and npm shrinkwrap json files throughout the tree this is annoying and people not familiar with how these files work will often mess it up rebases and merges get tricky due to merge conflicts etc let s see if we can improve this we could try since we do npm install in various places we ll need to figure out how to pass that down or have it get read from our root dir ,1 22059,3932782215.0,IssuesEvent,2016-04-25 16:51:25,cea-hpc/clustershell,https://api.github.com/repos/cea-hpc/clustershell,closed,Use nose instead unittest for testing,Tests WIP,"Use python-nose to enhance test system. Break things! hm. rewrite, fix and run tests! URL: http://somethingaboutorange.com/mrl/projects/nose/0.11.2/",1.0,"Use nose instead unittest for testing - Use python-nose to enhance test system. Break things! hm. rewrite, fix and run tests! URL: http://somethingaboutorange.com/mrl/projects/nose/0.11.2/",0,use nose instead unittest for testing use python nose to enhance test system break things hm rewrite fix and run tests url ,0 27469,6875086425.0,IssuesEvent,2017-11-19 09:23:53,joomla/joomla-cms,https://api.github.com/repos/joomla/joomla-cms,closed,Calling JUserHelper::setUserGroups deletes user values from #__fields_values,No Code Attached Yet,"### Steps to reproduce the issue I'm calling JUserHelper::setUserGroups from within a plugin. ### Expected result Users group membership changes and no other modifications are made to the user. ### Actual result Every value stored in the #__fields_values table for that user are deleted. ### System information (as much as possible) J3.8.2 PHP7.1 ### Additional comments I had a feeling this had something to do with the fields system plugin, so I took a quick peek and noticed that the fields aren't updated if there is an input variable ""task"" with a value of ""activate"", ""block"" or ""unblock"". So as a test, I set ""task"" = ""unblock"" and the field values remain. Fortunately, this doesn't actually unblock a blocked user, so I can work around this issue. Without looking too closely, I believe this is a problem in the fields plugin, onContentAfterSave event. I believe that 2 lines of code would fix this. Wrapping ```$model->setFieldValue($field->id, $item->id, $value);``` with ```if(!is_null($value)) { }``` would probably do it. ",1.0,"Calling JUserHelper::setUserGroups deletes user values from #__fields_values - ### Steps to reproduce the issue I'm calling JUserHelper::setUserGroups from within a plugin. ### Expected result Users group membership changes and no other modifications are made to the user. 
### Actual result Every value stored in the #__fields_values table for that user are deleted. ### System information (as much as possible) J3.8.2 PHP7.1 ### Additional comments I had a feeling this had something to do with the fields system plugin, so I took a quick peek and noticed that the fields aren't updated if there is an input variable ""task"" with a value of ""activate"", ""block"" or ""unblock"". So as a test, I set ""task"" = ""unblock"" and the field values remain. Fortunately, this doesn't actually unblock a blocked user, so I can work around this issue. Without looking too closely, I believe this is a problem in the fields plugin, onContentAfterSave event. I believe that 2 lines of code would fix this. Wrapping ```$model->setFieldValue($field->id, $item->id, $value);``` with ```if(!is_null($value)) { }``` would probably do it. ",0,calling juserhelper setusergroups deletes user values from fields values steps to reproduce the issue i m calling juserhelper setusergroups from within a plugin expected result users group membership changes and no other modifications are made to the user actual result every value stored in the fields values table for that user are deleted system information as much as possible additional comments i had a feeling this had something to do with the fields system plugin so i took a quick peek and noticed that the fields aren t updated if there is an input variable task with a value of activate block or unblock so as a test i set task unblock and the field values remain fortunately this doesn t actually unblock a blocked user so i can work around this issue without looking too closely i believe this is a problem in the fields plugin oncontentaftersave event i believe that lines of code would fix this wrapping model setfieldvalue field id item id value with if is null value would probably do it ,0 297256,9166292258.0,IssuesEvent,2019-03-02 01:58:50,microbuilder/zsensor,https://api.github.com/repos/microbuilder/zsensor,opened,Initial channel list,API area:Channels area:Raw Sensor Data area:SI Sensor Data priority:high,"Define an initial list of channels to have something to test any API ideas against. ## Base channel list As a minimum it should encompass the following (actual names may vary): - [ ] Raw data (mandatory for any driver) - [ ] Generic 3-Vector - [ ] Acceleration (accelerometers) - [ ] Magnetic Field (magnetometers) - [ ] Angular Momentum (gyroscopes) - [ ] Temperature - [ ] Quaternions - [ ] Euler Angles The last two are *synthesized* or *inferred* channels that are constructed based on one or more other channels, but it's worth considering early on the three tiers of: 1. Raw sensor data 2. Processed, standardised sensor data 3. Synthesized or inferred sensor data (sensor fusion, etc.) ## 3rd party APIs The following 3rd party APIs may be useful defining and refining a channel list: - Android: [sensors-base.h](https://android.googlesource.com/platform/hardware/libhardware/+/master/include/hardware/sensors-base.h) ",1.0,"Initial channel list - Define an initial list of channels to have something to test any API ideas against. 
## Base channel list As a minimum it should encompass the following (actual names may vary): - [ ] Raw data (mandatory for any driver) - [ ] Generic 3-Vector - [ ] Acceleration (accelerometers) - [ ] Magnetic Field (magnetometers) - [ ] Angular Momentum (gyroscopes) - [ ] Temperature - [ ] Quaternions - [ ] Euler Angles The last two are *synthesized* or *inferred* channels that are constructed based on one or more other channels, but it's worth considering early on the three tiers of: 1. Raw sensor data 2. Processed, standardised sensor data 3. Synthesized or inferred sensor data (sensor fusion, etc.) ## 3rd party APIs The following 3rd party APIs may be useful defining and refining a channel list: - Android: [sensors-base.h](https://android.googlesource.com/platform/hardware/libhardware/+/master/include/hardware/sensors-base.h) ",0,initial channel list define an initial list of channels to have something to test any api ideas against base channel list as a minimum it should encompass the following actual names may vary raw data mandatory for any driver generic vector acceleration accelerometers magnetic field magnetometers angular momentum gyroscopes temperature quaternions euler angles the last two are synthesized or inferred channels that are constructed based on one or more other channels but it s worth considering early on the three tiers of raw sensor data processed standardised sensor data synthesized or inferred sensor data sensor fusion etc party apis the following party apis may be useful defining and refining a channel list android ,0 3000,12963790950.0,IssuesEvent,2020-07-20 19:22:34,willowtreeapps/vocable-ios,https://api.github.com/repos/willowtreeapps/vocable-ios,opened,Automation - Test My Sayings pagination on Home screen,automation,"**Acceptance Criteria**: Given I am on the Home screen and I add enough sayings to fill up the screen Then the pagination button shall become enabled and take me to the next page of My Sayings and the pagination text displays the new number of pages **Design**: https://www.figma.com/file/0XCO0RWFIa2ckp81SXoqNSLa/Vocable?node-id=1874%3A1 Note: these tests should be run on the iPhone simulator",1.0,"Automation - Test My Sayings pagination on Home screen - **Acceptance Criteria**: Given I am on the Home screen and I add enough sayings to fill up the screen Then the pagination button shall become enabled and take me to the next page of My Sayings and the pagination text displays the new number of pages **Design**: https://www.figma.com/file/0XCO0RWFIa2ckp81SXoqNSLa/Vocable?node-id=1874%3A1 Note: these tests should be run on the iPhone simulator",1,automation test my sayings pagination on home screen acceptance criteria given i am on the home screen and i add enough sayings to fill up the screen then the pagination button shall become enabled and take me to the next page of my sayings and the pagination text displays the new number of pages design note these tests should be run on the iphone simulator,1 251809,8027886798.0,IssuesEvent,2018-07-27 10:41:42,telstra/open-kilda,https://api.github.com/repos/telstra/open-kilda,opened,On flow creation Kilda allocates resources before validating flow request,bug priority/2-high,"1. Create any flow. It should be successfully created 2. Try creating another flow with the same name. Observe error. 3 ^Repeat previous step around 2000 times. **Expected:** Nothing serious happen. User always receives error that system is unable to create a flow with same name. 
**Actual:** Kilda is no longer able to create ANY flow. Storm (CrudBold) throws error with `Could not allocate resource: pool is full` -> `java.lang.ArrayIndexOutOfBoundsException`. Note: Restart flow topology in order to recover the system. [poolIsFullStacktrace.txt](https://github.com/telstra/open-kilda/files/2235482/poolIsFullStacktrace.txt) ",1.0,"On flow creation Kilda allocates resources before validating flow request - 1. Create any flow. It should be successfully created 2. Try creating another flow with the same name. Observe error. 3 ^Repeat previous step around 2000 times. **Expected:** Nothing serious happen. User always receives error that system is unable to create a flow with same name. **Actual:** Kilda is no longer able to create ANY flow. Storm (CrudBold) throws error with `Could not allocate resource: pool is full` -> `java.lang.ArrayIndexOutOfBoundsException`. Note: Restart flow topology in order to recover the system. [poolIsFullStacktrace.txt](https://github.com/telstra/open-kilda/files/2235482/poolIsFullStacktrace.txt) ",0,on flow creation kilda allocates resources before validating flow request create any flow it should be successfully created try creating another flow with the same name observe error repeat previous step around times expected nothing serious happen user always receives error that system is unable to create a flow with same name actual kilda is no longer able to create any flow storm crudbold throws error with could not allocate resource pool is full java lang arrayindexoutofboundsexception note restart flow topology in order to recover the system ,0 28202,6965633196.0,IssuesEvent,2017-12-09 08:52:30,triplea-game/triplea,https://api.github.com/repos/triplea-game/triplea,closed,Please do not depend on org.json:json which is non-free software,category: code P1,"Please do not depend on org:json:json which is non-free software. It contains the infamous license clause ""The Software shall be used for Good, not Evil."" This dependency could be replaced by existing free software. https://wiki.debian.org/qa.debian.org/jsonevil",1.0,"Please do not depend on org.json:json which is non-free software - Please do not depend on org:json:json which is non-free software. It contains the infamous license clause ""The Software shall be used for Good, not Evil."" This dependency could be replaced by existing free software. https://wiki.debian.org/qa.debian.org/jsonevil",0,please do not depend on org json json which is non free software please do not depend on org json json which is non free software it contains the infamous license clause the software shall be used for good not evil this dependency could be replaced by existing free software ,0 203259,15875896886.0,IssuesEvent,2021-04-09 07:39:40,zkat/big-brain,https://api.github.com/repos/zkat/big-brain,opened,Write guide,documentation help wanted,There should be a step-by-step guide on how to get started with big-brain and do incrementally more complex things.,1.0,Write guide - There should be a step-by-step guide on how to get started with big-brain and do incrementally more complex things.,0,write guide there should be a step by step guide on how to get started with big brain and do incrementally more complex things ,0 289446,24989743400.0,IssuesEvent,2022-11-02 17:42:25,tesshucom/jpsonic,https://api.github.com/repos/tesshucom/jpsonic,opened,Need a Docker stress test,in: test in: docker for : ported-from-airsonic,"Docker considerations. 
Related airsonic/airsonic#1473, #1747 Jpsonic eliminated several memory overflow risks. At this time, it is unknown whether Jpsonic can reproduce it. If we can't reproduce it, finding out the cause is proof of the devil. Therefore, we will conduct an equivalent stress test and if there are no problems, it will be closed. By the way, as a test with Jpsonic standalone, 400,000 songs were scanned, and it was confirmed that no memory overflow or resource leak occurred. No slowdowns. (However, when compared with 10,000 songs, depending on the environment of 400,000, there must be cases where IO of hardware-side deteriorates extremely. This means that there is no slowdown when the influence of the external environment is not taken into account.) ",1.0,"Need a Docker stress test - Docker considerations. Related airsonic/airsonic#1473, #1747 Jpsonic eliminated several memory overflow risks. At this time, it is unknown whether Jpsonic can reproduce it. If we can't reproduce it, finding out the cause is proof of the devil. Therefore, we will conduct an equivalent stress test and if there are no problems, it will be closed. By the way, as a test with Jpsonic standalone, 400,000 songs were scanned, and it was confirmed that no memory overflow or resource leak occurred. No slowdowns. (However, when compared with 10,000 songs, depending on the environment of 400,000, there must be cases where IO of hardware-side deteriorates extremely. This means that there is no slowdown when the influence of the external environment is not taken into account.) ",0,need a docker stress test docker considerations related airsonic airsonic jpsonic eliminated several memory overflow risks at this time it is unknown whether jpsonic can reproduce it if we can t reproduce it finding out the cause is proof of the devil therefore we will conduct an equivalent stress test and if there are no problems it will be closed by the way as a test with jpsonic standalone songs were scanned and it was confirmed that no memory overflow or resource leak occurred no slowdowns however when compared with songs depending on the environment of there must be cases where io of hardware side deteriorates extremely this means that there is no slowdown when the influence of the external environment is not taken into account ,0 245063,26506667484.0,IssuesEvent,2023-01-18 14:19:21,keepthatworktoyourself/wombat,https://api.github.com/repos/keepthatworktoyourself/wombat,closed,CLI Compatibility: Switch from bcrypt to scrypt for hashing,enhancement api cli security,"Found while compile the api with rollup for distribution in the CLI: - bcrypt lib uses node-pre-gyp - node-pre-gyp uses the `__dirname` node symbol, which is not valid when node is running in `""type"": ""module""` mode - rollup does compile a bundle, but then when the bundle is run node throws an error during execution scrypt is generally considered as secure as bcrypt or more secure ([depending on implementation details](https://security.stackexchange.com/questions/4781/do-any-security-experts-recommend-bcrypt-for-password-storage)), and scrypt has the advantage of being supported in the built-in node:crypto library. 
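For a sense of scale, the hashing scheme itself is tiny. Here is a minimal sketch using scrypt from Python's standard-library hashlib — illustration only, since this project would use the analogous functions in node:crypto; the cost parameters (n=2**14, r=8, p=1) and key/salt sizes below are common defaults I am assuming, not values the project has settled on:

```python
import hashlib
import hmac
import os

# Illustrative parameters: n=2**14, r=8, p=1 needs 128*n*r = 16 MiB of memory,
# so pass an explicit maxmem comfortably above that rather than relying on
# OpenSSL's default cap.
SCRYPT_PARAMS = dict(n=2**14, r=8, p=1, maxmem=64 * 1024**2, dklen=32)

def hash_password(password: str):
    """Return (salt, key); both are stored alongside the user record."""
    salt = os.urandom(16)
    key = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return salt, key

def verify_password(password: str, salt: bytes, key: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return hmac.compare_digest(candidate, key)  # constant-time comparison

salt, key = hash_password("hunter2")
assert verify_password("hunter2", salt, key)
assert not verify_password("wrong", salt, key)
```

Whichever parameters get picked, storing them next to the hash would let them be raised later without invalidating existing credentials.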
Therefore, switch password hashing to scrypt and remove the bcrypt dependency.",True,"CLI Compatibility: Switch from bcrypt to scrypt for hashing - Found while compile the api with rollup for distribution in the CLI: - bcrypt lib uses node-pre-gyp - node-pre-gyp uses the `__dirname` node symbol, which is not valid when node is running in `""type"": ""module""` mode - rollup does compile a bundle, but then when the bundle is run node throws an error during execution scrypt is generally considered as secure as bcrypt or more secure ([depending on implementation details](https://security.stackexchange.com/questions/4781/do-any-security-experts-recommend-bcrypt-for-password-storage)), and scrypt has the advantage of being supported in the built-in node:crypto library. Therefore, switch password hashing to scrypt and remove the bcrypt dependency.",0,cli compatibility switch from bcrypt to scrypt for hashing found while compile the api with rollup for distribution in the cli bcrypt lib uses node pre gyp node pre gyp uses the dirname node symbol which is not valid when node is running in type module mode rollup does compile a bundle but then when the bundle is run node throws an error during execution scrypt is generally considered as secure as bcrypt or more secure and scrypt has the advantage of being supported in the built in node crypto library therefore switch password hashing to scrypt and remove the bcrypt dependency ,0 5236,18895444685.0,IssuesEvent,2021-11-15 17:21:14,IBM/FHIR,https://api.github.com/repos/IBM/FHIR,closed,Website updates,automation security,"**Is your feature request related to a problem? Please describe.** Website updates The https://ibm.github.io/FHIR site uses gatsby-theme-carbon@2.0.0 https://gatsby.carbondesignsystem.com/getting-started ",1.0,"Website updates - **Is your feature request related to a problem? Please describe.** Website updates The https://ibm.github.io/FHIR site uses gatsby-theme-carbon@2.0.0 https://gatsby.carbondesignsystem.com/getting-started ",1,website updates is your feature request related to a problem please describe website updates the site uses gatsby theme carbon ,1 415994,12138171620.0,IssuesEvent,2020-04-23 16:48:51,decentraland/explorer,https://api.github.com/repos/decentraland/explorer,closed,Scene textures stutter when loading,bug high priority,"Several scenes have their textures stuttering to invisible while they load, when they finish loading sometimes they end up with part of the model invisible. ![image.png](https://images.zenhubusercontent.com/5d9b940e491c060001c8647b/0030546a-a6f2-4c4d-8659-21a78c9bcc11) See here, for example, where models don't finish loading well https://play.decentraland.org/?position=-83%2C64 If you load this scene at these coordinates directly, the stuttering might happen while you're seeing the black screen, but to see it you can spawn about 10 parcels away and then walk towards here, and you'll see the scene loading with the stuttering effect. Or move around the vegas district and you'll see most buildings are affected. Discussing this w Brian, he says that probably the material from the green blockers is being accidentally applied to these models, that's why it stutters. I sometimes also experience this effect in preview when doing a hot reload after editing the scene's code ",1.0,"Scene textures stutter when loading - Several scenes have their textures stuttering to invisible while they load, when they finish loading sometimes they end up with part of the model invisible. 
![image.png](https://images.zenhubusercontent.com/5d9b940e491c060001c8647b/0030546a-a6f2-4c4d-8659-21a78c9bcc11) See here, for example, where models don't finish loading well https://play.decentraland.org/?position=-83%2C64 If you load this scene at these coordinates directly, the stuttering might happen while you're seeing the black screen, but to see it you can spawn about 10 parcels away and then walk towards here, and you'll see the scene loading with the stuttering effect. Or move around the vegas district and you'll see most buildings are affected. Discussing this w Brian, he says that probably the material from the green blockers is being accidentally applied to these models, that's why it stutters. I sometimes also experience this effect in preview when doing a hot reload after editing the scene's code ",0,scene textures stutter when loading several scenes have their textures stuttering to invisible while they load when they finish loading sometimes they end up with part of the model invisible see here for example where models don t finish loading well if you load this scene at these coordinates directly the stuttering might happen while you re seeing the black screen but to see it you can spawn about parcels away and then walk towards here and you ll see the scene loading with the stuttering effect or move around the vegas district and you ll see most buildings are affected discussing this w brian he says that probably the material from the green blockers is being accidentally applied to these models that s why it stutters i sometimes also experience this effect in preview when doing a hot reload after editing the scene s code ,0 7637,25317864363.0,IssuesEvent,2022-11-17 23:37:05,boto/boto3,https://api.github.com/repos/boto/boto3,closed,CostExplorer Client - ConnectionError: No such file or directory ,enhancement feature-request confusing-error automation-exempt p3,"When running a test script trying to pull any data from the CostExplorer client, a not found error is encountered. I've tried this for all of the CostExplorer functions as outlined in [the documentation](http://boto3.readthedocs.io/en/latest/reference/services/ce.html), and received the same error for them all. 
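One thing worth ruling out first — an assumption on my part, not something I have confirmed for this environment: the Cost Explorer API is served from a single us-east-1 endpoint, so a client constructed under a different configured region can fail at the socket layer before any HTTP request is made. A sketch that pins the region and supplies the parameters `get_cost_and_usage` expects (the dates and metric are placeholders):

```python
import boto3

# Cost Explorer lives behind one global endpoint; pinning the region avoids
# trying to resolve a non-existent ce.<region> hostname. (Assumed cause,
# see above.)
client = boto3.client("ce", region_name="us-east-1")

response = client.get_cost_and_usage(
    TimePeriod={"Start": "2018-01-01", "End": "2018-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)
print(response["ResultsByTime"])
```

For comparison, the failing script is below.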
Sample code: ```python import boto3 client = boto3.client('ce') print(client.get_cost_and_usage()) ``` Which results in this error: ``` Traceback (most recent call last): File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py"", line 544, in urlopen body=body, headers=headers) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py"", line 341, in _make_request self._validate_conn(conn) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py"", line 761, in _validate_conn conn.connect() File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connection.py"", line 204, in connect conn = self._new_conn() File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connection.py"", line 134, in _new_conn (self.host, self.port), self.timeout, **extra_kw) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/util/connection.py"", line 64, in create_connection for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM): File ""/usr/lib/python3.5/socket.py"", line 732, in getaddrinfo for res in _socket.getaddrinfo(host, port, family, type, proto, flags): FileNotFoundError: [Errno 2] No such file or directory During handling of the above exception, another exception occurred: Traceback (most recent call last): File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/adapters.py"", line 370, in send timeout=timeout File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py"", line 597, in urlopen _stacktrace=sys.exc_info()[2]) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/util/retry.py"", line 245, in increment raise six.reraise(type(error), error, _stacktrace) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/packages/six.py"", line 309, in reraise raise value.with_traceback(tb) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py"", line 544, in urlopen body=body, headers=headers) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py"", line 341, in _make_request self._validate_conn(conn) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py"", line 761, in _validate_conn conn.connect() File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connection.py"", line 204, in connect conn = self._new_conn() File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connection.py"", line 134, in _new_conn (self.host, self.port), self.timeout, **extra_kw) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/util/connection.py"", line 64, in create_connection for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM): File ""/usr/lib/python3.5/socket.py"", line 732, in getaddrinfo for res in _socket.getaddrinfo(host, port, 
family, type, proto, flags): botocore.vendored.requests.packages.urllib3.exceptions.ProtocolError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File ""test.py"", line 5, in print(client.get_cost_and_usage()) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/client.py"", line 317, in _api_call return self._make_api_call(operation_name, kwargs) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/client.py"", line 602, in _make_api_call operation_model, request_dict) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/endpoint.py"", line 143, in make_request return self._send_request(request_dict, operation_model) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/endpoint.py"", line 172, in _send_request success_response, exception): File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/endpoint.py"", line 265, in _needs_retry caught_exception=caught_exception, request_dict=request_dict) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/hooks.py"", line 227, in emit return self._emit(event_name, kwargs) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/hooks.py"", line 210, in _emit response = handler(**kwargs) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/retryhandler.py"", line 183, in __call__ if self._checker(attempts, response, caught_exception): File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/retryhandler.py"", line 251, in __call__ caught_exception) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/retryhandler.py"", line 277, in _should_retry return self._checker(attempt_number, response, caught_exception) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/retryhandler.py"", line 317, in __call__ caught_exception) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/retryhandler.py"", line 223, in __call__ attempt_number, caught_exception) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/retryhandler.py"", line 359, in _check_caught_exception raise caught_exception File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/endpoint.py"", line 213, in _get_response proxies=self.proxies, timeout=self.timeout) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/sessions.py"", line 573, in send r = adapter.send(request, **kwargs) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/adapters.py"", line 415, in send raise ConnectionError(err, request=request) botocore.vendored.requests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory')) ``` Versions: boto3==1.5.24 botocore==1.8.38 awscli/xenial-updates,xenial-updates,now 1.11.13-1ubuntu1~16.04.0 all [installed]",1.0,"CostExplorer Client - ConnectionError: No such file or directory - When running a test script trying to pull any data from the CostExplorer client, a not found error is encountered. I've tried this for all of the CostExplorer functions as outlined in [the documentation](http://boto3.readthedocs.io/en/latest/reference/services/ce.html), and received the same error for them all. 
Sample code: ```python import boto3 client = boto3.client('ce') print(client.get_cost_and_usage()) ``` Which results in this error: ``` Traceback (most recent call last): File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py"", line 544, in urlopen body=body, headers=headers) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py"", line 341, in _make_request self._validate_conn(conn) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py"", line 761, in _validate_conn conn.connect() File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connection.py"", line 204, in connect conn = self._new_conn() File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connection.py"", line 134, in _new_conn (self.host, self.port), self.timeout, **extra_kw) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/util/connection.py"", line 64, in create_connection for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM): File ""/usr/lib/python3.5/socket.py"", line 732, in getaddrinfo for res in _socket.getaddrinfo(host, port, family, type, proto, flags): FileNotFoundError: [Errno 2] No such file or directory During handling of the above exception, another exception occurred: Traceback (most recent call last): File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/adapters.py"", line 370, in send timeout=timeout File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py"", line 597, in urlopen _stacktrace=sys.exc_info()[2]) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/util/retry.py"", line 245, in increment raise six.reraise(type(error), error, _stacktrace) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/packages/six.py"", line 309, in reraise raise value.with_traceback(tb) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py"", line 544, in urlopen body=body, headers=headers) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py"", line 341, in _make_request self._validate_conn(conn) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py"", line 761, in _validate_conn conn.connect() File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connection.py"", line 204, in connect conn = self._new_conn() File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/connection.py"", line 134, in _new_conn (self.host, self.port), self.timeout, **extra_kw) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/util/connection.py"", line 64, in create_connection for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM): File ""/usr/lib/python3.5/socket.py"", line 732, in getaddrinfo for res in _socket.getaddrinfo(host, port, 
family, type, proto, flags): botocore.vendored.requests.packages.urllib3.exceptions.ProtocolError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File ""test.py"", line 5, in print(client.get_cost_and_usage()) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/client.py"", line 317, in _api_call return self._make_api_call(operation_name, kwargs) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/client.py"", line 602, in _make_api_call operation_model, request_dict) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/endpoint.py"", line 143, in make_request return self._send_request(request_dict, operation_model) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/endpoint.py"", line 172, in _send_request success_response, exception): File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/endpoint.py"", line 265, in _needs_retry caught_exception=caught_exception, request_dict=request_dict) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/hooks.py"", line 227, in emit return self._emit(event_name, kwargs) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/hooks.py"", line 210, in _emit response = handler(**kwargs) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/retryhandler.py"", line 183, in __call__ if self._checker(attempts, response, caught_exception): File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/retryhandler.py"", line 251, in __call__ caught_exception) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/retryhandler.py"", line 277, in _should_retry return self._checker(attempt_number, response, caught_exception) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/retryhandler.py"", line 317, in __call__ caught_exception) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/retryhandler.py"", line 223, in __call__ attempt_number, caught_exception) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/retryhandler.py"", line 359, in _check_caught_exception raise caught_exception File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/endpoint.py"", line 213, in _get_response proxies=self.proxies, timeout=self.timeout) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/sessions.py"", line 573, in send r = adapter.send(request, **kwargs) File ""/home/redacted/rubysown/.local/lib/python3.5/site-packages/botocore/vendored/requests/adapters.py"", line 415, in send raise ConnectionError(err, request=request) botocore.vendored.requests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory')) ``` Versions: boto3==1.5.24 botocore==1.8.38 awscli/xenial-updates,xenial-updates,now 1.11.13-1ubuntu1~16.04.0 all [installed]",1,costexplorer client connectionerror no such file or directory when running a test script trying to pull any data from the costexplorer client a not found error is encountered i ve tried this for all of the costexplorer functions as outlined in and received the same error for them all sample code python import client client ce print client get cost and usage which results in this error traceback most recent call last file home redacted 
rubysown local lib site packages botocore vendored requests packages connectionpool py line in urlopen body body headers headers file home redacted rubysown local lib site packages botocore vendored requests packages connectionpool py line in make request self validate conn conn file home redacted rubysown local lib site packages botocore vendored requests packages connectionpool py line in validate conn conn connect file home redacted rubysown local lib site packages botocore vendored requests packages connection py line in connect conn self new conn file home redacted rubysown local lib site packages botocore vendored requests packages connection py line in new conn self host self port self timeout extra kw file home redacted rubysown local lib site packages botocore vendored requests packages util connection py line in create connection for res in socket getaddrinfo host port socket sock stream file usr lib socket py line in getaddrinfo for res in socket getaddrinfo host port family type proto flags filenotfounderror no such file or directory during handling of the above exception another exception occurred traceback most recent call last file home redacted rubysown local lib site packages botocore vendored requests adapters py line in send timeout timeout file home redacted rubysown local lib site packages botocore vendored requests packages connectionpool py line in urlopen stacktrace sys exc info file home redacted rubysown local lib site packages botocore vendored requests packages util retry py line in increment raise six reraise type error error stacktrace file home redacted rubysown local lib site packages botocore vendored requests packages packages six py line in reraise raise value with traceback tb file home redacted rubysown local lib site packages botocore vendored requests packages connectionpool py line in urlopen body body headers headers file home redacted rubysown local lib site packages botocore vendored requests packages connectionpool py line in make request self validate conn conn file home redacted rubysown local lib site packages botocore vendored requests packages connectionpool py line in validate conn conn connect file home redacted rubysown local lib site packages botocore vendored requests packages connection py line in connect conn self new conn file home redacted rubysown local lib site packages botocore vendored requests packages connection py line in new conn self host self port self timeout extra kw file home redacted rubysown local lib site packages botocore vendored requests packages util connection py line in create connection for res in socket getaddrinfo host port socket sock stream file usr lib socket py line in getaddrinfo for res in socket getaddrinfo host port family type proto flags botocore vendored requests packages exceptions protocolerror connection aborted filenotfounderror no such file or directory during handling of the above exception another exception occurred traceback most recent call last file test py line in print client get cost and usage file home redacted rubysown local lib site packages botocore client py line in api call return self make api call operation name kwargs file home redacted rubysown local lib site packages botocore client py line in make api call operation model request dict file home redacted rubysown local lib site packages botocore endpoint py line in make request return self send request request dict operation model file home redacted rubysown local lib site packages botocore endpoint py line in send request 
success response exception file home redacted rubysown local lib site packages botocore endpoint py line in needs retry caught exception caught exception request dict request dict file home redacted rubysown local lib site packages botocore hooks py line in emit return self emit event name kwargs file home redacted rubysown local lib site packages botocore hooks py line in emit response handler kwargs file home redacted rubysown local lib site packages botocore retryhandler py line in call if self checker attempts response caught exception file home redacted rubysown local lib site packages botocore retryhandler py line in call caught exception file home redacted rubysown local lib site packages botocore retryhandler py line in should retry return self checker attempt number response caught exception file home redacted rubysown local lib site packages botocore retryhandler py line in call caught exception file home redacted rubysown local lib site packages botocore retryhandler py line in call attempt number caught exception file home redacted rubysown local lib site packages botocore retryhandler py line in check caught exception raise caught exception file home redacted rubysown local lib site packages botocore endpoint py line in get response proxies self proxies timeout self timeout file home redacted rubysown local lib site packages botocore vendored requests sessions py line in send r adapter send request kwargs file home redacted rubysown local lib site packages botocore vendored requests adapters py line in send raise connectionerror err request request botocore vendored requests exceptions connectionerror connection aborted filenotfounderror no such file or directory versions botocore awscli xenial updates xenial updates now all ,1 5963,21770577482.0,IssuesEvent,2022-05-13 08:40:37,pingcap/tidb,https://api.github.com/repos/pingcap/tidb,closed,join get the incorrect result,type/bug sig/planner severity/critical found/automation,"## Bug Report Please answer these questions before submitting your issue. Thanks! ### 1. Minimal reproduce step (Required) ```sql CREATE TABLE t0 (a int, b int, c int); CREATE TABLE t1 (a int, b int, c int); CREATE TABLE t2 (a int, b int, c int); CREATE TABLE t3 (a int, b int, c int); CREATE TABLE t4 (a int, b int, c int); CREATE TABLE t5 (a int, b int, c int); CREATE TABLE t6 (a int, b int, c int); CREATE TABLE t7 (a int, b int, c int); CREATE TABLE t8 (a int, b int, c int); CREATE TABLE t9 (a int, b int, c int); INSERT INTO t0 VALUES (1,1,0), (1,2,0), (2,2,0); INSERT INTO t1 VALUES (1,3,0), (2,2,0), (3,2,0); INSERT INTO t2 VALUES (3,3,0), (4,2,0), (5,3,0); INSERT INTO t3 VALUES (1,2,0), (2,2,0); INSERT INTO t4 VALUES (3,2,0), (4,2,0); INSERT INTO t5 VALUES (3,1,0), (2,2,0), (3,3,0); INSERT INTO t6 VALUES (3,2,0), (6,2,0), (6,1,0); INSERT INTO t7 VALUES (1,1,0), (2,2,0); INSERT INTO t8 VALUES (0,2,0), (1,2,0); INSERT INTO t9 VALUES (1,1,0), (1,2,0), (3,3,0); SELECT t2.a,t2.b,t3.a,t3.b,t4.a,t4.b FROM (t3,t4) LEFT JOIN (t1,t2) ON t3.a=1 AND t3.b=t2.b AND t2.b=t4.b order by 1, 2, 3, 4, 5; explain SELECT t2.a,t2.b,t3.a,t3.b,t4.a,t4.b FROM (t3,t4) LEFT JOIN (t1,t2) ON t3.a=1 AND t3.b=t2.b AND t2.b=t4.b order by 1, 2, 3, 4, 5; ``` ### 2. What did you expect to see? 
(Required) ```sql MySQL root@127.0.0.1:test> SELECT t2.a,t2.b,t3.a,t3.b,t4.a,t4.b -> FROM (t3,t4) -> LEFT JOIN -> (t1,t2) -> ON t3.a=1 AND t3.b=t2.b AND t2.b=t4.b order by 1, 2, 3, 4, 5; +--------+--------+---+---+---+---+ | a | b | a | b | a | b | +--------+--------+---+---+---+---+ | | | 2 | 2 | 3 | 2 | | | | 2 | 2 | 4 | 2 | | 4 | 2 | 1 | 2 | 3 | 2 | | 4 | 2 | 1 | 2 | 3 | 2 | | 4 | 2 | 1 | 2 | 3 | 2 | | 4 | 2 | 1 | 2 | 4 | 2 | | 4 | 2 | 1 | 2 | 4 | 2 | | 4 | 2 | 1 | 2 | 4 | 2 | +--------+--------+---+---+---+---+ MySQL root@127.0.0.1:test> explain SELECT t2.a,t2.b,t3.a,t3.b,t4.a,t4.b -> FROM (t3,t4) -> LEFT JOIN -> (t1,t2) -> ON t3.a=1 AND t3.b=t2.b AND t2.b=t4.b order by 1, 2, 3, 4, 5; +-------------------------------+---------+-----------+---------------+----------------------------------------------------------------------------------------------------------+ | id | estRows | task | access object | operator info | +-------------------------------+---------+-----------+---------------+----------------------------------------------------------------------------------------------------------+ | Sort_13 | 15.00 | root | | test.t2.a, test.t2.b, test.t3.a, test.t3.b, test.t4.a | | └─Projection_15 | 15.00 | root | | test.t2.a, test.t2.b, test.t3.a, test.t3.b, test.t4.a, test.t4.b | | └─HashJoin_17 | 15.00 | root | | left outer join, equal:[eq(test.t3.b, test.t2.b) eq(test.t4.b, test.t2.b)], left cond:[eq(test.t3.a, 1)] | | ├─HashJoin_18(Build) | 4.00 | root | | CARTESIAN inner join | | │ ├─TableReader_23(Build) | 2.00 | root | | data:TableFullScan_22 | | │ │ └─TableFullScan_22 | 2.00 | cop[tikv] | table:t4 | keep order:false, stats:pseudo | | │ └─TableReader_21(Probe) | 2.00 | root | | data:TableFullScan_20 | | │ └─TableFullScan_20 | 2.00 | cop[tikv] | table:t3 | keep order:false, stats:pseudo | | └─HashJoin_26(Probe) | 8.99 | root | | CARTESIAN inner join | | ├─TableReader_29(Build) | 3.00 | root | | data:Selection_28 | | │ └─Selection_28 | 3.00 | cop[tikv] | | not(isnull(test.t2.b)) | | │ └─TableFullScan_27 | 3.00 | cop[tikv] | table:t2 | keep order:false, stats:pseudo | | └─TableReader_31(Probe) | 3.00 | root | | data:TableFullScan_30 | | └─TableFullScan_30 | 3.00 | cop[tikv] | table:t1 | keep order:false, stats:pseudo | +-------------------------------+---------+-----------+---------------+----------------------------------------------------------------------------------------------------------+ ``` ### 3. 
What did you see instead (Required) ```sql MySQL root@127.0.0.1:test> SELECT t2.a,t2.b,t3.a,t3.b,t4.a,t4.b -> FROM (t3,t4) -> LEFT JOIN -> (t1,t2) -> ON t3.a=1 AND t3.b=t2.b AND t2.b=t4.b order by 1, 2, 3, 4, 5; +---+---+---+---+---+---+ | a | b | a | b | a | b | +---+---+---+---+---+---+ | 4 | 2 | 1 | 2 | 3 | 2 | | 4 | 2 | 1 | 2 | 3 | 2 | | 4 | 2 | 1 | 2 | 3 | 2 | | 4 | 2 | 1 | 2 | 4 | 2 | | 4 | 2 | 1 | 2 | 4 | 2 | | 4 | 2 | 1 | 2 | 4 | 2 | +---+---+---+---+---+---+ MySQL root@127.0.0.1:test> explain SELECT t2.a,t2.b,t3.a,t3.b,t4.a,t4.b -> FROM (t3,t4) -> LEFT JOIN -> (t1,t2) -> ON t3.a=1 AND t3.b=t2.b AND t2.b=t4.b order by 1, 2, 3, 4, 5; +-----------------------------------+---------+-----------+---------------+---------------------------------------------------------------------------------+ | id | estRows | task | access object | operator info | +-----------------------------------+---------+-----------+---------------+---------------------------------------------------------------------------------+ | Sort_15 | 6.26 | root | | test.t2.a, test.t2.b, test.t3.a, test.t3.b, test.t4.a | | └─Projection_17 | 6.26 | root | | test.t2.a, test.t2.b, test.t3.a, test.t3.b, test.t4.a, test.t4.b | | └─Projection_18 | 6.26 | root | | test.t3.a, test.t3.b, test.t4.a, test.t4.b, test.t2.a, test.t2.b | | └─HashJoin_20 | 6.26 | root | | left outer join, equal:[eq(test.t4.b, test.t2.b)] | | ├─TableReader_22(Build) | 2.00 | root | | data:TableFullScan_21 | | │ └─TableFullScan_21 | 2.00 | cop[tikv] | table:t4 | keep order:false, stats:pseudo | | └─HashJoin_24(Probe) | 7.50 | root | | left outer join, equal:[eq(test.t3.b, test.t2.b)], left cond:[eq(test.t3.a, 1)] | | ├─TableReader_26(Build) | 2.00 | root | | data:TableFullScan_25 | | │ └─TableFullScan_25 | 2.00 | cop[tikv] | table:t3 | keep order:false, stats:pseudo | | └─HashJoin_29(Probe) | 8.99 | root | | CARTESIAN inner join | | ├─TableReader_32(Build) | 3.00 | root | | data:Selection_31 | | │ └─Selection_31 | 3.00 | cop[tikv] | | not(isnull(test.t2.b)) | | │ └─TableFullScan_30 | 3.00 | cop[tikv] | table:t2 | keep order:false, stats:pseudo | | └─TableReader_34(Probe) | 3.00 | root | | data:TableFullScan_33 | | └─TableFullScan_33 | 3.00 | cop[tikv] | table:t1 | keep order:false, stats:pseudo | +-----------------------------------+---------+-----------+---------------+---------------------------------------------------------------------------------+ ``` ### 4. What is your TiDB version? (Required) ```sql MySQL root@127.0.0.1:test> select tidb_version()\G ***************************[ 1. row ]*************************** tidb_version() | Release Version: v6.1.0-alpha-390-g98c31070d Edition: Community Git Commit Hash: 98c31070d95858ecf5f9ffb9d5e0dab3aca13d9c Git Branch: master UTC Build Time: 2022-05-12 01:58:56 GoVersion: go1.18 Race Enabled: false TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306 Check Table Before Drop: false ``` ",1.0,"join get the incorrect result - ## Bug Report Please answer these questions before submitting your issue. Thanks! ### 1. 
Minimal reproduce step (Required) ```sql CREATE TABLE t0 (a int, b int, c int); CREATE TABLE t1 (a int, b int, c int); CREATE TABLE t2 (a int, b int, c int); CREATE TABLE t3 (a int, b int, c int); CREATE TABLE t4 (a int, b int, c int); CREATE TABLE t5 (a int, b int, c int); CREATE TABLE t6 (a int, b int, c int); CREATE TABLE t7 (a int, b int, c int); CREATE TABLE t8 (a int, b int, c int); CREATE TABLE t9 (a int, b int, c int); INSERT INTO t0 VALUES (1,1,0), (1,2,0), (2,2,0); INSERT INTO t1 VALUES (1,3,0), (2,2,0), (3,2,0); INSERT INTO t2 VALUES (3,3,0), (4,2,0), (5,3,0); INSERT INTO t3 VALUES (1,2,0), (2,2,0); INSERT INTO t4 VALUES (3,2,0), (4,2,0); INSERT INTO t5 VALUES (3,1,0), (2,2,0), (3,3,0); INSERT INTO t6 VALUES (3,2,0), (6,2,0), (6,1,0); INSERT INTO t7 VALUES (1,1,0), (2,2,0); INSERT INTO t8 VALUES (0,2,0), (1,2,0); INSERT INTO t9 VALUES (1,1,0), (1,2,0), (3,3,0); SELECT t2.a,t2.b,t3.a,t3.b,t4.a,t4.b FROM (t3,t4) LEFT JOIN (t1,t2) ON t3.a=1 AND t3.b=t2.b AND t2.b=t4.b order by 1, 2, 3, 4, 5; explain SELECT t2.a,t2.b,t3.a,t3.b,t4.a,t4.b FROM (t3,t4) LEFT JOIN (t1,t2) ON t3.a=1 AND t3.b=t2.b AND t2.b=t4.b order by 1, 2, 3, 4, 5; ``` ### 2. What did you expect to see? (Required) ```sql MySQL root@127.0.0.1:test> SELECT t2.a,t2.b,t3.a,t3.b,t4.a,t4.b -> FROM (t3,t4) -> LEFT JOIN -> (t1,t2) -> ON t3.a=1 AND t3.b=t2.b AND t2.b=t4.b order by 1, 2, 3, 4, 5; +--------+--------+---+---+---+---+ | a | b | a | b | a | b | +--------+--------+---+---+---+---+ | | | 2 | 2 | 3 | 2 | | | | 2 | 2 | 4 | 2 | | 4 | 2 | 1 | 2 | 3 | 2 | | 4 | 2 | 1 | 2 | 3 | 2 | | 4 | 2 | 1 | 2 | 3 | 2 | | 4 | 2 | 1 | 2 | 4 | 2 | | 4 | 2 | 1 | 2 | 4 | 2 | | 4 | 2 | 1 | 2 | 4 | 2 | +--------+--------+---+---+---+---+ MySQL root@127.0.0.1:test> explain SELECT t2.a,t2.b,t3.a,t3.b,t4.a,t4.b -> FROM (t3,t4) -> LEFT JOIN -> (t1,t2) -> ON t3.a=1 AND t3.b=t2.b AND t2.b=t4.b order by 1, 2, 3, 4, 5; +-------------------------------+---------+-----------+---------------+----------------------------------------------------------------------------------------------------------+ | id | estRows | task | access object | operator info | +-------------------------------+---------+-----------+---------------+----------------------------------------------------------------------------------------------------------+ | Sort_13 | 15.00 | root | | test.t2.a, test.t2.b, test.t3.a, test.t3.b, test.t4.a | | └─Projection_15 | 15.00 | root | | test.t2.a, test.t2.b, test.t3.a, test.t3.b, test.t4.a, test.t4.b | | └─HashJoin_17 | 15.00 | root | | left outer join, equal:[eq(test.t3.b, test.t2.b) eq(test.t4.b, test.t2.b)], left cond:[eq(test.t3.a, 1)] | | ├─HashJoin_18(Build) | 4.00 | root | | CARTESIAN inner join | | │ ├─TableReader_23(Build) | 2.00 | root | | data:TableFullScan_22 | | │ │ └─TableFullScan_22 | 2.00 | cop[tikv] | table:t4 | keep order:false, stats:pseudo | | │ └─TableReader_21(Probe) | 2.00 | root | | data:TableFullScan_20 | | │ └─TableFullScan_20 | 2.00 | cop[tikv] | table:t3 | keep order:false, stats:pseudo | | └─HashJoin_26(Probe) | 8.99 | root | | CARTESIAN inner join | | ├─TableReader_29(Build) | 3.00 | root | | data:Selection_28 | | │ └─Selection_28 | 3.00 | cop[tikv] | | not(isnull(test.t2.b)) | | │ └─TableFullScan_27 | 3.00 | cop[tikv] | table:t2 | keep order:false, stats:pseudo | | └─TableReader_31(Probe) | 3.00 | root | | data:TableFullScan_30 | | └─TableFullScan_30 | 3.00 | cop[tikv] | table:t1 | keep order:false, stats:pseudo | 
+-------------------------------+---------+-----------+---------------+----------------------------------------------------------------------------------------------------------+ ``` ### 3. What did you see instead (Required) ```sql MySQL root@127.0.0.1:test> SELECT t2.a,t2.b,t3.a,t3.b,t4.a,t4.b -> FROM (t3,t4) -> LEFT JOIN -> (t1,t2) -> ON t3.a=1 AND t3.b=t2.b AND t2.b=t4.b order by 1, 2, 3, 4, 5; +---+---+---+---+---+---+ | a | b | a | b | a | b | +---+---+---+---+---+---+ | 4 | 2 | 1 | 2 | 3 | 2 | | 4 | 2 | 1 | 2 | 3 | 2 | | 4 | 2 | 1 | 2 | 3 | 2 | | 4 | 2 | 1 | 2 | 4 | 2 | | 4 | 2 | 1 | 2 | 4 | 2 | | 4 | 2 | 1 | 2 | 4 | 2 | +---+---+---+---+---+---+ MySQL root@127.0.0.1:test> explain SELECT t2.a,t2.b,t3.a,t3.b,t4.a,t4.b -> FROM (t3,t4) -> LEFT JOIN -> (t1,t2) -> ON t3.a=1 AND t3.b=t2.b AND t2.b=t4.b order by 1, 2, 3, 4, 5; +-----------------------------------+---------+-----------+---------------+---------------------------------------------------------------------------------+ | id | estRows | task | access object | operator info | +-----------------------------------+---------+-----------+---------------+---------------------------------------------------------------------------------+ | Sort_15 | 6.26 | root | | test.t2.a, test.t2.b, test.t3.a, test.t3.b, test.t4.a | | └─Projection_17 | 6.26 | root | | test.t2.a, test.t2.b, test.t3.a, test.t3.b, test.t4.a, test.t4.b | | └─Projection_18 | 6.26 | root | | test.t3.a, test.t3.b, test.t4.a, test.t4.b, test.t2.a, test.t2.b | | └─HashJoin_20 | 6.26 | root | | left outer join, equal:[eq(test.t4.b, test.t2.b)] | | ├─TableReader_22(Build) | 2.00 | root | | data:TableFullScan_21 | | │ └─TableFullScan_21 | 2.00 | cop[tikv] | table:t4 | keep order:false, stats:pseudo | | └─HashJoin_24(Probe) | 7.50 | root | | left outer join, equal:[eq(test.t3.b, test.t2.b)], left cond:[eq(test.t3.a, 1)] | | ├─TableReader_26(Build) | 2.00 | root | | data:TableFullScan_25 | | │ └─TableFullScan_25 | 2.00 | cop[tikv] | table:t3 | keep order:false, stats:pseudo | | └─HashJoin_29(Probe) | 8.99 | root | | CARTESIAN inner join | | ├─TableReader_32(Build) | 3.00 | root | | data:Selection_31 | | │ └─Selection_31 | 3.00 | cop[tikv] | | not(isnull(test.t2.b)) | | │ └─TableFullScan_30 | 3.00 | cop[tikv] | table:t2 | keep order:false, stats:pseudo | | └─TableReader_34(Probe) | 3.00 | root | | data:TableFullScan_33 | | └─TableFullScan_33 | 3.00 | cop[tikv] | table:t1 | keep order:false, stats:pseudo | +-----------------------------------+---------+-----------+---------------+---------------------------------------------------------------------------------+ ``` ### 4. What is your TiDB version? (Required) ```sql MySQL root@127.0.0.1:test> select tidb_version()\G ***************************[ 1. 
row ]*************************** tidb_version() | Release Version: v6.1.0-alpha-390-g98c31070d Edition: Community Git Commit Hash: 98c31070d95858ecf5f9ffb9d5e0dab3aca13d9c Git Branch: master UTC Build Time: 2022-05-12 01:58:56 GoVersion: go1.18 Race Enabled: false TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306 Check Table Before Drop: false ``` ",1,join get the incorrect result bug report please answer these questions before submitting your issue thanks minimal reproduce step required sql create table a int b int c int create table a int b int c int create table a int b int c int create table a int b int c int create table a int b int c int create table a int b int c int create table a int b int c int create table a int b int c int create table a int b int c int create table a int b int c int insert into values insert into values insert into values insert into values insert into values insert into values insert into values insert into values insert into values insert into values select a b a b a b from left join on a and b b and b b order by explain select a b a b a b from left join on a and b b and b b order by what did you expect to see required sql mysql root test select a b a b a b from left join on a and b b and b b order by a b a b a b mysql root test explain select a b a b a b from left join on a and b b and b b order by id estrows task access object operator info sort root test a test b test a test b test a └─projection root test a test b test a test b test a test b └─hashjoin root left outer join equal left cond ├─hashjoin build root cartesian inner join │ ├─tablereader build root data tablefullscan │ │ └─tablefullscan cop table keep order false stats pseudo │ └─tablereader probe root data tablefullscan │ └─tablefullscan cop table keep order false stats pseudo └─hashjoin probe root cartesian inner join ├─tablereader build root data selection │ └─selection cop not isnull test b │ └─tablefullscan cop table keep order false stats pseudo └─tablereader probe root data tablefullscan └─tablefullscan cop table keep order false stats pseudo what did you see instead required sql mysql root test select a b a b a b from left join on a and b b and b b order by a b a b a b mysql root test explain select a b a b a b from left join on a and b b and b b order by id estrows task access object operator info sort root test a test b test a test b test a └─projection root test a test b test a test b test a test b └─projection root test a test b test a test b test a test b └─hashjoin root left outer join equal ├─tablereader build root data tablefullscan │ └─tablefullscan cop table keep order false stats pseudo └─hashjoin probe root left outer join equal left cond ├─tablereader build root data tablefullscan │ └─tablefullscan cop table keep order false stats pseudo └─hashjoin probe root cartesian inner join ├─tablereader build root data selection │ └─selection cop not isnull test b │ └─tablefullscan cop table keep order false stats pseudo └─tablereader probe root data tablefullscan └─tablefullscan cop table keep order false stats pseudo what is your tidb version required sql mysql root test select tidb version g tidb version release version alpha edition community git commit hash git branch master utc build time goversion race enabled false tikv min version check table before drop false ,1 3870,14853839291.0,IssuesEvent,2021-01-18 10:29:26,burespe1/FRAME,https://api.github.com/repos/burespe1/FRAME,closed,Laying Out Data Flows in Diagrams,EA Development automation functional view on 
hold,"Is it possible to place “data flow” elements automatically between IN and OUT connectors and place all of these (IN connector, the dataflow and the OUT connector) linearly between functions and datastores or terminators while drawing functional view diagrams? ![image](https://user-images.githubusercontent.com/71774192/99529811-7b543b00-29b1-11eb-94ac-67cb357a7eba.png) ",1.0,"Laying Out Data Flows in Diagrams - Is it possible to place “data flow” elements automatically between IN and OUT connectors and place all of these (IN connector, the dataflow and the OUT connector) linearly between functions and datastores or terminators while drawing functional view diagrams? ![image](https://user-images.githubusercontent.com/71774192/99529811-7b543b00-29b1-11eb-94ac-67cb357a7eba.png) ",1,laying out data flows in diagrams is it possible to place “data flow” elements automatically between in and out connectors and place all of these in connector the dataflow and the out connector linearly between functions and datastores or terminators while drawing functional view diagrams ,1 23142,10852236775.0,IssuesEvent,2019-11-13 12:23:56,elikkatzgit/TestingPOM,https://api.github.com/repos/elikkatzgit/TestingPOM,opened,CVE-2018-11784 (Medium) detected in tomcat-catalina-7.0.42.jar,security vulnerability,"## CVE-2018-11784 - Medium Severity Vulnerability
Vulnerable Library - tomcat-catalina-7.0.42.jar

Tomcat Servlet Engine Core Classes and Standard implementations

Library home page: http://tomcat.apache.org/

Dependency Hierarchy:
- :x: **tomcat-catalina-7.0.42.jar** (Vulnerable Library)

Found in HEAD commit: 630f758cd843b129965c1658c5baf81a8deff375

Vulnerability Details

When the default servlet in Apache Tomcat versions 9.0.0.M1 to 9.0.11, 8.5.0 to 8.5.33 and 7.0.23 to 7.0.90 returned a redirect to a directory (e.g. redirecting to '/foo/' when the user requested '/foo') a specially crafted URL could be used to cause the redirect to be generated to any URI of the attackers choice.

Publish Date: 2018-10-04

URL: CVE-2018-11784

CVSS 3 Score Details (4.3)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: Low
  - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11784

Release Date: 2018-10-04

Fix Resolution: 9.0.12,8.5.34,7.0.91

",True,"CVE-2018-11784 (Medium) detected in tomcat-catalina-7.0.42.jar - ## CVE-2018-11784 - Medium Severity Vulnerability
Vulnerable Library - tomcat-catalina-7.0.42.jar

Tomcat Servlet Engine Core Classes and Standard implementations

Library home page: http://tomcat.apache.org/

Dependency Hierarchy:
- :x: **tomcat-catalina-7.0.42.jar** (Vulnerable Library)

Found in HEAD commit: 630f758cd843b129965c1658c5baf81a8deff375

Vulnerability Details

When the default servlet in Apache Tomcat versions 9.0.0.M1 to 9.0.11, 8.5.0 to 8.5.33 and 7.0.23 to 7.0.90 returned a redirect to a directory (e.g. redirecting to '/foo/' when the user requested '/foo') a specially crafted URL could be used to cause the redirect to be generated to any URI of the attackers choice.

Publish Date: 2018-10-04

URL: CVE-2018-11784

CVSS 3 Score Details (4.3)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: Low
  - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11784

Release Date: 2018-10-04

Fix Resolution: 9.0.12,8.5.34,7.0.91

",0,cve medium detected in tomcat catalina jar cve medium severity vulnerability vulnerable library tomcat catalina jar tomcat servlet engine core classes and standard implementations library home page a href dependency hierarchy x tomcat catalina jar vulnerable library found in head commit a href vulnerability details when the default servlet in apache tomcat versions to to and to returned a redirect to a directory e g redirecting to foo when the user requested foo a specially crafted url could be used to cause the redirect to be generated to any uri of the attackers choice publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ,0 154509,13551576942.0,IssuesEvent,2020-09-17 11:19:35,MTG/essentia,https://api.github.com/repos/MTG/essentia,opened,Add QA scripts and improve docs for Key and KeyExtractor,algorithms QA documentation,"Add QA for Key and KeyExtractor for evaluation on the existing key ground truth datasets. Update the DOCs of these algorithms with the recommended settings for the prepreprocessing (missing to add high-pass filtering?). Test impact of - high-pass filtering - detuning correction - spectral whitening - different `profiles` on accuracy on different datasets. We can then write a blog post presenting this results and conclusions for the recommended settings.",1.0,"Add QA scripts and improve docs for Key and KeyExtractor - Add QA for Key and KeyExtractor for evaluation on the existing key ground truth datasets. Update the DOCs of these algorithms with the recommended settings for the prepreprocessing (missing to add high-pass filtering?). Test impact of - high-pass filtering - detuning correction - spectral whitening - different `profiles` on accuracy on different datasets. We can then write a blog post presenting this results and conclusions for the recommended settings.",0,add qa scripts and improve docs for key and keyextractor add qa for key and keyextractor for evaluation on the existing key ground truth datasets update the docs of these algorithms with the recommended settings for the prepreprocessing missing to add high pass filtering test impact of high pass filtering detuning correction spectral whitening different profiles on accuracy on different datasets we can then write a blog post presenting this results and conclusions for the recommended settings ,0 4724,17356252195.0,IssuesEvent,2021-07-29 14:42:58,CDCgov/prime-field-teams,https://api.github.com/repos/CDCgov/prime-field-teams,opened,Research use of Valuesets for some fields.,sender-automation,"During RS Schema development for Reddy-FMC, Joel will look at the use of setting up standard values for Sex, Race, Ethnicity, Y/N/U AOEs, Pregancy AOE and Test Result Text, so we'll have one place maintain all possible values instead of maintaining these altValues in each individual Schema.",1.0,"Research use of Valuesets for some fields. 
- During RS Schema development for Reddy-FMC, Joel will look at the use of setting up standard values for Sex, Race, Ethnicity, Y/N/U AOEs, Pregnancy AOE and Test Result Text, so we'll have one place to maintain all possible values instead of maintaining these altValues in each individual Schema.",1,research use of valuesets for some fields during rs schema development for reddy fmc joel will look at the use of setting up standard values for sex race ethnicity y n u aoes pregnancy aoe and test result text so we ll have one place to maintain all possible values instead of maintaining these altvalues in each individual schema ,1 2811,12626186949.0,IssuesEvent,2020-06-14 15:26:45,jcallaghan/home-assistant-config,https://api.github.com/repos/jcallaghan/home-assistant-config,opened,Add Home Assistant version sensor,core integration: automation integration: rest_command integration: version task: maintenance,"# Objective Create two sensors to track the current version and available version using the [Version integration](https://www.home-assistant.io/integrations/version/) and notify me when a new release is available. Integration into DevOps processes such as backup and GitHub issues.",1.0,"Add Home Assistant version sensor - # Objective Create two sensors to track the current version and available version using the [Version integration](https://www.home-assistant.io/integrations/version/) and notify me when a new release is available. Integration into DevOps processes such as backup and GitHub issues.",1,add home assistant version sensor objective create two sensors to track the current version and available version using the and notify me when a new release is available integration into devops processes such as backup and github issues ,1 334467,10141890621.0,IssuesEvent,2019-08-03 18:21:38,turt2live/matrix-dimension,https://api.github.com/repos/turt2live/matrix-dimension,closed,Policies: Fix GET /terms to support lack of auth,bug kind:policies parity:scalar priority,"Requires changes to subsequent consumers as well, including how the upstream ""do I need to sign anything"" logic works.",1.0,"Policies: Fix GET /terms to support lack of auth - Requires changes to subsequent consumers as well, including how the upstream ""do I need to sign anything"" logic works.",0,policies fix get terms to support lack of auth requires changes to subsequent consumers as well including how the upstream do i need to sign anything logic works ,0 55312,6469582690.0,IssuesEvent,2017-08-17 06:31:31,redmatrix/hubzilla,https://api.github.com/repos/redmatrix/hubzilla,closed,The clone changes the photo address in my post without permission.,retest please,"The clone changes the photo address in my post without permission. In the post I have a photo with /hub1/cloud/misterx/photo123456789.jpg And in the clone, the same post, I have /hubclone/cloud/misterx/photo123456789.jpg hub1 becomes hubclone, so in my clone I can no longer see the photo in the post. And of course the photo has not been copied into the ""hubclone"" , so there is no file inside.",1.0,"The clone changes the photo address in my post without permission. - The clone changes the photo address in my post without permission. In the post I have a photo with /hub1/cloud/misterx/photo123456789.jpg And in the clone, the same post, I have /hubclone/cloud/misterx/photo123456789.jpg hub1 becomes hubclone, so in my clone I can no longer see the photo in the post. 
And of course the photo has not been copied into the ""hubclone"" , so there is no file inside.",0,the clone changes the photo address in my post without permission the clone changes the photo address in my post without permission in the post i have a photo with cloud misterx jpg and in the clone the same post i have hubclone cloud misterx jpg becomes hubclone so in my clone i can not see more the photo in the post and of course the photo has not been copied into the hubclone so there is no file inside ,0 436468,30553684401.0,IssuesEvent,2023-07-20 10:11:42,Perl/perl5,https://api.github.com/repos/Perl/perl5,opened,"[doc] use v5.36, only partially enables warnings",Needs Triage documentation,"perl5360delta says: Furthermore, use v5.36 will also enable warnings as if you'd written use warnings. but the 'once' warning is an exception: perl -e'use v5.36; no strict; print $i' Use of uninitialized value $i in print at -e line 1. vs: perl -e'use v5.36; no strict; use warnings; print $i' Name ""main::i"" used only once: possible typo at -e line 1. Use of uninitialized value $i in print at -e line 1.",1.0,"[doc] use v5.36, only partially enables warnings - perl5360delta says: Furthermore, use v5.36 will also enable warnings as if you'd written use warnings. but the 'once' warning is an exception: perl -e'use v5.36; no strict; print $i' Use of uninitialized value $i in print at -e line 1. vs: perl -e'use v5.36; no strict; use warnings; print $i' Name ""main::i"" used only once: possible typo at -e line 1. Use of uninitialized value $i in print at -e line 1.",0, use only partially enables warnings says furthermore use will also enable warnings as if you d written use warnings but the once warning is an exception perl e use no strict print i use of uninitialized value i in print at e line vs perl e use no strict use warnings print i name main i used only once possible typo at e line use of uninitialized value i in print at e line ,0 366030,10807879849.0,IssuesEvent,2019-11-07 09:24:45,kubernetes/website,https://api.github.com/repos/kubernetes/website,closed,Improvement for k8s.io/docs/concepts/workloads/controllers/statefulset/,good first issue help wanted kind/feature language/en priority/backlog,"**This is a...** - [x] Feature Request - [ ] Bug Report **Problem:** Describing statefulness with an nginx service serving ""Hello world!"" dynamically or other static content is a bad choice because it's not stateful. It demonstrates the use of the configuration files, but provides no additional examples. This seems to be widespread in the k8s docs and probably has a cause like sponsoring by Nginx which is fine, but having a good overview of different use cases is a quality feature of good documentation. So far, I got a good overview of the concepts and specifications of parts of k8s, however feel like I've only seen nginx being deployed in the most absurd setups for someone getting started with k8s while it takes me hours and days to find examples of real world use cases. I think that everybody getting started on statefulness in k8s (e.g. in https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) is way more concerned about getting a clustered database accessible by a replicated set of stateless pods rather than clustering their nginx. 
**Proposed Solution:** Review examples in terms of coverage of real-world cases in order to make the example intuitive and useful rather than just provide the base to search for and depend on examples on unreliable third-party sources, like outdated blogs and horrible forums. **Page to Update:** https://kubernetes.io/...",1.0,"Improvement for k8s.io/docs/concepts/workloads/controllers/statefulset/ - **This is a...** - [x] Feature Request - [ ] Bug Report **Problem:** Describing statefulness with an nginx service serving ""Hello world!"" dynamically or other static content is a bad choice because it's not stateful. It demonstrates the use of the configuration files, but provides no additional examples. This seems to be widespread in the k8s docs and probably has a cause like sponsoring by Nginx which is fine, but having a good overview of different use cases is a quality feature of good documentation. So far, I got a good overview of the concepts and specifications of parts of k8s, however feel like I've only seen nginx being deployed in the most absurd setups for someone getting started with k8s while it takes me hours and days to find examples of real world use cases. I think that everybody getting started on statefulness in k8s (e.g. in https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) is way more concerned about getting a clustered database accessible by a replicated set of stateless pods rather than clustering their nginx. **Proposed Solution:** Review examples in terms of coverage of real-world cases in order to make the example intuitive and useful rather than just provide the base to search for and depend on examples on unreliable third-party sources, like outdated blogs and horrible forums. **Page to Update:** https://kubernetes.io/...",0,improvement for io docs concepts workloads controllers statefulset this is a feature request bug report problem describing statefulness with an nginx service serving hello world dynamically or other static content is a bad choice because it s not stateful it demonstrates the use of the configuration files but provides no additional examples this seems to be widespread in the docs and probably has a cause like sponsoring by nginx which is fine but having a good overview of different use cases is a quality feature of good documentation so far i got a good overview of the concepts and specifications of parts of however feel like i ve only seen nginx being deployed in the most absurd setups for someone getting started with while it takes me hours and days to find examples of real world use cases i think that everybody getting started on statefulness in e g in is way more concerned about getting a clustered database accessible by a replicated set of stateless pods rather than clustering their nginx proposed solution review examples in terms of coverage of real world cases in order to make the example intuitive and useful rather than just provide the base to search for and depend on examples on unreliable third party sources like outdated blogs and horrible forums page to update ,0 663624,22199546872.0,IssuesEvent,2022-06-07 09:55:18,ever-co/ever-gauzy,https://api.github.com/repos/ever-co/ever-gauzy,closed,Fix: Not able to save contact,type: bug :bug: scope: server priority: highest,"- [x] when trying to save contact getting error (when we select project then not able to contact) ![image](https://user-images.githubusercontent.com/30652722/172298354-75b89b80-409d-4f9e-9ce5-7c3561f34b28.png) - [x] contact listing filter are not 
working ![image](https://user-images.githubusercontent.com/30652722/172310185-c8a48c18-1dd1-4c76-aa6c-343af94cbe49.png) ",1.0,"Fix: Not able to save contact - - [x] when trying to save contact getting error (when we select project then not able to contact) ![image](https://user-images.githubusercontent.com/30652722/172298354-75b89b80-409d-4f9e-9ce5-7c3561f34b28.png) - [x] contact listing filter are not working ![image](https://user-images.githubusercontent.com/30652722/172310185-c8a48c18-1dd1-4c76-aa6c-343af94cbe49.png) ",0,fix not able to save contact when trying to save contact getting error when we select project then not able to contact contact listing filter are not working ,0 221633,17361517221.0,IssuesEvent,2021-07-29 21:24:47,microsoft/vscode-python,https://api.github.com/repos/microsoft/vscode-python,closed,Nose Tests discovery fails when tests are split by folders and imported to __init__.py (wantModule and wantDirectory are not parsed),area-testing needs PR reason-external type-bug," ## Environment data - VS Code version: 1.37.1 - Extension version (available under the Extensions sidebar): 2019.8.30787 - OS and version: Windows 10 running VS Code, remote plugin connects to Ubuntu 18.04.2 via SSH - Python version (& distribution if applicable, e.g. Anaconda): python 2.7.15rc1 - Type of virtual environment used (N/A | venv | virtualenv | conda | ...): virtualenv - Relevant/affected Python packages and their versions: Django==1.11.23; django-nose==1.4.6; nose==1.3.7 - Jedi or Language Server? (i.e. what is `""python.jediEnabled""` set to; more info #3977): jedi ## Expected behaviour Test discovery for nose should scan for both `wantFile` and `wantModule` and/or `wantDirectory` log outputs ## Actual behaviour We have a `django` project with `django-nose` test runner. Tests are split into modules, and all tests are imported to `__init__.py`. _I think these are some obsolete tests, from default django test runner. But nose runs them perfectly_. I create a simple script, that translates the `nosetests` call to `./manage.py test` call. Basically it just strips `-vvv` and replaces it with `--verbosity` ```bash #!/bin/bash function arg_remove () { local array=(""${@:3}"") for ((i=0; i<""${#array[@]}""; ++i)); do case ${array[i]} in ""$2"") unset array[i]; break ;; esac done # clean up unset array indexes for i in ""${!array[@]}""; do new_array+=( ""${array[i]}"" ) done array=(""${new_array[@]}"") unset new_array # assign array outside function scope local -g ""$1=( )"" eval ${1}='(""${array[@]}"")' } arg_remove ARG_PASS ""-vvv"" ""$@"" echo $@ direnv exec /home/igor/myproject/backend/ /home/igor/.virtualenvs/myproject/bin/python /home/igor/myproject/backend/manage.py test --settings=core.local_settings --verbosity=3 -l nose.selector --nologcapture ""${ARG_PASS[@]}"" ``` The structure of tests is (stripped) File `backend/apps/tests/__init__.py` ```python from .action import ActionTestCase ``` File `backend/apps/tests/action.py` ```python from apps.myproj.tests.base import BaseTestCase class ActionMSFTestCase(BaseTestCase): pass # stripped ``` The output of `--collect-only` command is different to what `vscode-python` expects. Relevant output to tests above is: ``` nose.selector: DEBUG: wantDirectory /home/igor/ata_portal/backend/apps/api/tests? 
True nose.selector: DEBUG: Test name /home/igor/ata_portal/backend/apps/api/tests resolved to file /home/igor/ata_portal/backend/apps/api/tests, module None, call None nose.selector: DEBUG: Final resolution of test name /home/igor/ata_portal/backend/apps/api/tests: file /home/igor/ata_portal/backend/apps/api/tests module apps.api.tests call None nose.selector: DEBUG: wantModule ? True nose.selector: DEBUG: wantClass ? True ``` `parserService` [here](https://github.com/microsoft/vscode-python/blob/master/src/client/testing/nosetest/services/parserService.ts) scans only for `wantFile` output. In case above it founds `wantClass` above any `wantFile ... py? True` line. That causes `testFile.suites.push(testSuite);` to fail ## Steps to reproduce: 1. Create a package `tests` 1. Add a module `action.py` to package with some real test case 1. import `action` inside `tests/__init__.py` 1. Configure nose tests for workspace 1. Run `Python: Discover Tests` command ## Logs Output for `Python` in the `Output` panel (`View`→`Output`, change the drop-down the upper-right of the `Output` panel to `Python`) ``` Test Discovery failed: TypeError: Cannot read property 'suites' of undefined ``` Output from `Console` under the `Developer Tools` panel (toggle Developer Tools on under `Help`; turn on source maps to make any tracebacks be useful by running `Enable source map support for extension debugging`) ``` notificationsAlerts.ts:40 Test discovery error, please check the configuration settings for the tests. onDidNotificationChange @ notificationsAlerts.ts:40 console.ts:137 [Extension Host] Python Extension: displayDiscoverStatus TypeError: Cannot read property 'suites' of undefined at t.forEach.t (/home/igor/.vscode-server/extensions/ms-python.python-2019.8.30787/out/client/extension.js:75:1015174) at Array.forEach () at h.parseNoseTestModuleCollectionResult (/home/igor/.vscode-server/extensions/ms-python.python-2019.8.30787/out/client/extension.js:75:1014343) at e.split.forEach (/home/igor/.vscode-server/extensions/ms-python.python-2019.8.30787/out/client/extension.js:75:1013968) at Array.forEach () at h.getTestFiles (/home/igor/.vscode-server/extensions/ms-python.python-2019.8.30787/out/client/extension.js:75:1013849) at h.parse (/home/igor/.vscode-server/extensions/ms-python.python-2019.8.30787/out/client/extension.js:75:1013661) at h.discoverTests (/home/igor/.vscode-server/extensions/ms-python.python-2019.8.30787/out/client/extension.js:75:1012881) at process._tickCallback (internal/process/next_tick.js:68:7) ``` ",1.0,"Nose Tests discovery fails when tests are split by folders and imported to __init__.py (wantModule and wantDirectory are not parsed) - ## Environment data - VS Code version: 1.37.1 - Extension version (available under the Extensions sidebar): 2019.8.30787 - OS and version: Windows 10 running VS Code, remote plugin connects to Ubuntu 18.04.2 via SSH - Python version (& distribution if applicable, e.g. Anaconda): python 2.7.15rc1 - Type of virtual environment used (N/A | venv | virtualenv | conda | ...): virtualenv - Relevant/affected Python packages and their versions: Django==1.11.23; django-nose==1.4.6; nose==1.3.7 - Jedi or Language Server? (i.e. what is `""python.jediEnabled""` set to; more info #3977): jedi ## Expected behaviour Test discovery for nose should scan for both `wantFile` and `wantModule` and/or `wantDirectory` log outputs ## Actual behaviour We have a `django` project with `django-nose` test runner. 
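(Aside on the nose discovery issue above: a hedged sketch, in Python rather than the extension's TypeScript, of a matcher that accepts wantModule and wantDirectory selector lines alongside wantFile. All names here are illustrative and not the extension's actual code.)

```python
# Illustrative only: accept wantFile, wantModule and wantDirectory entries
# from nose's `--collect-only -vvv` selector log instead of wantFile alone.
import re

SELECTOR_LINE = re.compile(
    r'nose\.selector: DEBUG: '
    r'(?P<kind>wantFile|wantModule|wantDirectory)\s+(?P<target>.*?)\?\s*True$'
)

def collected_entries(log_text):
    for line in log_text.splitlines():
        match = SELECTOR_LINE.search(line)
        if match:
            yield match.group('kind'), match.group('target')

sample = 'nose.selector: DEBUG: wantDirectory /home/igor/backend/apps/api/tests? True'
print(list(collected_entries(sample)))
# [('wantDirectory', '/home/igor/backend/apps/api/tests')]
```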
Tests are split into modules, and all tests are imported to `__init__.py`. _I think these are some obsolete tests, from default django test runner. But nose runs them perfectly_. I create a simple script, that translates the `nosetests` call to `./manage.py test` call. Basically it just strips `-vvv` and replaces it with `--verbosity` ```bash #!/bin/bash function arg_remove () { local array=(""${@:3}"") for ((i=0; i<""${#array[@]}""; ++i)); do case ${array[i]} in ""$2"") unset array[i]; break ;; esac done # clean up unset array indexes for i in ""${!array[@]}""; do new_array+=( ""${array[i]}"" ) done array=(""${new_array[@]}"") unset new_array # assign array outside function scope local -g ""$1=( )"" eval ${1}='(""${array[@]}"")' } arg_remove ARG_PASS ""-vvv"" ""$@"" echo $@ direnv exec /home/igor/myproject/backend/ /home/igor/.virtualenvs/myproject/bin/python /home/igor/myproject/backend/manage.py test --settings=core.local_settings --verbosity=3 -l nose.selector --nologcapture ""${ARG_PASS[@]}"" ``` The structure of tests is (stripped) File `backend/apps/tests/__init__.py` ```python from .action import ActionTestCase ``` File `backend/apps/tests/action.py` ```python from apps.myproj.tests.base import BaseTestCase class ActionMSFTestCase(BaseTestCase): pass # stripped ``` The output of `--collect-only` command is different to what `vscode-python` expects. Relevant output to tests above is: ``` nose.selector: DEBUG: wantDirectory /home/igor/ata_portal/backend/apps/api/tests? True nose.selector: DEBUG: Test name /home/igor/ata_portal/backend/apps/api/tests resolved to file /home/igor/ata_portal/backend/apps/api/tests, module None, call None nose.selector: DEBUG: Final resolution of test name /home/igor/ata_portal/backend/apps/api/tests: file /home/igor/ata_portal/backend/apps/api/tests module apps.api.tests call None nose.selector: DEBUG: wantModule ? True nose.selector: DEBUG: wantClass ? True ``` `parserService` [here](https://github.com/microsoft/vscode-python/blob/master/src/client/testing/nosetest/services/parserService.ts) scans only for `wantFile` output. In case above it founds `wantClass` above any `wantFile ... py? True` line. That causes `testFile.suites.push(testSuite);` to fail ## Steps to reproduce: 1. Create a package `tests` 1. Add a module `action.py` to package with some real test case 1. import `action` inside `tests/__init__.py` 1. Configure nose tests for workspace 1. Run `Python: Discover Tests` command ## Logs Output for `Python` in the `Output` panel (`View`→`Output`, change the drop-down the upper-right of the `Output` panel to `Python`) ``` Test Discovery failed: TypeError: Cannot read property 'suites' of undefined ``` Output from `Console` under the `Developer Tools` panel (toggle Developer Tools on under `Help`; turn on source maps to make any tracebacks be useful by running `Enable source map support for extension debugging`) ``` notificationsAlerts.ts:40 Test discovery error, please check the configuration settings for the tests. 
onDidNotificationChange @ notificationsAlerts.ts:40 console.ts:137 [Extension Host] Python Extension: displayDiscoverStatus TypeError: Cannot read property 'suites' of undefined at t.forEach.t (/home/igor/.vscode-server/extensions/ms-python.python-2019.8.30787/out/client/extension.js:75:1015174) at Array.forEach () at h.parseNoseTestModuleCollectionResult (/home/igor/.vscode-server/extensions/ms-python.python-2019.8.30787/out/client/extension.js:75:1014343) at e.split.forEach (/home/igor/.vscode-server/extensions/ms-python.python-2019.8.30787/out/client/extension.js:75:1013968) at Array.forEach () at h.getTestFiles (/home/igor/.vscode-server/extensions/ms-python.python-2019.8.30787/out/client/extension.js:75:1013849) at h.parse (/home/igor/.vscode-server/extensions/ms-python.python-2019.8.30787/out/client/extension.js:75:1013661) at h.discoverTests (/home/igor/.vscode-server/extensions/ms-python.python-2019.8.30787/out/client/extension.js:75:1012881) at process._tickCallback (internal/process/next_tick.js:68:7) ``` ",0,nose tests discovery fails when tests are split by folders and imported to init py wantmodule and wantdirectory are not parsed environment data vs code version extension version available under the extensions sidebar os and version windows running vs code remote plugin connects to ubuntu via ssh python version distribution if applicable e g anaconda python type of virtual environment used n a venv virtualenv conda virtualenv relevant affected python packages and their versions django django nose nose jedi or language server i e what is python jedienabled set to more info jedi expected behaviour test discovery for nose should scan for both wantfile and wantmodule and or wantdirectory log outputs actual behaviour we have a django project with django nose test runner tests are split into modules and all tests are imported to init py i think these are some obsolete tests from default django test runner but nose runs them perfectly i create a simple script that translates the nosetests call to manage py test call basically it just strips vvv and replaces it with verbosity bash bin bash function arg remove local array for i i array i do case array in unset array break esac done clean up unset array indexes for i in array do new array array done array new array unset new array assign array outside function scope local g eval array arg remove arg pass vvv echo direnv exec home igor myproject backend home igor virtualenvs myproject bin python home igor myproject backend manage py test settings core local settings verbosity l nose selector nologcapture arg pass the structure of tests is stripped file backend apps tests init py python from action import actiontestcase file backend apps tests action py python from apps myproj tests base import basetestcase class actionmsftestcase basetestcase pass stripped the output of collect only command is different to what vscode python expects relevant output to tests above is nose selector debug wantdirectory home igor ata portal backend apps api tests true nose selector debug test name home igor ata portal backend apps api tests resolved to file home igor ata portal backend apps api tests module none call none nose selector debug final resolution of test name home igor ata portal backend apps api tests file home igor ata portal backend apps api tests module apps api tests call none nose selector debug wantmodule true nose selector debug wantclass true parserservice scans only for wantfile output in case above it founds wantclass above any 
wantfile py true line that causes testfile suites push testsuite to fail steps to reproduce create a package tests add a module action py to package with some real test case import action inside tests init py configure nose tests for workspace run python discover tests command note if you think a gif of what is happening would be helpful consider tools like or logs output for python in the output panel view → output change the drop down the upper right of the output panel to python test discovery failed typeerror cannot read property suites of undefined output from console under the developer tools panel toggle developer tools on under help turn on source maps to make any tracebacks be useful by running enable source map support for extension debugging notificationsalerts ts test discovery error please check the configuration settings for the tests ondidnotificationchange notificationsalerts ts console ts python extension displaydiscoverstatus typeerror cannot read property suites of undefined at t foreach t home igor vscode server extensions ms python python out client extension js at array foreach at h parsenosetestmodulecollectionresult home igor vscode server extensions ms python python out client extension js at e split foreach home igor vscode server extensions ms python python out client extension js at array foreach at h gettestfiles home igor vscode server extensions ms python python out client extension js at h parse home igor vscode server extensions ms python python out client extension js at h discovertests home igor vscode server extensions ms python python out client extension js at process tickcallback internal process next tick js ,0 23308,11867180419.0,IssuesEvent,2020-03-26 06:13:54,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,No supported version of EME detected on this user agent,Pri2 cxp in-progress media-services/svc product-question triaged,"Hi, I'm working on the newest version of Microsoft Edge and I also tested this on Google Chrome but I still get the same error and so didn't manage to play a video on my website with Dash.js. I don't know how to fix this because I read in the explanation that these browsers normally support the W3C Media Source Extensions (MSE). --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 7fee3830-8d03-9d6d-9c30-0ede9933a459 * Version Independent ID: 794397d3-b604-0c63-50b0-6c23afe58a21 * Content: [Embedding a MPEG-DASH Adaptive Streaming Video in an HTML5 Application with DASH.js](https://docs.microsoft.com/en-us/azure/media-services/previous/media-services-embed-mpeg-dash-in-html5#feedback) * Content Source: [articles/media-services/previous/media-services-embed-mpeg-dash-in-html5.md](https://github.com/Microsoft/azure-docs/blob/master/articles/media-services/previous/media-services-embed-mpeg-dash-in-html5.md) * Service: **media-services** * GitHub Login: @Juliako * Microsoft Alias: **juliako**",1.0,"No supported version of EME detected on this user agent - Hi, I'm working on the newest version of Microsoft Edge and I also tested this on Google Chrome but I still get the same error and so didn't manage to play a video on my website with Dash.js. I don't know how to fix this because I read in the explanation that these browsers normally support the W3C Media Source Extensions (MSE). --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 7fee3830-8d03-9d6d-9c30-0ede9933a459 * Version Independent ID: 794397d3-b604-0c63-50b0-6c23afe58a21 * Content: [Embedding a MPEG-DASH Adaptive Streaming Video in an HTML5 Application with DASH.js](https://docs.microsoft.com/en-us/azure/media-services/previous/media-services-embed-mpeg-dash-in-html5#feedback) * Content Source: [articles/media-services/previous/media-services-embed-mpeg-dash-in-html5.md](https://github.com/Microsoft/azure-docs/blob/master/articles/media-services/previous/media-services-embed-mpeg-dash-in-html5.md) * Service: **media-services** * GitHub Login: @Juliako * Microsoft Alias: **juliako**",0,no supported version of eme detected on this user agent hi i m working on the newest version of microsoft edge and i also tested this on google chrome but i still get the same error and so didn t manage to play a video on my website with dash js i don t know how to fix this because i read in the explanation that these browsers normally support the media source extensions mse document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service media services github login juliako microsoft alias juliako ,0 5306,19072230092.0,IssuesEvent,2021-11-27 04:49:48,extratone/extratone,https://api.github.com/repos/extratone/extratone,opened,Fwd: Claim Completed for the Musical.ly and/or TikTok Class Action [(Open email in Spark)](readdlespark://bl=QTphc3BoYWx0YXBvc3RsZUBpY2xvdWQuY29tO0lEOkNBQmFZbys0T0R2MjZpPW1f%0D%0APTU3WXZGYXhSdy1RPVQrRExXZVptc1FaTEgwTGRzRjl5UUBtYWlsLmdtYWlsLmNv%0D%0AbTsxOTAwNjU2NzQ5),automation,27-Nov-2021 04:47:23 - 2276415761 -,1.0,Fwd: Claim Completed for the Musical.ly and/or TikTok Class Action [(Open email in Spark)](readdlespark://bl=QTphc3BoYWx0YXBvc3RsZUBpY2xvdWQuY29tO0lEOkNBQmFZbys0T0R2MjZpPW1f%0D%0APTU3WXZGYXhSdy1RPVQrRExXZVptc1FaTEgwTGRzRjl5UUBtYWlsLmdtYWlsLmNv%0D%0AbTsxOTAwNjU2NzQ5) - 27-Nov-2021 04:47:23 - 2276415761 -,1,fwd claim completed for the musical ly and or tiktok class action readdlespark bl nov ,1 339834,10262883080.0,IssuesEvent,2019-08-22 13:18:08,deep-learning-indaba/Baobab,https://api.github.com/repos/deep-learning-indaba/Baobab,closed,"Attendance Admin: If you're an invited guest who has not registered, you are not marked as an invited guest in the confirmation dialog",High Priority back-end front-end,"In the database Benji is an invited guest. He has not registered though His attendance confirmation dialog should still say he is an invited guest ![image](https://user-images.githubusercontent.com/5547095/63514708-7b813480-c4e9-11e9-83d0-e51ef0ea778a.png) ",1.0,"Attendance Admin: If you're an invited guest who has not registered, you are not marked as an invited guest in the confirmation dialog - In the database Benji is an invited guest. 
He has not registered, though. His attendance confirmation dialog should still say he is an invited guest. ![image](https://user-images.githubusercontent.com/5547095/63514708-7b813480-c4e9-11e9-83d0-e51ef0ea778a.png) ",0,attendance admin if you re an invited guest who has not registered you are not marked as an invited guest in the confirmation dialog in the database benji is an invited guest he has not registered though his attendance confirmation dialog should still say he is an invited guest ,0 2612,12341767497.0,IssuesEvent,2020-05-14 22:45:29,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,closed,[BUG] Backups are not getting deleted even after crossing retain count.,area/manager bug priority/3 require-automation-e2e,"**Describe the bug** The retain count is not getting applied to backups. Older backups are not getting deleted when the number of backups exceeds the retain count. **To Reproduce** Steps to reproduce the behavior: 1. Create a volume (3 replicas, 3 nodes), attach to a pod 2. Write data to the volume. Take a snapshot s1. 3. Enable recurring backup with retain count 10. 4. After the system takes 5 backups, **revert to s1**. 5. Recurring backups will start to be taken from s1, as shown in the screenshot below. 6. Backups taken before reverting to s1 do not get deleted, even when the count of recurring backups exceeds the retain count. **Expected behavior** Only backups up to the retain count should be retained. **Environment:** - Longhorn version: master - Kubernetes version: v1.17.5 - Node OS type and version: ubuntu, aws node **Additional context** Also, edits to the retain count are not getting applied to the backup retain functionality. ",1.0,"[BUG] Backups are not getting deleted even after crossing retain count. - **Describe the bug** The retain count is not getting applied to backups. Older backups are not getting deleted when the number of backups exceeds the retain count. **To Reproduce** Steps to reproduce the behavior: 1. Create a volume (3 replicas, 3 nodes), attach to a pod 2. Write data to the volume. Take a snapshot s1. 3. Enable recurring backup with retain count 10. 4. After the system takes 5 backups, **revert to s1**. 5. Recurring backups will start to be taken from s1, as shown in the screenshot below. 6. Backups taken before reverting to s1 do not get deleted, even when the count of recurring backups exceeds the retain count. **Expected behavior** Only backups up to the retain count should be retained. **Environment:** - Longhorn version: master - Kubernetes version: v1.17.5 - Node OS type and version: ubuntu, aws node **Additional context** Also, edits to the retain count are not getting applied to the backup retain functionality. 
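A hedged illustration of the retention rule the report expects (not Longhorn's actual implementation): keep only the newest retain backups, regardless of which snapshot lineage they came from.

```python
# Illustrative only: prune everything beyond the newest `retain` backups,
# no matter which snapshot they were taken from.
def prune_backups(backups, retain):
    '''backups: iterable of (created_at, backup_id); returns ids to delete.'''
    ordered = sorted(backups, key=lambda item: item[0], reverse=True)
    return [backup_id for _, backup_id in ordered[retain:]]

# Backups b1..b3 taken at times 1..3 with retain=2: only the oldest goes.
print(prune_backups([(1, 'b1'), (2, 'b2'), (3, 'b3')], retain=2))  # ['b1']
```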
",1, backups are not getting deleted even after crossing retain count describe the bug retain count is not getting applied to backups older backups are not getting deleted when backup counts are exceeding the count of retain to reproduce steps to reproduce the behavior create a volume replicas nodes attach to a pod write data to volume take a snapshot enable recurring backup with retain count after system takes backups revert to recurring backups will start to be taken from similar as shown in below screenshot backups taken before reverting to would not get deleted even the count of recursive backups exceeds the retain count expected behavior backups up to retain count should only be retained environment longhorn version master kubernetes version node os type and version ubuntu aws node additional context also the editing of retain count is not getting applied to backup retain functionality img width alt screen shot at pm src img width alt screen shot at pm src ,1 347580,31226058641.0,IssuesEvent,2023-08-19 04:37:20,travel-planner-project/TravelPlanner,https://api.github.com/repos/travel-planner-project/TravelPlanner,closed,[BE] 멤버 및 프로필 builder 패턴 수정,Test BE 신세인 임준형 Refactor,"# 작업 내용 **Member와 Profile builder 패턴 테스트 및 수정 사항 처리**
- Tests for the builder pattern changes - Handle the required fixes ",1.0,"[BE] Fix Member and Profile builder patterns - # Work details **Test the Member and Profile builder patterns and handle fixes**
- Tests for the builder pattern changes - Handle the required fixes ",0, fix member and profile builder patterns work details test the member and profile builder patterns and handle fixes tests for the builder pattern changes handle the required fixes ,0 72984,15252069686.0,IssuesEvent,2021-02-20 01:24:48,RG4421/developers,https://api.github.com/repos/RG4421/developers,opened,"CVE-2021-23341 (High) detected in prismjs-1.17.1.tgz, prismjs-1.20.0.tgz",security vulnerability,"## CVE-2021-23341 - High Severity Vulnerability
Vulnerable Libraries - prismjs-1.17.1.tgz, prismjs-1.20.0.tgz

prismjs-1.17.1.tgz

Lightweight, robust, elegant syntax highlighting. A spin-off project from Dabblet.

Library home page: https://registry.npmjs.org/prismjs/-/prismjs-1.17.1.tgz

Path to dependency file: developers/package.json

Path to vulnerable library: developers/node_modules/refractor/node_modules/prismjs/package.json

Dependency Hierarchy: - react-syntax-highlighter-10.3.5.tgz (Root Library) - refractor-2.10.1.tgz - :x: **prismjs-1.17.1.tgz** (Vulnerable Library)

prismjs-1.20.0.tgz

Lightweight, robust, elegant syntax highlighting. A spin-off project from Dabblet.

Library home page: https://registry.npmjs.org/prismjs/-/prismjs-1.20.0.tgz

Path to dependency file: developers/package.json

Path to vulnerable library: developers/node_modules/prismjs/package.json

Dependency Hierarchy: - react-syntax-highlighter-10.3.5.tgz (Root Library) - :x: **prismjs-1.20.0.tgz** (Vulnerable Library)

Vulnerability Details

Versions of the prismjs package before 1.23.0 are vulnerable to Regular Expression Denial of Service (ReDoS) via the prism-asciidoc, prism-rest, prism-tap and prism-eiffel components.
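ReDoS flaws of this kind stem from regular expressions that backtrack catastrophically on crafted input. A generic Python illustration of the failure mode (not the actual prismjs patterns):

```python
# Illustrative only: nested quantifiers force the engine to retry an
# exponential number of ways to split the input once the trailing character
# rules out a match.
import re
import time

evil = re.compile(r'^(a+)+$')   # classic catastrophic-backtracking shape
payload = 'a' * 26 + '!'        # near-miss input: every split gets tried

start = time.perf_counter()
evil.match(payload)             # returns None, but only after ~2^n attempts
print(f'took {time.perf_counter() - start:.2f}s')
```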

Publish Date: 2021-02-18

URL: CVE-2021-23341

CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23341

Release Date: 2021-02-18

Fix Resolution: 1.23.0

",True,"CVE-2021-23341 (High) detected in prismjs-1.17.1.tgz, prismjs-1.20.0.tgz - ## CVE-2021-23341 - High Severity Vulnerability
Vulnerable Libraries - prismjs-1.17.1.tgz, prismjs-1.20.0.tgz

prismjs-1.17.1.tgz

Lightweight, robust, elegant syntax highlighting. A spin-off project from Dabblet.

Library home page: https://registry.npmjs.org/prismjs/-/prismjs-1.17.1.tgz

Path to dependency file: developers/package.json

Path to vulnerable library: developers/node_modules/refractor/node_modules/prismjs/package.json

Dependency Hierarchy: - react-syntax-highlighter-10.3.5.tgz (Root Library) - refractor-2.10.1.tgz - :x: **prismjs-1.17.1.tgz** (Vulnerable Library)

prismjs-1.20.0.tgz

Lightweight, robust, elegant syntax highlighting. A spin-off project from Dabblet.

Library home page: https://registry.npmjs.org/prismjs/-/prismjs-1.20.0.tgz

Path to dependency file: developers/package.json

Path to vulnerable library: developers/node_modules/prismjs/package.json

Dependency Hierarchy: - react-syntax-highlighter-10.3.5.tgz (Root Library) - :x: **prismjs-1.20.0.tgz** (Vulnerable Library)

Vulnerability Details

Versions of the prismjs package before 1.23.0 are vulnerable to Regular Expression Denial of Service (ReDoS) via the prism-asciidoc, prism-rest, prism-tap and prism-eiffel components.

Publish Date: 2021-02-18

URL: CVE-2021-23341

CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23341

Release Date: 2021-02-18

Fix Resolution: 1.23.0

",0,cve high detected in prismjs tgz prismjs tgz cve high severity vulnerability vulnerable libraries prismjs tgz prismjs tgz prismjs tgz lightweight robust elegant syntax highlighting a spin off project from dabblet library home page a href path to dependency file developers package json path to vulnerable library developers node modules refractor node modules prismjs package json dependency hierarchy react syntax highlighter tgz root library refractor tgz x prismjs tgz vulnerable library prismjs tgz lightweight robust elegant syntax highlighting a spin off project from dabblet library home page a href path to dependency file developers package json path to vulnerable library developers node modules prismjs package json dependency hierarchy react syntax highlighter tgz root library x prismjs tgz vulnerable library vulnerability details the package prismjs before are vulnerable to regular expression denial of service redos via the prism asciidoc prism rest prism tap and prism eiffel components publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree react syntax highlighter refractor prismjs isminimumfixversionavailable true minimumfixversion packagetype javascript node js packagename prismjs packageversion packagefilepaths istransitivedependency true dependencytree react syntax highlighter prismjs isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails the package prismjs before are vulnerable to regular expression denial of service redos via the prism asciidoc prism rest prism tap and prism eiffel components vulnerabilityurl ,0 4079,15361070370.0,IssuesEvent,2021-03-01 17:40:53,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,closed,Add structured output to Stack.Up,area/automation-api,"Right now `Stack.Up` returns limited output (stdout, stderr, Outputs, Summary). This is mainly due to the fact that `pulumi up` doesn't support a `--json` flag. We should add a `--json` flag to pulumi up, and then consume this from the Automation API. Ideally, we should be able to consume this output in both sync and streaming contexts. ",1.0,"Add structured output to Stack.Up - Right now `Stack.Up` returns limited output (stdout, stderr, Outputs, Summary). This is mainly due to the fact that `pulumi up` doesn't support a `--json` flag. We should add a `--json` flag to pulumi up, and then consume this from the Automation API. Ideally, we should be able to consume this output in both sync and streaming contexts. 
",1,add structured output to stack up right now stack up returns limited output stdout stderr outputs summary this is mainly due to the fact that pulumi up doesn t support a json flag we should add a json flag to pulumi up and then consume this from the automation api ideally we should be able to consume this output in both sync and streaming contexts ,1 665,7745570775.0,IssuesEvent,2018-05-29 18:46:48,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Any option to add more VMs after the start/stop is created.,assigned-to-author automation/svc product-question triaged,"1. Is there any option to add more VMs after the start/stop is created? 2. After the schedule is created to start/stop, where can check which VMs are in that schedule ? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 225c9d05-83dd-b006-0025-3753f5ab25bf * Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096 * Content: [Start/Stop VMs during off-hours solution (preview)](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management#modify-the-startup-and-shutdown-schedules) * Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md) * Service: **automation** * Product: **unspecified** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1.0,"Any option to add more VMs after the start/stop is created. - 1. Is there any option to add more VMs after the start/stop is created? 2. After the schedule is created to start/stop, where can check which VMs are in that schedule ? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 225c9d05-83dd-b006-0025-3753f5ab25bf * Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096 * Content: [Start/Stop VMs during off-hours solution (preview)](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management#modify-the-startup-and-shutdown-schedules) * Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md) * Service: **automation** * Product: **unspecified** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1,any option to add more vms after the start stop is created is there any option to add more vms after the start stop is created after the schedule is created to start stop where can check which vms are in that schedule document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation product unspecified github login georgewallace microsoft alias gwallace ,1 2074,11355105584.0,IssuesEvent,2020-01-24 19:14:06,soffes/home,https://api.github.com/repos/soffes/home,opened,Send a notification if the windows are open and the temperature falls too low,automation,"Currently, we disable the the heat if any windows are open. If the temperature falls too low below the target temperature, it would be cool to send a notification and include which windows are open. There could be an action to turn the heat on anyway attached to the notification.",1.0,"Send a notification if the windows are open and the temperature falls too low - Currently, we disable the the heat if any windows are open. 
If the temperature falls too low below the target temperature, it would be cool to send a notification and include which windows are open. There could be an action to turn the heat on anyway attached to the notification.",1,send a notification if the windows are open and the temperature falls too low currently we disable the the heat if any windows are open if the temperature falls too low below the target temperature it would be cool to send a notification and include which windows are open there could be an action to turn the heat on anyway attached to the notification ,1 9353,28043735359.0,IssuesEvent,2023-03-28 20:42:44,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,closed,It's not possible to set the path for a config with the Automation API,kind/enhancement area/automation-api,"### What happened? Hello, I'm trying to set the path for a config with the Automation API in Golang. With the CLI it will be: ```shell pulumi config set --path 'labels.injection.value' enabled ``` But seems to be not possible with the Automation API in Golang which uses: ```go (s *Stack) SetConfig(ctx context.Context, key string, val ConfigValue) ``` This because when it runs the command doesn't specify the ```--path``` flag: ```go stdout, stderr, errCode, err := l.runPulumiCmdSync(ctx, ""config"", ""set"", key, secretArg, ""--stack"", stackName, ""--non-interactive"", ""--"", val.Value) ``` ### Expected Behavior I expected an output like: ```yaml encryptionsalt: .... config: automation:labels: injection: value ``` ### Steps to reproduce Just run on a stack: ```go stack.SetConfig(ctx, ""labels.injection.value"", auto.ConfigValue{Value: value, Secret: false}) ``` ### Output of `pulumi about` ```shell CLI Version 3.55.0 Go Version go1.20 Go Compiler gc Host OS ubuntu Version 22.04 Arch x86_64 ``` ### Additional context _No response_ ### Contributing Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already). ",1.0,"It's not possible to set the path for a config with the Automation API - ### What happened? Hello, I'm trying to set the path for a config with the Automation API in Golang. With the CLI it will be: ```shell pulumi config set --path 'labels.injection.value' enabled ``` But seems to be not possible with the Automation API in Golang which uses: ```go (s *Stack) SetConfig(ctx context.Context, key string, val ConfigValue) ``` This because when it runs the command doesn't specify the ```--path``` flag: ```go stdout, stderr, errCode, err := l.runPulumiCmdSync(ctx, ""config"", ""set"", key, secretArg, ""--stack"", stackName, ""--non-interactive"", ""--"", val.Value) ``` ### Expected Behavior I expected an output like: ```yaml encryptionsalt: .... config: automation:labels: injection: value ``` ### Steps to reproduce Just run on a stack: ```go stack.SetConfig(ctx, ""labels.injection.value"", auto.ConfigValue{Value: value, Secret: false}) ``` ### Output of `pulumi about` ```shell CLI Version 3.55.0 Go Version go1.20 Go Compiler gc Host OS ubuntu Version 22.04 Arch x86_64 ``` ### Additional context _No response_ ### Contributing Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already). 
",1,it s not possible to set the path for a config with the automation api what happened hello i m trying to set the path for a config with the automation api in golang with the cli it will be shell pulumi config set path labels injection value enabled but seems to be not possible with the automation api in golang which uses go s stack setconfig ctx context context key string val configvalue this because when it runs the command doesn t specify the path flag go stdout stderr errcode err l runpulumicmdsync ctx config set key secretarg stack stackname non interactive val value expected behavior i expected an output like yaml encryptionsalt config automation labels injection value steps to reproduce just run on a stack go stack setconfig ctx labels injection value auto configvalue value value secret false output of pulumi about shell cli version go version go compiler gc host os ubuntu version arch additional context no response contributing vote on this issue by adding a 👍 reaction to contribute a fix for this issue leave a comment and link to your pull request if you ve opened one already ,1 6139,22288100783.0,IssuesEvent,2022-06-12 00:19:33,KILTprotocol/sdk-js,https://api.github.com/repos/KILTprotocol/sdk-js,closed,SDK no longer compatible with latest dependecies,bug incompatible dependencies automation,"## Incompatibilities detected A [scheduled test workflow](https://github.com/KILTprotocol/sdk-js/actions/runs/2441332049) using the latest available dependencies matching our semver ranges has failed. We may need to constrain dependency ranges in our `package.json` or introduce fixes to recover compatibility. Below you can find a summary of depedency versions against which these tests were run. _Note: This issue was **automatically created** as a result of scheduled CI tests on 2022-06-05._
Dependency versions ""@commitlint/cli@npm:9.1.2"" ""@commitlint/config-conventional@npm:9.1.2"" ""@kiltprotocol/chain-helpers@workspace:packages/chain-helpers"" ""@kiltprotocol/config@workspace:packages/config"" ""@kiltprotocol/core@workspace:packages/core"" ""@kiltprotocol/did@workspace:packages/did"" ""@kiltprotocol/messaging@workspace:packages/messaging"" ""@kiltprotocol/sdk-js@workspace:packages/sdk-js"" ""@kiltprotocol/testing@workspace:packages/testing"" ""@kiltprotocol/types@workspace:packages/types"" ""@kiltprotocol/utils@workspace:packages/utils"" ""@kiltprotocol/vc-export@workspace:packages/vc-export"" ""@polkadot/api-augment@npm:7.15.1"" ""@polkadot/api@npm:7.15.1"" ""@polkadot/keyring@npm:8.7.1"" ""@polkadot/types-known@npm:7.15.1"" ""@polkadot/types@npm:7.15.1"" ""@polkadot/util-crypto@npm:8.7.1"" ""@polkadot/util@npm:8.7.1"" ""@types/jest@npm:27.5.2"" ""@types/jsonld@npm:1.5.1"" ""@types/uuid@npm:8.3.4"" ""@typescript-eslint/eslint-plugin@npm:5.27.0"" ""@typescript-eslint/parser@npm:5.27.0"" ""buffer@npm:6.0.3"" ""cbor@npm:8.1.0"" ""crypto-browserify@npm:3.12.0"" ""crypto-ld@npm:3.9.0"" ""eslint-config-airbnb-base@npm:14.2.1"" ""eslint-config-prettier@npm:6.15.0"" ""eslint-plugin-import@npm:2.26.0"" ""eslint-plugin-jsdoc@npm:37.9.7"" ""eslint-plugin-license-header@npm:0.2.1"" ""eslint-plugin-node@npm:11.1.0"" ""eslint-plugin-prettier@npm:3.4.1"" ""eslint@npm:7.32.0"" ""husky@npm:4.3.8"" ""jest-docblock@npm:27.5.1"" ""jest-runner-groups@npm:2.2.0"" ""jest-runner@npm:27.5.1"" ""jest@npm:27.5.1"" ""jsonld-signatures@npm:5.2.0"" ""jsonld@npm:2.0.2"" ""prettier@npm:2.6.2"" ""process@npm:0.11.10"" ""rimraf@npm:3.0.2"" ""root-workspace-0b6124@workspace:."" ""stream-browserify@npm:3.0.0"" ""terser-webpack-plugin@npm:5.3.3"" ""ts-jest-resolver@npm:2.0.0"" ""ts-jest@npm:27.1.5"" ""ts-node@npm:10.8.1"" ""tweetnacl@npm:1.0.3"" ""typedoc-plugin-external-module-name@npm:4.0.6"" ""typedoc@npm:0.22.17"" ""typescript-logging@npm:0.6.4"" ""typescript@patch:typescript@npm%3A4.7.3#~builtin::version=4.7.3&hash=32657b"" ""url@npm:0.11.0"" ""util@npm:0.12.4"" ""uuid@npm:8.3.2"" ""vc-js@npm:0.6.4"" ""webpack-cli@npm:4.9.2"" ""webpack@npm:5.73.0"" ""yargs@npm:16.2.0""
",1.0,"SDK no longer compatible with latest dependecies - ## Incompatibilities detected A [scheduled test workflow](https://github.com/KILTprotocol/sdk-js/actions/runs/2441332049) using the latest available dependencies matching our semver ranges has failed. We may need to constrain dependency ranges in our `package.json` or introduce fixes to recover compatibility. Below you can find a summary of depedency versions against which these tests were run. _Note: This issue was **automatically created** as a result of scheduled CI tests on 2022-06-05._
Dependency versions ""@commitlint/cli@npm:9.1.2"" ""@commitlint/config-conventional@npm:9.1.2"" ""@kiltprotocol/chain-helpers@workspace:packages/chain-helpers"" ""@kiltprotocol/config@workspace:packages/config"" ""@kiltprotocol/core@workspace:packages/core"" ""@kiltprotocol/did@workspace:packages/did"" ""@kiltprotocol/messaging@workspace:packages/messaging"" ""@kiltprotocol/sdk-js@workspace:packages/sdk-js"" ""@kiltprotocol/testing@workspace:packages/testing"" ""@kiltprotocol/types@workspace:packages/types"" ""@kiltprotocol/utils@workspace:packages/utils"" ""@kiltprotocol/vc-export@workspace:packages/vc-export"" ""@polkadot/api-augment@npm:7.15.1"" ""@polkadot/api@npm:7.15.1"" ""@polkadot/keyring@npm:8.7.1"" ""@polkadot/types-known@npm:7.15.1"" ""@polkadot/types@npm:7.15.1"" ""@polkadot/util-crypto@npm:8.7.1"" ""@polkadot/util@npm:8.7.1"" ""@types/jest@npm:27.5.2"" ""@types/jsonld@npm:1.5.1"" ""@types/uuid@npm:8.3.4"" ""@typescript-eslint/eslint-plugin@npm:5.27.0"" ""@typescript-eslint/parser@npm:5.27.0"" ""buffer@npm:6.0.3"" ""cbor@npm:8.1.0"" ""crypto-browserify@npm:3.12.0"" ""crypto-ld@npm:3.9.0"" ""eslint-config-airbnb-base@npm:14.2.1"" ""eslint-config-prettier@npm:6.15.0"" ""eslint-plugin-import@npm:2.26.0"" ""eslint-plugin-jsdoc@npm:37.9.7"" ""eslint-plugin-license-header@npm:0.2.1"" ""eslint-plugin-node@npm:11.1.0"" ""eslint-plugin-prettier@npm:3.4.1"" ""eslint@npm:7.32.0"" ""husky@npm:4.3.8"" ""jest-docblock@npm:27.5.1"" ""jest-runner-groups@npm:2.2.0"" ""jest-runner@npm:27.5.1"" ""jest@npm:27.5.1"" ""jsonld-signatures@npm:5.2.0"" ""jsonld@npm:2.0.2"" ""prettier@npm:2.6.2"" ""process@npm:0.11.10"" ""rimraf@npm:3.0.2"" ""root-workspace-0b6124@workspace:."" ""stream-browserify@npm:3.0.0"" ""terser-webpack-plugin@npm:5.3.3"" ""ts-jest-resolver@npm:2.0.0"" ""ts-jest@npm:27.1.5"" ""ts-node@npm:10.8.1"" ""tweetnacl@npm:1.0.3"" ""typedoc-plugin-external-module-name@npm:4.0.6"" ""typedoc@npm:0.22.17"" ""typescript-logging@npm:0.6.4"" ""typescript@patch:typescript@npm%3A4.7.3#~builtin::version=4.7.3&hash=32657b"" ""url@npm:0.11.0"" ""util@npm:0.12.4"" ""uuid@npm:8.3.2"" ""vc-js@npm:0.6.4"" ""webpack-cli@npm:4.9.2"" ""webpack@npm:5.73.0"" ""yargs@npm:16.2.0""
",1,sdk no longer compatible with latest dependecies incompatibilities detected a using the latest available dependencies matching our semver ranges has failed we may need to constrain dependency ranges in our package json or introduce fixes to recover compatibility below you can find a summary of depedency versions against which these tests were run note this issue was automatically created as a result of scheduled ci tests on dependency versions commitlint cli npm commitlint config conventional npm kiltprotocol chain helpers workspace packages chain helpers kiltprotocol config workspace packages config kiltprotocol core workspace packages core kiltprotocol did workspace packages did kiltprotocol messaging workspace packages messaging kiltprotocol sdk js workspace packages sdk js kiltprotocol testing workspace packages testing kiltprotocol types workspace packages types kiltprotocol utils workspace packages utils kiltprotocol vc export workspace packages vc export polkadot api augment npm polkadot api npm polkadot keyring npm polkadot types known npm polkadot types npm polkadot util crypto npm polkadot util npm types jest npm types jsonld npm types uuid npm typescript eslint eslint plugin npm typescript eslint parser npm buffer npm cbor npm crypto browserify npm crypto ld npm eslint config airbnb base npm eslint config prettier npm eslint plugin import npm eslint plugin jsdoc npm eslint plugin license header npm eslint plugin node npm eslint plugin prettier npm eslint npm husky npm jest docblock npm jest runner groups npm jest runner npm jest npm jsonld signatures npm jsonld npm prettier npm process npm rimraf npm root workspace workspace stream browserify npm terser webpack plugin npm ts jest resolver npm ts jest npm ts node npm tweetnacl npm typedoc plugin external module name npm typedoc npm typescript logging npm typescript patch typescript npm builtin version hash url npm util npm uuid npm vc js npm webpack cli npm webpack npm yargs npm ,1 296125,9104575656.0,IssuesEvent,2019-02-20 18:32:16,brave/browser-android-tabs,https://api.github.com/repos/brave/browser-android-tabs,closed,Translations missing and instead placeholder text is shown,QA/Yes bug/BR bug/BR-wait_for_upstream l10n priority/P1," ## Description Translations missing and instead placeholder text is shown ## Steps to Reproduce 1. Open brave 1.0.78 2. Enable Rewards 3. Accept the grants, click on your wallet 4. Observe UI issues ## Actual result: 1. `Missing` placeholder is shown 2. `tokenGrantClaimed ` is displayed instead of `Token Grants Claimed`- there is no space between texts and also `s` is missing after Grant, `t` should be caps 3. 30.0 BAT is not alinged with `Token Grants Claimed` ![screenshot_2019-02-08-15-48-16](https://user-images.githubusercontent.com/38657976/52474478-9aa81c80-2bbe-11e9-9b66-54d998c5ce5f.png) ## Expected result: same as desktop ## Issue reproduces how often: Easy ## Issue happens on: - Current Playstore version? no - Beta build?yes ## Device Details: - Install Type(ARM, x86): ARM - Device(Phone, Tablet, Phablet): Samsung Galaxy J3 - Android Version: 5.1 ## Brave version: 1.0.78 ### Website problems only: - Does the issue resolve itself when disabling Brave Shields? NA - Is the issue reproducible on the latest version of Chrome? NA ### Additional Information ",1.0,"Translations missing and instead placeholder text is shown - ## Description Translations missing and instead placeholder text is shown ## Steps to Reproduce 1. Open brave 1.0.78 2. Enable Rewards 3. 
Accept the grants, click on your wallet 4. Observe UI issues ## Actual result: 1. `Missing` placeholder is shown 2. `tokenGrantClaimed` is displayed instead of `Token Grants Claimed` - there is no space between the words, the `s` is missing after Grant, and the `t` should be capitalized 3. 30.0 BAT is not aligned with `Token Grants Claimed` ![screenshot_2019-02-08-15-48-16](https://user-images.githubusercontent.com/38657976/52474478-9aa81c80-2bbe-11e9-9b66-54d998c5ce5f.png) ## Expected result: same as desktop ## Issue reproduces how often: Easy ## Issue happens on: - Current Playstore version? no - Beta build? yes ## Device Details: - Install Type(ARM, x86): ARM - Device(Phone, Tablet, Phablet): Samsung Galaxy J3 - Android Version: 5.1 ## Brave version: 1.0.78 ### Website problems only: - Does the issue resolve itself when disabling Brave Shields? NA - Is the issue reproducible on the latest version of Chrome? NA ### Additional Information ",0,translations missing and instead placeholder text is shown have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description translations missing and instead placeholder text is shown steps to reproduce open brave enable rewards accept the grants click on your wallet observe ui issues actual result missing placeholder is shown tokengrantclaimed is displayed instead of token grants claimed there is no space between texts and also s is missing after grant t should be caps bat is not alinged with token grants claimed expected result same as desktop issue reproduces how often easy issue happens on current playstore version no beta build yes device details install type arm arm device phone tablet phablet samsung galaxy android version brave version website problems only does the issue resolve itself when disabling brave shields na is the issue reproducible on the latest version of chrome na additional information ,0 3781,14566755874.0,IssuesEvent,2020-12-17 09:20:27,submariner-io/releases,https://api.github.com/repos/submariner-io/releases,closed,Create release when release PR is merged,automation,"1. Create a release in GH (with notes) 2. Upload subctl artifacts 3. Tag images with released version and push to quay",1.0,"Create release when release PR is merged - 1. Create a release in GH (with notes) 2. Upload subctl artifacts 3. Tag images with released version and push to quay",1,create release when release pr is merged create a release in gh with notes upload subctl artifacts tag images with released version and push to quay,1 295585,25486978246.0,IssuesEvent,2022-11-26 14:36:54,apache/streampipes,https://api.github.com/repos/apache/streampipes,closed,Implementation of Pipeline Tests,ui migrated from jira test testing,"Implementing e2e test using cypress for pipeline Elements. Imported from Jira [STREAMPIPES-476](https://issues.apache.org/jira/browse/STREAMPIPES-476). Original Jira may contain additional context. 
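The three release steps in the submariner-io item above (GitHub release with notes, artifact upload, image tagging) are straightforward to drive from a script. A rough sketch using the `gh` and `docker` CLIs through Python's `subprocess`; the tag, artifact file name, and image names are placeholders, not submariner's real artifacts:

```python
import subprocess

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

def release(version: str, notes: str) -> None:
    # 1. Create a GitHub release with notes.
    run("gh", "release", "create", version, "--notes", notes)
    # 2. Upload the subctl artifact to the release (file name assumed).
    run("gh", "release", "upload", version, f"subctl-{version}-linux-amd64.tar.xz")
    # 3. Tag the image with the released version and push it to Quay.
    run("docker", "tag", "submariner:dev", f"quay.io/example/submariner:{version}")
    run("docker", "push", f"quay.io/example/submariner:{version}")

if __name__ == "__main__":
    release("v0.1.0", "Automated release")
```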
Reported by: hrushi20.",0,implementation of pipeline tests implementing test using cypress for pipeline elements     imported from jira original jira may contain additional context reported by ,0 342895,24760916525.0,IssuesEvent,2022-10-22 00:04:21,tweepy/tweepy,https://api.github.com/repos/tweepy/tweepy,closed,StreamingClient: rate limit implementation incorrect,Bug Question Duplicate Documentation,"The `BaseStream` class checks the 420 HTTP error code, while it should be 429. https://github.com/tweepy/tweepy/blob/33e444a9d13d53ea024ddb3c9da30158a39ea4f6/tweepy/streaming.py#L113 Also: the `wait_on_rate_limit` argument of `StreamingClient` is misleading, because it only impacts the `add_rules` and `delete_rules` methods. It has no impact on the `filter` or `sample` methods. Related to this: how does a rate limit exactly work for a stream? Apparently you can make 50 requests per 15 minutes to the stream endpoint, but what does this mean exactly in the context of a stream?",1.0,"StreamingClient: rate limit implementation incorrect - The `BaseStream` class checks the 420 HTTP error code, while it should be 429. https://github.com/tweepy/tweepy/blob/33e444a9d13d53ea024ddb3c9da30158a39ea4f6/tweepy/streaming.py#L113 Also: the `wait_on_rate_limit` argument of `StreamingClient` is misleading, because it only impacts the `add_rules` and `delete_rules` methods. It has no impact on the `filter` or `sample` methods. Related to this: how does a rate limit exactly work for a stream? Apparently you can make 50 requests per 15 minutes to the stream endpoint, but what does this mean exactly in the context of a stream?",0,streamingclient rate limit implementation incorrect the basestream class checks the http error code while it should be also the wait on rate limit argument of streamingclient is misleading because it only impacts the add rules and delete rules methods it has no impact on the filter or sample methods related to this how does a rate limit exactly work for a stream apparently you can make requests per minutes to the stream endpoint but what does this mean exactly in the context of a stream ,0 6352,22840730520.0,IssuesEvent,2022-07-12 21:31:54,tunglinn/website,https://api.github.com/repos/tunglinn/website,closed,Change PR instruction comment,good first issue automation Feature: Board/GitHub Maintenance role: back end/devOps size: 0.5pt,"### Overview Test ### Action Items - [ ] change PR instruction comment ### Resources/Instructions edit github-actions/pr-instructions/pr-instructions-template.md ",1.0,"Change PR instruction comment - ### Overview Test ### Action Items - [ ] change PR instruction comment ### Resources/Instructions edit github-actions/pr-instructions/pr-instructions-template.md ",1,change pr instruction comment overview test action items change pr instruction comment resources instructions edit github actions pr instructions pr instructions template md ,1 7824,25761076085.0,IssuesEvent,2022-12-08 20:35:39,uselagoon/lagoon,https://api.github.com/repos/uselagoon/lagoon,closed,Run SSH pod in each lagoon-remote,1-api-auth 8-automation-helpers Lagoon3.0,"Currently, all SSH connections run via the SSH pod in the same location that all other Lagoon services run. While this simplifies the auto-generation of Drush aliases, the experience of customers using SSH from locations distant from the main cluster is impacted by this. 
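Regarding the tweepy report above: Twitter's v2 API signals rate limiting with HTTP 429, so a reconnect loop keyed on the legacy 420 code never backs off. A minimal sketch of the corrected check; `connect_once` is a hypothetical stand-in for one streaming connection attempt, not tweepy's real internals:

```python
import time

TOO_MANY_REQUESTS = 429  # current rate-limit status; 420 is the legacy code

def stream_with_backoff(connect_once, max_attempts=5):
    """Retry a streaming connection, sleeping longer after each 429."""
    delay = 60.0
    for _ in range(max_attempts):
        status = connect_once()
        if status != TOO_MANY_REQUESTS:
            return status
        time.sleep(delay)
        delay = min(delay * 2, 900.0)  # never wait longer than the 15-minute window
    raise RuntimeError(f"still rate limited after {max_attempts} attempts")
```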
Each cluster should have its own SSH pod running, and the auto-generated drush aliases should prefer this endpoint for any SSH interaction.",1.0,"Run SSH pod in each lagoon-remote - Currently, all SSH connections run via the SSH pod in the same location that all other Lagoon services run. While this simplifies the auto-generation of Drush aliases, the experience of customers using SSH from locations distant from the main cluster is impacted by this. Each cluster should have its own SSH pod running, and the auto-generated drush aliases should prefer this endpoint for any SSH interaction.",1,run ssh pod in each lagoon remote currently all ssh connections run via the ssh pod in the same location that all other lagoon services run while this simplifies the auto generation of drush aliases the experience of customers using ssh from locations distant from the main cluster is impacted by this each cluster should have it s own ssh pod running and the auto generated drush aliases should prefer this endpoint for any ssh interaction ,1 2217,7494594815.0,IssuesEvent,2018-04-07 11:45:44,cyder/SyncPod-Android,https://api.github.com/repos/cyder/SyncPod-Android,closed,Make the failed-login message nicer.,doing new-architecture,Since a `resId` can be passed in, this could be written nicely with a sealed class and a `when` expression, or something like an enum class would also work - anything goes.,1.0,Make the failed-login message nicer. - Since a `resId` can be passed in, this could be written nicely with a sealed class and a `when` expression, or something like an enum class would also work - anything goes.,0,make the failed login message nicer since a resid can be passed in this could be written nicely with a sealed class and a when expression or something like an enum class would also work anything goes,0 6977,24079175046.0,IssuesEvent,2022-09-19 03:42:24,pingcap/tidb,https://api.github.com/repos/pingcap/tidb,closed,`inl_hash_join` hint has not taken effect,type/bug sig/planner type/regression severity/major found/automation,"## Bug Report Please answer these questions before submitting your issue. Thanks! ### 1. Minimal reproduce step (Required) ```sql CREATE TABLE `t` ( `id` int primary key, `a` bigint(20) DEFAULT NULL, `b` char(20) DEFAULT NULL, `c` datetime DEFAULT NULL, `d` double DEFAULT NULL, `e` json DEFAULT NULL, `f` decimal(40,6) DEFAULT NULL, KEY `a` (`a`), KEY `b` (`b`), KEY `c` (`c`), KEY `d` (`d`), KEY `f` (`f`)) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin; explain select /*+ inl_hash_join (t1) */ * from t t1 join t t2 on t1.d=t2.e; ``` ### 2. What did you expect to see? (Required) using index hash join ### 3. 
What did you see instead (Required) ```sql -------------------------+----------+-----------+---------------+---------------------------------------------------------------------------------------------------------------+ | id | estRows | task | access object | operator info | +-------------------------+----------+-----------+---------------+---------------------------------------------------------------------------------------------------------------+ | HashJoin_8 | 12500.00 | root | | inner join, equal:[eq(Column#15, test.t.e)] | | ├─TableReader_14(Build) | 10000.00 | root | | data:TableFullScan_13 | | │ └─TableFullScan_13 | 10000.00 | cop[tikv] | table:t2 | keep order:false, stats:pseudo | | └─Projection_10(Probe) | 10000.00 | root | | test.t.id, test.t.a, test.t.b, test.t.c, test.t.d, test.t.e, test.t.f, cast(test.t.d, json BINARY)->Column#15 | | └─TableReader_12 | 10000.00 | root | | data:TableFullScan_11 | | └─TableFullScan_11 | 10000.00 | cop[tikv] | table:t1 | keep order:false, stats:pseudo | +-------------------------+----------+-----------+---------------+---------------------------------------------------------------------------------------------------------------+ ``` ### 4. What is your TiDB version? (Required) ```sql MySQL root@127.0.0.1:test> select tidb_version()\G ***************************[ 1. row ]*************************** tidb_version() | Release Version: v6.3.0-alpha Edition: Community Git Commit Hash: 899bd79686f677f531caa165053a28aaea3191c9 Git Branch: heads/refs/tags/v6.3.0-alpha UTC Build Time: 2022-09-13 14:25:20 GoVersion: go1.19 Race Enabled: false TiKV Min Version: 6.2.0-alpha Check Table Before Drop: false Store: unistore ``` ",1.0,"`inl_hash_join` hint has not taken effect - ## Bug Report Please answer these questions before submitting your issue. Thanks! ### 1. Minimal reproduce step (Required) ```sql CREATE TABLE `t` ( `id` int primary key, `a` bigint(20) DEFAULT NULL, `b` char(20) DEFAULT NULL, `c` datetime DEFAULT NULL, `d` double DEFAULT NULL, `e` json DEFAULT NULL, `f` decimal(40,6) DEFAULT NULL, KEY `a` (`a`), KEY `b` (`b`), KEY `c` (`c`), KEY `d` (`d`), KEY `f` (`f`)) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin; explain select /*+ inl_hash_join (t1) */ * from t t1 join t t2 on t1.d=t2.e; ``` ### 2. What did you expect to see? (Required) using index hash join ### 3. 
What did you see instead (Required) ```sql -------------------------+----------+-----------+---------------+---------------------------------------------------------------------------------------------------------------+ | id | estRows | task | access object | operator info | +-------------------------+----------+-----------+---------------+---------------------------------------------------------------------------------------------------------------+ | HashJoin_8 | 12500.00 | root | | inner join, equal:[eq(Column#15, test.t.e)] | | ├─TableReader_14(Build) | 10000.00 | root | | data:TableFullScan_13 | | │ └─TableFullScan_13 | 10000.00 | cop[tikv] | table:t2 | keep order:false, stats:pseudo | | └─Projection_10(Probe) | 10000.00 | root | | test.t.id, test.t.a, test.t.b, test.t.c, test.t.d, test.t.e, test.t.f, cast(test.t.d, json BINARY)->Column#15 | | └─TableReader_12 | 10000.00 | root | | data:TableFullScan_11 | | └─TableFullScan_11 | 10000.00 | cop[tikv] | table:t1 | keep order:false, stats:pseudo | +-------------------------+----------+-----------+---------------+---------------------------------------------------------------------------------------------------------------+ ``` ### 4. What is your TiDB version? (Required) ```sql MySQL root@127.0.0.1:test> select tidb_version()\G ***************************[ 1. row ]*************************** tidb_version() | Release Version: v6.3.0-alpha Edition: Community Git Commit Hash: 899bd79686f677f531caa165053a28aaea3191c9 Git Branch: heads/refs/tags/v6.3.0-alpha UTC Build Time: 2022-09-13 14:25:20 GoVersion: go1.19 Race Enabled: false TiKV Min Version: 6.2.0-alpha Check Table Before Drop: false Store: unistore ``` ",1, inl hash join hint has not taken effect bug report please answer these questions before submitting your issue thanks minimal reproduce step required sql create table t id int primary key a bigint default null b char default null c datetime default null d double default null e json default null f decimal default null key a a key b b key c c key d d key f f engine innodb default charset collate bin explain select inl hash join from t join t on d e what did you expect to see required using index hash join what did you see instead required sql id estrows task access object operator info hashjoin root inner join equal ├─tablereader build root data tablefullscan │ └─tablefullscan cop table keep order false stats pseudo └─projection probe root test t id test t a test t b test t c test t d test t e test t f cast test t d json binary column └─tablereader root data tablefullscan └─tablefullscan cop table keep order false stats pseudo what is your tidb version required sql mysql root test select tidb version g tidb version release version alpha edition community git commit hash git branch heads refs tags alpha utc build time goversion race enabled false tikv min version alpha check table before drop false store unistore ,1 2003,11255462327.0,IssuesEvent,2020-01-12 09:28:48,matchID-project/backend,https://api.github.com/repos/matchID-project/backend,closed,automation - ajoute la capacité à interpréter les variables d'environnement dans les conf yaml,automation,"Pour faciliter la configuration via Make/Docker (au run), permettre l'application de variables d'environnement au sein du Yaml. 
S'inspirer de : https://medium.com/swlh/python-yaml-configuration-with-environment-variables-parsing-77930f4273ac Modèle de déclaration cible : ``` maconf: !ENV ${ENV_VARIABLE} ``` ",1.0,"automation - ajoute la capacité à interpréter les variables d'environnement dans les conf yaml - Pour faciliter la configuration via Make/Docker (au run), permettre l'application de variables d'environnement au sein du Yaml. S'inspirer de : https://medium.com/swlh/python-yaml-configuration-with-environment-variables-parsing-77930f4273ac Modèle de déclaration cible : ``` maconf: !ENV ${ENV_VARIABLE} ``` ",1,automation ajoute la capacité à interpréter les variables d environnement dans les conf yaml pour faciliter la configuration via make docker au run permettre l application de variables d environnement au sein du yaml s inspirer de modèle de déclaration cible maconf env env variable ,1 11224,16656077096.0,IssuesEvent,2021-06-05 14:52:30,ReedFamily/hebamme-web,https://api.github.com/repos/ReedFamily/hebamme-web,closed,Create a themed 404 page.,requirement,The page should contain the same textual formatting and colors as the main page and should provide a link to return to the main page. The page shall be named 404.html and resides next to the index.html. It will be called when someone attempts to browse the backend code files directly.,1.0,Create a themed 404 page. - The page should contain the same textual formatting and colors as the main page and should provide a link to return to the main page. The page shall be named 404.html and resides next to the index.html. It will be called when someone attempts to browse the backend code files directly.,0,create a themed page the page should contain the same textual formatting and colors as the main page and should provide a link to return to the main page the page shall be named html and resides next to the index html it will be called when someone attempts to browse the backend code files directly ,0 375952,11136376332.0,IssuesEvent,2019-12-20 16:27:15,ntop/ntopng,https://api.github.com/repos/ntop/ntopng,closed,Applications menu is inconsistent,low-priority bug,"I have both `TLS.Facebook` and `DNS.Facebook` flows. The menu however only shows `Facebook` and `DNS` entries, `TLS` is missing (note: clicking DNS brings up both DNS only and DNS.Facebook flows): ![photo_2019-09-30_16-56-26](https://user-images.githubusercontent.com/5488003/65890562-966d8100-e392-11e9-96c5-1c7c7f7319a7.jpg) Manually changing the URL to `application=TLS` shows this TLS flow: ![2019-09-30_16-57](https://user-images.githubusercontent.com/5488003/65890617-b00ec880-e392-11e9-8d6e-b822ced443aa.png) But the TLS menu entry is still hidden and `TLS.Facebook` flows are not reported.",1.0,"Applications menu is inconsistent - I have both `TLS.Facebook` and `DNS.Facebook` flows. 
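The `!ENV ${ENV_VARIABLE}` syntax proposed in the matchID item above can be implemented with a custom PyYAML tag, along the lines of the linked article. A sketch, assuming PyYAML is installed; the `maconf` key is just the example from the issue:

```python
import os
import re
import yaml

ENV_VAR = re.compile(r"\$\{([^}]+)\}")

def construct_env(loader, node):
    # Replace every ${VAR} occurrence with its value from the environment,
    # falling back to an empty string when the variable is unset.
    raw = loader.construct_scalar(node)
    return ENV_VAR.sub(lambda m: os.environ.get(m.group(1), ""), raw)

class EnvLoader(yaml.SafeLoader):
    """SafeLoader that expands !ENV-tagged scalars."""

EnvLoader.add_implicit_resolver("!ENV", ENV_VAR, None)
EnvLoader.add_constructor("!ENV", construct_env)

config = yaml.load("maconf: !ENV ${ENV_VARIABLE}", Loader=EnvLoader)
print(config)  # {'maconf': '<value of ENV_VARIABLE>'}
```

Registering the resolver on a SafeLoader subclass keeps the behavior scoped to configuration files instead of changing every `yaml.safe_load` call in the process.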
The menu however only shows `Facebook` and `DNS` entries, `TLS` is missing (note: clicking DNS brings up both DNS only and DNS.Facebook flows): ![photo_2019-09-30_16-56-26](https://user-images.githubusercontent.com/5488003/65890562-966d8100-e392-11e9-96c5-1c7c7f7319a7.jpg) Manually changing the URL to `application=TLS` shows this TLS flow: ![2019-09-30_16-57](https://user-images.githubusercontent.com/5488003/65890617-b00ec880-e392-11e9-8d6e-b822ced443aa.png) But the TLS menu entry is still hidden and `TLS.Facebook` flows are not reported.",0,applications menu is inconsistent i have both tls facebook and dns facebook flows the menu however only shows facebook and dns entries tls is missing note clicking dns brings up both dns only and dns facebook flows manually changing the url to application tls shows this tls flow but the tls menu entry is still hidden and tls facebook flows are not reported ,0 277738,24099250490.0,IssuesEvent,2022-09-19 22:02:21,CliMA/ClimaCore.jl,https://api.github.com/repos/CliMA/ClimaCore.jl,opened,Extend tests for TempestRemap v2.1.4,testcase tests,"TempestRemap was recently updated to release v2.1.4, which is now capable of doing monotone remappings of order `Nq=5`. We need to extend our tests to ensure that these remappings still work as expected. Related issues/PRs: - ClimaCore #895 - [ClimaCoupler #108](https://github.com/CliMA/ClimaCoupler.jl/issues/108) - [ClimaCoupler #114 ](https://github.com/CliMA/ClimaCoupler.jl/pull/114) ",2.0,"Extend tests for TempestRemap v2.1.4 - TempestRemap was recently updated to release v2.1.4, which is now capable of doing monotone remappings of order `Nq=5`. We need to extend our tests to ensure that these remappings still work as expected. Related issues/PRs: - ClimaCore #895 - [ClimaCoupler #108](https://github.com/CliMA/ClimaCoupler.jl/issues/108) - [ClimaCoupler #114 ](https://github.com/CliMA/ClimaCoupler.jl/pull/114) ",0,extend tests for tempestremap tempestremap was recently updated to release which is now capable of doing monotone remappings of order nq we need to extend our tests to ensure that these remappings still work as expected related issues prs climacore ,0 66841,14798976493.0,IssuesEvent,2021-01-13 01:07:38,LevyForchh/t-vault,https://api.github.com/repos/LevyForchh/t-vault,opened,CVE-2020-24025 (Medium) detected in node-sass-4.14.0.tgz,security vulnerability,"## CVE-2020-24025 - Medium Severity Vulnerability
Vulnerable Library - node-sass-4.14.0.tgz

Wrapper around libsass

Library home page: https://registry.npmjs.org/node-sass/-/node-sass-4.14.0.tgz

Path to dependency file: t-vault/tvaultui/package.json

Path to vulnerable library: t-vault/tvaultui/node_modules/node-sass/package.json

Dependency Hierarchy:
- gulp-sass-3.1.0.tgz (Root Library)
  - :x: **node-sass-4.14.0.tgz** (Vulnerable Library)

Vulnerability Details

Certificate validation in node-sass 2.0.0 to 4.14.1 is disabled when requesting binaries even if the user is not specifying an alternative download path.
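Given the affected window of 2.0.0 through 4.14.1 above, a quick check of whether a pinned copy falls in range is easy to script. A sketch; the lockfile path and its layout are assumptions:

```python
import json

AFFECTED_LOW, AFFECTED_HIGH = (2, 0, 0), (4, 14, 1)

def parse(version: str):
    # Naive x.y.z parser; prerelease suffixes are not handled in this sketch.
    return tuple(int(part) for part in version.split("."))

def is_affected(version: str) -> bool:
    # CVE-2020-24025 affects node-sass 2.0.0 through 4.14.1 inclusive.
    return AFFECTED_LOW <= parse(version) <= AFFECTED_HIGH

with open("package-lock.json") as f:
    lock = json.load(f)
entry = lock.get("dependencies", {}).get("node-sass")
if entry and is_affected(entry["version"]):
    print("node-sass", entry["version"], "is in the vulnerable range")
```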

Publish Date: 2021-01-11

URL: [CVE-2020-24025](https://nvd.nist.gov/vuln/detail/CVE-2020-24025)

CVSS 3 Score Details (5.3)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: Low
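The 5.3 base score follows from those metrics under the CVSS v3.1 formula (vector AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L). A worked check in Python using the published coefficients:

```python
import math

def roundup(x: float) -> float:
    return math.ceil(x * 10) / 10  # CVSS "round up to one decimal place"

# CVSS v3.1 metric weights for this vector (scope unchanged).
av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85   # Network / Low / None / None
c, i, a = 0.0, 0.0, 0.22                  # None / None / Low

iss = 1 - (1 - c) * (1 - i) * (1 - a)      # = 0.22
impact = 6.42 * iss                        # = 1.4124
exploitability = 8.22 * av * ac * pr * ui  # ~ 3.887

base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 5.3
```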

For more information on CVSS3 scores, see https://www.first.org/cvss/.

",True,"CVE-2020-24025 (Medium) detected in node-sass-4.14.0.tgz - ## CVE-2020-24025 - Medium Severity Vulnerability
Vulnerable Library - node-sass-4.14.0.tgz

Wrapper around libsass

Library home page: https://registry.npmjs.org/node-sass/-/node-sass-4.14.0.tgz

Path to dependency file: t-vault/tvaultui/package.json

Path to vulnerable library: t-vault/tvaultui/node_modules/node-sass/package.json

Dependency Hierarchy:
- gulp-sass-3.1.0.tgz (Root Library)
  - :x: **node-sass-4.14.0.tgz** (Vulnerable Library)

Vulnerability Details

Certificate validation in node-sass 2.0.0 to 4.14.1 is disabled when requesting binaries even if the user is not specifying an alternative download path.

Publish Date: 2021-01-11

URL: [CVE-2020-24025](https://nvd.nist.gov/vuln/detail/CVE-2020-24025)

CVSS 3 Score Details (5.3)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: Low

For more information on CVSS3 scores, see https://www.first.org/cvss/.

",0,cve medium detected in node sass tgz cve medium severity vulnerability vulnerable library node sass tgz wrapper around libsass library home page a href path to dependency file t vault tvaultui package json path to vulnerable library t vault tvaultui node modules node sass package json dependency hierarchy gulp sass tgz root library x node sass tgz vulnerable library vulnerability details certificate validation in node sass to is disabled when requesting binaries even if the user is not specifying an alternative download path publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails certificate validation in node sass to is disabled when requesting binaries even if the user is not specifying an alternative download path vulnerabilityurl ,0 453,3385358186.0,IssuesEvent,2015-11-27 11:01:40,openETCS/toolchain,https://api.github.com/repos/openETCS/toolchain,closed,Review of tracability Architecture (ends 12-Nov-2015),US-Traceabiliy-Architecture,"Here my comments on the document linked to #504 - § 1.1 and Fig 2: - in the figure are mixed functionnal, HW, procedural,... requirements, at the top level (for example from User stories or Cenelec Standard) and all seems to be derived up to SW level (I understand that only specification and design of SW appear on the figure, not the Validation). But I think that lots of the initial requirements can not be derived on Sw, but on other activities (quality or project plan, Validation,...) or subsystems (HW, API,...); How it is plan to take into account these exported requirements ? >> Agree. ""Derive"" is not the right general term for all the arrows. Changed figure 1and used ""transform"" instead of ""derive"" and better explained that initial requirements are transformed to subsystem and then HW or SW or data or procedures. I improved fig 2 with better alignement on EN 50128:2011 and used only term ""input for"" for relations between artefacts at this stage of the document. >> V&V not shown at this stage of the document. Added as a note. - some non-functional requirements can be introduced (or derived from Cenelec standards) in openETCS quality or project plans. >> Yes. Do you think we need to show quality and project plans for this document? will those artefacts be >>traced to requirements? - in the fig 2 it seems there is a direct traceability between SRS and Cenelec (orange arrow): I am not agree. >> Removed. I removed initial arrows coming from ISO 15288 vision and focused now on OpenETCS >>only. ISO15288 was just a way to introduce engineering levels and help me understanding scope of >>different requirements and models by asking partners the position in those levels. in the current state of SRS it is difficult to explicitly defined a traceability between this document and stakeholders requirements. I consider more the SRS in midway between stakeholders requirement and a real System specification, I will put it in parallel of Cenelec and User stories. >> OK. Done. - I think validation are missing in fig 1 and 2: lots of requirements can not be derived up to SW code only, but will be link to the different test or V&V phases. >> OK. 
Which openETCS document can I read to add missing information? - §1.2 and Fig4, it is necessary to clarify the data dictionary model and how it is defined (textual, SysML, Scade?) as a Scade representation of it is one of the SW models. >> OK. ToDo. -§2.2.1: - Please give a clear definition of the meaning of the different arrows (for example ""refines"" seems to correspond to a SysML definition which is very different from a classical formal definition of ""refines""). - Why is ""Documentation"" an activity? - Why does ""V&V"" not ""use"" the requirement database? - The meaning of the arrows is not clear to me, so I do not understand why there is no link between the System model and the requirement database, or between the functional model and the requirement database. The figure needs some comments as it is not self-sufficient for those who are not used to these notations. >> perfectly agree. I had almost the same remarks as you when reading this figure the first time, and I did >>not dare to remove it until now because it was not mine and because I thought it was ""validated"" after >>a previous review. As soon as I can express the traceability process through other diagrams that are easier to >>understand, I will remove this initial figure. - §2.2.2: This means we consider only functional requirements. User stories, SRS, API or Cenelec are far from containing only functional requirements. >> yes, because I wanted to focus on the functional formal model that seemed to be ""functional"". But I >>understand that this model is also behavioral and that we target an executable model, so it contains >>non-functional requirements. Will update this scenario with other non-functional requirements taken >>into account. - Fig 7: I do not think that the ""openETCS system designer"" is in charge of all the actions. Typically ""trace model element to SRS"" is done by the SW designer, ""Create verification view"" by a verifier... >> OK. This was a ""generic"" term used to simplify the diagram (showing several actors would make it too >>large). I will use a more generic term and will specify the different possible roles according to activities. - §1 and 2: It might be worth having a look at the QA plan (WP1 https://github.com/openETCS/governance/blob/master/QA%20Plan/D1.3.1_QA_Plan.pdf), the definition plan (WP2 https://github.com/openETCS/requirements) and the safety plan (WP4 https://github.com/openETCS/validation/tree/master/Reports/D4.2) to get a better view of what would be expected at the beginning of the project. >> OK. thanks for the reference. - §3 OK for me. -§4.2.3, for the moment the tool is Scade studio (v16.2) >> mistake. fixed. - §5, in the view of the openETCS toolchain, which is totally open, I agree with the left branch (ProR linked to Papyrus). However, in practice the SysML model has been made with Scade System, which contains an old version of Papyrus not really compatible with the one in the openETCS toolchain. In this case I am not sure that ProR can be used at system level (which does not allow us to have an open-source tool for traceability!) >> OK. will take that into account. - §5.1.2: How is the first sentence ""If the establishment....."" identified? Are we sure that we shall always split such a requirement into different sub-requirements with different Ids? Are we not going to lose information (for example, in this case, that ALL the sequence of actions shall be made in a given order)? >> This is initial text (I did not change that assuming that it was validated). I'll look at your point. - §5, 6 and 7: Three solutions are proposed: -why? 
maybe an introduction in the document is missing to explain its contents and why 3 solutions are proposed >> Well: that might be a question of document organization. First version of document mentioned 1 first >> solution and I understood that this traceability solution was far from being perfect. So I have decided >> to investigate on possible improvements through alternate solutions. >> If this document reflects what IS DONE in the project, then I must focus on the reality only and >>perhaps conclude the document with ""current limits"". In that case I can create another document that >>would be ""proposals for improvements of traceability support by the tool chain"". - some parts of some solutions are already implemented or largely analyzed (eg. link between ProR and payprus, use of genDoc...) other seems just propositions. It will be nice to have a clear view of what exists and can be used right now, and other elements. >> OK. I will distinguish between existing (tested) solutions and ideas for improvements. To continue depending updating and comments.",1.0,"Review of tracability Architecture (ends 12-Nov-2015) - Here my comments on the document linked to #504 - § 1.1 and Fig 2: - in the figure are mixed functionnal, HW, procedural,... requirements, at the top level (for example from User stories or Cenelec Standard) and all seems to be derived up to SW level (I understand that only specification and design of SW appear on the figure, not the Validation). But I think that lots of the initial requirements can not be derived on Sw, but on other activities (quality or project plan, Validation,...) or subsystems (HW, API,...); How it is plan to take into account these exported requirements ? >> Agree. ""Derive"" is not the right general term for all the arrows. Changed figure 1and used ""transform"" instead of ""derive"" and better explained that initial requirements are transformed to subsystem and then HW or SW or data or procedures. I improved fig 2 with better alignement on EN 50128:2011 and used only term ""input for"" for relations between artefacts at this stage of the document. >> V&V not shown at this stage of the document. Added as a note. - some non-functional requirements can be introduced (or derived from Cenelec standards) in openETCS quality or project plans. >> Yes. Do you think we need to show quality and project plans for this document? will those artefacts be >>traced to requirements? - in the fig 2 it seems there is a direct traceability between SRS and Cenelec (orange arrow): I am not agree. >> Removed. I removed initial arrows coming from ISO 15288 vision and focused now on OpenETCS >>only. ISO15288 was just a way to introduce engineering levels and help me understanding scope of >>different requirements and models by asking partners the position in those levels. in the current state of SRS it is difficult to explicitly defined a traceability between this document and stakeholders requirements. I consider more the SRS in midway between stakeholders requirement and a real System specification, I will put it in parallel of Cenelec and User stories. >> OK. Done. - I think validation are missing in fig 1 and 2: lots of requirements can not be derived up to SW code only, but will be link to the different test or V&V phases. >> OK. Which openETCS document can I read to add missing information? - §1.2 and Fig4 , It is necessary to clarify the data dictionary model and how it is defined (textual, SysML, Scade ?) as a Scade representation of it is one of the SW model. >> OK. 
ToDo. -§2.2.1: - Please give clearly definition of the mining of the different arrows (for example ""refines"" seems to correspond to a SysML definition which is very different from a classical formal definition of ""refines""). - why ""Documentation"" is an activity ? - why ""V&V"" do not ""use"" the requirement database ? - meaning of the arrows are not clear for me, so I do not understand why there are no linked between System model and requirement database or functional model and requirement data base. The figure need some comments as it is not self-sufficient for those who are not used of these notations. >> perfectly agree. I had almost same remarks than you when reading this figure the first time and I did >>not dare to remove it until now because it was not mine and because I thought it was ""validated"" after >>a previous review. As soon as I can express the traceability process through other diagrams easier to >>understand I will remove this initial figure. - §2.2.2: This means we consider only functional requirements. User stories, SRS, API or Cenelec are far to contain only functional requirements. >> yes because I wanted to focus on Functional formal model that seemed to be ""functional"". But I >>understand that this model is also behavioral and that we target an executable model, so containing >>non functional requirements. Will update this scenario with other non functional requirements taken >>into account. - Fig 7 : I do not think that the ""openETCS system designer"" is in charge of all the actions. Typically ""trace model element to SRS"" is made by SW designer, ""Create verification view"" by a verificator.... >> OK. This was a ""generic"" term used to simplify diagram (showing several actors would make it too >>large). I will use a more generic term and will precise the different possible roles according to activities. - §1 and 2 : Maybe it will be nice to have a look on QA plan (WP1 https://github.com/openETCS/governance/blob/master/QA%20Plan/D1.3.1_QA_Plan.pdf), definition plan (WP2 https://github.com/openETCS/requirements) and safety plan (WP4 https://github.com/openETCS/validation/tree/master/Reports/D4.2) to have a better view of what would be expected at the beginning of the project. >> OK. thanks for the reference. - §3 Ok for me. -§4.2.3, for the moment the tool is Scade studio (v16.2) >> mistake. fixed. - §5, in the view of the openETCS toolchain, totally open, I am agree with the left branch (ProR linked to papyrus). However in practice the sysML model has been made with Scade system which contains an old version of papyrus not really compatible with the one in openETCS toolchain. In this case I'am not sure that ProR can be used at system level (which do not allow us to have an open-source tool for traceability !) >> OK. will take that into account. - § 5.1.2: How is identify the first sentence ""If the establishment....."" ? Are we sure that we shall always share such a requirement in different sub requirements with different Id ? Are we not going to lost information (for example in this case that ALL the sequence of actions shall be made in a given order) ? >> This is initial text (I did not change that assuming that it was validated). I'll look at your point. - §5, 6 and 7: Three solutions are proposed: -why ? maybe an introduction in the document is missing to explain its contents and why 3 solutions are proposed >> Well: that might be a question of document organization. 
First version of document mentioned 1 first >> solution and I understood that this traceability solution was far from being perfect. So I have decided >> to investigate on possible improvements through alternate solutions. >> If this document reflects what IS DONE in the project, then I must focus on the reality only and >>perhaps conclude the document with ""current limits"". In that case I can create another document that >>would be ""proposals for improvements of traceability support by the tool chain"". - some parts of some solutions are already implemented or largely analyzed (eg. link between ProR and payprus, use of genDoc...) other seems just propositions. It will be nice to have a clear view of what exists and can be used right now, and other elements. >> OK. I will distinguish between existing (tested) solutions and ideas for improvements. To continue depending updating and comments.",0,review of tracability architecture ends nov here my comments on the document linked to § and fig in the figure are mixed functionnal hw procedural requirements at the top level for example from user stories or cenelec standard and all seems to be derived up to sw level i understand that only specification and design of sw appear on the figure not the validation but i think that lots of the initial requirements can not be derived on sw but on other activities quality or project plan validation or subsystems hw api how it is plan to take into account these exported requirements agree derive is not the right general term for all the arrows changed figure used transform instead of derive and better explained that initial requirements are transformed to subsystem and then hw or sw or data or procedures i improved fig with better alignement on en and used only term input for for relations between artefacts at this stage of the document v v not shown at this stage of the document added as a note some non functional requirements can be introduced or derived from cenelec standards in openetcs quality or project plans yes do you think we need to show quality and project plans for this document will those artefacts be traced to requirements in the fig it seems there is a direct traceability between srs and cenelec orange arrow i am not agree removed i removed initial arrows coming from iso vision and focused now on openetcs only was just a way to introduce engineering levels and help me understanding scope of different requirements and models by asking partners the position in those levels in the current state of srs it is difficult to explicitly defined a traceability between this document and stakeholders requirements i consider more the srs in midway between stakeholders requirement and a real system specification i will put it in parallel of cenelec and user stories ok done i think validation are missing in fig and lots of requirements can not be derived up to sw code only but will be link to the different test or v v phases ok which openetcs document can i read to add missing information § and it is necessary to clarify the data dictionary model and how it is defined textual sysml scade as a scade representation of it is one of the sw model ok todo § please give clearly definition of the mining of the different arrows for example refines seems to correspond to a sysml definition which is very different from a classical formal definition of refines why documentation is an activity why v v do not use the requirement database meaning of the arrows are not clear for me so i do not understand why there are 
no linked between system model and requirement database or functional model and requirement data base the figure need some comments as it is not self sufficient for those who are not used of these notations perfectly agree i had almost same remarks than you when reading this figure the first time and i did not dare to remove it until now because it was not mine and because i thought it was validated after a previous review as soon as i can express the traceability process through other diagrams easier to understand i will remove this initial figure § this means we consider only functional requirements user stories srs api or cenelec are far to contain only functional requirements yes because i wanted to focus on functional formal model that seemed to be functional but i understand that this model is also behavioral and that we target an executable model so containing non functional requirements will update this scenario with other non functional requirements taken into account fig i do not think that the openetcs system designer is in charge of all the actions typically trace model element to srs is made by sw designer create verification view by a verificator ok this was a generic term used to simplify diagram showing several actors would make it too large i will use a more generic term and will precise the different possible roles according to activities § and maybe it will be nice to have a look on qa plan definition plan and safety plan to have a better view of what would be expected at the beginning of the project ok thanks for the reference § ok for me § for the moment the tool is scade studio mistake fixed § in the view of the openetcs toolchain totally open i am agree with the left branch pror linked to papyrus however in practice the sysml model has been made with scade system which contains an old version of papyrus not really compatible with the one in openetcs toolchain in this case i am not sure that pror can be used at system level which do not allow us to have an open source tool for traceability ok will take that into account § how is identify the first sentence if the establishment are we sure that we shall always share such a requirement in different sub requirements with different id are we not going to lost information for example in this case that all the sequence of actions shall be made in a given order this is initial text i did not change that assuming that it was validated i ll look at your point § and three solutions are proposed why maybe an introduction in the document is missing to explain its contents and why solutions are proposed well that might be a question of document organization first version of document mentioned first solution and i understood that this traceability solution was far from being perfect so i have decided to investigate on possible improvements through alternate solutions if this document reflects what is done in the project then i must focus on the reality only and perhaps conclude the document with current limits in that case i can create another document that would be proposals for improvements of traceability support by the tool chain some parts of some solutions are already implemented or largely analyzed eg link between pror and payprus use of gendoc other seems just propositions it will be nice to have a clear view of what exists and can be used right now and other elements ok i will distinguish between existing tested solutions and ideas for improvements to continue depending updating and comments ,0 
53548,6334632594.0,IssuesEvent,2017-07-26 17:04:49,coreos/etcd,https://api.github.com/repos/coreos/etcd,closed,TestTLSReloadAtomicReplace: grpc: timed out when dialing,area/testing,"via semaphore https://semaphoreci.com/coreos/etcd/branches/pull-request-8213/builds/2 ``` --- FAIL: TestTLSReloadAtomicReplace (2.88s) v3_grpc_test.go:1790: expected no error, got grpc: timed out when dialing ```",1.0,"TestTLSReloadAtomicReplace: grpc: timed out when dialing - via semaphore https://semaphoreci.com/coreos/etcd/branches/pull-request-8213/builds/2 ``` --- FAIL: TestTLSReloadAtomicReplace (2.88s) v3_grpc_test.go:1790: expected no error, got grpc: timed out when dialing ```",0,testtlsreloadatomicreplace grpc timed out when dialing via semaphore fail testtlsreloadatomicreplace grpc test go expected no error got grpc timed out when dialing ,0 3848,14708845997.0,IssuesEvent,2021-01-05 00:45:58,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,closed,[Automation API] Support recovery workflow for nodejs,area/automation-api language/javascript,We should expose export/import/cancel similar to what we added for Go https://github.com/pulumi/pulumi/pull/5369,1.0,[Automation API] Support recovery workflow for nodejs - We should expose export/import/cancel similar to what we added for Go https://github.com/pulumi/pulumi/pull/5369,1, support recovery workflow for nodejs we should expose export import cancel similar to what we added for go ,1 5428,19583979302.0,IssuesEvent,2022-01-05 02:49:10,ccodwg/Covid19CanadaBot,https://api.github.com/repos/ccodwg/Covid19CanadaBot,opened,Improvements to update validation,data-validation automation messaging,"- [ ] Condense top-line summary to use abbreviations (e.g., for cumulative, 7-day avg) - [ ] Align numbers if possible to make it easier to read",1.0,"Improvements to update validation - - [ ] Condense top-line summary to use abbreviations (e.g., for cumulative, 7-day avg) - [ ] Align numbers if possible to make it easier to read",1,improvements to update validation condense top line summary to use abbreviations e g for cumulative day avg align numbers if possible to make it easier to read,1 45894,7208625526.0,IssuesEvent,2018-02-07 04:15:20,GoogleCloudPlatform/google-cloud-java,https://api.github.com/repos/GoogleCloudPlatform/google-cloud-java,closed,BigQuery Data Transfer: ListDataSourcesRequest not documented / not linked to,api: bigquery documentation priority: p2 type: process,"See: https://googlecloudplatform.github.io/google-cloud-java/latest/apidocs/com/google/cloud/bigquery/datatransfer/v1/DataTransferServiceClient.html#listDataSources-com.google.cloud.bigquery.datatransfer.v1.ListDataSourcesRequest- I would expect `com.google.cloud.bigquery.datatransfer.v1.ListDataSourcesRequest` to be a hyperlink to the ListDataSourcesRequest class, but it is not. A search within that page for ListDataSourcesRequest does not show any documentation for that class.",1.0,"BigQuery Data Transfer: ListDataSourcesRequest not documented / not linked to - See: https://googlecloudplatform.github.io/google-cloud-java/latest/apidocs/com/google/cloud/bigquery/datatransfer/v1/DataTransferServiceClient.html#listDataSources-com.google.cloud.bigquery.datatransfer.v1.ListDataSourcesRequest- I would expect `com.google.cloud.bigquery.datatransfer.v1.ListDataSourcesRequest` to be a hyperlink to the ListDataSourcesRequest class, but it is not. 
A search within that page for ListDataSourcesRequest does not show any documentation for that class.",0,bigquery data transfer listdatasourcesrequest not documented not linked to see i would expect com google cloud bigquery datatransfer listdatasourcesrequest to be a hyperlink to the listdatasourcesrequest class but it is not a search within that page for listdatasourcesrequest does not show any documentation for that class ,0 3848,14708845997.0,IssuesEvent,2021-01-05 00:45:58,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,closed,[Automation API] Support recovery workflow for nodejs,area/automation-api language/javascript,We should expose export/import/cancel similar to what we added for Go https://github.com/pulumi/pulumi/pull/5369,1.0,[Automation API] Support recovery workflow for nodejs - We should expose export/import/cancel similar to what we added for Go https://github.com/pulumi/pulumi/pull/5369,1, support recovery workflow for nodejs we should expose export import cancel similar to what we added for go ,1 5428,19583979302.0,IssuesEvent,2022-01-05 02:49:10,ccodwg/Covid19CanadaBot,https://api.github.com/repos/ccodwg/Covid19CanadaBot,opened,Improvements to update validation,data-validation automation messaging,"- [ ] Condense top-line summary to use abbreviations (e.g., for cumulative, 7-day avg) - [ ] Align numbers if possible to make it easier to read",1.0,"Improvements to update validation - - [ ] Condense top-line summary to use abbreviations (e.g., for cumulative, 7-day avg) - [ ] Align numbers if possible to make it easier to read",1,improvements to update validation condense top line summary to use abbreviations e g for cumulative day avg align numbers if possible to make it easier to read,1 5491,19803164523.0,IssuesEvent,2022-01-19 01:35:48,MinaProtocol/mina,https://api.github.com/repos/MinaProtocol/mina,closed,"Integration tests: port over old ""migrated"" integration tests over to new framework",acceptance-automation,"there are 5 old tests, their logic needs to be fully ported over to the new integration test framework so we can get rid of the old ones. - [ ] snarkless integration test - [ ] split integration test - [ ] bootstrap integration test - [ ] delegation integration test - [ ] shared state integration test",1.0,"Integration tests: port over old ""migrated"" integration tests over to new framework - there are 5 old tests, their logic needs to be fully ported over to the new integration test framework so we can get rid of the old ones. - [ ] snarkless integration test - [ ] split integration test - [ ] bootstrap integration test - [ ] delegation integration test - [ ] shared state integration test",1,integration tests port over old migrated integration tests over to new framework there are old tests their logic needs to be fully ported over to the new integration test framework so we can get rid of the old ones snarkless integration test split integration test bootstrap integration test delegation itnegration test shared state integration test,1 5952,21714363587.0,IssuesEvent,2022-05-10 16:24:35,pingcap/tiflow,https://api.github.com/repos/pingcap/tiflow,closed,"changefeed get stuck ...in test case ""cdc_scale_sync""",type/bug severity/major found/automation area/ticdc affects-5.3 affects-5.2 affects-5.1 affects-5.4 affects-6.0,"### What did you do? - test infra test case: cdc_scale_sync, main branch - the changefeed get stuck already for 6 hours: https://tcms.pingcap.net/dashboard/executions/plan/630680 - owner is not stuck because if creating new changefeed, the new one works fine. ### What did you expect to see? _No response_ ### What did you see instead? 
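For the Pulumi item above, the export/import/cancel trio requested for Node.js already exists in the Go and Python Automation APIs, which shows the shape the Node.js version would take. A sketch, assuming an existing `dev` stack in a project named `demo`; error handling omitted:

```python
from pulumi import automation as auto

stack = auto.select_stack(
    stack_name="dev",
    project_name="demo",
    program=lambda: None,  # no-op program; we only manage state here
)

# Recovery workflow: snapshot the state, cancel any stuck update,
# then restore the snapshot.
deployment = stack.export_stack()
stack.cancel()
stack.import_stack(deployment)
```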
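And for the Covid19CanadaBot checklist above, both items (abbreviated labels plus aligned numbers) come down to fixed-width formatting. A tiny sketch; the labels and values are made up:

```python
rows = [("Cases (7d avg)", 1528.4), ("Deaths (cum.)", 45231.0), ("Tests (7d avg)", 88012.7)]

# Right-align the numbers in a fixed-width column so the summary
# lines up when rendered in a monospace message.
for label, value in rows:
    print(f"{label:<16}{value:>12,.1f}")
```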
![image](https://user-images.githubusercontent.com/78345569/150916447-ec2a9626-72d5-41bc-80c8-ff49ae7f10cd.png) ### Versions of the cluster Upstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client): ```console main ``` TiCDC version (execute `cdc version`): ```console [release-version=v5.5.0-nightly] [git-hash=7a227b421dbfcdafee02148e787138798edadf31] [git-branch=heads/refs/tags/v5.5.0-nightly] [utc-build-time=""2022-01-24 18:10:05""] [go-version=""go version go1.16.4 linux/amd64""] [failpoint-build=false] ```",1.0,"changefeed get stuck ...in test case ""cdc_scale_sync"" - ### What did you do? - test infra test case: cdc_scale_sync, main branch - the changefeed get stuck already for 6 hours: https://tcms.pingcap.net/dashboard/executions/plan/630680 - owner is not stuck because if creating new changefeed, the new one works fine. ### What did you expect to see? _No response_ ### What did you see instead? ![image](https://user-images.githubusercontent.com/78345569/150916447-ec2a9626-72d5-41bc-80c8-ff49ae7f10cd.png) ### Versions of the cluster Upstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client): ```console main ``` TiCDC version (execute `cdc version`): ```console [release-version=v5.5.0-nightly] [git-hash=7a227b421dbfcdafee02148e787138798edadf31] [git-branch=heads/refs/tags/v5.5.0-nightly] [utc-build-time=""2022-01-24 18:10:05""] [go-version=""go version go1.16.4 linux/amd64""] [failpoint-build=false] ```",1,changefeed get stuck in test case cdc scale sync what did you do test infra test case cdc scale sync main branch the changefeed get stuck already for hours owner is not stuck because if creating new changefeed the new one works fine what did you expect to see no response what did you see instead versions of the cluster upstream tidb cluster version execute select tidb version in a mysql client console main ticdc version execute cdc version console ,1 304235,23055830110.0,IssuesEvent,2022-07-25 04:25:54,extratone/mastodon-ios-apps,https://api.github.com/repos/extratone/mastodon-ios-apps,opened,Twidere X,documentation beta i,"![Twidere X Icon](https://user-images.githubusercontent.com/43663476/180696568-7139b328-1772-4dea-9c84-8ff3fd0388e2.png) - [GitHub Issue](https://github.com/extratone/mastodon-ios-apps/issues/) - [**App Store**](https://apps.apple.com/us/app/twidere-x/id1530314034) ![ Screens](screens/.png) - **App ID**:1530314034` ## Description Timeline - See what's new and what's happening! Tweets are sorted by time, support for displaying retweets with comments, and quick display of discussion threads. Pictures, GIFs, and videos are displayed in a completely new design. Complete features to ensure your browsing experience. 

Album mode - Enjoy the pure picture waterfall on the search and profile page. At the same time, multiple pictures in the timeline and tweets are displayed in an elegant form. 

Quickly display the discussion thread - Complete view of the discussion! Up to 2 levels of tweet conversations can be quickly displayed in tweets. 

Customized personalized interface - Perfect personalization features allow users to customize their interface: support for setting avatar style, following system settings to display light or dark mode, and customizing text size. 

Send tweets - You can send live photos in your tweets, or select images from the photo gallery. Send tweets that include the subject and location. 

Multiple accounts - Use as many identities as you want! You can add multiple Twitter accounts to Twidere X, ensuring that they don't interfere with each other. 

Search - Find tweets, media, and users among thousands of pieces of information. You can also use advanced search syntax to get more relevant results. 

User Profile Display - New design for the user's homepage. Subtle display of avatar, profile, and background. Follower and following counts are also shown; click either one to open the list. 

User tweets display - A perfect display of user-related tweets and liked tweets, with quick display of discussion threads. Images are elegantly displayed in album mode. 

No ads - We do not carry any ads! A cool tweet experience. 

Twitter is a trademark of Twitter, Inc. ``` ``` ## Developer Contact - [SUJITEKU LIMITED LIABILITY CO. - Market Tools](https://tools.applemediaservices.com/developer/1049084226) ## Embed ## Data [App Store Tools](shortcuts://run-shortcut?name=App%20Store%20Tools) --- --- [Local](drafts://open?uuid=FC33BE7A-1C8C-4A49-8FBE-281A4E47A60F) ",1.0,"Twidere X - ![Twidere X Icon](https://user-images.githubusercontent.com/43663476/180696568-7139b328-1772-4dea-9c84-8ff3fd0388e2.png) - [GitHub Issue](https://github.com/extratone/mastodon-ios-apps/issues/) - [**App Store**](https://apps.apple.com/us/app/twidere-x/id1530314034) ![ Screens](screens/.png) - **App ID**:1530314034` ## Description Timeline - See what's new and what's happening! Tweets are sorted by time, support for displaying retweets with comments, and quick display of discussion threads. Pictures, GIFs, and videos are displayed in a completely new design. Complete features to ensure your browsing experience. 

Twitter is a trademark of Twitter, Inc. ``` ``` ## Developer Contact - [SUJITEKU LIMITED LIABILITY CO. - Market Tools](https://tools.applemediaservices.com/developer/1049084226) ## Embed ## Data [App Store Tools](shortcuts://run-shortcut?name=App%20Store%20Tools) --- --- [Local](drafts://open?uuid=FC33BE7A-1C8C-4A49-8FBE-281A4E47A60F) ",0,twidere x screens png app id description timeline see what s new and what s happening tweets are sorted by time support for displaying retweets with comments and quick display of discussion threads pictures gifs and videos are displayed in a completely new design complete features to ensure your browsing experience 

twitter is a trademark of twitter inc developer contact embed data shortcuts run shortcut name app drafts open uuid ,0 4427,16518235248.0,IssuesEvent,2021-05-26 12:01:40,submariner-io/releases,https://api.github.com/repos/submariner-io/releases,closed,Release to OperatorHub automatically,P2 automation enhancement,"It is now possible and encouraged to send fully-automated PRs to OperatorHub for updating releases. See: https://github.com/operator-framework/community-operators/issues/3883 We should add this to our release automation.",1.0,"Release to OperatorHub automatically - It is now possible and encouraged to send fully-automated PRs to OperatorHub for updating releases. See: https://github.com/operator-framework/community-operators/issues/3883 We should add this to our release automation.",1,release to operatorhub automatically it is now possible and encouraged to send fully automated prs to operatorhub for updating releases see we should add this to our release automation ,1 8274,26598416672.0,IssuesEvent,2023-01-23 14:10:32,GoogleCloudPlatform/dataproc-templates,https://api.github.com/repos/GoogleCloudPlatform/dataproc-templates,closed,[Jenkins] Update python integration tests with enviroment variables,python automation,"Similar to Java integration tests, update python integration tests with env variables",1.0,"[Jenkins] Update python integration tests with enviroment variables - Similar to Java integration tests, update python integration tests with env variables",1, update python integration tests with enviroment variables similar to java integration tests update python integration tests with env variables,1 4524,16767182652.0,IssuesEvent,2021-06-14 10:18:48,rancher-sandbox/cOS-toolkit,https://api.github.com/repos/rancher-sandbox/cOS-toolkit,closed,CI: make sure we have one run of the ci for branch,automation,"Currently the CI can run different jobs in parallel for the same branch, which could invalidate the end result. For example, if we merge 2 PR in sequence, there should be only one job (latest) running. We should also take extra care to not cancel jobs which are running the publishing step See also: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#concurrency ",1.0,"CI: make sure we have one run of the ci for branch - Currently the CI can run different jobs in parallel for the same branch, which could invalidate the end result. For example, if we merge 2 PR in sequence, there should be only one job (latest) running. We should also take extra care to not cancel jobs which are running the publishing step See also: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#concurrency ",1,ci make sure we have one run of the ci for branch currently the ci can run different jobs in parallel for the same branch which could invalidate the end result for example if we merge pr in sequence there should be only one job latest running we should also take extra care to not cancel jobs which are running the publishing step see also ,1 533636,15595792178.0,IssuesEvent,2021-03-18 15:13:45,celo-org/celo-monorepo,https://api.github.com/repos/celo-org/celo-monorepo,closed,Integrate Adjust,Component: Other Priority: P1 epic wallet,"### What's important about this? Integrating Adjust will allow us to build marketing attribution models and improve ad targeting to receive the best ROI on our marketing spent. ### What is ""done""? Adjust SDK is implemented in the app and set in Segment. Adjust receives the events from the app. 
### What's involved in doing this work? Segment doc: https://segment.com/docs/connections/destinations/catalog/adjust/ Adjust SDK: https://github.com/adjust/sdks ",1.0,"Integrate Adjust - ### What's important about this? Integrating Adjust will allow us to build marketing attribution models and improve ad targeting to receive the best ROI on our marketing spent. ### What is ""done""? Adjust SDK is implemented in the app and set in Segment. Adjust receives the events from the app. ### What's involved in doing this work? Segment doc: https://segment.com/docs/connections/destinations/catalog/adjust/ Adjust SDK: https://github.com/adjust/sdks ",0,integrate adjust what s important about this integrating adjust will allow us to build marketing attribution models and improve ad targeting to receive the best roi on our marketing spent what is done adjust sdk is implemented in the app and set in segment adjust receives the events from the app what s involved in doing this work segment doc adjust sdk ,0 9524,29188725134.0,IssuesEvent,2023-05-19 17:47:37,harvester/tests,https://api.github.com/repos/harvester/tests,closed,[TEST] Find the solution to run RKE1 and RKE2 test cases on daily test run,area/backend-automation priority/1 area/rancher-integrated,"## What's the test to develop? Please describe As of now, we have rke2 integration test running on our CI. Based on our current infra set up we can't run both, we'll need to change the `harvester-baremetal-ansible` in order to run the RKE1 and RKE2 both test. ",1.0,"[TEST] Find the solution to run RKE1 and RKE2 test cases on daily test run - ## What's the test to develop? Please describe As of now, we have rke2 integration test running on our CI. Based on our current infra set up we can't run both, we'll need to change the `harvester-baremetal-ansible` in order to run the RKE1 and RKE2 both test. 
",1, find the solution to run and test cases on daily test run what s the test to develop please describe as of now we have integration test running on our ci based on our current infra set up we can t run both we ll need to change the harvester baremetal ansible in order to run the and both test ,1 44554,9602793075.0,IssuesEvent,2019-05-10 15:22:34,mozilla/release-services,https://api.github.com/repos/mozilla/release-services,closed,Improve static analysis and code coverage resilience,app:codecoverage/bot app:pulselistener app:staticanalysis/bot kind:enhancement,"- [x] Use a separate Taskcluster provisioner #946 - [x] Only publish new issues to be idempotent - [x] Make code coverage idempotent #1118 - [x] Enhance pulse listener monitoring to restart tasks in exception state #704 - [ ] Enhance pulse listener to keep tasks in the queue until they are finished #1127",1.0,"Improve static analysis and code coverage resilience - - [x] Use a separate Taskcluster provisioner #946 - [x] Only publish new issues to be idempotent - [x] Make code coverage idempotent #1118 - [x] Enhance pulse listener monitoring to restart tasks in exception state #704 - [ ] Enhance pulse listener to keep tasks in the queue until they are finished #1127",0,improve static analysis and code coverage resilience use a separate taskcluster provisioner only publish new issues to be idempotent make code coverage idempotent enhance pulse listener monitoring to restart tasks in exception state enhance pulse listener to keep tasks in the queue until they are finished ,0 185894,15036137045.0,IssuesEvent,2021-02-02 14:55:36,battjt/j1939-84,https://api.github.com/repos/battjt/j1939-84,closed,A5 verification engine hours,documentation,"FAIL: 6.1.10.3.b - Fail if any diagnostic information in any ECU is not reset or starts out with unexpected values. /* engine hours is not on the list with for example time since engine start */ Simulation showed a failure, that isn't in the code. Verify what is going on. This may have already been fixed.",1.0,"A5 verification engine hours - FAIL: 6.1.10.3.b - Fail if any diagnostic information in any ECU is not reset or starts out with unexpected values. /* engine hours is not on the list with for example time since engine start */ Simulation showed a failure, that isn't in the code. Verify what is going on. 
This may have already been fixed.",0, verification engine hours fail b fail if any diagnostic information in any ecu is not reset or starts out with unexpected values engine hours is not on the list with for example time since engine start simulation showed a failure that isn t in the code verify what is going on this may have already been fixed ,0 261350,22740625469.0,IssuesEvent,2022-07-07 03:09:13,wazuh/wazuh-qa,https://api.github.com/repos/wazuh/wazuh-qa,opened,Fix Vulnerability Detector IT: Tier 1 - test_scan_results ,team/qa feature/vuln-detector type/fix subteam/qa-storm test/nightly type/nightly-test-failure status/not-tracked,"|Related issue| |---| |https://github.com/wazuh/wazuh-qa/issues/3057 | Date| Commit | Commit title | Build | Version| |--|--|--|--|--| | 2022-07-06| [c62d985](https://github.com/wazuh/wazuh/commit/c62d985f729591fb2f7b0026ec08bb1756dd031b) | disable filebeat metrics (https://github.com/wazuh/wazuh/pull/14121) | [#28645](https://ci.wazuh.info/job/Test_integration/28645) | 4.3.6 | ## Case | Tier| Wazuh Type | OS |--|--|--| | 1 | Manager | CentOS ### Description ``` 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_nvd_vulnerabilities.py::test_no_agent_data[SLED11] 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_nvd_vulnerabilities.py::test_no_agent_data[SLED12] 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_nvd_vulnerabilities.py::test_no_agent_data[SLED15] 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_nvd_vulnerabilities.py::test_no_agent_data[SLES11] 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_nvd_vulnerabilities.py::test_no_agent_data[SLES12] 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_nvd_vulnerabilities.py::test_no_agent_data[SLES15] 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_provider_and_nvd_vulnerabilities.py::test_scan_provider_and_nvd_vulnerabilities[SUSE] 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_provider_vulnerabilities.py::test_scan_provider_vulnerabilities[SUSE] 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_vulnerability_removal.py::test_vulnerability_removal_update_package[Alert vulnerability removal - SUSE] 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_vulnerability_removal.py::test_vulnerability_removal_delete_package[Alert vulnerability removal - SUSE] 09:47:51 ERROR test_vulnerability_detector/test_scan_results/test_scan_provider_and_nvd_vulnerabilities.py::test_scan_provider_and_nvd_vulnerabilities[ALAS] 09:47:51 ERROR test_vulnerability_detector/test_scan_results/test_scan_provider_vulnerabilities.py::test_scan_provider_vulnerabilities[ALAS] 09:47:51 ERROR test_vulnerability_detector/test_scan_results/test_scan_vulnerability_removal.py::test_vulnerability_removal_update_package[Alert vulnerability removal - ALAS 2022] 09:47:51 ERROR test_vulnerability_detector/test_scan_results/test_scan_vulnerability_removal.py::test_vulnerability_removal_delete_package[Alert vulnerability removal - ALAS 2022] ``` ## Tasks - [ ] Run tests in Jenkins - [ ] Run tests in local enviroment - [ ] Read test code & docs and run tests manually - [ ] Fix failures",2.0,"Fix Vulnerability Detector IT: Tier 1 - test_scan_results - |Related issue| |---| |https://github.com/wazuh/wazuh-qa/issues/3057 | Date| Commit | Commit title | Build | Version| |--|--|--|--|--| | 2022-07-06| 
[c62d985](https://github.com/wazuh/wazuh/commit/c62d985f729591fb2f7b0026ec08bb1756dd031b) | disable filebeat metrics (https://github.com/wazuh/wazuh/pull/14121) | [#28645](https://ci.wazuh.info/job/Test_integration/28645) | 4.3.6 | ## Case | Tier| Wazuh Type | OS |--|--|--| | 1 | Manager | CentOS ### Description ``` 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_nvd_vulnerabilities.py::test_no_agent_data[SLED11] 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_nvd_vulnerabilities.py::test_no_agent_data[SLED12] 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_nvd_vulnerabilities.py::test_no_agent_data[SLED15] 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_nvd_vulnerabilities.py::test_no_agent_data[SLES11] 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_nvd_vulnerabilities.py::test_no_agent_data[SLES12] 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_nvd_vulnerabilities.py::test_no_agent_data[SLES15] 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_provider_and_nvd_vulnerabilities.py::test_scan_provider_and_nvd_vulnerabilities[SUSE] 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_provider_vulnerabilities.py::test_scan_provider_vulnerabilities[SUSE] 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_vulnerability_removal.py::test_vulnerability_removal_update_package[Alert vulnerability removal - SUSE] 09:47:51 FAILED test_vulnerability_detector/test_scan_results/test_scan_vulnerability_removal.py::test_vulnerability_removal_delete_package[Alert vulnerability removal - SUSE] 09:47:51 ERROR test_vulnerability_detector/test_scan_results/test_scan_provider_and_nvd_vulnerabilities.py::test_scan_provider_and_nvd_vulnerabilities[ALAS] 09:47:51 ERROR test_vulnerability_detector/test_scan_results/test_scan_provider_vulnerabilities.py::test_scan_provider_vulnerabilities[ALAS] 09:47:51 ERROR test_vulnerability_detector/test_scan_results/test_scan_vulnerability_removal.py::test_vulnerability_removal_update_package[Alert vulnerability removal - ALAS 2022] 09:47:51 ERROR test_vulnerability_detector/test_scan_results/test_scan_vulnerability_removal.py::test_vulnerability_removal_delete_package[Alert vulnerability removal - ALAS 2022] ``` ## Tasks - [ ] Run tests in Jenkins - [ ] Run tests in local enviroment - [ ] Read test code & docs and run tests manually - [ ] Fix failures",0,fix vulnerability detector it tier test scan results related issue date commit commit title build version disable filebeat metrics case tier wazuh type os manager centos description failed test vulnerability detector test scan results test scan nvd vulnerabilities py test no agent data failed test vulnerability detector test scan results test scan nvd vulnerabilities py test no agent data failed test vulnerability detector test scan results test scan nvd vulnerabilities py test no agent data failed test vulnerability detector test scan results test scan nvd vulnerabilities py test no agent data failed test vulnerability detector test scan results test scan nvd vulnerabilities py test no agent data failed test vulnerability detector test scan results test scan nvd vulnerabilities py test no agent data failed test vulnerability detector test scan results test scan provider and nvd vulnerabilities py test scan provider and nvd vulnerabilities failed test vulnerability detector test scan results test scan provider 
vulnerabilities py test scan provider vulnerabilities failed test vulnerability detector test scan results test scan vulnerability removal py test vulnerability removal update package failed test vulnerability detector test scan results test scan vulnerability removal py test vulnerability removal delete package error test vulnerability detector test scan results test scan provider and nvd vulnerabilities py test scan provider and nvd vulnerabilities error test vulnerability detector test scan results test scan provider vulnerabilities py test scan provider vulnerabilities error test vulnerability detector test scan results test scan vulnerability removal py test vulnerability removal update package error test vulnerability detector test scan results test scan vulnerability removal py test vulnerability removal delete package tasks run tests in jenkins run tests in local enviroment read test code docs and run tests manually fix failures,0 408824,11952850089.0,IssuesEvent,2020-04-03 19:41:07,qgis/QGIS,https://api.github.com/repos/qgis/QGIS,closed,Virtual Layer SQLite Natural Join - Instant Crash,Bug Crash/Data Corruption High Priority,"**Describe the bug** My understanding is that QGIS is SQLite3 compliant, and as such, should be compatible with SQLite3 syntax and functions. **How to Reproduce** ![image](https://user-images.githubusercontent.com/19295950/63547312-de7ac780-c4f9-11e9-941a-991ed764fa3a.png) Create new Virtual Layer Import needed layers (One MultiLineString, one Table) Write SQLite Code: SELECT * FROM ""CAMP-RS_JobsList_XLS"" NATURAL JOIN ""Reconciled"" Test & get valid SQLite code confirmation Click Add and QGIS crashes instantly to desktop **QGIS and OS versions** QGIS version | 3.8.2-Zanzibar | QGIS code revision | 4470baa1a3 -- | -- | -- | -- Compiled against Qt | 5.11.2 | Running against Qt | 5.11.2 Compiled against GDAL/OGR | 2.4.1 | Running against GDAL/OGR | 2.4.1 Compiled against GEOS | 3.7.2-CAPI-1.11.0 | Running against GEOS | 3.7.2-CAPI-1.11.0 b55d2125 PostgreSQL Client Version | 10.8 | SpatiaLite Version | 4.3.0 QWT Version | 6.1.3 | QScintilla2 Version | 2.10.8 Compiled against PROJ | 5.2.0 | Running against PROJ | Rel. 5.2.0, September 15th, 2018 OS Version | Windows 10 (10.0) |   |   ",1.0,"Virtual Layer SQLite Natural Join - Instant Crash - **Describe the bug** My understanding is that QGIS is SQLite3 compliant, and as such, should be compatible with SQLite3 syntax and functions. **How to Reproduce** ![image](https://user-images.githubusercontent.com/19295950/63547312-de7ac780-c4f9-11e9-941a-991ed764fa3a.png) Create new Virtual Layer Import needed layers (One MultiLineString, one Table) Write SQLite Code: SELECT * FROM ""CAMP-RS_JobsList_XLS"" NATURAL JOIN ""Reconciled"" Test & get valid SQLite code confirmation Click Add and QGIS crashes instantly to desktop **QGIS and OS versions** QGIS version | 3.8.2-Zanzibar | QGIS code revision | 4470baa1a3 -- | -- | -- | -- Compiled against Qt | 5.11.2 | Running against Qt | 5.11.2 Compiled against GDAL/OGR | 2.4.1 | Running against GDAL/OGR | 2.4.1 Compiled against GEOS | 3.7.2-CAPI-1.11.0 | Running against GEOS | 3.7.2-CAPI-1.11.0 b55d2125 PostgreSQL Client Version | 10.8 | SpatiaLite Version | 4.3.0 QWT Version | 6.1.3 | QScintilla2 Version | 2.10.8 Compiled against PROJ | 5.2.0 | Running against PROJ | Rel. 
5.2.0, September 15th, 2018 OS Version | Windows 10 (10.0) |   |   ",0,virtual layer sqlite natural join instant crash describe the bug my understanding is that qgis is compliant and as such should be compatible with syntax and functions how to reproduce create new virtual layer import needed layers one multilinestring one table write sqlite code select from camp rs jobslist xls natural join reconciled test get valid sqlite code confirmation click add and qgis crashes instantly to desktop qgis and os versions qgis version zanzibar qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi postgresql client version spatialite version qwt version version compiled against proj running against proj rel september os version windows     ,0 10131,31780124856.0,IssuesEvent,2023-09-12 16:48:59,rancher/qa-tasks,https://api.github.com/repos/rancher/qa-tasks,opened,Add default values in Terraform variables k3s/rke2,team/rke2 area/automation-framework,Add default values in Terraform variables k3s/rke2 in variables.tf file in distros-test-framework,1.0,Add default values in Terraform variables k3s/rke2 - Add default values in Terraform variables k3s/rke2 in variables.tf file in distros-test-framework,1,add default values in terraform variables add default values in terraform variables in variables tf file in distros test framework,1 6501,23263039572.0,IssuesEvent,2022-08-04 14:56:33,tj-cappelletti/blogs,https://api.github.com/repos/tj-cappelletti/blogs,opened,Automatically Update articles table in core README.md,automation,"Once an article is successfully published, the `README.md` file in the root of the repo should be updated with all of the details.",1.0,"Automatically Update articles table in core README.md - Once an article is successfully published, the `README.md` file in the root of the repo should be updated with all of the details.",1,automatically update articles table in core readme md once an article is successfully published the readme md file in the root of the repo should be updated with all of the details ,1 7652,25373431289.0,IssuesEvent,2022-11-21 12:21:29,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,closed,[BUG] Removing old instance records after the new IM pod is launched will take 1 minute,kind/bug area/manager reproduce/always priority/0 require/automation-e2e,"**Describe the bug** If an instance manager pod crashes unexpectedly, all related instances will be marked as `ERROR` and the records will be retained for 1 minute even if the new pod is launched. The issue is found during test #1336 **To Reproduce** Steps to reproduce the behavior: 1. Clean up all volumes 2. Create a new volume and attach it to a node 3. Delete the related engine manager pod for the volume 4. The volume will wait for 1 minute to become `detached` then starts reattachment **Expected behavior** The volume should be `detached` once the new engine manager pod is `running`. 
**Log** N/A **Environment:** - Longhorn version: v0.8.1 - Kubernetes version: - Node OS type and version: **Additional context** The 1-minute wait: https://github.com/longhorn/longhorn-manager/blob/2c8ad8ca03a645cafd0cc645c148dbe6f46da601/controller/instance_manager_controller.go#L818",1.0,"[BUG] Removing old instance records after the new IM pod is launched will take 1 minute - **Describe the bug** If an instance manager pod crashes unexpectedly, all related instances will be marked as `ERROR` and the records will be retained for 1 minute even if the new pod is launched. The issue is found during test #1336 **To Reproduce** Steps to reproduce the behavior: 1. Clean up all volumes 2. Create a new volume and attach it to a node 3. Delete the related engine manager pod for the volume 4. The volume will wait for 1 minute to become `detached` then starts reattachment **Expected behavior** The volume should be `detached` once the new engine manager pod is `running`. **Log** N/A **Environment:** - Longhorn version: v0.8.1 - Kubernetes version: - Node OS type and version: **Additional context** The 1-minute wait: https://github.com/longhorn/longhorn-manager/blob/2c8ad8ca03a645cafd0cc645c148dbe6f46da601/controller/instance_manager_controller.go#L818",1, removing old instance records after the new im pod is launched will take minute describe the bug if an instance manager pod crashes unexpectedly all related instances will be marked as error and the records will be retained for minute even if the new pod is launched the issue is found during test to reproduce steps to reproduce the behavior clean up all volumes create a new volume and attach it to a node delete the related engine manager pod for the volume the volume will wait for minute to become detached then starts reattachment expected behavior the volume should be detached once the new engine manager pod is running log n a environment longhorn version kubernetes version node os type and version additional context the minute wait ,1 124124,10290120655.0,IssuesEvent,2019-08-27 17:23:26,microsoft/AzureStorageExplorer,https://api.github.com/repos/microsoft/AzureStorageExplorer,opened,Pop up a notification bar after clicking 'OK' with the default option on 'Proxy Settings' dialog,:beetle: regression :gear: proxy 🧪 testing,"**Storage Explorer Version:** 1.10.0 **Build:** 20190827.12 **Branch:** master **Platform/OS:** Windows 10/ Linux Ubuntu 19.04/macOS High Sierra **Architecture:** ia32/x64 **Regression From:** Previous release 1.9.0 **Steps to reproduce:** 1. Launch Storage Explorer. 2. Go to Edit -> Config Proxy. 3. Use the default option 'Do not use proxy' -> Click 'OK' directly on the popped dialog. 4. Check the result. **Expect Experience:** No notification shows up. **Actual Experience:** Pop up a notification with the below message. ![image](https://user-images.githubusercontent.com/34729022/63757997-aa364b00-c8ed-11e9-8fd8-cecfa5b34726.png) ",1.0,"Pop up a notification bar after clicking 'OK' with the default option on 'Proxy Settings' dialog - **Storage Explorer Version:** 1.10.0 **Build:** 20190827.12 **Branch:** master **Platform/OS:** Windows 10/ Linux Ubuntu 19.04/macOS High Sierra **Architecture:** ia32/x64 **Regression From:** Previous release 1.9.0 **Steps to reproduce:** 1. Launch Storage Explorer. 2. Go to Edit -> Config Proxy. 3. Use the default option 'Do not use proxy' -> Click 'OK' directly on the popped dialog. 4. Check the result. **Expect Experience:** No notification shows up. 
**Actual Experience:** Pop up a notification with the below message. ![image](https://user-images.githubusercontent.com/34729022/63757997-aa364b00-c8ed-11e9-8fd8-cecfa5b34726.png) ",0,pop up a notification bar after clicking ok with the default option on proxy settings dialog storage explorer version build branch master platform os windows linux ubuntu macos high sierra architecture regression from previous release steps to reproduce launch storage explorer go to edit config proxy use the default option do not use proxy click ok directly on the popped dialog check the result expect experience no notification shows up actual experience pop up a notification with the below message ,0 9889,30707326315.0,IssuesEvent,2023-07-27 07:19:41,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Multi-homing support for Update Management,automation/svc triaged assigned-to-author doc-enhancement update-management/subsvc Pri1,"The part below in the document causes double interpretation. `Having a machine registered for Update Management in more than one Log Analytics workspace (also referred to as multihoming) isn't supported.` **Interpretation 1:** A machine registered to 1 workspace with Update Management enabled cannot register to another workspace that is also using Update Management. **Interpretation 2**: A machine registered to 1 workspace with Update Management enabled cannot register to any other workspace, even if the other workspace does not have Update Management enabled. **If Interpretation 2 is correct**, this means, for example, if the customer uses Microsoft Sentinel in one workspace and Update Management in another workspace, they cannot have a machine registered to both. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: cda4ec71-9492-2ac6-1030-dcd004826aa0 * Version Independent ID: 7720d2be-0dd6-36c7-6eba-e1acee46d598 * Content: [Azure Automation Update Management overview](https://docs.microsoft.com/en-us/azure/automation/update-management/overview) * Content Source: [articles/automation/update-management/overview.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/update-management/overview.md) * Service: **automation** * Sub-service: **update-management** * GitHub Login: @SGSneha * Microsoft Alias: **v-ssudhir**",1.0,"Multi-homing support for Update Management - The part below in the document causes double interpretation. `Having a machine registered for Update Management in more than one Log Analytics workspace (also referred to as multihoming) isn't supported.` **Interpretation 1:** A machine registered to 1 workspace with Update Management enabled cannot register to another workspace that is also using Update Management. **Interpretation 2**: A machine registered to 1 workspace with Update Management enabled cannot register to any other workspace, even if the other workspace does not have Update Management enabled. **If Interpretation 2 is correct**, this means, for example, if the customer uses Microsoft Sentinel in one workspace and Update Management in another workspace, they cannot have a machine registered to both. --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: cda4ec71-9492-2ac6-1030-dcd004826aa0 * Version Independent ID: 7720d2be-0dd6-36c7-6eba-e1acee46d598 * Content: [Azure Automation Update Management overview](https://docs.microsoft.com/en-us/azure/automation/update-management/overview) * Content Source: [articles/automation/update-management/overview.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/update-management/overview.md) * Service: **automation** * Sub-service: **update-management** * GitHub Login: @SGSneha * Microsoft Alias: **v-ssudhir**",1,multi homing support for update management the part below in the document causes double interpretation having a machine registered for update management in more than one log analytics workspace also referred to as multihoming isn t supported interpretation a machine registered to workspace with update management enabled cannot register to another workspace that is also using update management interpretation a machine registered to workspace with update management enabled cannot register to any other workspace even if the other workspace does not have update management enabled if interpretation is correct this means for example if the customer uses microsoft sentinel in one workspace and update management in another workspace they cannot have a machine registered to both document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service update management github login sgsneha microsoft alias v ssudhir ,1 280947,8688554911.0,IssuesEvent,2018-12-03 16:23:22,openshiftio/openshift.io,https://api.github.com/repos/openshiftio/openshift.io,closed,Che in OSIO displays review link to deployed app before that app can be reached,SEV3-medium area/che priority/P4 team/che/osio type/bug,"Steps to recreate: * Create a new project in OpenShift.io * Create a new Che workspace for the project * Execute the run option (from the run button at the top of the Che display) * Observe the preview link URL is displayed before the app is fully built ![screenshot from 2018-04-05 12-51-38](https://user-images.githubusercontent.com/642621/38382037-afcf3e8a-38d6-11e8-837d-bbbab28bd94e.png) * Click on the link - observe that the app endpoint is not (yet) reachable: ![screenshot from 2018-04-05 12-51-54](https://user-images.githubusercontent.com/642621/38382105-dfc36dd2-38d6-11e8-83c7-21402adcb631.png) ",1.0,"Che in OSIO displays review link to deployed app before that app can be reached - Steps to recreate: * Create a new project in OpenShift.io * Create a new Che workspace for the project * Execute the run option (from the run button at the top of the Che display) * Observe the preview link URL is displayed before the app is fully built ![screenshot from 2018-04-05 12-51-38](https://user-images.githubusercontent.com/642621/38382037-afcf3e8a-38d6-11e8-837d-bbbab28bd94e.png) * Click on the link - observe that the app endpoint is not (yet) reachable: ![screenshot from 2018-04-05 12-51-54](https://user-images.githubusercontent.com/642621/38382105-dfc36dd2-38d6-11e8-83c7-21402adcb631.png) ",0,che in osio displays review link to deployed app before that app can be reached steps to recreate create a new project in openshift io create a new che workspace for the project execute the run option from the run button at the top of the che display observe the preview link url is displayed before the app is fully built click on 
the link observe that the app endpoint is not yet reachable ,0 23632,12043678506.0,IssuesEvent,2020-04-14 12:51:36,centreon/centreon,https://api.github.com/repos/centreon/centreon,closed,Session file is too big,area/performance kind/performance,"# BUG REPORT INFORMATION ### Prerequisites All versions ### Description When we open a session, we get a session of 173KB. $ ls /var/opt/rh/rh-php72/lib/php/session/ -rw------- 1 apache apache 173K 3 févr. 09:33 sess_nik5vc07na9cbqlg7nj951v77s -- Describe the encountered issue -- ### Steps to Reproduce 1. Open a session 2. Look the session file created ### Describe the received result ### Describe the expected result We could save many spaces if we remove first following datas (from the construct and the session): - /usr/share/centreon/www/class/centreonGMT.class.php : $this->timezoneById from the constructor - /usr/share/centreon/www/class/centreon.class.php : Nagioscfg (maybe more) We need to get datas when we need it. If we remove it, we have a session of (divide by 10): $ ls -lh /var/opt/rh/rh-php72/lib/php/session/ -rw------- 1 apache apache 17K 3 févr. 09:27 sess_ir5k2lk35u215mhlb06v1ik32h ### Logs ### Additional relevant information (e.g. frequency, ...) ",True,"Session file is too big - # BUG REPORT INFORMATION ### Prerequisites All versions ### Description When we open a session, we get a session of 173KB. $ ls /var/opt/rh/rh-php72/lib/php/session/ -rw------- 1 apache apache 173K 3 févr. 09:33 sess_nik5vc07na9cbqlg7nj951v77s -- Describe the encountered issue -- ### Steps to Reproduce 1. Open a session 2. Look the session file created ### Describe the received result ### Describe the expected result We could save many spaces if we remove first following datas (from the construct and the session): - /usr/share/centreon/www/class/centreonGMT.class.php : $this->timezoneById from the constructor - /usr/share/centreon/www/class/centreon.class.php : Nagioscfg (maybe more) We need to get datas when we need it. If we remove it, we have a session of (divide by 10): $ ls -lh /var/opt/rh/rh-php72/lib/php/session/ -rw------- 1 apache apache 17K 3 févr. 09:27 sess_ir5k2lk35u215mhlb06v1ik32h ### Logs ### Additional relevant information (e.g. frequency, ...) 
",0,session file is too big bug report information prerequisites all versions description when we open a session we get a session of ls var opt rh rh lib php session rw apache apache févr sess describe the encountered issue steps to reproduce open a session look the session file created describe the received result describe the expected result we could save many spaces if we remove first following datas from the construct and the session usr share centreon www class centreongmt class php this timezonebyid from the constructor usr share centreon www class centreon class php nagioscfg maybe more we need to get datas when we need it if we remove it we have a session of divide by ls lh var opt rh rh lib php session rw apache apache févr sess logs additional relevant information e g frequency ,0 2415,11901586453.0,IssuesEvent,2020-03-30 12:42:18,elastic/beats,https://api.github.com/repos/elastic/beats,closed,[CI] APM beats update job it is falling,[zube]: In Review automation bug ci,"The APM beats update job it is falling with the following error ``` [2020-02-06T09:08:11.469Z] + make clean check testsuite apm-server [2020-02-06T09:08:13.380Z] 2020/02/06 09:08:13 Found Elastic Beats dir at /var/lib/jenkins/workspace/Beats_apm-beats-update_master/src/github.com/elastic/apm-server/_beats [2020-02-06T09:08:13.380Z] /var/lib/jenkins/workspace/Beats_apm-beats-update_master/.magefile cleaned [2020-02-06T09:08:19.971Z] 2020/02/06 09:08:18 Found Elastic Beats dir at /var/lib/jenkins/workspace/Beats_apm-beats-update_master/src/github.com/elastic/apm-server/_beats [2020-02-06T09:08:19.971Z] >> check: Checking source code for common problems [2020-02-06T09:08:25.391Z] # github.com/elastic/apm-server/vendor/github.com/gogo/googleapis/google/api [2020-02-06T09:08:25.391Z] vendor/github.com/gogo/googleapis/google/api/annotations.pb.go:22:11: undefined: proto.GoGoProtoPackageIsVersion3 [2020-02-06T09:08:25.391Z] vendor/github.com/gogo/googleapis/google/api/http.pb.go:26:11: undefined: proto.GoGoProtoPackageIsVersion3 [2020-02-06T09:09:22.101Z] Error: failed running go vet, please fix the issues reported: running ""go vet ./..."" failed with exit code 2 [2020-02-06T09:09:22.101Z] make: *** [check] Error 1 ``` ",1.0,"[CI] APM beats update job it is falling - The APM beats update job it is falling with the following error ``` [2020-02-06T09:08:11.469Z] + make clean check testsuite apm-server [2020-02-06T09:08:13.380Z] 2020/02/06 09:08:13 Found Elastic Beats dir at /var/lib/jenkins/workspace/Beats_apm-beats-update_master/src/github.com/elastic/apm-server/_beats [2020-02-06T09:08:13.380Z] /var/lib/jenkins/workspace/Beats_apm-beats-update_master/.magefile cleaned [2020-02-06T09:08:19.971Z] 2020/02/06 09:08:18 Found Elastic Beats dir at /var/lib/jenkins/workspace/Beats_apm-beats-update_master/src/github.com/elastic/apm-server/_beats [2020-02-06T09:08:19.971Z] >> check: Checking source code for common problems [2020-02-06T09:08:25.391Z] # github.com/elastic/apm-server/vendor/github.com/gogo/googleapis/google/api [2020-02-06T09:08:25.391Z] vendor/github.com/gogo/googleapis/google/api/annotations.pb.go:22:11: undefined: proto.GoGoProtoPackageIsVersion3 [2020-02-06T09:08:25.391Z] vendor/github.com/gogo/googleapis/google/api/http.pb.go:26:11: undefined: proto.GoGoProtoPackageIsVersion3 [2020-02-06T09:09:22.101Z] Error: failed running go vet, please fix the issues reported: running ""go vet ./..."" failed with exit code 2 [2020-02-06T09:09:22.101Z] make: *** [check] Error 1 ``` ",1, apm beats update job it is falling the 
apm beats update job it is falling with the following error make clean check testsuite apm server found elastic beats dir at var lib jenkins workspace beats apm beats update master src github com elastic apm server beats var lib jenkins workspace beats apm beats update master magefile cleaned found elastic beats dir at var lib jenkins workspace beats apm beats update master src github com elastic apm server beats check checking source code for common problems github com elastic apm server vendor github com gogo googleapis google api vendor github com gogo googleapis google api annotations pb go undefined proto vendor github com gogo googleapis google api http pb go undefined proto error failed running go vet please fix the issues reported running go vet failed with exit code make error ,1 82290,10238882238.0,IssuesEvent,2019-08-19 16:52:49,crossplaneio/crossplane,https://api.github.com/repos/crossplaneio/crossplane,closed,Consider strongly typing resource classes,design feature proposal services,"Currently `ResourceClass` is generic and not tied to a specific abstract resource, which means that just listing them and using names like `standard` or `high-performance` is not sufficient for usage. It would be easy to use the wrong resource class and have the binding fail. One solution is adopt a naming convention like `mysqlinstance-standard` and assume that a developer can make sense of these. Another solution is for us to go back to strongly typed resource classes like `MySqlInstanceClass`, which would make it clear that these only apply to MySqlInstance. This approach has other merits, 1) we can strongly type the `parameters` section, 2) might enable us to create a set of structured properties that are relevant to app developers when selecting the class, for example, there could be a property called ""iops"", 3) enables more granular RBAC rules on resource classes. ",1.0,"Consider strongly typing resource classes - Currently `ResourceClass` is generic and not tied to a specific abstract resource, which means that just listing them and using names like `standard` or `high-performance` is not sufficient for usage. It would be easy to use the wrong resource class and have the binding fail. One solution is adopt a naming convention like `mysqlinstance-standard` and assume that a developer can make sense of these. Another solution is for us to go back to strongly typed resource classes like `MySqlInstanceClass`, which would make it clear that these only apply to MySqlInstance. This approach has other merits, 1) we can strongly type the `parameters` section, 2) might enable us to create a set of structured properties that are relevant to app developers when selecting the class, for example, there could be a property called ""iops"", 3) enables more granular RBAC rules on resource classes. 
",0,consider strongly typing resource classes currently resourceclass is generic and not tied to a specific abstract resource which means that just listing them and using names like standard or high performance is not sufficient for usage it would be easy to use the wrong resource class and have the binding fail one solution is adopt a naming convention like mysqlinstance standard and assume that a developer can make sense of these another solution is for us to go back to strongly typed resource classes like mysqlinstanceclass which would make it clear that these only apply to mysqlinstance this approach has other merits we can strongly type the parameters section might enable us to create a set of structured properties that are relevant to app developers when selecting the class for example there could be a property called iops enables more granular rbac rules on resource classes ,0 403221,11837828319.0,IssuesEvent,2020-03-23 14:46:23,TheNovi/Utano,https://api.github.com/repos/TheNovi/Utano,opened,Indicator for paused,enhancement high Priority,"Add some type of indicator if music is paused. Maybe use the icon (change color, image)? Or title (switch between 'paused' and name or add '||' at the start)?",1.0,"Indicator for paused - Add some type of indicator if music is paused. Maybe use the icon (change color, image)? Or title (switch between 'paused' and name or add '||' at the start)?",0,indicator for paused add some type of indicator if music is paused maybe use the icon change color image or title switch between paused and name or add at the start ,0 115177,17289642579.0,IssuesEvent,2021-07-24 13:07:08,home-assistant/core,https://api.github.com/repos/home-assistant/core,closed,"I have an issue playing a stream from an API ""Rest Coomand""",integration: eufy_security stale,"### The problem I'm trying to stream a camera using Rest API I'm getting an error in my log: Error opening stream No Stream 12:49:17 PM – (ERROR) Stream - message first occurred at 12:45:54 PM and shows up 7 times When I inspect the page I have found this issue: http://hass:8123/api/hls/fc285d4d2ae1787ea84c30736e2ada4e2a1b72869348d2784205d88427b2718f/master_playlist.m3u8 Can you please hep how to resolve it? This issue only happened after version -2021.4.x Cheers, ### What is version of Home Assistant Core has the issue? core-2021.4.5 ### What was the last working version of Home Assistant Core? core-2021.4.5 ### What type of installation are you running? Home Assistant OS ### Integration causing the issue No idea ### Link to integration documentation on our website _No response_ ### Example YAML snippet ```yaml rest_command: eufy_start_stream: url: ""http://192.168.88.86:8087/toggle/eufy-security.0.T8010N13203000F1.cameras.T8113N13203004DC.start_stream"" eufy_stop_stream: url: ""http://192.168.88.86:8087/toggle/eufy-security.0.T8010N13203000F1.cameras.T8113N13203004DC.stop_stream"" #------ ``` ### Anything in the logs that might be useful for us? _No response_ ### Additional information _No response_",True,"I have an issue playing a stream from an API ""Rest Coomand"" - ### The problem I'm trying to stream a camera using Rest API I'm getting an error in my log: Error opening stream No Stream 12:49:17 PM – (ERROR) Stream - message first occurred at 12:45:54 PM and shows up 7 times When I inspect the page I have found this issue: http://hass:8123/api/hls/fc285d4d2ae1787ea84c30736e2ada4e2a1b72869348d2784205d88427b2718f/master_playlist.m3u8 Can you please hep how to resolve it? 
This issue only happened after version -2021.4.x Cheers, ### What is version of Home Assistant Core has the issue? core-2021.4.5 ### What was the last working version of Home Assistant Core? core-2021.4.5 ### What type of installation are you running? Home Assistant OS ### Integration causing the issue No idea ### Link to integration documentation on our website _No response_ ### Example YAML snippet ```yaml rest_command: eufy_start_stream: url: ""http://192.168.88.86:8087/toggle/eufy-security.0.T8010N13203000F1.cameras.T8113N13203004DC.start_stream"" eufy_stop_stream: url: ""http://192.168.88.86:8087/toggle/eufy-security.0.T8010N13203000F1.cameras.T8113N13203004DC.stop_stream"" #------ ``` ### Anything in the logs that might be useful for us? _No response_ ### Additional information _No response_",0,i have an issue playing a stream from an api rest coomand the problem i m trying to stream a camera using rest api i m getting an error in my log error opening stream no stream pm – error stream message first occurred at pm and shows up times when i inspect the page i have found this issue can you please hep how to resolve it this issue only happened after version x cheers what is version of home assistant core has the issue core what was the last working version of home assistant core core what type of installation are you running home assistant os integration causing the issue no idea link to integration documentation on our website no response example yaml snippet yaml rest command eufy start stream url eufy stop stream url anything in the logs that might be useful for us no response additional information no response ,0 3635,14226539731.0,IssuesEvent,2020-11-17 23:13:29,surge-synthesizer/surge,https://api.github.com/repos/surge-synthesizer/surge,closed,Some Adjustments to VST3 Menu Rendering,Bug Report Host Automation Host Specific,"![image](https://user-images.githubusercontent.com/2393720/98521765-29563b80-2274-11eb-9669-8696295b2052.png) As you can see, it's showing Macro 3 instead of Global Volume there. It's similar for some other parameters, like Return 1/2, Scene Mode, Split Point, Max Poly. @baconpaul said he knows exactly what and where this is. 🙂 ",1.0,"Some Adjustments to VST3 Menu Rendering - ![image](https://user-images.githubusercontent.com/2393720/98521765-29563b80-2274-11eb-9669-8696295b2052.png) As you can see, it's showing Macro 3 instead of Global Volume there. It's similar for some other parameters, like Return 1/2, Scene Mode, Split Point, Max Poly. @baconpaul said he knows exactly what and where this is. 🙂 ",1,some adjustments to menu rendering as you can see it s showing macro instead of global volume there it s similar for some other parameters like return scene mode split point max poly baconpaul said he knows exactly what and where this is 🙂 ,1 64300,3210149347.0,IssuesEvent,2015-10-06 00:12:28,neuropoly/spinalcordtoolbox,https://api.github.com/repos/neuropoly/spinalcordtoolbox,closed,pip broken,bug installation priority: high,"syntax: ~~~ conda install pip ~~~ output: ~~~ Fetching package metadata: .... Solving package specifications: ................ # All requested packages already installed. 
# packages in environment at /Users/julien/miniconda: # pip 7.1.2 py27_0 ~~~ syntax: ~~~ pip ~~~ output: ~~~ Traceback (most recent call last): File ""/Users/julien/miniconda/bin/pip"", line 4, in from pip import main File ""/Users/julien/miniconda/lib/python2.7/site-packages/pip/__init__.py"", line 12, in from pip.exceptions import InstallationError, CommandError, PipError ImportError: No module named exceptions ~~~ Station: OSX 10.8.5",1.0,"pip broken - syntax: ~~~ conda install pip ~~~ output: ~~~ Fetching package metadata: .... Solving package specifications: ................ # All requested packages already installed. # packages in environment at /Users/julien/miniconda: # pip 7.1.2 py27_0 ~~~ syntax: ~~~ pip ~~~ output: ~~~ Traceback (most recent call last): File ""/Users/julien/miniconda/bin/pip"", line 4, in from pip import main File ""/Users/julien/miniconda/lib/python2.7/site-packages/pip/__init__.py"", line 12, in from pip.exceptions import InstallationError, CommandError, PipError ImportError: No module named exceptions ~~~ Station: OSX 10.8.5",0,pip broken syntax conda install pip output fetching package metadata solving package specifications all requested packages already installed packages in environment at users julien miniconda pip syntax pip output traceback most recent call last file users julien miniconda bin pip line in from pip import main file users julien miniconda lib site packages pip init py line in from pip exceptions import installationerror commanderror piperror importerror no module named exceptions station osx ,0 122324,4833811638.0,IssuesEvent,2016-11-08 12:20:59,BinPar/PPD,https://api.github.com/repos/BinPar/PPD,closed,ERROR EN CALCULO SUMA GESTION DE EJEMPLARES,Priority: High,"![image](https://cloud.githubusercontent.com/assets/22589031/20068614/91a8fa42-a519-11e6-91c3-c8c81dd4849a.png) No realiza correctamente la suma @CristianBinpar ",1.0,"ERROR EN CALCULO SUMA GESTION DE EJEMPLARES - ![image](https://cloud.githubusercontent.com/assets/22589031/20068614/91a8fa42-a519-11e6-91c3-c8c81dd4849a.png) No realiza correctamente la suma @CristianBinpar ",0,error en calculo suma gestion de ejemplares no realiza correctamente la suma cristianbinpar ,0 317031,27205709543.0,IssuesEvent,2023-02-20 12:56:49,cockroachdb/cockroach,https://api.github.com/repos/cockroachdb/cockroach,opened,sql/ttl/ttljob: TestRowLevelTTLJobRandomEntries failed,C-test-failure O-robot branch-master,"sql/ttl/ttljob.TestRowLevelTTLJobRandomEntries [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8768256?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8768256?buildTab=artifacts#/) on master @ [4f7756ff4f16f38c5780d5c8ad0f54768ba97573](https://github.com/cockroachdb/cockroach/commits/4f7756ff4f16f38c5780d5c8ad0f54768ba97573): ``` === RUN TestRowLevelTTLJobRandomEntries test_log_scope.go:161: test logs captured to: /artifacts/tmp/_tmp/f00a2dae801c4acc82659d1d8dd20fad/logTestRowLevelTTLJobRandomEntries4263550092 test_log_scope.go:79: use -show-logs to present logs inline === CONT TestRowLevelTTLJobRandomEntries ttljob_test.go:884: -- test log scope end -- test logs left over in: /artifacts/tmp/_tmp/f00a2dae801c4acc82659d1d8dd20fad/logTestRowLevelTTLJobRandomEntries4263550092 --- FAIL: TestRowLevelTTLJobRandomEntries (323.34s) === RUN TestRowLevelTTLJobRandomEntries/random_1 ttljob_test.go:788: test case: ttljob_test.testCase{desc:""random 1"", createTable:""CREATE TABLE tbl (\n\tid UUID 
DEFAULT gen_random_uuid(),\n\trand_col_1 FLOAT8,\n\trand_col_2 OID,\n\ttext TEXT,\n\tPRIMARY KEY (id, rand_col_1, rand_col_2)\n) WITH (ttl_expire_after = '30 days', ttl_select_batch_size = 1, ttl_delete_batch_size = 34)"", preSetup:[]string(nil), postSetup:[]string(nil), numExpiredRows:1921, numNonExpiredRows:41, numSplits:7, forceNonMultiTenant:false, expirationExpression:"""", addRow:(func(*ttljob_test.rowLevelTTLTestJobTestHelper, *tree.CreateTable, time.Time))(nil)} ttljob_test.go:155: condition failed to evaluate within 45s: expected status succeeded, got running (error: ) --- FAIL: TestRowLevelTTLJobRandomEntries/random_1 (63.25s) ```

Parameters: TAGS=bazel,gss,deadlock

Help

See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)

/cc @cockroachdb/sql-sessions [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestRowLevelTTLJobRandomEntries.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) ",1.0,"sql/ttl/ttljob: TestRowLevelTTLJobRandomEntries failed - sql/ttl/ttljob.TestRowLevelTTLJobRandomEntries [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8768256?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8768256?buildTab=artifacts#/) on master @ [4f7756ff4f16f38c5780d5c8ad0f54768ba97573](https://github.com/cockroachdb/cockroach/commits/4f7756ff4f16f38c5780d5c8ad0f54768ba97573): ``` === RUN TestRowLevelTTLJobRandomEntries test_log_scope.go:161: test logs captured to: /artifacts/tmp/_tmp/f00a2dae801c4acc82659d1d8dd20fad/logTestRowLevelTTLJobRandomEntries4263550092 test_log_scope.go:79: use -show-logs to present logs inline === CONT TestRowLevelTTLJobRandomEntries ttljob_test.go:884: -- test log scope end -- test logs left over in: /artifacts/tmp/_tmp/f00a2dae801c4acc82659d1d8dd20fad/logTestRowLevelTTLJobRandomEntries4263550092 --- FAIL: TestRowLevelTTLJobRandomEntries (323.34s) === RUN TestRowLevelTTLJobRandomEntries/random_1 ttljob_test.go:788: test case: ttljob_test.testCase{desc:""random 1"", createTable:""CREATE TABLE tbl (\n\tid UUID DEFAULT gen_random_uuid(),\n\trand_col_1 FLOAT8,\n\trand_col_2 OID,\n\ttext TEXT,\n\tPRIMARY KEY (id, rand_col_1, rand_col_2)\n) WITH (ttl_expire_after = '30 days', ttl_select_batch_size = 1, ttl_delete_batch_size = 34)"", preSetup:[]string(nil), postSetup:[]string(nil), numExpiredRows:1921, numNonExpiredRows:41, numSplits:7, forceNonMultiTenant:false, expirationExpression:"""", addRow:(func(*ttljob_test.rowLevelTTLTestJobTestHelper, *tree.CreateTable, time.Time))(nil)} ttljob_test.go:155: condition failed to evaluate within 45s: expected status succeeded, got running (error: ) --- FAIL: TestRowLevelTTLJobRandomEntries/random_1 (63.25s) ```

Parameters: TAGS=bazel,gss,deadlock

Help

See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)

/cc @cockroachdb/sql-sessions [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestRowLevelTTLJobRandomEntries.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) ",0,sql ttl ttljob testrowlevelttljobrandomentries failed sql ttl ttljob testrowlevelttljobrandomentries with on master run testrowlevelttljobrandomentries test log scope go test logs captured to artifacts tmp tmp test log scope go use show logs to present logs inline cont testrowlevelttljobrandomentries ttljob test go test log scope end test logs left over in artifacts tmp tmp fail testrowlevelttljobrandomentries run testrowlevelttljobrandomentries random ttljob test go test case ttljob test testcase desc random createtable create table tbl n tid uuid default gen random uuid n trand col n trand col oid n ttext text n tprimary key id rand col rand col n with ttl expire after days ttl select batch size ttl delete batch size presetup string nil postsetup string nil numexpiredrows numnonexpiredrows numsplits forcenonmultitenant false expirationexpression addrow func ttljob test rowlevelttltestjobtesthelper tree createtable time time nil ttljob test go condition failed to evaluate within expected status succeeded got running error fail testrowlevelttljobrandomentries random parameters tags bazel gss deadlock help see also cc cockroachdb sql sessions ,0 99390,8699254987.0,IssuesEvent,2018-12-05 03:14:13,kubeflow/pipelines,https://api.github.com/repos/kubeflow/pipelines,closed,Frontend integration test is flaky,area/front-end area/testing,".. with this error: ``` run-frontend-integration-tests: 1) deploy helloworld sample run finds the new run in the list of runs, navigates to it: run-frontend-integration-tests: element ("".tableRow"") still not visible after 15000ms run-frontend-integration-tests: running chrome run-frontend-integration-tests: Error: element ("".tableRow"") still not visible after 15000ms run-frontend-integration-tests: at elements("".tableRow"") - isVisible.js:54:17 run-frontend-integration-tests: at isVisible("".tableRow"") - waitForVisible.js:73:22 ``` e.g.: https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/kubeflow_pipelines/459/presubmit-e2e-test/652",1.0,"Frontend integration test is flaky - .. 
with this error: ``` run-frontend-integration-tests: 1) deploy helloworld sample run finds the new run in the list of runs, navigates to it: run-frontend-integration-tests: element ("".tableRow"") still not visible after 15000ms run-frontend-integration-tests: running chrome run-frontend-integration-tests: Error: element ("".tableRow"") still not visible after 15000ms run-frontend-integration-tests: at elements("".tableRow"") - isVisible.js:54:17 run-frontend-integration-tests: at isVisible("".tableRow"") - waitForVisible.js:73:22 ``` e.g.: https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/kubeflow_pipelines/459/presubmit-e2e-test/652",0,frontend integration test is flaky with this error run frontend integration tests deploy helloworld sample run finds the new run in the list of runs navigates to it run frontend integration tests element tablerow still not visible after run frontend integration tests running chrome run frontend integration tests error element tablerow still not visible after run frontend integration tests at elements tablerow isvisible js run frontend integration tests at isvisible tablerow waitforvisible js e g ,0 146557,23087046456.0,IssuesEvent,2022-07-26 12:22:25,ipfs/ipfs-webui,https://api.github.com/repos/ipfs/ipfs-webui,closed,Support for publishing to IPNS,help wanted exp/intermediate P2 status/ready area/screen/files topic/design-front-end topic/design-ux kind/enhancement need/analysis effort/days,"**Is your feature request related to a problem? Please describe.** It's annoying to have to use the terminal to publish content to an IPNS key, and it alienates non-powerusers. **Describe the solution you'd like** It would be nice if the webui let you generate, organize, and name IPNS keys that files and folders could then be published to via the dropdown menu.",2.0,"Support for publishing to IPNS - **Is your feature request related to a problem? Please describe.** It's annoying to have to use the terminal to publish content to an IPNS key, and it alienates non-powerusers. **Describe the solution you'd like** It would be nice if the webui let you generate, organize, and name IPNS keys that files and folders could then be published to via the dropdown menu.",0,support for publishing to ipns is your feature request related to a problem please describe it s annoying to have to use the terminal to publish content to an ipns key and it alienates non powerusers describe the solution you d like it would be nice if the webui let you generate organize and name ipns keys that files and folders could then be published to via the dropdown menu ,0 114080,4613687904.0,IssuesEvent,2016-09-25 05:09:42,marrus-sh/jelli,https://api.github.com/repos/marrus-sh/jelli,opened,Get rid of minor instance functions,Control Data enhancement Filter Game Letters low-priority Media Screen Sheet Tileset,"There are lots of minor functions, mostly getters and setters, which are defined within constructors and thus re-defined every instance. All of these could instead be defined outside of the constructor and then bound to the proper variables using `bind`, which would improve memory usage and allow for better optimization.",1.0,"Get rid of minor instance functions - There are lots of minor functions, mostly getters and setters, which are defined within constructors and thus re-defined every instance. 
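The cost is easiest to see in a minimal sketch of both the pattern in question and the fix the issue proposes. The original project is JavaScript and uses `Function.prototype.bind`, but Python has the same trap when functions are created inside `__init__`, so the class names below are purely illustrative:

```
class SpriteWasteful:
    def __init__(self, x):
        # A fresh closure object is allocated for every instance,
        # so N instances carry N copies of get_x in memory.
        self.get_x = lambda: x


class SpriteShared:
    def __init__(self, x):
        self.x = x

    # Defined once on the class and shared by every instance; Python
    # binds it to the instance on lookup, much like defining the
    # function once and attaching it with bind in JavaScript.
    def get_x(self):
        return self.x


w1, w2 = SpriteWasteful(1), SpriteWasteful(2)
assert w1.get_x is not w2.get_x  # two distinct function objects

s1, s2 = SpriteShared(1), SpriteShared(2)
assert (s1.get_x(), s2.get_x()) == (1, 2)  # one shared function, correct results
```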
All of these could instead be defined outside of the constructor and then bound to the proper variables using `bind`, which would improve memory usage and allow for better optimization.",0,get rid of minor instance functions there are lots of minor functions mostly getters and setters which are defined within constructors and thus re defined every instance all of these could instead be defined outside of the constructor and then bound to the proper variables using bind which would improve memory usage and allow for better optimization ,0 289830,31999067169.0,IssuesEvent,2023-09-21 11:04:40,uthrasri/frameworks_av_AOSP_4.2.2_r1,https://api.github.com/repos/uthrasri/frameworks_av_AOSP_4.2.2_r1,opened,CVE-2016-2507 (High) detected in https://source.codeaurora.org/quic/qrd-android/platform/frameworks/av/A8064AAAAANLYA161332,Mend: dependency security vulnerability,"## CVE-2016-2507 - High Severity Vulnerability
Vulnerable Library - https://source.codeaurora.org/quic/qrd-android/platform/frameworks/av/A8064AAAAANLYA161332

Library home page: https://source.codeaurora.org/quic/qrd-android/platform/frameworks/av/

Found in HEAD commit: 9e6327551b00ece943e6d1e2c7ff374495b36814

Found in base branch: main

Vulnerable Source Files (3)

/media/libstagefright/codecs/on2/h264dec/source/h264bsd_storage.c /media/libstagefright/codecs/on2/h264dec/source/h264bsd_storage.c /media/libstagefright/codecs/on2/h264dec/source/h264bsd_storage.c

Vulnerability Details

Integer overflow in codecs/on2/h264dec/source/h264bsd_storage.c in libstagefright in mediaserver in Android 4.x before 4.4.4, 5.0.x before 5.0.2, 5.1.x before 5.1.1, and 6.x before 2016-07-01 allows remote attackers to execute arbitrary code or cause a denial of service (memory corruption) via a crafted media file, aka internal bug 28532266.
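The bug class here is a fixed-width size computation that wraps before it is used, so a crafted header value can make a small allocation receive a large write. A hedged illustration in Python that emulates unsigned 32-bit C arithmetic (this sketches the weakness class only; it is not the actual h264bsd_storage.c code, and the function names are illustrative):

```
UINT32_MAX = 0xFFFFFFFF

def alloc_size_c_like(count, elem_size):
    """Emulate `count * elem_size` in unsigned 32-bit C arithmetic."""
    return (count * elem_size) & UINT32_MAX

def safe_alloc_size(count, elem_size):
    """Reject attacker-controlled counts before multiplying."""
    if count == 0 or elem_size == 0:
        return 0
    if count > UINT32_MAX // elem_size:
        raise ValueError("size computation would overflow")
    return count * elem_size

# A crafted count wraps the naive computation to a tiny size...
assert alloc_size_c_like(0x40000001, 4) == 4
# ...while the guarded version refuses it outright.
try:
    safe_alloc_size(0x40000001, 4)
except ValueError:
    pass
```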

Publish Date: 2016-07-11

URL: CVE-2016-2507

CVSS 3 Score Details (7.8)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-2507

Release Date: 2018-12-20

Fix Resolution: android-6.0.1_r47

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2016-2507 (High) detected in https://source.codeaurora.org/quic/qrd-android/platform/frameworks/av/A8064AAAAANLYA161332 - ## CVE-2016-2507 - High Severity Vulnerability
Vulnerable Library - https://source.codeaurora.org/quic/qrd-android/platform/frameworks/av/A8064AAAAANLYA161332

Library home page: https://source.codeaurora.org/quic/qrd-android/platform/frameworks/av/

Found in HEAD commit: 9e6327551b00ece943e6d1e2c7ff374495b36814

Found in base branch: main

Vulnerable Source Files (3)

/media/libstagefright/codecs/on2/h264dec/source/h264bsd_storage.c /media/libstagefright/codecs/on2/h264dec/source/h264bsd_storage.c /media/libstagefright/codecs/on2/h264dec/source/h264bsd_storage.c

Vulnerability Details

Integer overflow in codecs/on2/h264dec/source/h264bsd_storage.c in libstagefright in mediaserver in Android 4.x before 4.4.4, 5.0.x before 5.0.2, 5.1.x before 5.1.1, and 6.x before 2016-07-01 allows remote attackers to execute arbitrary code or cause a denial of service (memory corruption) via a crafted media file, aka internal bug 28532266.

Publish Date: 2016-07-11

URL: CVE-2016-2507

CVSS 3 Score Details (7.8)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-2507

Release Date: 2018-12-20

Fix Resolution: android-6.0.1_r47

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in cve high severity vulnerability vulnerable library library home page a href found in head commit a href found in base branch main vulnerable source files media libstagefright codecs source storage c media libstagefright codecs source storage c media libstagefright codecs source storage c vulnerability details integer overflow in codecs source storage c in libstagefright in mediaserver in android x before x before x before and x before allows remote attackers to execute arbitrary code or cause a denial of service memory corruption via a crafted media file aka internal bug publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution android step up your open source security game with mend ,0 3503,13878239889.0,IssuesEvent,2020-10-17 08:34:43,Tithibots/tithiwa,https://api.github.com/repos/Tithibots/tithiwa,opened,Handle same name groups while exiting groups,Selenium Automation bug hacktoberfest python,"If we have the same name groups as follows. ![chrome_HkwKk6HfTD](https://user-images.githubusercontent.com/30471072/96332357-2905be80-1081-11eb-8bd4-beda05055931.png) Then it fails to exit from the 2nd and 3rd group because 1. when we try to open the 2nd group after the 1st group it waits for the group to be opened before exiting it. 2. it waits for the group name to appear on the chatroom 3. But the 1st group already has the same group name 4. So it just doesn't wait for the 2nd group to be opened 5. it thinks like `Yes we already exited that group` and goes to the next one. that's how it is skipping the same name groups. https://github.com/Tithibots/tithiwa/blob/654e4f4ef008963b0665bd3652bcf3679b0ce202/tithiwa/group.py#L127 https://github.com/Tithibots/tithiwa/blob/654e4f4ef008963b0665bd3652bcf3679b0ce202/tithiwa/group.py#L161-L164 https://github.com/Tithibots/tithiwa/blob/654e4f4ef008963b0665bd3652bcf3679b0ce202/tithiwa/waobject.py#L109-L118 ",1.0,"Handle same name groups while exiting groups - If we have the same name groups as follows. ![chrome_HkwKk6HfTD](https://user-images.githubusercontent.com/30471072/96332357-2905be80-1081-11eb-8bd4-beda05055931.png) Then it fails to exit from the 2nd and 3rd group because 1. when we try to open the 2nd group after the 1st group it waits for the group to be opened before exiting it. 2. it waits for the group name to appear on the chatroom 3. But the 1st group already has the same group name 4. So it just doesn't wait for the 2nd group to be opened 5. it thinks like `Yes we already exited that group` and goes to the next one. that's how it is skipping the same name groups. 
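A more robust wait would key off the chat header element being replaced after the click rather than off the group name, since the name is ambiguous here. A hedged Selenium sketch in Python (tithiwa is a Python/Selenium project, but the selector and helper below are hypothetical, not the project's actual code; its real wait logic is in the permalinks below):

```
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

HEADER = (By.CSS_SELECTOR, "header ._chat-title")  # hypothetical selector

def open_group(driver, search_result, timeout=10):
    # Capture the header of the chatroom that is open *now*; if a
    # same-named group is already open, the visible name alone cannot
    # tell us whether the click actually switched chatrooms.
    old_header = driver.find_element(*HEADER)
    search_result.click()
    # The old header node is detached when the new chatroom renders,
    # so staleness, not the header text, signals the switch.
    WebDriverWait(driver, timeout).until(EC.staleness_of(old_header))
    return driver.find_element(*HEADER)
```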
https://github.com/Tithibots/tithiwa/blob/654e4f4ef008963b0665bd3652bcf3679b0ce202/tithiwa/group.py#L127 https://github.com/Tithibots/tithiwa/blob/654e4f4ef008963b0665bd3652bcf3679b0ce202/tithiwa/group.py#L161-L164 https://github.com/Tithibots/tithiwa/blob/654e4f4ef008963b0665bd3652bcf3679b0ce202/tithiwa/waobject.py#L109-L118 ",1,handle same name groups while exiting groups if we have the same name groups as follows then it fails to exit from the and group because when we try to open the group after the group it waits for the group to be opened before exiting it it waits for the group name to appear on the chatroom but the group already has the same group name so it just doesn t wait for the group to be opened it thinks like yes we already exited that group and goes to the next one that s how it is skipping the same name groups ,1 9680,30231669521.0,IssuesEvent,2023-07-06 07:24:55,camunda/issues,https://api.github.com/repos/camunda/issues,opened,Allow to debug external Scripts,public kind:epic component:c7-automation-platform riskAssessment:pending,"### Value Proposition Statement Ease the development process for external scripts. ### User Problem Context: - Javascript / Groovy scripts for script tasks can be external resources on the filesystem. - With a filename set in the script context, it is possible that the debugger can match the script file on the filesystem. This allows the developer to set breakpoints for debugging. Problem: - In the script context, the filename `javax.script.filename` is unavailable. - There is no easy workaround to add the filename. ### User Stories * As a developer, I want to have the filename of an external script as a parameter in the script context, this allows me to debug the external script. ### Implementation Notes :robot: This issue is automatically synced from: [source](https://github.com/camunda/product-hub/issues/1361) ",1.0,"Allow to debug external Scripts - ### Value Proposition Statement Ease the development process for external scripts. ### User Problem Context: - Javascript / Groovy scripts for script tasks can be external resources on the filesystem. - With a filename set in the script context, it is possible that the debugger can match the script file on the filesystem. This allows the developer to set breakpoints for debugging. Problem: - In the script context, the filename `javax.script.filename` is unavailable. - There is no easy workaround to add the filename. ### User Stories * As a developer, I want to have the filename of an external script as a parameter in the script context, this allows me to debug the external script. 
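For comparison, CPython exposes the same mechanism directly: `compile()` takes the script's on-disk path, and tracebacks and debuggers use it to map frames back to the file, much as a JVM debugger can when the engine scope carries `javax.script.filename` (the `ScriptEngine.FILENAME` attribute). A minimal sketch, with the path purely illustrative:

```
SCRIPT_PATH = "/opt/scripts/task.py"  # illustrative external script path

with open(SCRIPT_PATH) as f:
    source = f.read()

# Passing the real path (instead of "<string>") lets pdb resolve
# breakpoints set on SCRIPT_PATH and lets tracebacks show the file.
code = compile(source, SCRIPT_PATH, "exec")
exec(code, {"__name__": "__task__", "__file__": SCRIPT_PATH})
```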
### Implementation Notes :robot: This issue is automatically synced from: [source](https://github.com/camunda/product-hub/issues/1361) ",1,allow to debug external scripts value proposition statement ease the development process for external scripts user problem context javascript groovy scripts for script tasks can be external resources on the filesystem with a filename set in the script context it is possible that the debugger can match the script file on the filesystem this allows the developer to set breakpoints for debugging problem in the script context the filename javax script filename is unavailable there is no easy workaround to add the filename user stories as a developer i want to have the filename of an external script as a parameter in the script context this allows me to debug the external script implementation notes robot this issue is automatically synced from ,1 8496,27019616102.0,IssuesEvent,2023-02-10 23:28:49,theglus/Home-Assistant-Config,https://api.github.com/repos/theglus/Home-Assistant-Config,closed,Steam Deck - Wake on Lan,automation Steam Deck,"# Requirements - [x] Setup WoL integration. - [x] Create WoL switch. # Resources - [How to turn on your Steam Deck remotely with your phone](https://overkill.wtf/how-to-turn-on-your-steam-deck-with-your-phone-when-docked/) - [Adding HDMI-CEC to a Dock using a Pulse-Eight USB Adapter](https://www.reddit.com/r/SteamDeck/comments/10nksyr/adding_hdmicec_to_a_dock_using_a_pulseeight_usb/)",1.0,"Steam Deck - Wake on Lan - # Requirements - [x] Setup WoL integration. - [x] Create WoL switch. # Resources - [How to turn on your Steam Deck remotely with your phone](https://overkill.wtf/how-to-turn-on-your-steam-deck-with-your-phone-when-docked/) - [Adding HDMI-CEC to a Dock using a Pulse-Eight USB Adapter](https://www.reddit.com/r/SteamDeck/comments/10nksyr/adding_hdmicec_to_a_dock_using_a_pulseeight_usb/)",1,steam deck wake on lan requirements setup wol integration create wol switch resources ,1 6432,23131407256.0,IssuesEvent,2022-07-28 10:42:28,elastic/fleet-server,https://api.github.com/repos/elastic/fleet-server,closed,[CI] Fleet server binaries are not signed,bug Team:Elastic-Agent automation blocker v7.14.0,"When triggering the e2e tests for Beats branches/PRs, where we consume the artifacts generated by the Beats CI, the elastic-agent is not able to find how to fetch the fleet-server binary. This is the error log: >package '/opt/Elastic/Agent/data/elastic-agent-9eca68/downloads/fleet-server-8.0.0-SNAPSHOT-linux-x86_64.tar.gz' not found: open /opt/Elastic/Agent/data/elastic-agent-9eca68/downloads/fleet-server-8.0.0-SNAPSHOT-linux-x86_64.tar.gz: no such file or directory >fetching package failed: Get ""https://artifacts.elastic.co/downloads/fleet-server/fleet-server-8.0.0-SNAPSHOT-linux-x86_64.tar.gz"": x509: certificate signed by unknown authority On the other hand, when fetching the ""official"" artifacts, generated by the unified release, then the binaries can be downloaded. ## Impact All Beats builds triggering the e2e-tests (any merge commit AND any PR requesting package) are failing, contributing a failed check to the commit status. 
## CI Logs You can find the logs here: https://beats-ci.elastic.co/blue/organizations/jenkins/e2e-tests%2Fe2e-testing-mbp/detail/master/788/pipeline, which was triggered by the merge commit https://github.com/elastic/beats/commit/87a8c857fcbfbcf3894e5e524349fec2c580fc5d in Beats: >[2021-04-28T15:58:19.330Z] The Elastic Agent is currently in BETA and should not be used in production [2021-04-28T15:58:19.330Z] [2021-04-28T15:58:21.874Z] 2021-04-28T15:58:21.471Z INFO cmd/enroll_cmd.go:301 Generating self-signed certificate for Fleet Server [2021-04-28T15:58:23.257Z] 2021-04-28T15:58:23.049Z INFO cmd/enroll_cmd.go:589 Waiting for Elastic Agent to start Fleet Server [2021-04-28T15:58:25.166Z] 2021-04-28T15:58:25.054Z INFO cmd/enroll_cmd.go:619 Fleet Server - 2 errors occurred: [2021-04-28T15:58:25.166Z] * package '/opt/Elastic/Agent/data/elastic-agent-9eca68/downloads/fleet-server-8.0.0-SNAPSHOT-linux-x86_64.tar.gz' not found: open /opt/Elastic/Agent/data/elastic-agent-9eca68/downloads/fleet-server-8.0.0-SNAPSHOT-linux-x86_64.tar.gz: no such file or directory [2021-04-28T15:58:25.166Z] * fetching package failed: Get ""https://artifacts.elastic.co/downloads/fleet-server/fleet-server-8.0.0-SNAPSHOT-linux-x86_64.tar.gz"": x509: certificate signed by unknown authority [2021-04-28T15:58:25.166Z] [2021-04-28T15:58:25.166Z] [2021-04-28T15:58:25.166Z] Error: fleet-server never started by elastic-agent daemon: 2 errors occurred: [2021-04-28T15:58:25.166Z] * package '/opt/Elastic/Agent/data/elastic-agent-9eca68/downloads/fleet-server-8.0.0-SNAPSHOT-linux-x86_64.tar.gz' not found: open /opt/Elastic/Agent/data/elastic-agent-9eca68/downloads/fleet-server-8.0.0-SNAPSHOT-linux-x86_64.tar.gz: no such file or directory [2021-04-28T15:58:25.166Z] * fetching package failed: Get ""https://artifacts.elastic.co/downloads/fleet-server/fleet-server-8.0.0-SNAPSHOT-linux-x86_64.tar.gz"": x509: certificate signed by unknown authority [2021-04-28T15:58:25.166Z] [2021-04-28T15:58:25.166Z] [2021-04-28T15:58:26.107Z] Error: enroll command failed with exit code: 1 [2021-04-28T15:58:26.107Z] time=""2021-04-28T15:58:25Z"" level=error msg=""Could not execute command in service container"" command=""[timeout 5m /elastic-agent/elastic-agent install --force --fleet-server-es http://elasticsearch:9200 --fleet-server-service-token AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2MTk2MjU0OTc3NjE6RjJlaUlFTkhUbzJKM1o2ZTZLQU80Zw]"" error=""Could not run compose file: [/var/lib/jenkins/workspace/master-788-f69d9e9b-b968-4b54-b22a-cb7c861fe5cf/.op/compose/profiles/fleet/docker-compose.yml /var/lib/jenkins/workspace/master-788-f69d9e9b-b968-4b54-b22a-cb7c861fe5cf/.op/compose/services/fleet-server-debian/docker-compose.yml] - Local Docker compose exited abnormally whilst running docker-compose: [exec -T fleet-server-debian timeout 5m /elastic-agent/elastic-agent install --force --fleet-server-es http://elasticsearch:9200 --fleet-server-service-token AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2MTk2MjU0OTc3NjE6RjJlaUlFTkhUbzJKM1o2ZTZLQU80Zw]. exit status 1"" service=fleet-server-debian",1.0,"[CI] Fleet server binaries are not signed - When triggering the e2e tests for Beats branches/PRs, where we consume the artifacts generated by the Beats CI, the elastic-agent is not able to find how to fetch the fleet-server binary. 
This is the error log: >package '/opt/Elastic/Agent/data/elastic-agent-9eca68/downloads/fleet-server-8.0.0-SNAPSHOT-linux-x86_64.tar.gz' not found: open /opt/Elastic/Agent/data/elastic-agent-9eca68/downloads/fleet-server-8.0.0-SNAPSHOT-linux-x86_64.tar.gz: no such file or directory >fetching package failed: Get ""https://artifacts.elastic.co/downloads/fleet-server/fleet-server-8.0.0-SNAPSHOT-linux-x86_64.tar.gz"": x509: certificate signed by unknown authority On the other hand, when fetching the ""official"" artifacts, generated by the unified release, then the binaries can be downloaded. ## Impact All Beats builds triggering the e2e-tests (any merge commit AND any PR requesting package) are failing, contributing a failed check to the commit status. ## CI Logs You can find the logs here: https://beats-ci.elastic.co/blue/organizations/jenkins/e2e-tests%2Fe2e-testing-mbp/detail/master/788/pipeline, which was triggered by the merge commit https://github.com/elastic/beats/commit/87a8c857fcbfbcf3894e5e524349fec2c580fc5d in Beats: >[2021-04-28T15:58:19.330Z] The Elastic Agent is currently in BETA and should not be used in production [2021-04-28T15:58:19.330Z] [2021-04-28T15:58:21.874Z] 2021-04-28T15:58:21.471Z INFO cmd/enroll_cmd.go:301 Generating self-signed certificate for Fleet Server [2021-04-28T15:58:23.257Z] 2021-04-28T15:58:23.049Z INFO cmd/enroll_cmd.go:589 Waiting for Elastic Agent to start Fleet Server [2021-04-28T15:58:25.166Z] 2021-04-28T15:58:25.054Z INFO cmd/enroll_cmd.go:619 Fleet Server - 2 errors occurred: [2021-04-28T15:58:25.166Z] * package '/opt/Elastic/Agent/data/elastic-agent-9eca68/downloads/fleet-server-8.0.0-SNAPSHOT-linux-x86_64.tar.gz' not found: open /opt/Elastic/Agent/data/elastic-agent-9eca68/downloads/fleet-server-8.0.0-SNAPSHOT-linux-x86_64.tar.gz: no such file or directory [2021-04-28T15:58:25.166Z] * fetching package failed: Get ""https://artifacts.elastic.co/downloads/fleet-server/fleet-server-8.0.0-SNAPSHOT-linux-x86_64.tar.gz"": x509: certificate signed by unknown authority [2021-04-28T15:58:25.166Z] [2021-04-28T15:58:25.166Z] [2021-04-28T15:58:25.166Z] Error: fleet-server never started by elastic-agent daemon: 2 errors occurred: [2021-04-28T15:58:25.166Z] * package '/opt/Elastic/Agent/data/elastic-agent-9eca68/downloads/fleet-server-8.0.0-SNAPSHOT-linux-x86_64.tar.gz' not found: open /opt/Elastic/Agent/data/elastic-agent-9eca68/downloads/fleet-server-8.0.0-SNAPSHOT-linux-x86_64.tar.gz: no such file or directory [2021-04-28T15:58:25.166Z] * fetching package failed: Get ""https://artifacts.elastic.co/downloads/fleet-server/fleet-server-8.0.0-SNAPSHOT-linux-x86_64.tar.gz"": x509: certificate signed by unknown authority [2021-04-28T15:58:25.166Z] [2021-04-28T15:58:25.166Z] [2021-04-28T15:58:26.107Z] Error: enroll command failed with exit code: 1 [2021-04-28T15:58:26.107Z] time=""2021-04-28T15:58:25Z"" level=error msg=""Could not execute command in service container"" command=""[timeout 5m /elastic-agent/elastic-agent install --force --fleet-server-es http://elasticsearch:9200 --fleet-server-service-token AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2MTk2MjU0OTc3NjE6RjJlaUlFTkhUbzJKM1o2ZTZLQU80Zw]"" error=""Could not run compose file: [/var/lib/jenkins/workspace/master-788-f69d9e9b-b968-4b54-b22a-cb7c861fe5cf/.op/compose/profiles/fleet/docker-compose.yml /var/lib/jenkins/workspace/master-788-f69d9e9b-b968-4b54-b22a-cb7c861fe5cf/.op/compose/services/fleet-server-debian/docker-compose.yml] - Local Docker compose exited abnormally whilst running docker-compose: [exec 
-T fleet-server-debian timeout 5m /elastic-agent/elastic-agent install --force --fleet-server-es http://elasticsearch:9200 --fleet-server-service-token AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2MTk2MjU0OTc3NjE6RjJlaUlFTkhUbzJKM1o2ZTZLQU80Zw]. exit status 1"" service=fleet-server-debian",1, fleet server binaries are not signed when triggering the tests for beats branches prs where we consume the artifacts generated by the beats ci the elastic agent is not able to find how to fetch the fleet server binary this is the error log package opt elastic agent data elastic agent downloads fleet server snapshot linux tar gz not found open opt elastic agent data elastic agent downloads fleet server snapshot linux tar gz no such file or directory fetching package failed get certificate signed by unknown authority on the other hand when fetching the official artifacts generated by the unified release then the binaries can be downloaded impact all beats builds triggering the tests any merge commit and any pr requesting package are failing contributing a failed check to the commit status ci logs you can find the logs here which was triggered by the merge commit in beats the elastic agent is currently in beta and should not be used in production info cmd enroll cmd go generating self signed certificate for fleet server info cmd enroll cmd go waiting for elastic agent to start fleet server info cmd enroll cmd go fleet server errors occurred package opt elastic agent data elastic agent downloads fleet server snapshot linux tar gz not found open opt elastic agent data elastic agent downloads fleet server snapshot linux tar gz no such file or directory fetching package failed get certificate signed by unknown authority error fleet server never started by elastic agent daemon errors occurred package opt elastic agent data elastic agent downloads fleet server snapshot linux tar gz not found open opt elastic agent data elastic agent downloads fleet server snapshot linux tar gz no such file or directory fetching package failed get certificate signed by unknown authority error enroll command failed with exit code time level error msg could not execute command in service container command error could not run compose file local docker compose exited abnormally whilst running docker compose exit status service fleet server debian,1 4638,17070073681.0,IssuesEvent,2021-07-07 12:17:36,JacobLinCool/BA,https://api.github.com/repos/JacobLinCool/BA,closed,Automation (2021/7/7 12:58:24 AM),automation,"**Updated.** (2021/7/7 1:00:17 AM) ## Login: completed ``` [2021/7/7 12:58:26 AM] Starting account login procedure [2021/7/7 12:58:26 AM] Checking login status [2021/7/7 12:58:33 AM] Login status: not logged in [2021/7/7 12:58:36 AM] Attempting to log in [2021/7/7 12:58:46 AM] Logged in successfully [2021/7/7 12:58:46 AM] Account login procedure completed ``` ## Check-in: in progress ``` [2021/7/7 12:58:47 AM] Starting automatic check-in procedure [2021/7/7 12:58:47 AM] Checking check-in status [2021/7/7 12:58:50 AM] Check-in status: not yet checked in [2021/7/7 12:58:50 AM] Attempting to check in [2021/7/7 12:58:55 AM] Checked in successfully!
[2021/7/7 12:58:56 AM] Automatic check-in procedure completed [2021/7/7 12:58:56 AM] Starting automatic procedure to watch the double check-in reward ad [2021/7/7 12:58:56 AM] Checking double check-in reward status [2021/7/7 12:59:01 AM] Double check-in reward status: double reward not yet obtained [2021/7/7 12:59:01 AM] Attempting to watch an ad to obtain the double reward, which may take up to 1 minute [2021/7/7 1:00:17 AM] An error occurred while watching the double-reward ad, will retry 2 more times ``` ## Quiz: waiting ``` ``` ## Lottery: waiting ``` ``` ",1.0,"Automation (2021/7/7 12:58:24 AM) - **Updated.** (2021/7/7 1:00:17 AM) ## Login: completed ``` [2021/7/7 12:58:26 AM] Starting account login procedure [2021/7/7 12:58:26 AM] Checking login status [2021/7/7 12:58:33 AM] Login status: not logged in [2021/7/7 12:58:36 AM] Attempting to log in [2021/7/7 12:58:46 AM] Logged in successfully [2021/7/7 12:58:46 AM] Account login procedure completed ``` ## Check-in: in progress ``` [2021/7/7 12:58:47 AM] Starting automatic check-in procedure [2021/7/7 12:58:47 AM] Checking check-in status [2021/7/7 12:58:50 AM] Check-in status: not yet checked in [2021/7/7 12:58:50 AM] Attempting to check in [2021/7/7 12:58:55 AM] Checked in successfully! [2021/7/7 12:58:56 AM] Automatic check-in procedure completed [2021/7/7 12:58:56 AM] Starting automatic procedure to watch the double check-in reward ad [2021/7/7 12:58:56 AM] Checking double check-in reward status [2021/7/7 12:59:01 AM] Double check-in reward status: double reward not yet obtained [2021/7/7 12:59:01 AM] Attempting to watch an ad to obtain the double reward, which may take up to 1 minute [2021/7/7 1:00:17 AM] An error occurred while watching the double-reward ad, will retry 2 more times ``` ## Quiz: waiting ``` ``` ## Lottery: waiting ``` ``` ",1,automation am updated am login completed starting account login procedure checking login status login status not logged in attempting to log in logged in successfully account login procedure completed check in in progress starting automatic check in procedure checking check in status check in status not yet checked in attempting to check in checked in successfully automatic check in procedure completed starting automatic procedure to watch the double check in reward ad checking double check in reward status double check in reward status double reward not yet obtained attempting to watch an ad to obtain the double reward which may take up to minute an error occurred while watching the double reward ad will retry more times quiz waiting lottery waiting ,1 4643,17086165468.0,IssuesEvent,2021-07-08 12:08:47,gchq/Gaffer,https://api.github.com/repos/gchq/Gaffer,opened,Erroneous releases are too easy,automation,"Currently there exists a milestone called [Backlog](https://github.com/gchq/Gaffer/milestone/12). If this milestone were to be closed, then a release would start and change the latest Gaffer version to `acklog`. We obviously don't want that to happen, so I think there should be a check that, when a milestone is closed, its name follows the format of `vX.X.X`. ",1.0,"Erroneous releases are too easy - Currently there exists a milestone called [Backlog](https://github.com/gchq/Gaffer/milestone/12). If this milestone were to be closed, then a release would start and change the latest Gaffer version to `acklog`. We obviously don't want that to happen, so I think there should be a check that, when a milestone is closed, its name follows the format of `vX.X.X`. ",1,erroneous releases are too easy currently there exists a milestone called if this milestone were to be closed then a release would start and change the latest gaffer version to acklog we obviously don t want that to happen so i think there should be a check that when a milestone is closed its name follows the format of vx x x ,1 175512,21313842573.0,IssuesEvent,2022-04-16 01:07:52,Nivaskumark/kernel_v4.1.15,https://api.github.com/repos/Nivaskumark/kernel_v4.1.15,opened,CVE-2018-16871 (High) detected in linuxlinux-4.6,security vulnerability,"## CVE-2018-16871 - High Severity Vulnerability
Vulnerable Library - linuxlinux-4.6

The Linux Kernel

Library home page: https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux

Found in base branch: master

Vulnerable Source Files (1)

/fs/nfsd/nfs4proc.c

Vulnerability Details

A flaw was found in the Linux kernel's NFS implementation, all versions 3.x and all versions 4.x up to 4.20. An attacker, who is able to mount an exported NFS filesystem, is able to trigger a null pointer dereference by using an invalid NFS sequence. This can panic the machine and deny access to the NFS server. Any outstanding disk writes to the NFS server will be lost.

Publish Date: 2019-07-30

URL: CVE-2018-16871

CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-16871

Release Date: 2019-07-30

Fix Resolution: v4.20-rc3

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2018-16871 (High) detected in linuxlinux-4.6 - ## CVE-2018-16871 - High Severity Vulnerability
Vulnerable Library - linuxlinux-4.6

The Linux Kernel

Library home page: https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux

Found in base branch: master

Vulnerable Source Files (1)

/fs/nfsd/nfs4proc.c

Vulnerability Details

A flaw was found in the Linux kernel's NFS implementation, all versions 3.x and all versions 4.x up to 4.20. An attacker, who is able to mount an exported NFS filesystem, is able to trigger a null pointer dereference by using an invalid NFS sequence. This can panic the machine and deny access to the NFS server. Any outstanding disk writes to the NFS server will be lost.

Publish Date: 2019-07-30

URL: CVE-2018-16871

CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-16871

Release Date: 2019-07-30

Fix Resolution: v4.20-rc3

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch master vulnerable source files fs nfsd c vulnerability details a flaw was found in the linux kernel s nfs implementation all versions x and all versions x up to an attacker who is able to mount an exported nfs filesystem is able to trigger a null pointer dereference by using an invalid nfs sequence this can panic the machine and deny access to the nfs server any outstanding disk writes to the nfs server will be lost publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0 376749,26215306562.0,IssuesEvent,2023-01-04 10:27:01,CryptoBlades/cryptoblades,https://api.github.com/repos/CryptoBlades/cryptoblades,opened,[Feature] - Remove MC from every place that is connected.,documentation enhancement,"### Prerequisites - [ ] I checked to make sure that this feature has not already been filed - [ ] I'm reporting this information to the correct repository - [X] I understand enough about this issue to complete a comprehensive document ### Describe the feature and its requirements Remove MagiCon from shown in connection to CryptoBlades, Riveted Games, AscensionPad, Bazaar, etc. (This should include any of the linktrees and other items like this that are not our websites, but associated.) ### Is your feature request related to an existing issue? Please describe. no ### Is there anything stopping this feature being completed? no ### Describe alternatives you've considered no ### Additional context _No response_",1.0,"[Feature] - Remove MC from every place that is connected. - ### Prerequisites - [ ] I checked to make sure that this feature has not already been filed - [ ] I'm reporting this information to the correct repository - [X] I understand enough about this issue to complete a comprehensive document ### Describe the feature and its requirements Remove MagiCon from shown in connection to CryptoBlades, Riveted Games, AscensionPad, Bazaar, etc. (This should include any of the linktrees and other items like this that are not our websites, but associated.) ### Is your feature request related to an existing issue? Please describe. no ### Is there anything stopping this feature being completed? 
no ### Describe alternatives you've considered no ### Additional context _No response_",0, remove mc from every place that is connected prerequisites i checked to make sure that this feature has not already been filed i m reporting this information to the correct repository i understand enough about this issue to complete a comprehensive document describe the feature and its requirements remove magicon from shown in connection to cryptoblades riveted games ascensionpad bazaar etc this should include any of the linktrees and other items like this that are not our websites but associated is your feature request related to an existing issue please describe no is there anything stopping this feature being completed no describe alternatives you ve considered no additional context no response ,0 180262,14743918087.0,IssuesEvent,2021-01-07 14:35:57,devonfw/devon4j,https://api.github.com/repos/devonfw/devon4j,closed,broken link for access-control-schema,bug documentation easyfix,"### Expected behavior As a user, I want to read about access-control-schema so that I can implement an easy potentially temporary solution for my authorization. ### Actual behavior Finding this spot https://devonfw.com/website/pages/docs/devon4j.asciidoc_guides.html#guide-access-control.asciidoc_access-control-schema-deprecated the link to the access-control-schema docs is broken. ",1.0,"broken link for access-control-schema - ### Expected behavior As a user, I want to read about access-control-schema so that I can implement an easy potentially temporary solution for my authorization. ### Actual behavior Finding this spot https://devonfw.com/website/pages/docs/devon4j.asciidoc_guides.html#guide-access-control.asciidoc_access-control-schema-deprecated the link to the access-control-schema docs is broken. 
",0,broken link for access control schema expected behavior as a user i want to read about access control schema so that i can implement an easy potentially temporary solution for my authorization actual behavior finding this spot the link to the access control schema docs is broken ,0 46995,6035337209.0,IssuesEvent,2017-06-09 13:40:03,geetsisbac/WK2XQXCBCXIVMLBGXSPVU5EB,https://api.github.com/repos/geetsisbac/WK2XQXCBCXIVMLBGXSPVU5EB,reopened,P/YQGcQHHqBA1Py0GZqMT9fMKTbDbXZIKum5wdM6TAb9HMk0RNcNOEdYLzwXEkgKstQE12ZCYmTHZzR4JtMfSVs/9RHj3Rc4hBw8nFixedNUqvdOMlkfNkfv+lGFrB3XvT7ZSJ1QF3+eHfD87dvFWsA6rDXGY8cVd0KL3gi1SBk=,design,rsPTccc4hJz5FNVGQBMo1koPexiGchrtzimyRcQocWGhnEYWPZFOlNfVrKwLrj20CUDzrMlsDSeRZK5LEoE3X1cL8dtq9H2f3/NMFouyi5S2iN3RzoQBMnzEjnsag/z3ZZeDmlH4yP3+kkFTLrVgzDDb1d8szByx4OYXur0oTpp920QD1dgC0lDSyQN9R1Bt1TwK48Dt8Y1oxz52myb6w0oIlVifUCVge7bETUyDVqWWSfJxN72RDYURu9Rn56Vu3IZsDmAzT3wz/RfWblKjRoXKNXQkAoJkleULSi5m86chofZUEgBcci+oDKJFizTR571OuTHaKrCPizgvd1GZ5MMGQFf58vlbVLZYyZmBY0geG54ttKEr5EaxcmkZv7FewnETHhsAsnYkq8rBpTjZkcKkq3Wsaco3yM2XdFfDZ6808BNTGK99kOfQhYSykTyNwnETHhsAsnYkq8rBpTjZke2TMMEsBvcGgVQ0WmCK2D+WgsymkF/IEdgVLxj+I32l23c2cCGkwNrnLuvW6kLxnqnz32FtzahqCJ9PEzdQo6gtlcdFLjOwnK7wkVAg22Z1hvkIkLEimRoxKIa+dsvsJsLQFZBZvi7Izfu8LbijEpc1GeppYt+xB+Ykx8oM8G34TRYb2dZDW5JCtFnuB8ukqaE0PZ2YHYW9B7IXUNv5wqAqcofzf+av+mq9jvD87ZJ07KnKbFpPHk2q95pIOeLzuqqTJoWDcEE6KUO44BJ+T4FmZ6/CA9iFep9XaPXDNK1DtzDoYHZ2fGCNflZW+XpRY2Nq1fgs+IehzDijeLq98+Nisi5bcVB83X44gVg5HxrongldcZcUNw9WkfWBTXZPFl97DrrRX8KG5WpSsYPhQpGI3q4xY2g2QAE2NhDhC4m9QpF21qnLSPRAD3NHE8beWyAm1a5Hsv2edyCj17V9EuqVA5lHYo7f8gon6LSeReQ4LEZZb9ZBMkMoPxC2tg50WZNjAyLAHzm41vxcG5bDzLne67WPH28pWtmKMx6qcHyXRzIXzNAtR3AXEO3hwmwEVeayfJqDmikyjDvad+6ONXo5BPw/3CpazOjCXvDGJ76hDKKI0AgxFpglECW/kymrN+h+OsaztZJOZuErc8nkXfQs6RVLf9yY5iDNZY1IHpKm23c2cCGkwNrnLuvW6kLxnicCdI4iBzwmNSJ43qlSgLjPDMhIHbClKL7Uzkx63nwS23c2cCGkwNrnLuvW6kLxniVIRtbCelkKxrY2iINu/McdB+LS0S8vS8lERTzHaBThdDP+mCGBkVgOah545RcD3ZNDr9b0ycRG03RNc/gXkIhA1X33hvGc6a8KaHRBWW+GNt+1Ec0rp1PX64kZeedr6MMGQFf58vlbVLZYyZmBY0jVWj9+MTOG05MxNcunAedZK2lYF7vFfJmAU39zme/pjdCGSY8S9zcxrnEb/otI2F8A2O+bBGOf/3y77A3enRa++N9ZLYoxFl5te0EjXViAq/LXvHdNZGFW93CMdKO4LsMAufYyBYO6xUM67Nw4OZ/W0ODkSwXmMKqnABcyfa9zfUoIlVifUCVge7bETUyDVqUZ1uWTmcHPD90cslPrjmBurTb5g1gtCPGv1DyDY9n13Gbl+UDsvPxWI8sGG0qE3DM=,1.0,P/YQGcQHHqBA1Py0GZqMT9fMKTbDbXZIKum5wdM6TAb9HMk0RNcNOEdYLzwXEkgKstQE12ZCYmTHZzR4JtMfSVs/9RHj3Rc4hBw8nFixedNUqvdOMlkfNkfv+lGFrB3XvT7ZSJ1QF3+eHfD87dvFWsA6rDXGY8cVd0KL3gi1SBk= - 
rsPTccc4hJz5FNVGQBMo1koPexiGchrtzimyRcQocWGhnEYWPZFOlNfVrKwLrj20CUDzrMlsDSeRZK5LEoE3X1cL8dtq9H2f3/NMFouyi5S2iN3RzoQBMnzEjnsag/z3ZZeDmlH4yP3+kkFTLrVgzDDb1d8szByx4OYXur0oTpp920QD1dgC0lDSyQN9R1Bt1TwK48Dt8Y1oxz52myb6w0oIlVifUCVge7bETUyDVqWWSfJxN72RDYURu9Rn56Vu3IZsDmAzT3wz/RfWblKjRoXKNXQkAoJkleULSi5m86chofZUEgBcci+oDKJFizTR571OuTHaKrCPizgvd1GZ5MMGQFf58vlbVLZYyZmBY0geG54ttKEr5EaxcmkZv7FewnETHhsAsnYkq8rBpTjZkcKkq3Wsaco3yM2XdFfDZ6808BNTGK99kOfQhYSykTyNwnETHhsAsnYkq8rBpTjZke2TMMEsBvcGgVQ0WmCK2D+WgsymkF/IEdgVLxj+I32l23c2cCGkwNrnLuvW6kLxnqnz32FtzahqCJ9PEzdQo6gtlcdFLjOwnK7wkVAg22Z1hvkIkLEimRoxKIa+dsvsJsLQFZBZvi7Izfu8LbijEpc1GeppYt+xB+Ykx8oM8G34TRYb2dZDW5JCtFnuB8ukqaE0PZ2YHYW9B7IXUNv5wqAqcofzf+av+mq9jvD87ZJ07KnKbFpPHk2q95pIOeLzuqqTJoWDcEE6KUO44BJ+T4FmZ6/CA9iFep9XaPXDNK1DtzDoYHZ2fGCNflZW+XpRY2Nq1fgs+IehzDijeLq98+Nisi5bcVB83X44gVg5HxrongldcZcUNw9WkfWBTXZPFl97DrrRX8KG5WpSsYPhQpGI3q4xY2g2QAE2NhDhC4m9QpF21qnLSPRAD3NHE8beWyAm1a5Hsv2edyCj17V9EuqVA5lHYo7f8gon6LSeReQ4LEZZb9ZBMkMoPxC2tg50WZNjAyLAHzm41vxcG5bDzLne67WPH28pWtmKMx6qcHyXRzIXzNAtR3AXEO3hwmwEVeayfJqDmikyjDvad+6ONXo5BPw/3CpazOjCXvDGJ76hDKKI0AgxFpglECW/kymrN+h+OsaztZJOZuErc8nkXfQs6RVLf9yY5iDNZY1IHpKm23c2cCGkwNrnLuvW6kLxnicCdI4iBzwmNSJ43qlSgLjPDMhIHbClKL7Uzkx63nwS23c2cCGkwNrnLuvW6kLxniVIRtbCelkKxrY2iINu/McdB+LS0S8vS8lERTzHaBThdDP+mCGBkVgOah545RcD3ZNDr9b0ycRG03RNc/gXkIhA1X33hvGc6a8KaHRBWW+GNt+1Ec0rp1PX64kZeedr6MMGQFf58vlbVLZYyZmBY0jVWj9+MTOG05MxNcunAedZK2lYF7vFfJmAU39zme/pjdCGSY8S9zcxrnEb/otI2F8A2O+bBGOf/3y77A3enRa++N9ZLYoxFl5te0EjXViAq/LXvHdNZGFW93CMdKO4LsMAufYyBYO6xUM67Nw4OZ/W0ODkSwXmMKqnABcyfa9zfUoIlVifUCVge7bETUyDVqUZ1uWTmcHPD90cslPrjmBurTb5g1gtCPGv1DyDY9n13Gbl+UDsvPxWI8sGG0qE3DM=,0,p wgsymkf iedgvlxj xb av kymrn h mcdb gnt bbgof ,0 4643,17086165468.0,IssuesEvent,2021-07-08 12:08:47,gchq/Gaffer,https://api.github.com/repos/gchq/Gaffer,opened,Erroneous releases are too easy,automation,"Currently there exists a milestone called [Backlog](https://github.com/gchq/Gaffer/milestone/12). If this milestone were to be closed, then a release would start and change the latest Gaffer version to `acklog`. We obviously don't want that to happen, so I think there should be a check then when a milestone is closed, it follows the format of `vX.X.X`. ",1.0,"Erroneous releases are too easy - Currently there exists a milestone called [Backlog](https://github.com/gchq/Gaffer/milestone/12). If this milestone were to be closed, then a release would start and change the latest Gaffer version to `acklog`. We obviously don't want that to happen, so I think there should be a check then when a milestone is closed, it follows the format of `vX.X.X`. ",1,erroneous releases are too easy currently there exists a milestone called if this milestone were to be closed then a release would start and change the latest gaffer version to acklog we obviously don t want that to happen so i think there should be a check then when a milestone is closed it follows the format of vx x x ,1 422287,28433463519.0,IssuesEvent,2023-04-15 03:03:51,AzureAD/microsoft-authentication-library-for-dotnet,https://api.github.com/repos/AzureAD/microsoft-authentication-library-for-dotnet,closed,[Documentation] Update documentation for WAM,documentation,"**Documentation related to component** WAM **Please check those that apply** - [ ] typo - [ ] documentation doesn't exist - [ ] documentation needs clarification - [ ] error(s) in example - [x] needs Update **Description of the issue** WAM is being upgraded to the new broker. 
This has some breaking changes and needs examples.",1.0,"[Documentation] Update documentation for WAM - **Documentation related to component** WAM **Please check those that apply** - [ ] typo - [ ] documentation doesn't exist - [ ] documentation needs clarification - [ ] error(s) in example - [x] needs Update **Description of the issue** WAM is being upgraded to the new broker. This has some breaking changes and needs examples.",0, update documentation for wam documentation related to component wam please check those that apply typo documentation doesn t exist documentation needs clarification error s in example needs update description of the issue wam is being upgraded to the new broker this has some breaking changes and needs examples ,0 3184,13166937679.0,IssuesEvent,2020-08-11 09:24:28,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,closed,FNX-4956 ⁃ [taskcluster] Dep-sign APKs for Raptor consumption on each master commit,eng:automation eng:release,"We will have performance tests running on `fenix` in the future. One of the requirements is to have builds signed with our releng-mobile ""dep"" key. The ""dep"" (""dependency"") key is very similar to our release key that's used for production/beta/nightly, except that it's used strictly for internal automation (such as for performance-testing/raptor :smile:) This ticket requires that, on every commit to the master branch, the ""release raptor"" variants should dep-signed. ",1.0,"FNX-4956 ⁃ [taskcluster] Dep-sign APKs for Raptor consumption on each master commit - We will have performance tests running on `fenix` in the future. One of the requirements is to have builds signed with our releng-mobile ""dep"" key. The ""dep"" (""dependency"") key is very similar to our release key that's used for production/beta/nightly, except that it's used strictly for internal automation (such as for performance-testing/raptor :smile:) This ticket requires that, on every commit to the master branch, the ""release raptor"" variants should dep-signed. 
",1,fnx ⁃ dep sign apks for raptor consumption on each master commit we will have performance tests running on fenix in the future one of the requirements is to have builds signed with our releng mobile dep key the dep dependency key is very similar to our release key that s used for production beta nightly except that it s used strictly for internal automation such as for performance testing raptor smile this ticket requires that on every commit to the master branch the release raptor variants should dep signed ,1 6751,23851775177.0,IssuesEvent,2022-09-06 18:39:16,mozilla-mobile/firefox-ios,https://api.github.com/repos/mozilla-mobile/firefox-ios,closed,[XCUITests] Sync int test China server page needs to be updated,eng:automation,"The test is failing as it is looking for ""[Enter your email](https://github.com/mozilla-mobile/firefox-ios/blob/c3396d5d2d7ca9d4f7d3c29c6ad16ec24d52096a/Tests/XCUITests/IntegrationTests.swift#L95)"" textField but for this site the textField is just ""Email"" ![Untitled](https://user-images.githubusercontent.com/1897507/187419108-250d7d92-7e34-4ca5-a0ce-17850453479a.png) cc @clarmso ┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FXIOS-4837) ",1.0,"[XCUITests] Sync int test China server page needs to be updated - The test is failing as it is looking for ""[Enter your email](https://github.com/mozilla-mobile/firefox-ios/blob/c3396d5d2d7ca9d4f7d3c29c6ad16ec24d52096a/Tests/XCUITests/IntegrationTests.swift#L95)"" textField but for this site the textField is just ""Email"" ![Untitled](https://user-images.githubusercontent.com/1897507/187419108-250d7d92-7e34-4ca5-a0ce-17850453479a.png) cc @clarmso ┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FXIOS-4837) ",1, sync int test china server page needs to be updated the test is failing as it is looking for textfield but for this site the textfield is just email cc clarmso ┆issue is synchronized with this ,1 4474,16623538333.0,IssuesEvent,2021-06-03 06:37:36,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,closed,[Unit Test] Fix OnboardingHeaderViewHolderTest for Nightly builds ,eng:automation needs:triage,"This [test](https://github.com/mozilla-mobile/fenix/blob/07748f69b75144e8e247b742f66a756f999cb299/app/src/test/java/org/mozilla/fenix/home/sessioncontrol/viewholders/onboarding/OnboardingHeaderViewHolderTest.kt#L18) is failing when unit tests run on Nightly builds. It checks the [header text](https://github.com/mozilla-mobile/fenix/blob/07748f69b75144e8e247b742f66a756f999cb299/app/src/test/java/org/mozilla/fenix/home/sessioncontrol/viewholders/onboarding/OnboardingHeaderViewHolderTest.kt#L32) and the assertion fails with error: `[task 2021-03-19T20:14:15.372Z] org.junit.ComparisonFailure: expected: but was:` Usually this test runs as part of the `test-debug` task (Task :app:testDebugUnitTest) and so it is green on master and PRs. But it has failed when the task is `test-nightly` task (Task :app:testNightlyUnitTest) that seems to be running the unit tests on nightly builds. ",1.0,"[Unit Test] Fix OnboardingHeaderViewHolderTest for Nightly builds - This [test](https://github.com/mozilla-mobile/fenix/blob/07748f69b75144e8e247b742f66a756f999cb299/app/src/test/java/org/mozilla/fenix/home/sessioncontrol/viewholders/onboarding/OnboardingHeaderViewHolderTest.kt#L18) is failing when unit tests run on Nightly builds. 
It checks the [header text](https://github.com/mozilla-mobile/fenix/blob/07748f69b75144e8e247b742f66a756f999cb299/app/src/test/java/org/mozilla/fenix/home/sessioncontrol/viewholders/onboarding/OnboardingHeaderViewHolderTest.kt#L32) and the assertion fails with error: `[task 2021-03-19T20:14:15.372Z] org.junit.ComparisonFailure: expected: but was:` Usually this test runs as part of the `test-debug` task (Task :app:testDebugUnitTest) and so it is green on master and PRs. But it has failed when the task is `test-nightly` task (Task :app:testNightlyUnitTest) that seems to be running the unit tests on nightly builds. ",1, fix onboardingheaderviewholdertest for nightly builds this is failing when unit tests run on nightly builds it checks the and the assertion fails with error org junit comparisonfailure expected but was usually this test runs as part of the test debug task task app testdebugunittest and so it is green on master and prs but it has failed when the task is test nightly task task app testnightlyunittest that seems to be running the unit tests on nightly builds ,1 182943,21678063534.0,IssuesEvent,2022-05-09 01:14:27,AlexRogalskiy/customiere,https://api.github.com/repos/AlexRogalskiy/customiere,closed,CVE-2021-22112 (High) detected in spring-security-web-5.3.4.RELEASE.jar - autoclosed,security vulnerability,"## CVE-2021-22112 - High Severity Vulnerability
Vulnerable Library - spring-security-web-5.3.4.RELEASE.jar

spring-security-web

Library home page: https://spring.io/spring-security

Path to dependency file: /modules/customiere-validation/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.3.4.RELEASE/spring-security-web-5.3.4.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.3.4.RELEASE/spring-security-web-5.3.4.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.3.4.RELEASE/spring-security-web-5.3.4.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.3.4.RELEASE/spring-security-web-5.3.4.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.3.4.RELEASE/spring-security-web-5.3.4.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.3.4.RELEASE/spring-security-web-5.3.4.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.3.4.RELEASE/spring-security-web-5.3.4.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.3.4.RELEASE/spring-security-web-5.3.4.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.3.4.RELEASE/spring-security-web-5.3.4.RELEASE.jar

Dependency Hierarchy: - spring-boot-starter-security-2.3.4.RELEASE.jar (Root Library) - :x: **spring-security-web-5.3.4.RELEASE.jar** (Vulnerable Library)

Found in HEAD commit: 3f6027b95560e5bd6d2350b9d9f405245557f734

Vulnerability Details

Spring Security 5.4.x prior to 5.4.4, 5.3.x prior to 5.3.8.RELEASE, 5.2.x prior to 5.2.9.RELEASE, and older unsupported versions can fail to save the SecurityContext if it is changed more than once in a single request. A malicious user cannot cause the bug to happen (it must be programmed in). However, if the application's intent is to only allow the user to run with elevated privileges in a small portion of the application, the bug can be leveraged to extend those privileges to the rest of the application.

Publish Date: 2021-02-23

URL: CVE-2021-22112

CVSS 3 Score Details (8.8)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://tanzu.vmware.com/security/cve-2021-22112

Release Date: 2021-02-23

Fix Resolution: org.springframework.security:spring-security-web:5.2.9,5.3.8,5.4.4

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2021-22112 (High) detected in spring-security-web-5.3.4.RELEASE.jar - autoclosed - ## CVE-2021-22112 - High Severity Vulnerability
Vulnerable Library - spring-security-web-5.3.4.RELEASE.jar

spring-security-web

Library home page: https://spring.io/spring-security

Path to dependency file: /modules/customiere-validation/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.3.4.RELEASE/spring-security-web-5.3.4.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.3.4.RELEASE/spring-security-web-5.3.4.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.3.4.RELEASE/spring-security-web-5.3.4.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.3.4.RELEASE/spring-security-web-5.3.4.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.3.4.RELEASE/spring-security-web-5.3.4.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.3.4.RELEASE/spring-security-web-5.3.4.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.3.4.RELEASE/spring-security-web-5.3.4.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.3.4.RELEASE/spring-security-web-5.3.4.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.3.4.RELEASE/spring-security-web-5.3.4.RELEASE.jar

Dependency Hierarchy: - spring-boot-starter-security-2.3.4.RELEASE.jar (Root Library) - :x: **spring-security-web-5.3.4.RELEASE.jar** (Vulnerable Library)

Found in HEAD commit: 3f6027b95560e5bd6d2350b9d9f405245557f734

Vulnerability Details

Spring Security 5.4.x prior to 5.4.4, 5.3.x prior to 5.3.8.RELEASE, 5.2.x prior to 5.2.9.RELEASE, and older unsupported versions can fail to save the SecurityContext if it is changed more than once in a single request. A malicious user cannot cause the bug to happen (it must be programmed in). However, if the application's intent is to only allow the user to run with elevated privileges in a small portion of the application, the bug can be leveraged to extend those privileges to the rest of the application.

Publish Date: 2021-02-23

URL: CVE-2021-22112

CVSS 3 Score Details (8.8)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://tanzu.vmware.com/security/cve-2021-22112

Release Date: 2021-02-23

Fix Resolution: org.springframework.security:spring-security-web:5.2.9,5.3.8,5.4.4

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in spring security web release jar autoclosed cve high severity vulnerability vulnerable library spring security web release jar spring security web library home page a href path to dependency file modules customiere validation pom xml path to vulnerable library home wss scanner repository org springframework security spring security web release spring security web release jar home wss scanner repository org springframework security spring security web release spring security web release jar home wss scanner repository org springframework security spring security web release spring security web release jar home wss scanner repository org springframework security spring security web release spring security web release jar home wss scanner repository org springframework security spring security web release spring security web release jar home wss scanner repository org springframework security spring security web release spring security web release jar home wss scanner repository org springframework security spring security web release spring security web release jar home wss scanner repository org springframework security spring security web release spring security web release jar home wss scanner repository org springframework security spring security web release spring security web release jar dependency hierarchy spring boot starter security release jar root library x spring security web release jar vulnerable library found in head commit a href vulnerability details spring security x prior to x prior to release x prior to release and older unsupported versions can fail to save the securitycontext if it is changed more than once in a single request a malicious user cannot cause the bug to happen it must be programmed in however if the application s intent is to only allow the user to run with elevated privileges in a small portion of the application the bug can be leveraged to extend those privileges to the rest of the application publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework security spring security web step up your open source security game with whitesource ,0 276636,30509329152.0,IssuesEvent,2023-07-18 19:28:00,samq-democorp/Webgoat8.1,https://api.github.com/repos/samq-democorp/Webgoat8.1,opened,bootstrap-3.3.7.jar: 6 vulnerabilities (highest severity is: 6.1),Mend: dependency security vulnerability,"
Vulnerable Library - bootstrap-3.3.7.jar

WebJar for Bootstrap

Library home page: http://webjars.org

Path to dependency file: /webgoat-integration-tests/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/webjars/bootstrap/3.3.7/bootstrap-3.3.7.jar

Found in HEAD commit: c850bd6ccbee3000da5fd6ffdd468c2f1ae54be5

## Vulnerabilities

| CVE | Severity | CVSS | Dependency | Type | Fixed in (bootstrap version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2019-8331](https://www.mend.io/vulnerability-database/CVE-2019-8331) | Medium | 6.1 | bootstrap-3.3.7.jar | Direct | 3.4.1 | ✅ |
| [CVE-2018-14040](https://www.mend.io/vulnerability-database/CVE-2018-14040) | Medium | 6.1 | bootstrap-3.3.7.jar | Direct | 3.4.0 | ✅ |
| [CVE-2018-20677](https://www.mend.io/vulnerability-database/CVE-2018-20677) | Medium | 6.1 | bootstrap-3.3.7.jar | Direct | 3.4.0 | ✅ |
| [CVE-2018-14042](https://www.mend.io/vulnerability-database/CVE-2018-14042) | Medium | 6.1 | bootstrap-3.3.7.jar | Direct | 3.4.0 | ✅ |
| [CVE-2018-20676](https://www.mend.io/vulnerability-database/CVE-2018-20676) | Medium | 6.1 | bootstrap-3.3.7.jar | Direct | 3.4.0 | ✅ |
| [CVE-2016-10735](https://www.mend.io/vulnerability-database/CVE-2016-10735) | Medium | 6.1 | bootstrap-3.3.7.jar | Direct | 3.4.0 | ✅ |
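
Because every row is the same direct dependency, a single upgrade can clear the whole table; the minimal version satisfying every "Fixed in" entry is simply the maximum of that column. A tiny illustrative sketch:

```python
# Illustrative: the one bootstrap upgrade that covers every CVE in the table
# above is the maximum of the "Fixed in" column.
fixed_in = {
    "CVE-2019-8331": "3.4.1",
    "CVE-2018-14040": "3.4.0",
    "CVE-2018-20677": "3.4.0",
    "CVE-2018-14042": "3.4.0",
    "CVE-2018-20676": "3.4.0",
    "CVE-2016-10735": "3.4.0",
}
target = max(fixed_in.values(), key=lambda v: tuple(int(p) for p in v.split(".")))
print(target)  # 3.4.1 -- upgrading org.webjars:bootstrap to this clears all six
```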
## Details

CVE-2019-8331

### Vulnerable Library - bootstrap-3.3.7.jar

WebJar for Bootstrap

Library home page: http://webjars.org

Path to dependency file: /webgoat-integration-tests/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/webjars/bootstrap/3.3.7/bootstrap-3.3.7.jar

Dependency Hierarchy: - :x: **bootstrap-3.3.7.jar** (Vulnerable Library)

Found in HEAD commit: c850bd6ccbee3000da5fd6ffdd468c2f1ae54be5

Found in base branch: main

### Vulnerability Details

In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the tooltip or popover data-template attribute.
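
The underlying pattern in this family of Bootstrap issues is attacker-controlled markup reaching an HTML attribute unescaped. The actual fix is the upgrade below; as a separate defense-in-depth illustration (server-side rendering in Python is our assumption here, not something this report prescribes), escaping the value before interpolation neutralizes the payload:

```python
import html

user_value = '"><img src=x onerror=alert(1)>'  # attacker-supplied input

# Unsafe: the payload closes the attribute and injects a tag.
unsafe = f'<button data-template="{user_value}">'

# Safer: escape (including quotes) before interpolating into the attribute.
safe = f'<button data-template="{html.escape(user_value, quote=True)}">'

print(unsafe)
print(safe)
```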

Publish Date: 2019-02-20

URL: CVE-2019-8331

### CVSS 3 Score Details (6.1)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Release Date: 2019-02-20

Fix Resolution: 3.4.1

In order to enable automatic remediation, please create workflow rules

CVE-2018-14040

### Vulnerable Library - bootstrap-3.3.7.jar

WebJar for Bootstrap

Library home page: http://webjars.org

Path to dependency file: /webgoat-integration-tests/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/webjars/bootstrap/3.3.7/bootstrap-3.3.7.jar

Dependency Hierarchy: - :x: **bootstrap-3.3.7.jar** (Vulnerable Library)

Found in HEAD commit: c850bd6ccbee3000da5fd6ffdd468c2f1ae54be5

Found in base branch: main

### Vulnerability Details

In Bootstrap before 4.1.2, XSS is possible in the collapse data-parent attribute.
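
In this case the injected value is consumed as a jQuery selector, so besides upgrading, a narrow server-side whitelist for selector-like values is a reasonable belt-and-braces check. A sketch, with a pattern of our choosing that accepts only plain `#id` selectors:

```python
import re

# Accept only a plain "#identifier" selector -- anything else (HTML payloads,
# complex selectors) is rejected before it can be echoed into data-parent.
SAFE_SELECTOR = re.compile(r"^#[A-Za-z][\w\-]*$")

def is_safe_selector(value: str) -> bool:
    return SAFE_SELECTOR.match(value) is not None

assert is_safe_selector("#accordion")
assert not is_safe_selector("<img src=x onerror=alert(1)>")
```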

Publish Date: 2018-07-13

URL: CVE-2018-14040

### CVSS 3 Score Details (6.1)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Release Date: 2018-07-13

Fix Resolution: 3.4.0

In order to enable automatic remediation, please create workflow rules

CVE-2018-20677

### Vulnerable Library - bootstrap-3.3.7.jar

WebJar for Bootstrap

Library home page: http://webjars.org

Path to dependency file: /webgoat-integration-tests/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/webjars/bootstrap/3.3.7/bootstrap-3.3.7.jar

Dependency Hierarchy: - :x: **bootstrap-3.3.7.jar** (Vulnerable Library)

Found in HEAD commit: c850bd6ccbee3000da5fd6ffdd468c2f1ae54be5

Found in base branch: main

### Vulnerability Details

In Bootstrap before 3.4.0, XSS is possible in the affix configuration target property.

Publish Date: 2019-01-09

URL: CVE-2018-20677

### CVSS 3 Score Details (6.1)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20677

Release Date: 2019-01-09

Fix Resolution: 3.4.0

In order to enable automatic remediation, please create workflow rules

CVE-2018-14042

### Vulnerable Library - bootstrap-3.3.7.jar

WebJar for Bootstrap

Library home page: http://webjars.org

Path to dependency file: /webgoat-integration-tests/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/webjars/bootstrap/3.3.7/bootstrap-3.3.7.jar

Dependency Hierarchy: - :x: **bootstrap-3.3.7.jar** (Vulnerable Library)

Found in HEAD commit: c850bd6ccbee3000da5fd6ffdd468c2f1ae54be5

Found in base branch: main

### Vulnerability Details

In Bootstrap before 4.1.2, XSS is possible in the data-container property of tooltip.

Publish Date: 2018-07-13

URL: CVE-2018-14042

### CVSS 3 Score Details (6.1)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Release Date: 2018-07-13

Fix Resolution: 3.4.0

In order to enable automatic remediation, please create workflow rules

CVE-2018-20676

### Vulnerable Library - bootstrap-3.3.7.jar

WebJar for Bootstrap

Library home page: http://webjars.org

Path to dependency file: /webgoat-integration-tests/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/webjars/bootstrap/3.3.7/bootstrap-3.3.7.jar

Dependency Hierarchy: - :x: **bootstrap-3.3.7.jar** (Vulnerable Library)

Found in HEAD commit: c850bd6ccbee3000da5fd6ffdd468c2f1ae54be5

Found in base branch: main

### Vulnerability Details

In Bootstrap before 3.4.0, XSS is possible in the tooltip data-viewport attribute.

Publish Date: 2019-01-09

URL: CVE-2018-20676

### CVSS 3 Score Details (6.1)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20676

Release Date: 2019-01-09

Fix Resolution: 3.4.0

In order to enable automatic remediation, please create workflow rules

CVE-2016-10735

### Vulnerable Library - bootstrap-3.3.7.jar

WebJar for Bootstrap

Library home page: http://webjars.org

Path to dependency file: /webgoat-integration-tests/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/org/webjars/bootstrap/3.3.7/bootstrap-3.3.7.jar

Dependency Hierarchy: - :x: **bootstrap-3.3.7.jar** (Vulnerable Library)

Found in HEAD commit: c850bd6ccbee3000da5fd6ffdd468c2f1ae54be5

Found in base branch: main

### Vulnerability Details

In Bootstrap 3.x before 3.4.0 and 4.x-beta before 4.0.0-beta.2, XSS is possible in the data-target attribute, a different vulnerability than CVE-2018-14041. Mend Note: Converted from WS-2018-0021 on 2022-11-08.

Publish Date: 2019-01-09

URL: CVE-2016-10735

### CVSS 3 Score Details (6.1)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10735

Release Date: 2019-01-09

Fix Resolution: 3.4.0

In order to enable automatic remediation, please create workflow rules

***

In order to enable automatic remediation for this issue, please create workflow rules

",True,"bootstrap-3.3.7.jar: 6 vulnerabilities (highest severity is: 6.1) -
",0,bootstrap jar vulnerabilities highest severity is vulnerable library bootstrap jar webjar for bootstrap library home page a href path to dependency file webgoat integration tests pom xml path to vulnerable library home wss scanner repository org webjars bootstrap bootstrap jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in bootstrap version remediation available medium bootstrap jar direct medium bootstrap jar direct medium bootstrap jar direct medium bootstrap jar direct medium bootstrap jar direct medium bootstrap jar direct details cve vulnerable library bootstrap jar webjar for bootstrap library home page a href path to dependency file webgoat integration tests pom xml path to vulnerable library home wss scanner repository org webjars bootstrap bootstrap jar dependency hierarchy x bootstrap jar vulnerable library found in head commit a href found in base branch main vulnerability details in bootstrap before and x before xss is possible in the tooltip or popover data template attribute publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution in order to enable automatic remediation please create cve vulnerable library bootstrap jar webjar for bootstrap library home page a href path to dependency file webgoat integration tests pom xml path to vulnerable library home wss scanner repository org webjars bootstrap bootstrap jar dependency hierarchy x bootstrap jar vulnerable library found in head commit a href found in base branch main vulnerability details in bootstrap before xss is possible in the collapse data parent attribute publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution in order to enable automatic remediation please create cve vulnerable library bootstrap jar webjar for bootstrap library home page a href path to dependency file webgoat integration tests pom xml path to vulnerable library home wss scanner repository org webjars bootstrap bootstrap jar dependency hierarchy x bootstrap jar vulnerable library found in head commit a href found in base branch main vulnerability details in bootstrap before xss is possible in the affix configuration target property publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution in order to enable automatic remediation please create cve vulnerable library bootstrap jar webjar for bootstrap library home page a href path to dependency file webgoat integration tests pom xml path to vulnerable library home wss scanner repository org webjars bootstrap bootstrap jar dependency hierarchy x bootstrap jar vulnerable 
library found in head commit a href found in base branch main vulnerability details in bootstrap before xss is possible in the data container property of tooltip publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution in order to enable automatic remediation please create cve vulnerable library bootstrap jar webjar for bootstrap library home page a href path to dependency file webgoat integration tests pom xml path to vulnerable library home wss scanner repository org webjars bootstrap bootstrap jar dependency hierarchy x bootstrap jar vulnerable library found in head commit a href found in base branch main vulnerability details in bootstrap before xss is possible in the tooltip data viewport attribute publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution in order to enable automatic remediation please create cve vulnerable library bootstrap jar webjar for bootstrap library home page a href path to dependency file webgoat integration tests pom xml path to vulnerable library home wss scanner repository org webjars bootstrap bootstrap jar dependency hierarchy x bootstrap jar vulnerable library found in head commit a href found in base branch main vulnerability details in bootstrap x before and x beta before beta xss is possible in the data target attribute a different vulnerability than cve mend note converted from ws on publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution in order to enable automatic remediation please create in order to enable automatic remediation for this issue please create ,0 8730,27172179613.0,IssuesEvent,2023-02-17 20:31:42,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,[ODB] Resume upload fails to upload onenote file,Needs: Attention :wave: automation:Closed,"#### Category - [ ] Question - [ ] Documentation issue - [x] Bug Hi all, Problem 1: step 1: create a onenote in drive with pages like HELLO so the onenote folder would contain HELLO.one file step 2: use resume upload API to upload HELLO.one (create session then upload) then keeps returning ``` ""error"":{ ""code"":""serviceNotAvailable"", ""message"":""The service is not available. Try the request again after a delay. There may be a Retry-After header.""} ``` Problem 2 step 1: create a onenote in drive step 2: upload by simple upload TEST.one > Works fine, I can open TEST page in onenote folder step 3: upload by resume upload (create session then upload) TEST.one > The TEST page is corrupted It looks like something broken when using resume upload Thank you. 
",1.0,"[ODB] Resume upload fails to upload onenote file - #### Category - [ ] Question - [ ] Documentation issue - [x] Bug Hi all, Problem 1: step 1: create a onenote in drive with pages like HELLO so the onenote folder would contain HELLO.one file step 2: use resume upload API to upload HELLO.one (create session then upload) then keeps returning ``` ""error"":{ ""code"":""serviceNotAvailable"", ""message"":""The service is not available. Try the request again after a delay. There may be a Retry-After header.""} ``` Problem 2 step 1: create a onenote in drive step 2: upload by simple upload TEST.one > Works fine, I can open TEST page in onenote folder step 3: upload by resume upload (create session then upload) TEST.one > The TEST page is corrupted It looks like something broken when using resume upload Thank you. ",1, resume upload fails to upload onenote file category question documentation issue bug hi all problem step create a onenote in drive with pages like hello so the onenote folder would contain hello one file step use resume upload api to upload hello one create session then upload then keeps returning error code servicenotavailable message the service is not available try the request again after a delay there may be a retry after header problem step create a onenote in drive step upload by simple upload test one works fine i can open test page in onenote folder step upload by resume upload create session then upload test one the test page is corrupted it looks like something broken when using resume upload thank you ,1 6708,23764576782.0,IssuesEvent,2022-09-01 11:45:50,MISP/MISP,https://api.github.com/repos/MISP/MISP,closed,Run Script on Alert Email | Purpose: Notification System Extension,T: enhancement automation S: stale topic: notification topic: email,"Currently, I have a cron that reads a mailspool for alerts and pastes them into a slack channel, as the email-based workflow doesn't work for everyone (and it slows collaboration down). Ideally, I would love to see an option for being able to either manually call a script in the background, or just have a simple configurable module that allows for some kind (slack, in our case) of configurable notifications. The API for Slack is super easy to integrate with. I'm literally just using regex to rip out info and format a pretty string that allows for a link to our local MISP install. This method allows teams to take action quickly, and collaborate when an alert comes up in channel. In python, this is the gist of what I'm doing / would love to see as a built-in feature ``` url = re.search('^URL\s*:\s(https://.*/\d*)$', payload, re.MULTILINE).group(1) rep = re.search('^Reported\sby\s:\s(.*)$', payload, re.MULTILINE).group(1) lev = re.search('^Threat\sLevel:\s(.*)$', payload, re.MULTILINE).group(1) des = re.search('^Description\s:\s(.*)$', payload, re.MULTILINE).group(1) ... sendMessage(""[ Alert Level: *{0}* ] {1} reports <{2}|{3}>"".format(lev, rep, url, des)) ``` ",1.0,"Run Script on Alert Email | Purpose: Notification System Extension - Currently, I have a cron that reads a mailspool for alerts and pastes them into a slack channel, as the email-based workflow doesn't work for everyone (and it slows collaboration down). Ideally, I would love to see an option for being able to either manually call a script in the background, or just have a simple configurable module that allows for some kind (slack, in our case) of configurable notifications. The API for Slack is super easy to integrate with. 
I'm literally just using regex to rip out info and format a pretty string that allows for a link to our local MISP install. This method allows teams to take action quickly, and collaborate when an alert comes up in channel. In python, this is the gist of what I'm doing / would love to see as a built-in feature ``` url = re.search('^URL\s*:\s(https://.*/\d*)$', payload, re.MULTILINE).group(1) rep = re.search('^Reported\sby\s:\s(.*)$', payload, re.MULTILINE).group(1) lev = re.search('^Threat\sLevel:\s(.*)$', payload, re.MULTILINE).group(1) des = re.search('^Description\s:\s(.*)$', payload, re.MULTILINE).group(1) ... sendMessage(""[ Alert Level: *{0}* ] {1} reports <{2}|{3}>"".format(lev, rep, url, des)) ``` ",1,run script on alert email purpose notification system extension currently i have a cron that reads a mailspool for alerts and pastes them into a slack channel as the email based workflow doesn t work for everyone and it slows collaboration down ideally i would love to see an option for being able to either manually call a script in the background or just have a simple configurable module that allows for some kind slack in our case of configurable notifications the api for slack is super easy to integrate with i m literally just using regex to rip out info and format a pretty string that allows for a link to our local misp install this method allows teams to take action quickly and collaborate when an alert comes up in channel in python this is the gist of what i m doing would love to see as a built in feature url re search url s s payload re multiline group rep re search reported sby s s payload re multiline group lev re search threat slevel s payload re multiline group des re search description s s payload re multiline group sendmessage reports format lev rep url des ,1 1168,9608604151.0,IssuesEvent,2019-05-12 08:04:35,threefoldtech/home,https://api.github.com/repos/threefoldtech/home,closed,gdrive gateway for 3bot,needs_automation state_verification type_story,"The idea here is to be able to link from google docs into our wiki/websites/FFP easily requires https://github.com/threefoldtech/jumpscaleX/issues/12 We'll do this by exposing these endpoints: - [x] http://$IPADDR/gdrive/slide/$gpresentation_guid/$gslide_slide_guid.png - [x] http://$IPADDR/gdrive/slide/$gpresentation_guid/$slidename.png - [x] http://$IPADDR/gdrive/doc/$gdoc_guid.pdf - [ ] http://$IPADDR/gdrive/spreadsheet/$sheet_guid.pdf - [ ] http://$IPADDR/gdrive/doc/$gdoc_guid.md More information is on the breakdown of the tasks below. this story card tracks the progress on: - parent of threefoldtech/digitalmeX#45 - parent of threefoldtech/digitalmeX#46 - parent of threefoldtech/digitalmeX#47 - parent of threefoldtech/digitalmeX#48 Child of #162 ",1.0,"gdrive gateway for 3bot - The idea here is to be able to link from google docs into our wiki/websites/FFP easily requires https://github.com/threefoldtech/jumpscaleX/issues/12 We'll do this by exposing these endpoints: - [x] http://$IPADDR/gdrive/slide/$gpresentation_guid/$gslide_slide_guid.png - [x] http://$IPADDR/gdrive/slide/$gpresentation_guid/$slidename.png - [x] http://$IPADDR/gdrive/doc/$gdoc_guid.pdf - [ ] http://$IPADDR/gdrive/spreadsheet/$sheet_guid.pdf - [ ] http://$IPADDR/gdrive/doc/$gdoc_guid.md More information is on the breakdown of the tasks below. 
this story card tracks the progress on: - parent of threefoldtech/digitalmeX#45 - parent of threefoldtech/digitalmeX#46 - parent of threefoldtech/digitalmeX#47 - parent of threefoldtech/digitalmeX#48 Child of #162 ",1,gdrive gateway for the idea here is to be able to link from google docs into our wiki websites ffp easily requires we ll do this by exposing these endpoints more information is on the breakdown of the tasks below this story card tracks the progress on parent of threefoldtech digitalmex parent of threefoldtech digitalmex parent of threefoldtech digitalmex parent of threefoldtech digitalmex child of ,1 20743,10904178462.0,IssuesEvent,2019-11-20 08:06:33,osu-Karaoke/osu-karaoke-dev,https://api.github.com/repos/osu-Karaoke/osu-karaoke-dev,opened,Adjust LifeEndTime in DrawableLyricLine,performance,"**Describe the bug:** `DrawableLyricLine` should set `LifeEndTime` while `EndTime` or `Duration` Changed. **Screenshots or videos showing encountered issue:** Nope **osu!lazer version:** Nope **Logs:** Nope",True,"Adjust LifeEndTime in DrawableLyricLine - **Describe the bug:** `DrawableLyricLine` should set `LifeEndTime` while `EndTime` or `Duration` Changed. **Screenshots or videos showing encountered issue:** Nope **osu!lazer version:** Nope **Logs:** Nope",0,adjust lifeendtime in drawablelyricline describe the bug drawablelyricline should set lifeendtime while endtime or duration changed screenshots or videos showing encountered issue nope osu lazer version nope logs nope,0 7187,24368307900.0,IssuesEvent,2022-10-03 16:56:46,commonknowledge/leftbookclub,https://api.github.com/repos/commonknowledge/leftbookclub,reopened,Shopify products should automatically sync to the website,bug selling products automations size:3,"- [x] Fix the Shopify webhook - [ ] Add a backup cron job triggered by Github Actions",1.0,"Shopify products should automatically sync to the website - - [x] Fix the Shopify webhook - [ ] Add a backup cron job triggered by Github Actions",1,shopify products should automatically sync to the website fix the shopify webhook add a backup cron job triggered by github actions,1 161,4266289379.0,IssuesEvent,2016-07-12 14:09:41,flowtype/flow-typed,https://api.github.com/repos/flowtype/flow-typed,closed,[Test CI] Only run tests for files affected by a pull request,Automation Request,"Currently we run all tests -- which takes quite a long time and definitely won't scale. Instead, we should only run tests for libdefs that were touched in the PR",1.0,"[Test CI] Only run tests for files affected by a pull request - Currently we run all tests -- which takes quite a long time and definitely won't scale. Instead, we should only run tests for libdefs that were touched in the PR",1, only run tests for files affected by a pull request currently we run all tests which takes quite a long time and definitely won t scale instead we should only run tests for libdefs that were touched in the pr,1 40285,6813488228.0,IssuesEvent,2017-11-06 09:28:50,rancher/rancher,https://api.github.com/repos/rancher/rancher,opened,Describe kubectl config generation (http vs https),area/documentation kind/enhancement,"The `kubectl` configuration that gets generated in the UI should be described. * Why rewriting it to HTTPS * How is HTTPS handled in `rancher/server`",1.0,"Describe kubectl config generation (http vs https) - The `kubectl` configuration that gets generated in the UI should be described. 
* Why rewriting it to HTTPS * How is HTTPS handled in `rancher/server`",0,describe kubectl config generation http vs https the kubectl configuration that gets generated in the ui should be described why rewriting it to https how is https handled in rancher server ,0 279429,21160222275.0,IssuesEvent,2022-04-07 08:41:56,django-hurricane/django-hurricane,https://api.github.com/repos/django-hurricane/django-hurricane,opened,Management Command `server` does not exist,documentation,"Not sure about this section in the documentation. Maybe the description of the `serve` command can be a bit more simple? Furthermore it mentions another management command `python manage.py server`. This one does not exist. ",1.0,"Management Command `server` does not exist - Not sure about this section in the documentation. Maybe the description of the `serve` command can be a bit more simple? Furthermore it mentions another management command `python manage.py server`. This one does not exist. ",0,management command server does not exist not sure about this section in the documentation maybe the description of the serve command can be a bit more simple furthermore it mentions another management command python manage py server this one does not exist img width alt bildschirmfoto um src ,0 766222,26874759509.0,IssuesEvent,2023-02-04 22:30:13,socalledtheraven/RoyalRoad-Addon,https://api.github.com/repos/socalledtheraven/RoyalRoad-Addon,closed,Adding loading icon,enhancement priority 4,"Currently it just looks weird, add an overlay & icon to improve until selected chapter is loaded",1.0,"Adding loading icon - Currently it just looks weird, add an overlay & icon to improve until selected chapter is loaded",0,adding loading icon currently it just looks weird add an overlay icon to improve until selected chapter is loaded,0 31303,8683783434.0,IssuesEvent,2018-12-02 21:06:05,denoland/deno,https://api.github.com/repos/denoland/deno,opened,LeakSanitizer should run in CI,build,"Already available in chromium. Shouldn't be too difficult. However we are only building release builds in CI at the moment. We'd need to start a new debug build.... See https://cs.chromium.org/chromium/src/build/config/sanitizers/sanitizers.gni?l=12&rcl=fb0e572f11d5be35992300d1a3c92f66352a46af https://github.com/google/sanitizers/wiki/AddressSanitizerLeakSanitizer",1.0,"LeakSanitizer should run in CI - Already available in chromium. Shouldn't be too difficult. However we are only building release builds in CI at the moment. We'd need to start a new debug build.... See https://cs.chromium.org/chromium/src/build/config/sanitizers/sanitizers.gni?l=12&rcl=fb0e572f11d5be35992300d1a3c92f66352a46af https://github.com/google/sanitizers/wiki/AddressSanitizerLeakSanitizer",0,leaksanitizer should run in ci already available in chromium shouldn t be too difficult however we are only building release builds in ci at the moment we d need to start a new debug build see ,0 389717,26831455801.0,IssuesEvent,2023-02-02 16:16:43,osism/issues,https://api.github.com/repos/osism/issues,opened,Discuss use of MAAS,documentation SCS,"In some installations, MAAS is used for the provision of the bare metal used. We should discuss to what degree we can include MAAS in the documentation and how we can ensure that the base images provided by MAAS are directly usable.",1.0,"Discuss use of MAAS - In some installations, MAAS is used for the provision of the bare metal used. 
We should discuss to what degree we can include MAAS in the documentation and how we can ensure that the base images provided by MAAS are directly usable.",0,discuss use of maas in some installations maas is used for the provision of the bare metal used we should discuss to what degree we can include maas in the documentation and how we can ensure that the base images provided by maas are directly usable ,0 438221,12624467327.0,IssuesEvent,2020-06-14 06:20:54,RoboJackets/robocup-software,https://api.github.com/repos/RoboJackets/robocup-software,opened,Make `hitSet` return an iterator,exp / novice (1) priority / low,"The Geometry2d library's hitSet currently incurs a _lot_ of overhead by actually creating new smart pointers. We could save a lot of allocation and reference counting by just returning an iterator to it. Code to change: https://github.com/RoboJackets/robocup-software/blob/staging/common/Geometry2d/ShapeSet.hpp#L50. Example: https://github.com/RoboJackets/robocup-software/pull/1494/files/1d6b526aaeada2dc275fe30c3e2316de4e177dd9#diff-dfa0f461b7c683261fb993e66180bc72R39 Brought up in review on #1494.",1.0,"Make `hitSet` return an iterator - The Geometry2d library's hitSet currently incurs a _lot_ of overhead by actually creating new smart pointers. We could save a lot of allocation and reference counting by just returning an iterator to it. Code to change: https://github.com/RoboJackets/robocup-software/blob/staging/common/Geometry2d/ShapeSet.hpp#L50. Example: https://github.com/RoboJackets/robocup-software/pull/1494/files/1d6b526aaeada2dc275fe30c3e2316de4e177dd9#diff-dfa0f461b7c683261fb993e66180bc72R39 Brought up in review on #1494.",0,make hitset return an iterator the library s hitset currently incurs a lot of overhead by actually creating new smart pointers we could save a lot of allocation and reference counting by just returning an iterator to it code to change example brought up in review on ,0 742464,25856073178.0,IssuesEvent,2022-12-13 13:52:03,autowarefoundation/autoware.universe,https://api.github.com/repos/autowarefoundation/autoware.universe,closed,behavior_path_planner generated path outside of drivable area,bug high priority planning,"### Checklist - [X] I've read the [contribution guidelines](https://github.com/autowarefoundation/autoware/blob/main/CONTRIBUTING.md). - [X] I've searched other issues and no duplicate issues were found. - [X] I'm convinced that this is not my fault but a bug. ### Description If you give to EGO a goal pose a bit inclined , the path is still generated without considering of the width and length of EGO. And this generated path is outside the drivable area. ![Screenshot from 2022-09-01 10-32-23](https://user-images.githubusercontent.com/32412808/187857957-b21fd5ea-3edc-4535-b7cb-a8e57e149a42.png) ![Screenshot from 2022-09-01 10-34-16](https://user-images.githubusercontent.com/32412808/187858252-6e998e9d-04b2-4caf-a043-5cdf05e4d950.png) ### Expected behavior If the path to generated is outside the drivable area, the node should not create any path. ### Actual behavior `behavior_path_planner` creates a path even if the path is outside the drivable area. ![Screenshot from 2022-09-01 10-34-16](https://user-images.githubusercontent.com/32412808/187858318-b99c0359-9597-4a64-876d-9476b5ed3904.png) ### Steps to reproduce 1. Download [1761.zip](https://github.com/autowarefoundation/autoware.universe/files/9469885/1761.zip) 2. Change pcd and lanelet2 maps file path in yaml file 3. 
run `ros2 launch scenario_test_runner scenario_test_runner.launch.py sensor_model:=sample_sensor_kit vehicle_model:=sample_vehicle \ scenario:=/scenario/path/1761.yaml \ architecture_type:=awf/universe launch_rviz:=false launch_autoware:=true` 4. Add ` /planning/scenario_planning/lane_driving/trajectory` topic in rviz 5. When you run the scenario, path occurred outside drivable area. ### Versions _No response_ ### Possible causes _No response_ ### Additional context _No response_",1.0,"behavior_path_planner generated path outside of drivable area - ### Checklist - [X] I've read the [contribution guidelines](https://github.com/autowarefoundation/autoware/blob/main/CONTRIBUTING.md). - [X] I've searched other issues and no duplicate issues were found. - [X] I'm convinced that this is not my fault but a bug. ### Description If you give to EGO a goal pose a bit inclined , the path is still generated without considering of the width and length of EGO. And this generated path is outside the drivable area. ![Screenshot from 2022-09-01 10-32-23](https://user-images.githubusercontent.com/32412808/187857957-b21fd5ea-3edc-4535-b7cb-a8e57e149a42.png) ![Screenshot from 2022-09-01 10-34-16](https://user-images.githubusercontent.com/32412808/187858252-6e998e9d-04b2-4caf-a043-5cdf05e4d950.png) ### Expected behavior If the path to generated is outside the drivable area, the node should not create any path. ### Actual behavior `behavior_path_planner` creates a path even if the path is outside the drivable area. ![Screenshot from 2022-09-01 10-34-16](https://user-images.githubusercontent.com/32412808/187858318-b99c0359-9597-4a64-876d-9476b5ed3904.png) ### Steps to reproduce 1. Download [1761.zip](https://github.com/autowarefoundation/autoware.universe/files/9469885/1761.zip) 2. Change pcd and lanelet2 maps file path in yaml file 3. run `ros2 launch scenario_test_runner scenario_test_runner.launch.py sensor_model:=sample_sensor_kit vehicle_model:=sample_vehicle \ scenario:=/scenario/path/1761.yaml \ architecture_type:=awf/universe launch_rviz:=false launch_autoware:=true` 4. Add ` /planning/scenario_planning/lane_driving/trajectory` topic in rviz 5. When you run the scenario, path occurred outside drivable area. 
### Versions _No response_ ### Possible causes _No response_ ### Additional context _No response_",0,behavior path planner generated path outside of drivable area checklist i ve read the i ve searched other issues and no duplicate issues were found i m convinced that this is not my fault but a bug description if you give to ego a goal pose a bit inclined the path is still generated without considering of the width and length of ego and this generated path is outside the drivable area expected behavior if the path to generated is outside the drivable area the node should not create any path actual behavior behavior path planner creates a path even if the path is outside the drivable area steps to reproduce download change pcd and maps file path in yaml file run launch scenario test runner scenario test runner launch py sensor model sample sensor kit vehicle model sample vehicle scenario scenario path yaml architecture type awf universe launch rviz false launch autoware true add planning scenario planning lane driving trajectory topic in rviz when you run the scenario path occurred outside drivable area versions no response possible causes no response additional context no response ,0 653,7716043201.0,IssuesEvent,2018-05-23 09:29:55,kubevirt/kubevirt-ansible,https://api.github.com/repos/kubevirt/kubevirt-ansible,closed,default openshift to 3.9,automation bug docs enhancement,"- main readme.md says pull 3.7 openshift branch - playbooks/cluster/openshift/README.md defaults to 3.7 - kubevirt-ansible/automation/check-patch.sh runs it for 3.7 - kubevirt-ansible/playbooks/cluster/openshift/config.ym defaults it to 3.7 (though it should use vars/all.yml - see issue #146) But vars/all.yml defaults it to 3.9. We should pick one version, and actually default it to 3.9 since it's the latest release and that's where devs will want to target. ",1.0,"default openshift to 3.9 - - main readme.md says pull 3.7 openshift branch - playbooks/cluster/openshift/README.md defaults to 3.7 - kubevirt-ansible/automation/check-patch.sh runs it for 3.7 - kubevirt-ansible/playbooks/cluster/openshift/config.ym defaults it to 3.7 (though it should use vars/all.yml - see issue #146) But vars/all.yml defaults it to 3.9. We should pick one version, and actually default it to 3.9 since it's the latest release and that's where devs will want to target. ",1,default openshift to main readme md says pull openshift branch playbooks cluster openshift readme md defaults to kubevirt ansible automation check patch sh runs it for kubevirt ansible playbooks cluster openshift config ym defaults it to though it should use vars all yml see issue but vars all yml defaults it to we should pick one version and actually default it to since it s the latest release and that s where devs will want to target ,1 30914,11860123272.0,IssuesEvent,2020-03-25 14:26:47,BrianMcDonaldWS/genie,https://api.github.com/repos/BrianMcDonaldWS/genie,opened,CVE-2019-0201 (Medium) detected in zookeeper-3.4.12.jar,security vulnerability,"## CVE-2019-0201 - Medium Severity Vulnerability
Vulnerable Library - zookeeper-3.4.12.jar

Path to dependency file: /tmp/ws-scm/genie/genie-ui/build.gradle

Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.4.12/cc9c95b358202be355af8abddeb6105f089b1a8c/zookeeper-3.4.12.jar,/root/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.4.12/cc9c95b358202be355af8abddeb6105f089b1a8c/zookeeper-3.4.12.jar

Dependency Hierarchy: - spring-integration-zookeeper-5.2.2.RELEASE.jar (Root Library) - curator-recipes-4.0.1.jar - curator-framework-4.0.1.jar - curator-client-4.0.1.jar - :x: **zookeeper-3.4.12.jar** (Vulnerable Library)

Found in HEAD commit: 568866fb6e52bc93c68e71b643c3271128773566

Vulnerability Details

An issue is present in Apache ZooKeeper 1.0.0 to 3.4.13 and 3.5.0-alpha to 3.5.4-beta. ZooKeeper’s getACL() command doesn’t check any permission when retrieves the ACLs of the requested node and returns all information contained in the ACL Id field as plaintext string. DigestAuthenticationProvider overloads the Id field with the hash value that is used for user authentication. As a consequence, if Digest Authentication is in use, the unsalted hash value will be disclosed by getACL() request for unauthenticated or unprivileged users.

Publish Date: 2019-05-23

URL: CVE-2019-0201

CVSS 3 Score Details (5.9)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://zookeeper.apache.org/security.html

Release Date: 2019-05-23

Fix Resolution: 3.4.14, 3.5.5

",True,"CVE-2019-0201 (Medium) detected in zookeeper-3.4.12.jar - ## CVE-2019-0201 - Medium Severity Vulnerability
Vulnerable Library - zookeeper-3.4.12.jar

Path to dependency file: /tmp/ws-scm/genie/genie-ui/build.gradle

Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.4.12/cc9c95b358202be355af8abddeb6105f089b1a8c/zookeeper-3.4.12.jar,/root/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.4.12/cc9c95b358202be355af8abddeb6105f089b1a8c/zookeeper-3.4.12.jar

Dependency Hierarchy: - spring-integration-zookeeper-5.2.2.RELEASE.jar (Root Library) - curator-recipes-4.0.1.jar - curator-framework-4.0.1.jar - curator-client-4.0.1.jar - :x: **zookeeper-3.4.12.jar** (Vulnerable Library)

Found in HEAD commit: 568866fb6e52bc93c68e71b643c3271128773566

Vulnerability Details

An issue is present in Apache ZooKeeper 1.0.0 to 3.4.13 and 3.5.0-alpha to 3.5.4-beta. ZooKeeper’s getACL() command doesn’t check any permission when retrieves the ACLs of the requested node and returns all information contained in the ACL Id field as plaintext string. DigestAuthenticationProvider overloads the Id field with the hash value that is used for user authentication. As a consequence, if Digest Authentication is in use, the unsalted hash value will be disclosed by getACL() request for unauthenticated or unprivileged users.

Publish Date: 2019-05-23

URL: CVE-2019-0201

CVSS 3 Score Details (5.9)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://zookeeper.apache.org/security.html

Release Date: 2019-05-23

Fix Resolution: 3.4.14, 3.5.5

",0,cve medium detected in zookeeper jar cve medium severity vulnerability vulnerable library zookeeper jar path to dependency file tmp ws scm genie genie ui build gradle path to vulnerable library root gradle caches modules files org apache zookeeper zookeeper zookeeper jar root gradle caches modules files org apache zookeeper zookeeper zookeeper jar dependency hierarchy spring integration zookeeper release jar root library curator recipes jar curator framework jar curator client jar x zookeeper jar vulnerable library found in head commit a href vulnerability details an issue is present in apache zookeeper to and alpha to beta zookeeper’s getacl command doesn’t check any permission when retrieves the acls of the requested node and returns all information contained in the acl id field as plaintext string digestauthenticationprovider overloads the id field with the hash value that is used for user authentication as a consequence if digest authentication is in use the unsalted hash value will be disclosed by getacl request for unauthenticated or unprivileged users publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails an issue is present in apache zookeeper to and alpha to beta zookeeper’s getacl command doesn’t check any permission when retrieves the acls of the requested node and returns all information contained in the acl id field as plaintext string digestauthenticationprovider overloads the id field with the hash value that is used for user authentication as a consequence if digest authentication is in use the unsalted hash value will be disclosed by getacl request for unauthenticated or unprivileged users vulnerabilityurl ,0 1571,6572329951.0,IssuesEvent,2017-09-11 01:26:32,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,lxc_container: provide option to make automatic container restarts optional,affects_2.1 cloud feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME lxc_container ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION None ##### OS / ENVIRONMENT N/A ##### SUMMARY When the container_config is changed (and some other options are implemented), the module actions a container restart. It would be great if that behaviour could be optional so that it is possible to use handlers/tasks to action a restart at a later time. ##### STEPS TO REPRODUCE N/A ``` N/A ``` ##### EXPECTED RESULTS N/A ##### ACTUAL RESULTS ``` N/A ``` ",True,"lxc_container: provide option to make automatic container restarts optional - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME lxc_container ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION None ##### OS / ENVIRONMENT N/A ##### SUMMARY When the container_config is changed (and some other options are implemented), the module actions a container restart. 
It would be great if that behaviour could be optional so that it is possible to use handlers/tasks to action a restart at a later time. ##### STEPS TO REPRODUCE N/A ``` N/A ``` ##### EXPECTED RESULTS N/A ##### ACTUAL RESULTS ``` N/A ``` ",0,lxc container provide option to make automatic container restarts optional issue type feature idea component name lxc container ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables none os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary when the container config is changed and some other options are implemented the module actions a container restart it would be great if that behaviour could be optional so that it is possible to use handlers tasks to action a restart at a later time steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used n a n a expected results n a actual results n a ,0 380592,26425851063.0,IssuesEvent,2023-01-14 06:16:37,ioofy/transform-obj,https://api.github.com/repos/ioofy/transform-obj,closed,Compare the size of lodash with lodash-es to reduce bundle size,documentation enhancement,"Run a comparison and change the lodash package to lodash-es to reduce the bundle size, because this lib currently uses 25 KB of the total bundle size. ![image](https://user-images.githubusercontent.com/77242429/211996053-1539ea52-7bff-4212-881f-1a76db96535c.png) So I want to compare them and use the one with the best minified size",1.0,"Compare the size of lodash with lodash-es to reduce bundle size - Run a comparison and change the lodash package to lodash-es to reduce the bundle size, because this lib currently uses 25 KB of the total bundle size. ![image](https://user-images.githubusercontent.com/77242429/211996053-1539ea52-7bff-4212-881f-1a76db96535c.png) So I want to compare them and use the one with the best minified size",0,compare size lodash with lodash es for reduce bundle size reproduce a compare and change lodash package with lodash es for reduce bundle size because this libs using for total of bundling size so i want to compare them and use the best minified size,0 346342,24886752612.0,IssuesEvent,2022-10-28 08:27:07,bensohh/ped,https://api.github.com/repos/bensohh/ped,opened,Wrong tags assigned for Sample Data,type.DocumentationBug severity.VeryLow,"The chicken rice store was tagged as a `vegetarian` store in the initial sample/base data entries. This might be misleading to the user. Perhaps running through the entire sample data and checking the tags once would be good :) ![image.png](https://raw.githubusercontent.com/bensohh/ped/main/files/15ffc89a-c909-4d23-87ae-8d3d82b97d71.png) ",1.0,"Wrong tags assigned for Sample Data - The chicken rice store was tagged as a `vegetarian` store in the initial sample/base data entries. This might be misleading to the user. 
Perhaps running through the entire sample data and checking the tags once would be good :) ![image.png](https://raw.githubusercontent.com/bensohh/ped/main/files/15ffc89a-c909-4d23-87ae-8d3d82b97d71.png) ",0,wrong tags assigned for sample data the chicken rice store was tagged as a vegetarian store in the initial sample base data entries this might be misleading to the user perhaps running through the entire sample data and check the tags once would be good ,0 4755,17378621108.0,IssuesEvent,2021-07-31 07:52:54,JacobLinCool/BA,https://api.github.com/repos/JacobLinCool/BA,closed,Automation (2021/7/31 3:47:31 PM),automation,"**Updated.** (2021/7/31 3:48:32 PM)
**✨ Signed in for 37 consecutive days 🍀 Already signed in today 🍀 Double sign-in reward received 🍀 Today's quiz answered **
## Login: done ``` [2021/7/31 3:47:33 PM] Starting account login procedure [2021/7/31 3:47:38 PM] Checking login status [2021/7/31 3:47:41 PM] Login status: not logged in [2021/7/31 3:47:44 PM] Attempting login [2021/7/31 3:47:55 PM] Login attempted, re-checking login status [2021/7/31 3:47:55 PM] Checking login status [2021/7/31 3:47:58 PM] Login status: logged in [2021/7/31 3:47:58 PM] Account login procedure completed ``` ## Sign-in: done ``` [2021/7/31 3:47:58 PM] [Sign-in] Starting [2021/7/31 3:48:02 PM] [Sign-in] Consecutive sign-in days: 37 [2021/7/31 3:48:02 PM] [Sign-in] Already signed in today ✔ [2021/7/31 3:48:02 PM] [Sign-in] Checking double sign-in reward status [2021/7/31 3:48:07 PM] [Sign-in] Double sign-in reward received ✔ [2021/7/31 3:48:07 PM] [Sign-in] Finished ✨ [2021/7/31 3:48:07 PM] [Sign-in] ✨✨✨ Signed in for 37 consecutive days ✨✨✨ ``` ## Quiz: done ``` [2021/7/31 3:48:08 PM] [動畫瘋 quiz] Starting [2021/7/31 3:48:08 PM] [動畫瘋 quiz] Checking quiz status [2021/7/31 3:48:11 PM] [動畫瘋 quiz] Today's quiz already answered ✔ [2021/7/31 3:48:12 PM] [動畫瘋 quiz] Finished ✨ [2021/7/31 3:48:12 PM] [動畫瘋 quiz] ✨✨✨ Earned 0 Bahamut coins ✨✨✨ ``` ## Lucky draws: in progress ``` [2021/7/31 3:48:12 PM] [Lucky draw] Starting [2021/7/31 3:48:14 PM] [Lucky draw] Found 6 lucky draws [2021/7/31 3:48:14 PM] [Lucky draw] 1: GoKids | Descent: Legends of the Dark - a classic RPG spanning 16 years, back in an epic Traditional Chinese reissue! [2021/7/31 3:48:14 PM] [Lucky draw] 2: EPOS | Sennheiser's strongest gaming headset - the king returns, draw for a GSP 602 [2021/7/31 3:48:14 PM] [Lucky draw] 3: [Arthur 3C Life] A mobile gamer's good companion: the ENERGEA L-shaped movable dual-head braided antibacterial charging cable lucky draw [2021/7/31 3:48:14 PM] [Lucky draw] 4: Tsukuri Co., Ltd. (創力) - Monster Strike series merchandise - limited-time lucky draw! [2021/7/31 3:48:14 PM] [Lucky draw] 5: NVMe Gen4 solid-state speed - the PNY CS3040 SSD, loaded with power [2021/7/31 3:48:14 PM] [Lucky draw] 6: [Huion backs you again] Remote teaching, worry-free pandemic protection, the perfect bundle, sorted in one go! [2021/7/31 3:48:14 PM] [Lucky draw] Attempting lucky draw 1: GoKids | Descent: Legends of the Dark - a classic RPG spanning 16 years, back in an epic Traditional Chinese reissue! [2021/7/31 3:48:18 PM] [Lucky draw] Lucky draw 1 (GoKids | Descent: Legends of the Dark - a classic RPG spanning 16 years, back in an epic Traditional Chinese reissue!) has used up its free ad draws ✔ [2021/7/31 3:48:18 PM] [Lucky draw] Attempting lucky draw 2: EPOS | Sennheiser's strongest gaming headset - the king returns, draw for a GSP 602 [2021/7/31 3:48:20 PM] [Lucky draw] Lucky draw 2 (EPOS | Sennheiser's strongest gaming headset - the king returns, draw for a GSP 602) has used up its free ad draws ✔ [2021/7/31 3:48:20 PM] [Lucky draw] Attempting lucky draw 3: [Arthur 3C Life] A mobile gamer's good companion: the ENERGEA L-shaped movable dual-head braided antibacterial charging cable lucky draw [2021/7/31 3:48:23 PM] [Lucky draw] Lucky draw 3 ([Arthur 3C Life] A mobile gamer's good companion: the ENERGEA L-shaped movable dual-head braided antibacterial charging cable lucky draw) has used up its free ad draws ✔ [2021/7/31 3:48:23 PM] [Lucky draw] Attempting lucky draw 4: Tsukuri Co., Ltd. (創力) - Monster Strike series merchandise - limited-time lucky draw! [2021/7/31 3:48:26 PM] [Lucky draw] Lucky draw 4 (Tsukuri Co., Ltd. (創力) - Monster Strike series merchandise - limited-time lucky draw!) has used up its free ad draws ✔ [2021/7/31 3:48:26 PM] [Lucky draw] Attempting lucky draw 5: NVMe Gen4 solid-state speed - the PNY CS3040 SSD, loaded with power [2021/7/31 3:48:28 PM] [Lucky draw] Lucky draw 5 (NVMe Gen4 solid-state speed - the PNY CS3040 SSD, loaded with power) has used up its free ad draws ✔ [2021/7/31 3:48:28 PM] [Lucky draw] Attempting lucky draw 6: [Huion backs you again] Remote teaching, worry-free pandemic protection, the perfect bundle, sorted in one go! [2021/7/31 3:48:32 PM] [Lucky draw] Lucky draw 6 ([Huion backs you again] Remote teaching, worry-free pandemic protection, the perfect bundle, sorted in one go!) has used up its free ad draws ✔ ``` ",1.0,"Automation (2021/7/31 3:47:31 PM) - **Updated.** (2021/7/31 3:48:32 PM)
**✨ Signed in for 37 consecutive days 🍀 Already signed in today 🍀 Double sign-in reward received 🍀 Today's quiz answered **
## Login: done ``` [2021/7/31 3:47:33 PM] Starting account login procedure [2021/7/31 3:47:38 PM] Checking login status [2021/7/31 3:47:41 PM] Login status: not logged in [2021/7/31 3:47:44 PM] Attempting login [2021/7/31 3:47:55 PM] Login attempted, re-checking login status [2021/7/31 3:47:55 PM] Checking login status [2021/7/31 3:47:58 PM] Login status: logged in [2021/7/31 3:47:58 PM] Account login procedure completed ``` ## Sign-in: done ``` [2021/7/31 3:47:58 PM] [Sign-in] Starting [2021/7/31 3:48:02 PM] [Sign-in] Consecutive sign-in days: 37 [2021/7/31 3:48:02 PM] [Sign-in] Already signed in today ✔ [2021/7/31 3:48:02 PM] [Sign-in] Checking double sign-in reward status [2021/7/31 3:48:07 PM] [Sign-in] Double sign-in reward received ✔ [2021/7/31 3:48:07 PM] [Sign-in] Finished ✨ [2021/7/31 3:48:07 PM] [Sign-in] ✨✨✨ Signed in for 37 consecutive days ✨✨✨ ``` ## Quiz: done ``` [2021/7/31 3:48:08 PM] [動畫瘋 quiz] Starting [2021/7/31 3:48:08 PM] [動畫瘋 quiz] Checking quiz status [2021/7/31 3:48:11 PM] [動畫瘋 quiz] Today's quiz already answered ✔ [2021/7/31 3:48:12 PM] [動畫瘋 quiz] Finished ✨ [2021/7/31 3:48:12 PM] [動畫瘋 quiz] ✨✨✨ Earned 0 Bahamut coins ✨✨✨ ``` ## Lucky draws: in progress ``` [2021/7/31 3:48:12 PM] [Lucky draw] Starting [2021/7/31 3:48:14 PM] [Lucky draw] Found 6 lucky draws [2021/7/31 3:48:14 PM] [Lucky draw] 1: GoKids | Descent: Legends of the Dark - a classic RPG spanning 16 years, back in an epic Traditional Chinese reissue! [2021/7/31 3:48:14 PM] [Lucky draw] 2: EPOS | Sennheiser's strongest gaming headset - the king returns, draw for a GSP 602 [2021/7/31 3:48:14 PM] [Lucky draw] 3: [Arthur 3C Life] A mobile gamer's good companion: the ENERGEA L-shaped movable dual-head braided antibacterial charging cable lucky draw [2021/7/31 3:48:14 PM] [Lucky draw] 4: Tsukuri Co., Ltd. (創力) - Monster Strike series merchandise - limited-time lucky draw! [2021/7/31 3:48:14 PM] [Lucky draw] 5: NVMe Gen4 solid-state speed - the PNY CS3040 SSD, loaded with power [2021/7/31 3:48:14 PM] [Lucky draw] 6: [Huion backs you again] Remote teaching, worry-free pandemic protection, the perfect bundle, sorted in one go! [2021/7/31 3:48:14 PM] [Lucky draw] Attempting lucky draw 1: GoKids | Descent: Legends of the Dark - a classic RPG spanning 16 years, back in an epic Traditional Chinese reissue! [2021/7/31 3:48:18 PM] [Lucky draw] Lucky draw 1 (GoKids | Descent: Legends of the Dark - a classic RPG spanning 16 years, back in an epic Traditional Chinese reissue!) has used up its free ad draws ✔ [2021/7/31 3:48:18 PM] [Lucky draw] Attempting lucky draw 2: EPOS | Sennheiser's strongest gaming headset - the king returns, draw for a GSP 602 [2021/7/31 3:48:20 PM] [Lucky draw] Lucky draw 2 (EPOS | Sennheiser's strongest gaming headset - the king returns, draw for a GSP 602) has used up its free ad draws ✔ [2021/7/31 3:48:20 PM] [Lucky draw] Attempting lucky draw 3: [Arthur 3C Life] A mobile gamer's good companion: the ENERGEA L-shaped movable dual-head braided antibacterial charging cable lucky draw [2021/7/31 3:48:23 PM] [Lucky draw] Lucky draw 3 ([Arthur 3C Life] A mobile gamer's good companion: the ENERGEA L-shaped movable dual-head braided antibacterial charging cable lucky draw) has used up its free ad draws ✔ [2021/7/31 3:48:23 PM] [Lucky draw] Attempting lucky draw 4: Tsukuri Co., Ltd. (創力) - Monster Strike series merchandise - limited-time lucky draw! [2021/7/31 3:48:26 PM] [Lucky draw] Lucky draw 4 (Tsukuri Co., Ltd. (創力) - Monster Strike series merchandise - limited-time lucky draw!) has used up its free ad draws ✔ [2021/7/31 3:48:26 PM] [Lucky draw] Attempting lucky draw 5: NVMe Gen4 solid-state speed - the PNY CS3040 SSD, loaded with power [2021/7/31 3:48:28 PM] [Lucky draw] Lucky draw 5 (NVMe Gen4 solid-state speed - the PNY CS3040 SSD, loaded with power) has used up its free ad draws ✔ [2021/7/31 3:48:28 PM] [Lucky draw] Attempting lucky draw 6: [Huion backs you again] Remote teaching, worry-free pandemic protection, the perfect bundle, sorted in one go! [2021/7/31 3:48:32 PM] [Lucky draw] Lucky draw 6 ([Huion backs you again] Remote teaching, worry-free pandemic protection, the perfect bundle, sorted in one go!) has used up its free ad draws ✔ ``` ",1,automation pm updated pm ✨ signed in for consecutive days 🍀 already signed in today 🍀 double sign in reward received 🍀 today s quiz answered login done starting account login procedure checking login status login status not logged in attempting login login attempted re checking login status checking login status login status logged in account login procedure completed sign in done starting consecutive sign in days already signed in today ✔ checking double sign in reward status double sign in reward received ✔ finished ✨ ✨✨✨ signed in for consecutive days ✨✨✨ quiz done starting checking quiz status today s quiz already answered ✔ finished ✨ ✨✨✨ earned bahamut coins ✨✨✨ lucky draws in progress starting found lucky draws gokids | descent: legends of the dark - a classic rpg spanning years, back in an epic traditional chinese reissue! epos | sennheiser s strongest gaming headset - the king returns, draw for a gsp [arthur 3c life] a mobile gamer s good companion: the energea l-shaped movable dual-head braided antibacterial charging cable lucky draw tsukuri co., ltd. (創力) - monster strike series merchandise - limited-time lucky draw! nvme solid-state speed - the pny ssd, loaded with power [huion backs you again] remote teaching, worry-free pandemic protection, the perfect bundle, sorted in one go! attempting lucky draw: gokids | descent: legends of the dark - a classic rpg spanning years, back in an epic traditional chinese reissue! lucky draw (gokids | descent: legends of the dark - a classic rpg spanning years, back in an epic traditional chinese reissue!) has used up its free ad draws ✔ attempting lucky draw: epos | sennheiser s strongest gaming headset - the king returns, draw for a gsp lucky draw (epos | sennheiser s strongest gaming headset - the king returns, draw for a gsp) has used up its free ad draws ✔ attempting lucky draw: [arthur 3c life] a mobile gamer s good companion: the energea l-shaped movable dual-head braided antibacterial charging cable lucky draw lucky draw ([arthur 3c life] a mobile gamer s good companion: the energea l-shaped movable dual-head braided antibacterial charging cable lucky draw) has used up its free ad draws ✔ attempting lucky draw: tsukuri co., ltd. (創力) - monster strike series merchandise - limited-time lucky draw! lucky draw (tsukuri co., ltd. (創力) - monster strike series merchandise - limited-time lucky draw!) has used up its free ad draws ✔ attempting lucky draw: nvme solid-state speed - the pny ssd, loaded with power lucky draw (nvme solid-state speed - the pny ssd, loaded with power) has used up its free ad draws ✔ attempting lucky draw: [huion backs you again] remote teaching, worry-free pandemic protection, the perfect bundle, sorted in one go! 
lucky draw ([huion backs you again] remote teaching, worry-free pandemic protection, the perfect bundle, sorted in one go!) has used up its free ad draws ✔ ,1 58458,7155539714.0,IssuesEvent,2018-01-26 13:14:55,algolia/instantsearch.js,https://api.github.com/repos/algolia/instantsearch.js,opened,Proposal: improve `connectRange()` behavior,Design: API Doc: API Scope: connectors ❔ Question,"### Improve the [`refine()` documentation](https://community.algolia.com/instantsearch.js/v2/connectors/connectRange.html#struct-RangeRenderingOptions) When you create a calendar widget (to refine with date ranges), you sometimes need to specify a lower bound without any upper bound (let's say ""*Display all events from 26/01/2018*""). A way to do so is to pass `undefined` as the upper bound: ```js refine([start, undefined]) ``` **I don't think we've documented passing `undefined` anywhere ([see `refine()` doc](https://community.algolia.com/instantsearch.js/v2/connectors/connectRange.html#struct-RangeRenderingOptions)).** ##### Actions: * Document that `refine()` accepts `undefined` as an unlimited bound ### Consider supporting `Infinity` For now, the `refine()` method only takes finite values. Passing `-Infinity` or `Infinity` to a range made sense at first to me (`typeof Infinity === 'number'`). 
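To make the suggested fallback concrete, here is a small sketch of the normalization it implies, written in Python purely for illustration (the helper name `normalize_bounds` is invented; `refine()` itself is the JavaScript API discussed here): infinite bounds would simply degrade to open bounds, the same way `undefined` does.

```python
import math

def normalize_bounds(lo, hi):
    """Map infinite or missing bounds to None, i.e. an open bound."""
    def norm(value):
        if value is None:
            return None  # already an open bound, like `undefined`
        if isinstance(value, float) and math.isinf(value):
            return None  # Infinity / -Infinity fall back to an open bound
        return value
    return norm(lo), norm(hi)

print(normalize_bounds(float("-inf"), 100.0))  # (None, 100.0) -> "up to 100"
print(normalize_bounds(25.0, None))            # (25.0, None)  -> "from 25"
```

As the next sentence notes, though, the library currently performs no such fallback.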
It is however ignored by `refine()` and doesn't trigger any refinements when provided. I think it should be supported since it is a number and should fall back to `undefined` internally. **The documentation doesn't state that `refine()` only supports finite numbers.** ##### Actions: * Specify in the `refine()` doc that it filters only with finite values *OR* * Support infinite values (process them as `undefined` internally) * (?) Log warnings when the refinement isn't triggered ### Future API improvements As discussed with @samouss, it might be interesting to change `refine(Array)` to an object `refine({ min: number, max: number})` in a next major release. A reason behind this is that both values are optional. It also feels more natural to pass an object rather than a tuple (see [`range`](https://community.algolia.com/instantsearch.js/v2/connectors/connectRange.html#struct-RangeRenderingOptions)). If we decide to do so, we should also change the `start` method to a similar object. --- Do you think these inputs are relevant? Should we take action? #### TL;DR * `refine()` accepting `undefined` is not documented * `refine()` only accepts finite values (not `Infinity` and `-Infinity`) * `Infinity` and `-Infinity` should behave like `undefined` * We should consider passing an object to `refine()` instead of an array in the future #### Related * #2309",0,proposal improve connectrange behavior improve the when you create a calendar widget to refine with date ranges you sometimes need to specify a lower bound without any upper bound let s say display all events from a way to do so is to pass undefined as the upper bound js refine i don t think we ve documented passing undefined anywhere actions document that refine accepts undefined as an unlimited bound consider supporting infinity for now the refine method only takes finite values passing infinity or infinity to a range made sense at first to me typeof infinity number it is however ignored by refine and doesn t trigger any refinements when provided i think it should be supported since it is a number and should fall back to undefined internally the documentation doesn t state that refine only supports finite numbers actions specify in the refine doc that it filters only with finite values or support infinite values process them as undefined internally log warnings when the refinement isn t triggered future api improvements as discussed with samouss it might be interesting to change refine array to an object refine min number max number in a next major release a reason behind this is that both values are optional it also feels more natural to pass an object rather than a tuple see if we decide to do so we should also change the start method to a similar object do you think these inputs are relevant should we take action tl dr refine accepting undefined is not documented refine only accepts finite values not infinity and infinity infinity and infinity should behave like undefined we should consider passing an object to refine instead of an array in the future related ,0 5441,19604591846.0,IssuesEvent,2022-01-06 07:42:32,arcus-azure/arcus.ml,https://api.github.com/repos/arcus-azure/arcus.ml,opened,Split CI build in separate jobs for clearer overview and restart,automation enhancement,"**Is your feature request related to a problem? Please describe.** Currently the CI build is one big job which doesn't provide much clarity in the PR overview and does not provide more gradual control if a job needs to be restarted. 
**Describe the solution you'd like** Split the CI build in separate jobs (like in the Arcus repos).",1.0,"Split CI build in separate jobs for clearer overview and restart - **Is your feature request related to a problem? Please describe.** Currently the CI build is one big job which doesn't provide much clarity in the PR overview and does not provide more gradual control if a job needs to be restarted. **Describe the solution you'd like** Split the CI build in separate jobs (like in the Arcus repos).",1,split ci build in separate jobs for clearer overview and restart is your feature request related to a problem please describe currently the ci build is one big job which doesn t provide much clarity in the pr overview and does not provide more gradual control if a job needs to be restarted describe the solution you d like split the ci build in separate jobs like in the arcus repo s ,1 9155,27638407456.0,IssuesEvent,2023-03-10 16:06:52,Accenture/sfmc-devtools,https://api.github.com/repos/Accenture/sfmc-devtools,opened,[BUG] automation activity types incorrectly mapped,bug c/automation NEW,"### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior activityTypeMapping: { dataExtract: 73, dataFactoryUtility: 425, emailSend: 42, fileTransfer: 53, filter: 303, fireEvent: 749, importFile: 43, journeyEntry: 952, journeyEntryOld: 733, query: 300, script: 423, verification: 1000, wait: 467, push: 736, sms: 725, reportDefinition: 84, refreshMobileFilteredList: 724, refreshGroup: 45, interactions: 1101, interactionStudioData: 1010, importMobileContact: 726, }, we try to cache these types, but some of them are either not supported or incorrectly spelled (I can already spot interactions and journeyEntry; maybe more) ### Expected Behavior _No response_ ### Steps To Reproduce 1. Go to '...' 2. Click on '....' 3. Run '...' 4. See error... ### Version 4.3.4 ### Environment - OS: - Node: - npm: ### Participation - [X] I am willing to submit a pull request for this issue. 
### Additional comments _No response_",1.0,"[BUG] automation activity types incorrectly mapped - ### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior activityTypeMapping: { dataExtract: 73, dataFactoryUtility: 425, emailSend: 42, fileTransfer: 53, filter: 303, fireEvent: 749, importFile: 43, journeyEntry: 952, journeyEntryOld: 733, query: 300, script: 423, verification: 1000, wait: 467, push: 736, sms: 725, reportDefinition: 84, refreshMobileFilteredList: 724, refreshGroup: 45, interactions: 1101, interactionStudioData: 1010, importMobileContact: 726, }, we try to cache these types, but some of them are either not supported or incorrectly spelled (I can already spot interactions and journeyEntry; maybe more) ### Expected Behavior _No response_ ### Steps To Reproduce 1. Go to '...' 2. Click on '....' 3. Run '...' 4. See error... ### Version 4.3.4 ### Environment - OS: - Node: - npm: ### Participation - [X] I am willing to submit a pull request for this issue. ### Additional comments _No response_",1, automation activitiy types incorrectly mapped is there an existing issue for this i have searched the existing issues current behavior activitytypemapping dataextract datafactoryutility emailsend filetransfer filter fireevent importfile journeyentry journeyentryold query script verification wait push sms reportdefinition refreshmobilefilteredlist refreshgroup interactions interactionstudiodata importmobilecontact we try to cache these types but some of them are either not supported or incorrectly spelled interactions journeyentry i can spot already maybe more expected behavior no response steps to reproduce go to click on run see error version environment os node npm participation i am willing to submit a pull request for this issue additional comments no response ,1 132629,28263550205.0,IssuesEvent,2023-04-07 03:12:17,RE-SS3D/SS3D-Website,https://api.github.com/repos/RE-SS3D/SS3D-Website,opened,Fix Page counter,Bug Code - HTML/CSS Code - JavaScript,Page counter went invisible and I'm not sure it was working right to begin with but don't have time at the 
moment to look into it.,2.0,Fix Page counter - Page counter went invisible and I'm not sure it was working right to begin with but don't have time at the moment to look into it.,0,fix page counter page counter went invisible and i m not sure it was working right to begin with but don t have time at the moment to look into it ,0 213386,16515853560.0,IssuesEvent,2021-05-26 09:38:55,numba/numba,https://api.github.com/repos/numba/numba,closed,CUDA: Atomics tests fail with NumPy 1.20,CUDA bug test suite,"**EDIT**: Originally this was reported as an issue with Python 3.9, but it actually seems to be NumPy 1.20 which causes the issue. Tested with Numba 0.53.1 and Python 3.9.4. ``` $ numba -s System info: -------------------------------------------------------------------------------- __Time Stamp__ Report started (local time) : 2021-04-23 10:50:10.994280 UTC start time : 2021-04-23 09:50:10.994284 Running time (s) : 0.872906 __Hardware Information__ Machine : x86_64 CPU Name : skylake-avx512 CPU Count : 12 Number of accessible CPUs : 12 List of accessible CPUs cores : 0-11 CFS Restrictions (CPUs worth of runtime) : None CPU Features : 64bit adx aes avx avx2 avx512bw avx512cd avx512dq avx512f avx512vl bmi bmi2 clflushopt clwb cmov cx16 cx8 f16c fma fsgsbase fxsr invpcid lzcnt mmx movbe pclmul pku popcnt prfchw rdrnd rdseed rtm sahf sse sse2 sse3 sse4.1 sse4.2 ssse3 xsave xsavec xsaveopt xsaves Memory Total (MB) : 95270 Memory Available (MB) : 91775 __OS Information__ Platform Name : Linux-5.4.0-72-generic-x86_64-with-glibc2.27 Platform Release : 5.4.0-72-generic OS Name : Linux OS Version : #80~18.04.1-Ubuntu SMP Mon Apr 12 23:26:25 UTC 2021 OS Specific Version : ? Libc Version : glibc 2.27 __Python Information__ Python Compiler : GCC 7.3.0 Python Implementation : CPython Python Version : 3.9.4 Python Locale : en_GB.UTF-8 __LLVM Information__ LLVM Version : 10.0.1 __CUDA Information__ CUDA Device Initialized : True CUDA Driver Version : 11030 CUDA Detect Output: Found 2 CUDA devices id 0 b'NVIDIA Quadro RTX 8000' [SUPPORTED] compute capability: 7.5 pci device id: 0 pci bus id: 21 id 1 b'NVIDIA Quadro RTX 8000' [SUPPORTED] compute capability: 7.5 pci device id: 0 pci bus id: 45 Summary: 2/2 devices are supported CUDA Libraries Test Output: Finding cublas from System named libcublas.so.11.2.2.29 trying to open library... ok Finding cusparse from System named libcusparse.so.11.3.1.29 trying to open library... ok Finding cufft from System named libcufft.so.10.4.0.29 trying to open library... ok Finding curand from System named libcurand.so.10.2.3.29 trying to open library... ok Finding nvvm from System named libnvvm.so.4.0.0 trying to open library... ok Finding cudart from System named libcudart.so.11.2.29 trying to open library... ok Finding cudadevrt from System named libcudadevrt.a Finding libdevice from System searching for compute_20... ok searching for compute_30... ok searching for compute_35... ok searching for compute_50... ok __ROC information__ ROC Available : False ROC Toolchains : None HSA Agents Count : 0 HSA Agents: None HSA Discrete GPUs Count : 0 HSA Discrete GPUs : None __SVML Information__ SVML State, config.USING_SVML : False SVML Library Loaded : False llvmlite Using SVML Patched LLVM : True SVML Operational : False __Threading Layer Information__ TBB Threading Layer Available : True +-->TBB imported successfully. OpenMP Threading Layer Available : True +-->Vendor: GNU Workqueue Threading Layer Available : True +-->Workqueue imported successfully. 
__Numba Environment Variable Information__ None found. __Conda Information__ Conda Build : 3.20.1 Conda Env : 4.9.2 Conda Platform : linux-64 Conda Python Version : 3.8.3.final.0 Conda Root Writable : True __Installed Packages__ _libgcc_mutex 0.1 main blas 1.0 mkl ca-certificates 2021.4.13 h06a4308_1 certifi 2020.12.5 py39h06a4308_0 intel-openmp 2021.2.0 h06a4308_610 ld_impl_linux-64 2.33.1 h53a641e_7 libffi 3.3 he6710b0_2 libgcc-ng 9.1.0 hdf63c60_0 libllvm10 10.0.1 hbcb73fb_5 libstdcxx-ng 9.1.0 hdf63c60_0 llvmlite 0.36.0 py39h612dafd_4 mkl 2021.2.0 h06a4308_296 mkl-service 2.3.0 py39h27cfd23_1 mkl_fft 1.3.0 py39h42c9631_2 mkl_random 1.2.1 py39ha9443f7_2 ncurses 6.2 he6710b0_1 numba 0.53.1 py39ha9443f7_0 numpy 1.20.1 py39h93e21f0_0 numpy-base 1.20.1 py39h7d8b39e_0 openssl 1.1.1k h27cfd23_0 pip 21.0.1 py39h06a4308_0 python 3.9.4 hdb3f193_0 readline 8.1 h27cfd23_0 setuptools 52.0.0 py39h06a4308_0 six 1.15.0 py39h06a4308_0 sqlite 3.35.4 hdfb4753_0 tbb 2020.3 hfd86e86_0 tk 8.6.10 hbc83047_0 tzdata 2020f h52ac0ba_0 wheel 0.36.2 pyhd3eb1b0_0 xz 5.2.5 h7b6447c_0 zlib 1.2.11 h7b6447c_3 No errors reported. ``` Test errors: ``` ====================================================================== ERROR: test_atomic_nanmax_int32 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1279, in test_atomic_nanmax_int32 self.check_atomic_nanmax(dtype=np.int32, lo=-65535, hi=65535) File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1271, in check_atomic_nanmax vals[1::2] = np.nan ValueError: cannot convert float NaN to integer ====================================================================== ERROR: test_atomic_nanmax_int64 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1286, in test_atomic_nanmax_int64 self.check_atomic_nanmax(dtype=np.int64, lo=-65535, hi=65535) File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1271, in check_atomic_nanmax vals[1::2] = np.nan ValueError: cannot convert float NaN to integer ====================================================================== ERROR: test_atomic_nanmax_uint32 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1282, in test_atomic_nanmax_uint32 self.check_atomic_nanmax(dtype=np.uint32, lo=0, hi=65535) File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1271, in check_atomic_nanmax vals[1::2] = np.nan ValueError: cannot convert float NaN to integer ====================================================================== ERROR: test_atomic_nanmax_uint64 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics) ---------------------------------------------------------------------- Traceback (most recent call last): File 
""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1290, in test_atomic_nanmax_uint64 self.check_atomic_nanmax(dtype=np.uint64, lo=0, hi=65535) File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1271, in check_atomic_nanmax vals[1::2] = np.nan ValueError: cannot convert float NaN to integer ====================================================================== ERROR: test_atomic_nanmin_int32 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1332, in test_atomic_nanmin_int32 self.check_atomic_nanmin(dtype=np.int32, lo=-65535, hi=65535) File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1323, in check_atomic_nanmin vals[1::2] = np.nan ValueError: cannot convert float NaN to integer ====================================================================== ERROR: test_atomic_nanmin_int64 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1339, in test_atomic_nanmin_int64 self.check_atomic_nanmin(dtype=np.int64, lo=-65535, hi=65535) File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1323, in check_atomic_nanmin vals[1::2] = np.nan ValueError: cannot convert float NaN to integer ====================================================================== ERROR: test_atomic_nanmin_uint32 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1335, in test_atomic_nanmin_uint32 self.check_atomic_nanmin(dtype=np.uint32, lo=0, hi=65535) File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1323, in check_atomic_nanmin vals[1::2] = np.nan ValueError: cannot convert float NaN to integer ====================================================================== ERROR: test_atomic_nanmin_uint64 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1343, in test_atomic_nanmin_uint64 self.check_atomic_nanmin(dtype=np.uint64, lo=0, hi=65535) File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1323, in check_atomic_nanmin vals[1::2] = np.nan ValueError: cannot convert float NaN to integer ---------------------------------------------------------------------- Ran 1124 tests in 138.542s FAILED (errors=8, skipped=11, expected failures=1) ```",1.0,"CUDA: Atomics tests fail with NumPy 1.20 - **EDIT**: Originally this was reported as an issue with Python 3.9, but 
it actually seems to be NumPy 1.20 which causes the issue. Tested with Numba 0.53.1 and Python 3.9.4. ``` $ numba -s System info: -------------------------------------------------------------------------------- __Time Stamp__ Report started (local time) : 2021-04-23 10:50:10.994280 UTC start time : 2021-04-23 09:50:10.994284 Running time (s) : 0.872906 __Hardware Information__ Machine : x86_64 CPU Name : skylake-avx512 CPU Count : 12 Number of accessible CPUs : 12 List of accessible CPUs cores : 0-11 CFS Restrictions (CPUs worth of runtime) : None CPU Features : 64bit adx aes avx avx2 avx512bw avx512cd avx512dq avx512f avx512vl bmi bmi2 clflushopt clwb cmov cx16 cx8 f16c fma fsgsbase fxsr invpcid lzcnt mmx movbe pclmul pku popcnt prfchw rdrnd rdseed rtm sahf sse sse2 sse3 sse4.1 sse4.2 ssse3 xsave xsavec xsaveopt xsaves Memory Total (MB) : 95270 Memory Available (MB) : 91775 __OS Information__ Platform Name : Linux-5.4.0-72-generic-x86_64-with-glibc2.27 Platform Release : 5.4.0-72-generic OS Name : Linux OS Version : #80~18.04.1-Ubuntu SMP Mon Apr 12 23:26:25 UTC 2021 OS Specific Version : ? Libc Version : glibc 2.27 __Python Information__ Python Compiler : GCC 7.3.0 Python Implementation : CPython Python Version : 3.9.4 Python Locale : en_GB.UTF-8 __LLVM Information__ LLVM Version : 10.0.1 __CUDA Information__ CUDA Device Initialized : True CUDA Driver Version : 11030 CUDA Detect Output: Found 2 CUDA devices id 0 b'NVIDIA Quadro RTX 8000' [SUPPORTED] compute capability: 7.5 pci device id: 0 pci bus id: 21 id 1 b'NVIDIA Quadro RTX 8000' [SUPPORTED] compute capability: 7.5 pci device id: 0 pci bus id: 45 Summary: 2/2 devices are supported CUDA Libraries Test Output: Finding cublas from System named libcublas.so.11.2.2.29 trying to open library... ok Finding cusparse from System named libcusparse.so.11.3.1.29 trying to open library... ok Finding cufft from System named libcufft.so.10.4.0.29 trying to open library... ok Finding curand from System named libcurand.so.10.2.3.29 trying to open library... ok Finding nvvm from System named libnvvm.so.4.0.0 trying to open library... ok Finding cudart from System named libcudart.so.11.2.29 trying to open library... ok Finding cudadevrt from System named libcudadevrt.a Finding libdevice from System searching for compute_20... ok searching for compute_30... ok searching for compute_35... ok searching for compute_50... ok __ROC information__ ROC Available : False ROC Toolchains : None HSA Agents Count : 0 HSA Agents: None HSA Discrete GPUs Count : 0 HSA Discrete GPUs : None __SVML Information__ SVML State, config.USING_SVML : False SVML Library Loaded : False llvmlite Using SVML Patched LLVM : True SVML Operational : False __Threading Layer Information__ TBB Threading Layer Available : True +-->TBB imported successfully. OpenMP Threading Layer Available : True +-->Vendor: GNU Workqueue Threading Layer Available : True +-->Workqueue imported successfully. __Numba Environment Variable Information__ None found. 
__Conda Information__ Conda Build : 3.20.1 Conda Env : 4.9.2 Conda Platform : linux-64 Conda Python Version : 3.8.3.final.0 Conda Root Writable : True __Installed Packages__ _libgcc_mutex 0.1 main blas 1.0 mkl ca-certificates 2021.4.13 h06a4308_1 certifi 2020.12.5 py39h06a4308_0 intel-openmp 2021.2.0 h06a4308_610 ld_impl_linux-64 2.33.1 h53a641e_7 libffi 3.3 he6710b0_2 libgcc-ng 9.1.0 hdf63c60_0 libllvm10 10.0.1 hbcb73fb_5 libstdcxx-ng 9.1.0 hdf63c60_0 llvmlite 0.36.0 py39h612dafd_4 mkl 2021.2.0 h06a4308_296 mkl-service 2.3.0 py39h27cfd23_1 mkl_fft 1.3.0 py39h42c9631_2 mkl_random 1.2.1 py39ha9443f7_2 ncurses 6.2 he6710b0_1 numba 0.53.1 py39ha9443f7_0 numpy 1.20.1 py39h93e21f0_0 numpy-base 1.20.1 py39h7d8b39e_0 openssl 1.1.1k h27cfd23_0 pip 21.0.1 py39h06a4308_0 python 3.9.4 hdb3f193_0 readline 8.1 h27cfd23_0 setuptools 52.0.0 py39h06a4308_0 six 1.15.0 py39h06a4308_0 sqlite 3.35.4 hdfb4753_0 tbb 2020.3 hfd86e86_0 tk 8.6.10 hbc83047_0 tzdata 2020f h52ac0ba_0 wheel 0.36.2 pyhd3eb1b0_0 xz 5.2.5 h7b6447c_0 zlib 1.2.11 h7b6447c_3 No errors reported. ``` Test errors: ``` ====================================================================== ERROR: test_atomic_nanmax_int32 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1279, in test_atomic_nanmax_int32 self.check_atomic_nanmax(dtype=np.int32, lo=-65535, hi=65535) File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1271, in check_atomic_nanmax vals[1::2] = np.nan ValueError: cannot convert float NaN to integer ====================================================================== ERROR: test_atomic_nanmax_int64 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1286, in test_atomic_nanmax_int64 self.check_atomic_nanmax(dtype=np.int64, lo=-65535, hi=65535) File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1271, in check_atomic_nanmax vals[1::2] = np.nan ValueError: cannot convert float NaN to integer ====================================================================== ERROR: test_atomic_nanmax_uint32 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1282, in test_atomic_nanmax_uint32 self.check_atomic_nanmax(dtype=np.uint32, lo=0, hi=65535) File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1271, in check_atomic_nanmax vals[1::2] = np.nan ValueError: cannot convert float NaN to integer ====================================================================== ERROR: test_atomic_nanmax_uint64 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics) ---------------------------------------------------------------------- Traceback (most recent call last): File 
""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1290, in test_atomic_nanmax_uint64 self.check_atomic_nanmax(dtype=np.uint64, lo=0, hi=65535) File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1271, in check_atomic_nanmax vals[1::2] = np.nan ValueError: cannot convert float NaN to integer ====================================================================== ERROR: test_atomic_nanmin_int32 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1332, in test_atomic_nanmin_int32 self.check_atomic_nanmin(dtype=np.int32, lo=-65535, hi=65535) File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1323, in check_atomic_nanmin vals[1::2] = np.nan ValueError: cannot convert float NaN to integer ====================================================================== ERROR: test_atomic_nanmin_int64 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1339, in test_atomic_nanmin_int64 self.check_atomic_nanmin(dtype=np.int64, lo=-65535, hi=65535) File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1323, in check_atomic_nanmin vals[1::2] = np.nan ValueError: cannot convert float NaN to integer ====================================================================== ERROR: test_atomic_nanmin_uint32 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1335, in test_atomic_nanmin_uint32 self.check_atomic_nanmin(dtype=np.uint32, lo=0, hi=65535) File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1323, in check_atomic_nanmin vals[1::2] = np.nan ValueError: cannot convert float NaN to integer ====================================================================== ERROR: test_atomic_nanmin_uint64 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1343, in test_atomic_nanmin_uint64 self.check_atomic_nanmin(dtype=np.uint64, lo=0, hi=65535) File ""/home/gmarkall/miniconda3/envs/NumbaGPUTest/lib/python3.9/site-packages/numba/cuda/tests/cudapy/test_atomics.py"", line 1323, in check_atomic_nanmin vals[1::2] = np.nan ValueError: cannot convert float NaN to integer ---------------------------------------------------------------------- Ran 1124 tests in 138.542s FAILED (errors=8, skipped=11, expected failures=1) ```",0,cuda atomics tests fail with numpy edit originally this was reported as an issue with python but it actually seems to 
be numpy which causes the issue tested with numba and python numba s system info time stamp report started local time utc start time running time s hardware information machine cpu name skylake cpu count number of accessible cpus list of accessible cpus cores cfs restrictions cpus worth of runtime none cpu features adx aes avx bmi clflushopt clwb cmov fma fsgsbase fxsr invpcid lzcnt mmx movbe pclmul pku popcnt prfchw rdrnd rdseed rtm sahf sse xsave xsavec xsaveopt xsaves memory total mb memory available mb os information platform name linux generic with platform release generic os name linux os version ubuntu smp mon apr utc os specific version libc version glibc python information python compiler gcc python implementation cpython python version python locale en gb utf llvm information llvm version cuda information cuda device initialized true cuda driver version cuda detect output found cuda devices id b nvidia quadro rtx compute capability pci device id pci bus id id b nvidia quadro rtx compute capability pci device id pci bus id summary devices are supported cuda libraries test output finding cublas from system named libcublas so trying to open library ok finding cusparse from system named libcusparse so trying to open library ok finding cufft from system named libcufft so trying to open library ok finding curand from system named libcurand so trying to open library ok finding nvvm from system named libnvvm so trying to open library ok finding cudart from system named libcudart so trying to open library ok finding cudadevrt from system named libcudadevrt a finding libdevice from system searching for compute ok searching for compute ok searching for compute ok searching for compute ok roc information roc available false roc toolchains none hsa agents count hsa agents none hsa discrete gpus count hsa discrete gpus none svml information svml state config using svml false svml library loaded false llvmlite using svml patched llvm true svml operational false threading layer information tbb threading layer available true tbb imported successfully openmp threading layer available true vendor gnu workqueue threading layer available true workqueue imported successfully numba environment variable information none found conda information conda build conda env conda platform linux conda python version final conda root writable true installed packages libgcc mutex main blas mkl ca certificates certifi intel openmp ld impl linux libffi libgcc ng libstdcxx ng llvmlite mkl mkl service mkl fft mkl random ncurses numba numpy numpy base openssl pip python readline setuptools six sqlite tbb tk tzdata wheel xz zlib no errors reported test errors error test atomic nanmax numba cuda tests cudapy test atomics testcudaatomics traceback most recent call last file home gmarkall envs numbagputest lib site packages numba cuda tests cudapy test atomics py line in test atomic nanmax self check atomic nanmax dtype np lo hi file home gmarkall envs numbagputest lib site packages numba cuda tests cudapy test atomics py line in check atomic nanmax vals np nan valueerror cannot convert float nan to integer error test atomic nanmax numba cuda tests cudapy test atomics testcudaatomics traceback most recent call last file home gmarkall envs numbagputest lib site packages numba cuda tests cudapy test atomics py line in test atomic nanmax self check atomic nanmax dtype np lo hi file home gmarkall envs numbagputest lib site packages numba cuda tests cudapy test atomics py line in check atomic nanmax vals np nan valueerror cannot 
convert float nan to integer error test atomic nanmax numba cuda tests cudapy test atomics testcudaatomics traceback most recent call last file home gmarkall envs numbagputest lib site packages numba cuda tests cudapy test atomics py line in test atomic nanmax self check atomic nanmax dtype np lo hi file home gmarkall envs numbagputest lib site packages numba cuda tests cudapy test atomics py line in check atomic nanmax vals np nan valueerror cannot convert float nan to integer error test atomic nanmax numba cuda tests cudapy test atomics testcudaatomics traceback most recent call last file home gmarkall envs numbagputest lib site packages numba cuda tests cudapy test atomics py line in test atomic nanmax self check atomic nanmax dtype np lo hi file home gmarkall envs numbagputest lib site packages numba cuda tests cudapy test atomics py line in check atomic nanmax vals np nan valueerror cannot convert float nan to integer error test atomic nanmin numba cuda tests cudapy test atomics testcudaatomics traceback most recent call last file home gmarkall envs numbagputest lib site packages numba cuda tests cudapy test atomics py line in test atomic nanmin self check atomic nanmin dtype np lo hi file home gmarkall envs numbagputest lib site packages numba cuda tests cudapy test atomics py line in check atomic nanmin vals np nan valueerror cannot convert float nan to integer error test atomic nanmin numba cuda tests cudapy test atomics testcudaatomics traceback most recent call last file home gmarkall envs numbagputest lib site packages numba cuda tests cudapy test atomics py line in test atomic nanmin self check atomic nanmin dtype np lo hi file home gmarkall envs numbagputest lib site packages numba cuda tests cudapy test atomics py line in check atomic nanmin vals np nan valueerror cannot convert float nan to integer error test atomic nanmin numba cuda tests cudapy test atomics testcudaatomics traceback most recent call last file home gmarkall envs numbagputest lib site packages numba cuda tests cudapy test atomics py line in test atomic nanmin self check atomic nanmin dtype np lo hi file home gmarkall envs numbagputest lib site packages numba cuda tests cudapy test atomics py line in check atomic nanmin vals np nan valueerror cannot convert float nan to integer error test atomic nanmin numba cuda tests cudapy test atomics testcudaatomics traceback most recent call last file home gmarkall envs numbagputest lib site packages numba cuda tests cudapy test atomics py line in test atomic nanmin self check atomic nanmin dtype np lo hi file home gmarkall envs numbagputest lib site packages numba cuda tests cudapy test atomics py line in check atomic nanmin vals np nan valueerror cannot convert float nan to integer ran tests in failed errors skipped expected failures ,0 7463,24931374722.0,IssuesEvent,2022-10-31 11:55:59,Budibase/budibase,https://api.github.com/repos/Budibase/budibase,opened,Update Discord automation to support embed objects,enhancement automations,"**Describe the feature request** Had a [request on our discord server here](https://discord.com/channels/733030666647765003/1035462956122656808/1035462956122656808) to support embeds in Discord automations. Supporting these would allow a much richer message to be sent via webhooks, and would allow them to include images, videos, thumbnails, etc. The full reference for the Discord embed object [can be found here](https://discord.com/channels/733030666647765003/1035462956122656808/1035462956122656808). 
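To make the request shape concrete, here is a minimal sketch of the kind of webhook payload this would enable, using Python's requests library (the webhook URL and all field values are placeholders, and it illustrates the standard Discord webhook embed format rather than anything Budibase-specific):

```python
import requests

# Placeholder webhook URL of the form Discord issues per channel.
WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"

payload = {
    "content": "Row updated",  # plain message text, as today
    "embeds": [  # the richer part this feature request asks for
        {
            "title": "Order #1042",
            "description": "Status changed to **shipped**",
            "color": 0x2ECC71,  # green accent bar
            "thumbnail": {"url": "https://example.com/logo.png"},
            "fields": [
                {"name": "Customer", "value": "Jane Doe", "inline": True},
                {"name": "Total", "value": "$99.00", "inline": True},
            ],
        }
    ],
}

response = requests.post(WEBHOOK_URL, json=payload, timeout=10)
response.raise_for_status()  # webhooks return 204 No Content on success
```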
**Screenshots** [This site has a good number of examples of what you can do with embeds.](https://birdie0.github.io/discord-webhooks-guide/structure/embeds.html) ![discord embed example](https://anidiots.guide/.gitbook/assets/first-bot-embed-example.png) ",1.0,"Update Discord automation to support embed objects - **Describe the feature request** Had a [request on our discord server here](https://discord.com/channels/733030666647765003/1035462956122656808/1035462956122656808) to support embeds in Discord automations. Supporting these would allow a much richer message to be sent via webhooks, and would allow them to include images, videos, thumbnails, etc. The full reference for the Discord embed object [can be found here](https://discord.com/channels/733030666647765003/1035462956122656808/1035462956122656808). **Screenshots** [This site has a good number of examples of what you can do with embeds.](https://birdie0.github.io/discord-webhooks-guide/structure/embeds.html) ![discord embed example](https://anidiots.guide/.gitbook/assets/first-bot-embed-example.png) ",1,update discord automation to support embed objects describe the feature request had a to support embeds in discord automations supporting these would allow a much richer message to be sent via webhooks and would allow them to include images videos thumbnails etc the full reference for the discord embed object screenshots ,1 310200,26705692828.0,IssuesEvent,2023-01-27 17:57:10,ntop/ntopng,https://api.github.com/repos/ntop/ntopng,closed,Invalid Link and Name for Chart Images,Bug Ready to Test Not Yet Working,"In recent ntopng versions there is a new option that allows the chart to be downloaded as image ![image](https://user-images.githubusercontent.com/4493366/204220881-0a77b61f-decd-4835-b51b-ede6c604ec07.png) There are a few issues to fix - There is a typo in ""Downlod"" ![image](https://user-images.githubusercontent.com/4493366/204221023-c9af9419-163d-49fd-91bf-10e6dde55725.png) - The file name should not have spaces or other forbidden chars such as / \, $, "" .... - The URL for downloading the chart offline (e.g. via curl) is not available and it must be implemented ",1.0,"Invalid Link and Name for Chart Images - In recent ntopng versions there is a new option that allows the chart to be downloaded as image ![image](https://user-images.githubusercontent.com/4493366/204220881-0a77b61f-decd-4835-b51b-ede6c604ec07.png) There are a few issues to fix - There is a typo in ""Downlod"" ![image](https://user-images.githubusercontent.com/4493366/204221023-c9af9419-163d-49fd-91bf-10e6dde55725.png) - The file name should not have spaces or other forbidden chars such as / \, $, "" .... - The URL for downloading the chart offline (e.g. 
via curl) is not available and it must be implemented ",0,invalid link and name for chart images in recent ntopng versions there is a new option that allows the chart to be downloaded as image there are a few issues to fix there is a typo in downlod the file name should not have spaces or other forbidden chars such as the url for downloading the chart offline e g via curl is not available and it must be implemented ,0 1735,10651467236.0,IssuesEvent,2019-10-17 10:25:53,elastic/apm-server,https://api.github.com/repos/elastic/apm-server,closed,Check Changelogs test is broken,[zube]: In Review automation ci,"The Check Changelogs test fails with every version before 7.3, this makes that the test never success ``` ** 6.2.asciidoc 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 ** checking 6.2.asciidoc on 6.2 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/6.2/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 6.3 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/6.3/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 6.4 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/6.4/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 6.5 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/6.5/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 6.6 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/6.6/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 6.7 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/6.7/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 6.8 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/6.8/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 7.0 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/7.0/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 7.1 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/7.1/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 7.2 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/7.2/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 7.3 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/7.3/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 7.x 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/7.x/changelogs/6.2.asciidoc success ** 6.3.asciidoc 2cda5ea9928efce30e130821e125b39e7c482e63 ** checking 6.3.asciidoc on 6.3 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/6.3/changelogs/6.3.asciidoc success checking 6.3.asciidoc on 6.4 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/6.4/changelogs/6.3.asciidoc success checking 6.3.asciidoc on 6.5 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/6.5/changelogs/6.3.asciidoc success checking 6.3.asciidoc on 6.6 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/6.6/changelogs/6.3.asciidoc success checking 6.3.asciidoc on 6.7 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/6.7/changelogs/6.3.asciidoc success checking 
6.3.asciidoc on 6.8 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/6.8/changelogs/6.3.asciidoc success checking 6.3.asciidoc on 7.0 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/7.0/changelogs/6.3.asciidoc success checking 6.3.asciidoc on 7.1 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/7.1/changelogs/6.3.asciidoc success checking 6.3.asciidoc on 7.2 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/7.2/changelogs/6.3.asciidoc success checking 6.3.asciidoc on 7.3 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/7.3/changelogs/6.3.asciidoc success checking 6.3.asciidoc on 7.x 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/7.x/changelogs/6.3.asciidoc success ** 6.4.asciidoc 50218e1a8c033d65c00675790023e9184e55405c ** checking 6.4.asciidoc on 6.4 3227a748c9fa67254ebcf9a049b93802ab4740c9 https://raw.githubusercontent.com/elastic/apm-server/6.4/changelogs/6.4.asciidoc failed checking 6.4.asciidoc on 6.5 3227a748c9fa67254ebcf9a049b93802ab4740c9 https://raw.githubusercontent.com/elastic/apm-server/6.5/changelogs/6.4.asciidoc failed checking 6.4.asciidoc on 6.6 3227a748c9fa67254ebcf9a049b93802ab4740c9 https://raw.githubusercontent.com/elastic/apm-server/6.6/changelogs/6.4.asciidoc failed checking 6.4.asciidoc on 6.7 3227a748c9fa67254ebcf9a049b93802ab4740c9 https://raw.githubusercontent.com/elastic/apm-server/6.7/changelogs/6.4.asciidoc failed checking 6.4.asciidoc on 6.8 3227a748c9fa67254ebcf9a049b93802ab4740c9 https://raw.githubusercontent.com/elastic/apm-server/6.8/changelogs/6.4.asciidoc failed checking 6.4.asciidoc on 7.0 50218e1a8c033d65c00675790023e9184e55405c https://raw.githubusercontent.com/elastic/apm-server/7.0/changelogs/6.4.asciidoc success checking 6.4.asciidoc on 7.1 50218e1a8c033d65c00675790023e9184e55405c https://raw.githubusercontent.com/elastic/apm-server/7.1/changelogs/6.4.asciidoc success checking 6.4.asciidoc on 7.2 50218e1a8c033d65c00675790023e9184e55405c https://raw.githubusercontent.com/elastic/apm-server/7.2/changelogs/6.4.asciidoc success checking 6.4.asciidoc on 7.3 50218e1a8c033d65c00675790023e9184e55405c https://raw.githubusercontent.com/elastic/apm-server/7.3/changelogs/6.4.asciidoc success checking 6.4.asciidoc on 7.x 50218e1a8c033d65c00675790023e9184e55405c https://raw.githubusercontent.com/elastic/apm-server/7.x/changelogs/6.4.asciidoc success Traceback (most recent call last): File ""script/check_changelogs.py"", line 60, in main() File ""script/check_changelogs.py"", line 56, in main raise Exception('Some changelogs are missing, please look at for failed.') Exception: Some changelogs are missing, please look at for failed. 
```",1.0,"Check Changelogs test is broken - The Check Changelogs test fails with every version before 7.3, this makes that the test never success ``` ** 6.2.asciidoc 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 ** checking 6.2.asciidoc on 6.2 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/6.2/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 6.3 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/6.3/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 6.4 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/6.4/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 6.5 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/6.5/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 6.6 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/6.6/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 6.7 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/6.7/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 6.8 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/6.8/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 7.0 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/7.0/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 7.1 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/7.1/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 7.2 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/7.2/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 7.3 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/7.3/changelogs/6.2.asciidoc success checking 6.2.asciidoc on 7.x 0a01e14e5f1c725db534e2ac1b0cca38fd60a006 https://raw.githubusercontent.com/elastic/apm-server/7.x/changelogs/6.2.asciidoc success ** 6.3.asciidoc 2cda5ea9928efce30e130821e125b39e7c482e63 ** checking 6.3.asciidoc on 6.3 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/6.3/changelogs/6.3.asciidoc success checking 6.3.asciidoc on 6.4 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/6.4/changelogs/6.3.asciidoc success checking 6.3.asciidoc on 6.5 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/6.5/changelogs/6.3.asciidoc success checking 6.3.asciidoc on 6.6 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/6.6/changelogs/6.3.asciidoc success checking 6.3.asciidoc on 6.7 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/6.7/changelogs/6.3.asciidoc success checking 6.3.asciidoc on 6.8 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/6.8/changelogs/6.3.asciidoc success checking 6.3.asciidoc on 7.0 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/7.0/changelogs/6.3.asciidoc success checking 6.3.asciidoc on 7.1 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/7.1/changelogs/6.3.asciidoc success checking 6.3.asciidoc on 7.2 2cda5ea9928efce30e130821e125b39e7c482e63 
https://raw.githubusercontent.com/elastic/apm-server/7.2/changelogs/6.3.asciidoc success checking 6.3.asciidoc on 7.3 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/7.3/changelogs/6.3.asciidoc success checking 6.3.asciidoc on 7.x 2cda5ea9928efce30e130821e125b39e7c482e63 https://raw.githubusercontent.com/elastic/apm-server/7.x/changelogs/6.3.asciidoc success ** 6.4.asciidoc 50218e1a8c033d65c00675790023e9184e55405c ** checking 6.4.asciidoc on 6.4 3227a748c9fa67254ebcf9a049b93802ab4740c9 https://raw.githubusercontent.com/elastic/apm-server/6.4/changelogs/6.4.asciidoc failed checking 6.4.asciidoc on 6.5 3227a748c9fa67254ebcf9a049b93802ab4740c9 https://raw.githubusercontent.com/elastic/apm-server/6.5/changelogs/6.4.asciidoc failed checking 6.4.asciidoc on 6.6 3227a748c9fa67254ebcf9a049b93802ab4740c9 https://raw.githubusercontent.com/elastic/apm-server/6.6/changelogs/6.4.asciidoc failed checking 6.4.asciidoc on 6.7 3227a748c9fa67254ebcf9a049b93802ab4740c9 https://raw.githubusercontent.com/elastic/apm-server/6.7/changelogs/6.4.asciidoc failed checking 6.4.asciidoc on 6.8 3227a748c9fa67254ebcf9a049b93802ab4740c9 https://raw.githubusercontent.com/elastic/apm-server/6.8/changelogs/6.4.asciidoc failed checking 6.4.asciidoc on 7.0 50218e1a8c033d65c00675790023e9184e55405c https://raw.githubusercontent.com/elastic/apm-server/7.0/changelogs/6.4.asciidoc success checking 6.4.asciidoc on 7.1 50218e1a8c033d65c00675790023e9184e55405c https://raw.githubusercontent.com/elastic/apm-server/7.1/changelogs/6.4.asciidoc success checking 6.4.asciidoc on 7.2 50218e1a8c033d65c00675790023e9184e55405c https://raw.githubusercontent.com/elastic/apm-server/7.2/changelogs/6.4.asciidoc success checking 6.4.asciidoc on 7.3 50218e1a8c033d65c00675790023e9184e55405c https://raw.githubusercontent.com/elastic/apm-server/7.3/changelogs/6.4.asciidoc success checking 6.4.asciidoc on 7.x 50218e1a8c033d65c00675790023e9184e55405c https://raw.githubusercontent.com/elastic/apm-server/7.x/changelogs/6.4.asciidoc success Traceback (most recent call last): File ""script/check_changelogs.py"", line 60, in main() File ""script/check_changelogs.py"", line 56, in main raise Exception('Some changelogs are missing, please look at for failed.') Exception: Some changelogs are missing, please look at for failed. 
```",1,check changelogs test is broken the check changelogs test fails with every version before this makes that the test never success asciidoc checking asciidoc on success checking asciidoc on success checking asciidoc on success checking asciidoc on success checking asciidoc on success checking asciidoc on success checking asciidoc on success checking asciidoc on success checking asciidoc on success checking asciidoc on success checking asciidoc on success checking asciidoc on x success asciidoc checking asciidoc on success checking asciidoc on success checking asciidoc on success checking asciidoc on success checking asciidoc on success checking asciidoc on success checking asciidoc on success checking asciidoc on success checking asciidoc on success checking asciidoc on success checking asciidoc on x success asciidoc checking asciidoc on failed checking asciidoc on failed checking asciidoc on failed checking asciidoc on failed checking asciidoc on failed checking asciidoc on success checking asciidoc on success checking asciidoc on success checking asciidoc on success checking asciidoc on x success traceback most recent call last file script check changelogs py line in main file script check changelogs py line in main raise exception some changelogs are missing please look at for failed exception some changelogs are missing please look at for failed ,1 207639,16090417777.0,IssuesEvent,2021-04-26 16:01:16,MicrosoftDocs/dynamics-365-unified-operations-public,https://api.github.com/repos/MicrosoftDocs/dynamics-365-unified-operations-public,closed,Explanation required for note (not clear) ,assigned-to-author documentation triaged-kristin," i need an explanation for note of the products must also be assorted to a product, under Add products to category nodes. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: e0c7f3e5-5cff-3b3a-8876-48bf006e9c60 * Version Independent ID: 9dde0656-917e-f6f5-1c44-1ac7fb5aecdd * Content: [Create a channel navigation hierarchy - Commerce | Dynamics 365](https://docs.microsoft.com/en-us/dynamics365/commerce/create-channel-hierarchy) * Content Source: [articles/commerce/create-channel-hierarchy.md](https://github.com/MicrosoftDocs/Dynamics-365-Unified-Operations-Public/blob/main/articles/commerce/create-channel-hierarchy.md) * Service: **dynamics-365-commerce ** * Product: **** * Technology: **** * GitHub Login: @samjarawan * Microsoft Alias: **samjar**",1.0,"Explanation required for note (not clear) - i need an explanation for note of the products must also be assorted to a product, under Add products to category nodes. --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: e0c7f3e5-5cff-3b3a-8876-48bf006e9c60 * Version Independent ID: 9dde0656-917e-f6f5-1c44-1ac7fb5aecdd * Content: [Create a channel navigation hierarchy - Commerce | Dynamics 365](https://docs.microsoft.com/en-us/dynamics365/commerce/create-channel-hierarchy) * Content Source: [articles/commerce/create-channel-hierarchy.md](https://github.com/MicrosoftDocs/Dynamics-365-Unified-Operations-Public/blob/main/articles/commerce/create-channel-hierarchy.md) * Service: **dynamics-365-commerce ** * Product: **** * Technology: **** * GitHub Login: @samjarawan * Microsoft Alias: **samjar**",0,explanation required for note not clear i need an explanation for note of the products must also be assorted to a product under add products to category nodes document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service dynamics commerce product technology github login samjarawan microsoft alias samjar ,0 857,8418792701.0,IssuesEvent,2018-10-15 02:55:26,xcat2/xcat-core,https://api.github.com/repos/xcat2/xcat-core,closed,[automation] Install xCAT failed on sles because xCAT-probe install config,component:automation sprint2,"[20181011.0615][sles12.3-P8VMs-Mysql-166][STOP for install_xCAT_on_rhels_sles] ``Final environment stop at install_xCAT_on_rhels_sles`` ``` Problem: nothing provides /usr/bin/tree needed by xCAT-probe-4:2.14.4-snap201810110615.noarch Solution 1: do not install xCAT-2.14.4-snap201810110615.ppc64le Solution 2: break xCAT-probe-4:2.14.4-snap201810110615.noarch by ignoring some of its dependencies ``` ",1.0,"[automation] Install xCAT failed on sles because xCAT-probe install config - [20181011.0615][sles12.3-P8VMs-Mysql-166][STOP for install_xCAT_on_rhels_sles] ``Final environment stop at install_xCAT_on_rhels_sles`` ``` Problem: nothing provides /usr/bin/tree needed by xCAT-probe-4:2.14.4-snap201810110615.noarch Solution 1: do not install xCAT-2.14.4-snap201810110615.ppc64le Solution 2: break xCAT-probe-4:2.14.4-snap201810110615.noarch by ignoring some of its dependencies ``` ",1, install xcat failed on sles because xcat probe install config final environment stop at install xcat on rhels sles problem nothing provides usr bin tree needed by xcat probe noarch solution do not install xcat solution break xcat probe noarch by ignoring some of its dependencies ,1 124933,16677242917.0,IssuesEvent,2021-06-07 17:49:05,AmericaSCORESBayArea/AmericaScores-CoachApp,https://api.github.com/repos/AmericaSCORESBayArea/AmericaScores-CoachApp,closed,"The Sessions view should titled like ""2021 Spring Teams"" based on selected season",IceBox-Deferred needs design!!!,"The UI needs to clearly distinguish what part(s) of the hierarchy is in view. Also the terminology in the Nav needs to match the view title. For the sake of space, the Season Name doesn't have to appear on the outer Nav bar, but should persist in the UI, presumably in the top area, so that it's never ambiguous to the user what Season is on view and the scope of the list is obvious. For Fall 2020 for example, the list of teams for the selected season would be titled: **""2020 Fall Teams""** UX Criteria: 1. Coach can easily change seasons in a current view or by navigating back in the hierarchy 2. Season name appears in the title 3. 
Viewed items such as lists are filtered to the selected season Dependencies: https://github.com/AmericaSCORESBayArea/AmericaScores-attendanceApp/issues/46 ",1.0,"The Sessions view should titled like ""2021 Spring Teams"" based on selected season - The UI needs to clearly distinguish what part(s) of the hierarchy is in view. Also the terminology in the Nav needs to match the view title. For the sake of space, the Season Name doesn't have to appear on the outer Nav bar, but should persist in the UI, presumably in the top area, so that it's never ambiguous to the user what Season is on view and the scope of the list is obvious. For Fall 2020 for example, the list of teams for the selected season would be titled: **""2020 Fall Teams""** UX Criteria: 1. Coach can easily change seasons in a current view or by navigating back in the hierarchy 2. Season name appears in the title 3. Viewed items such as lists are filtered to the selected season Dependencies: https://github.com/AmericaSCORESBayArea/AmericaScores-attendanceApp/issues/46 ",0,the sessions view should titled like spring teams based on selected season the ui needs to clearly distinguish what part s of the hierarchy is in view also the terminology in the nav needs to match the view title for the sake of space the season name doesn t have to appear on the outer nav bar but should persist in the ui presumably in the top area so that it s never ambiguous to the user what season is on view and the scope of the list is obvious for fall for example the list of teams for the selected season would be titled fall teams ux criteria coach can easily change seasons in a current view or by navigating back in the hierarchy season name appears in the title viewed items such as lists are filtered to the selected season dependencies ,0 6957,24061584322.0,IssuesEvent,2022-09-16 23:56:24,StoneCypher/fsl,https://api.github.com/repos/StoneCypher/fsl,closed,Coverage analysis,Research material Tooling needed Easy starting issue Automation,"Doing this stochastically isn't good enough. It's not actually clear whether coverage is possible on a system with a lot of hooks But if it's just edges and probabilities, then coverage is calculatable Semi-related to #62 ",1.0,"Coverage analysis - Doing this stochastically isn't good enough. It's not actually clear whether coverage is possible on a system with a lot of hooks But if it's just edges and probabilities, then coverage is calculatable Semi-related to #62 ",1,coverage analysis doing this stochastically isn t good enough it s not actually clear whether coverage is possible on a system with a lot of hooks but if it s just edges and probabilities then coverage is calculatable semi related to ,1 3290,13384135477.0,IssuesEvent,2020-09-02 11:30:37,elastic/apm-agent-php,https://api.github.com/repos/elastic/apm-agent-php,closed,"Package Agent as .rpm for RHEL/Centos, Fedora, etc. Linux distros",[zube]: Backlog automation enhancement,"Package should: - Install extension binary - Install PHP part of the agent - Update php.ini file(s) including setting `elastic_apm.bootstrap_php_part_file` ",1.0,"Package Agent as .rpm for RHEL/Centos, Fedora, etc. 
Linux distros - Package should: - Install extension binary - Install PHP part of the agent - Update php.ini file(s) including setting `elastic_apm.bootstrap_php_part_file` ",1,package agent as rpm for rhel centos fedora etc linux distros package should install extension binary install php part of the agent update php ini file s including setting elastic apm bootstrap php part file ,1 256814,27561726012.0,IssuesEvent,2023-03-07 22:42:28,samqws-marketing/walmartlabs-concord,https://api.github.com/repos/samqws-marketing/walmartlabs-concord,closed,"CVE-2022-24999 (High) detected in qs-6.7.0.tgz, qs-6.5.2.tgz - autoclosed",security vulnerability,"## CVE-2022-24999 - High Severity Vulnerability
Vulnerable Libraries - qs-6.7.0.tgz, qs-6.5.2.tgz

qs-6.7.0.tgz

A querystring parser that supports nesting and arrays, with a depth limit

Library home page: https://registry.npmjs.org/qs/-/qs-6.7.0.tgz

Path to dependency file: /console2/package.json

Path to vulnerable library: /console2/node_modules/express/node_modules/qs/package.json,/console2/node_modules/body-parser/node_modules/qs/package.json

Dependency Hierarchy:
- express-4.17.1.tgz (Root Library)
  - :x: **qs-6.7.0.tgz** (Vulnerable Library)

qs-6.5.2.tgz

A querystring parser that supports nesting and arrays, with a depth limit

Library home page: https://registry.npmjs.org/qs/-/qs-6.5.2.tgz

Path to dependency file: /console2/package.json

Path to vulnerable library: /console2/node_modules/qs/package.json

Dependency Hierarchy:
- react-scripts-4.0.3.tgz (Root Library)
  - jest-26.6.0.tgz
    - jest-cli-26.6.3.tgz
      - jest-config-26.6.3.tgz
        - jest-environment-jsdom-26.6.2.tgz
          - jsdom-16.5.3.tgz
            - request-2.88.2.tgz
              - :x: **qs-6.5.2.tgz** (Vulnerable Library)
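
That transitive chain can be recovered straight from npm's own tree output. Below is a minimal sketch, assuming it runs inside the affected project directory (e.g. /console2); the helper name and output formatting are illustrative, not part of this report.

```python
import json
import subprocess

def find_dependency_paths(package: str):
    """Yield every dependency chain that ends at `package`,
    reconstructed from `npm ls --all --json` output."""
    # npm ls exits non-zero on unmet peer deps, so the return code is ignored.
    result = subprocess.run(
        ["npm", "ls", package, "--all", "--json"],
        capture_output=True, text=True,
    )
    tree = json.loads(result.stdout or "{}")

    def walk(node, path):
        for name, child in (node.get("dependencies") or {}).items():
            chain = path + [f"{name}-{child.get('version', '?')}.tgz"]
            if name == package:
                yield " - ".join(chain)
            yield from walk(child, chain)

    yield from walk(tree, [])

if __name__ == "__main__":
    for chain in find_dependency_paths("qs"):
        print(chain)  # e.g. react-scripts-4.0.3.tgz - ... - qs-6.5.2.tgz
```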

Found in HEAD commit: b9420f3b9e73a9d381266ece72f7afb756f35a76

Found in base branch: master

Vulnerability Details

qs before 6.10.3, as used in Express before 4.17.3 and other products, allows attackers to cause a Node process hang for an Express application because a `__proto__` key can be used. In many typical Express use cases, an unauthenticated remote attacker can place the attack payload in the query string of the URL that is used to visit the application, such as a[__proto__]=b&a[__proto__]&a[length]=100000000. The fix was backported to qs 6.9.7, 6.8.3, 6.7.3, 6.6.1, 6.5.3, 6.4.1, 6.3.3, and 6.2.4 (and therefore Express 4.17.3, which has ""deps: qs@6.9.7"" in its release description, is not vulnerable).
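
The affected/fixed boundary described above can be checked mechanically. A minimal sketch, assuming plain x.y.z version strings; the helper is illustrative, not tooling shipped with the advisory.

```python
# Fixed patch level per qs release line, per the advisory text above.
FIXED_BACKPORTS = {
    (6, 2): (6, 2, 4), (6, 3): (6, 3, 3), (6, 4): (6, 4, 1),
    (6, 5): (6, 5, 3), (6, 6): (6, 6, 1), (6, 7): (6, 7, 3),
    (6, 8): (6, 8, 3), (6, 9): (6, 9, 7),
}

def is_vulnerable_qs(version: str) -> bool:
    """True if a qs version is affected by CVE-2022-24999."""
    major, minor, patch = (int(p) for p in version.split(".")[:3])
    if (major, minor) >= (6, 10):
        return (major, minor, patch) < (6, 10, 3)
    fixed = FIXED_BACKPORTS.get((major, minor))
    # Release lines without a listed backport are treated as affected.
    return fixed is None or (major, minor, patch) < fixed

# The two versions flagged in this issue:
assert is_vulnerable_qs("6.7.0") and is_vulnerable_qs("6.5.2")
assert not is_vulnerable_qs("6.7.3") and not is_vulnerable_qs("6.5.3")
```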

Publish Date: 2022-11-26

URL: CVE-2022-24999 (https://www.cve.org/CVERecord?id=CVE-2022-24999)

CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
- Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High
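
Read as a vector string, those metrics assemble as below. A small sketch where the CVSS 3.1 tag is an assumption (the report only says "CVSS 3") and the 7.5 base score is quoted from the report rather than recomputed.

```python
METRICS = {
    "AV": "N",  # Attack Vector: Network
    "AC": "L",  # Attack Complexity: Low
    "PR": "N",  # Privileges Required: None
    "UI": "N",  # User Interaction: None
    "S": "U",   # Scope: Unchanged
    "C": "N",   # Confidentiality Impact: None
    "I": "N",   # Integrity Impact: None
    "A": "H",   # Availability Impact: High
}
vector = "CVSS:3.1/" + "/".join(f"{k}:{v}" for k, v in METRICS.items())
print(vector)  # CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H -> base score 7.5
```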

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://www.cve.org/CVERecord?id=CVE-2022-24999

Release Date: 2022-11-26

Fix Resolution (qs): 6.7.3

Direct dependency fix Resolution (express): 4.17.2

Fix Resolution (qs): 6.5.3

Direct dependency fix Resolution (react-scripts): 5.0.0
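
One way to act on the direct-dependency resolutions above is to compare what package.json declares against the fixed versions. A hypothetical helper: only the file path and the two version floors are taken from this report, everything else is illustrative.

```python
import json
import re

# Direct-dependency fix resolutions listed above.
FIX_RESOLUTIONS = {"express": (4, 17, 2), "react-scripts": (5, 0, 0)}

def first_triple(spec: str):
    """Pull the first x.y.z triple out of a semver range like '^4.17.1'."""
    m = re.search(r"(\d+)\.(\d+)\.(\d+)", spec)
    return tuple(int(g) for g in m.groups()) if m else None

with open("console2/package.json") as fh:  # dependency file path from this report
    pkg = json.load(fh)

declared = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
for name, minimum in FIX_RESOLUTIONS.items():
    pinned = first_triple(declared.get(name, ""))
    if pinned and pinned < minimum:
        print(f"{name} {'.'.join(map(str, pinned))} is below the suggested fix "
              f"{'.'.join(map(str, minimum))}")
```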

*** - [ ] Check this box to open an automated fix PR ",True,"CVE-2022-24999 (High) detected in qs-6.7.0.tgz, qs-6.5.2.tgz - autoclosed - ## CVE-2022-24999 - High Severity Vulnerability
Vulnerable Libraries - qs-6.7.0.tgz, qs-6.5.2.tgz

qs-6.7.0.tgz

A querystring parser that supports nesting and arrays, with a depth limit

Library home page: https://registry.npmjs.org/qs/-/qs-6.7.0.tgz

Path to dependency file: /console2/package.json

Path to vulnerable library: /console2/node_modules/express/node_modules/qs/package.json,/console2/node_modules/body-parser/node_modules/qs/package.json

Dependency Hierarchy:
- express-4.17.1.tgz (Root Library)
  - :x: **qs-6.7.0.tgz** (Vulnerable Library)

qs-6.5.2.tgz

A querystring parser that supports nesting and arrays, with a depth limit

Library home page: https://registry.npmjs.org/qs/-/qs-6.5.2.tgz

Path to dependency file: /console2/package.json

Path to vulnerable library: /console2/node_modules/qs/package.json

Dependency Hierarchy:
- react-scripts-4.0.3.tgz (Root Library)
  - jest-26.6.0.tgz
    - jest-cli-26.6.3.tgz
      - jest-config-26.6.3.tgz
        - jest-environment-jsdom-26.6.2.tgz
          - jsdom-16.5.3.tgz
            - request-2.88.2.tgz
              - :x: **qs-6.5.2.tgz** (Vulnerable Library)

Found in HEAD commit: b9420f3b9e73a9d381266ece72f7afb756f35a76

Found in base branch: master

Vulnerability Details

qs before 6.10.3, as used in Express before 4.17.3 and other products, allows attackers to cause a Node process hang for an Express application because a `__proto__` key can be used. In many typical Express use cases, an unauthenticated remote attacker can place the attack payload in the query string of the URL that is used to visit the application, such as a[__proto__]=b&a[__proto__]&a[length]=100000000. The fix was backported to qs 6.9.7, 6.8.3, 6.7.3, 6.6.1, 6.5.3, 6.4.1, 6.3.3, and 6.2.4 (and therefore Express 4.17.3, which has ""deps: qs@6.9.7"" in its release description, is not vulnerable).

Publish Date: 2022-11-26

URL: CVE-2022-24999 (https://www.cve.org/CVERecord?id=CVE-2022-24999)

CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
- Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://www.cve.org/CVERecord?id=CVE-2022-24999

Release Date: 2022-11-26

Fix Resolution (qs): 6.7.3

Direct dependency fix Resolution (express): 4.17.2

Fix Resolution (qs): 6.5.3

Direct dependency fix Resolution (react-scripts): 5.0.0

*** - [ ] Check this box to open an automated fix PR ",0,cve high detected in qs tgz qs tgz autoclosed cve high severity vulnerability vulnerable libraries qs tgz qs tgz qs tgz a querystring parser that supports nesting and arrays with a depth limit library home page a href path to dependency file package json path to vulnerable library node modules express node modules qs package json node modules body parser node modules qs package json dependency hierarchy express tgz root library x qs tgz vulnerable library qs tgz a querystring parser that supports nesting and arrays with a depth limit library home page a href path to dependency file package json path to vulnerable library node modules qs package json dependency hierarchy react scripts tgz root library jest tgz jest cli tgz jest config tgz jest environment jsdom tgz jsdom tgz request tgz x qs tgz vulnerable library found in head commit a href found in base branch master vulnerability details qs before as used in express before and other products allows attackers to cause a node process hang for an express application because an proto key can be used in many typical express use cases an unauthenticated remote attacker can place the attack payload in the query string of the url that is used to visit the application such as a b a a the fix was backported to qs and and therefore express which has deps qs in its release description is not vulnerable publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution qs direct dependency fix resolution express fix resolution qs direct dependency fix resolution react scripts check this box to open an automated fix pr ,0 2346,11785231921.0,IssuesEvent,2020-03-17 09:55:54,keptn/keptn,https://api.github.com/repos/keptn/keptn,opened,Automatically create/update a github release/tag as a draft when pushing something to a release-branch,automation type:chore,"Within our workflow we should automatically create a draft release when we push something to a release-* branch. If a release is already published/exists, this needs to fail (re-releasing software with the same version number is a bad practice). We can also automatically take the release-notes from the [releasenotes/](releasenotes/) folder and add them.",1.0,"Automatically create/update a github release/tag as a draft when pushing something to a release-branch - Within our workflow we should automatically create a draft release when we push something to a release-* branch. If a release is already published/exists, this needs to fail (re-releasing software with the same version number is a bad practice). 
We can also automatically take the release-notes from the [releasenotes/](releasenotes/) folder and add them.",1,automatically create update a github release tag as a draft when pushing something to a release branch within our workflow we should automatically create a draft release when we push something to a release branch if a release is already published exists this needs to fail re releasing software with the same version number is a bad practice we can also automatically take the release notes from the releasenotes folder and add them ,1 3784,14580676881.0,IssuesEvent,2020-12-18 09:32:16,SAP/fundamental-ngx,https://api.github.com/repos/SAP/fundamental-ngx,opened,Bug. Link(Platform): Example of using image file as link has broken image.,E2E automation Medium bug platform,"#### Is this a bug, enhancement, or feature request? Bug. #### Briefly describe your proposal. #### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.) v 0.26.0-rc.1 #### If this is a bug, please provide steps for reproducing it. Open Platform ==> Link component ==> Scroll to **Example of using image file as link** example. #### Please provide relevant source code if applicable. #### Is there anything else we should know? ![link image](https://user-images.githubusercontent.com/5969492/102598229-6095f300-4124-11eb-87fd-256454adf76c.png)",1.0,"Bug. Link(Platform): Example of using image file as link has broken image. - #### Is this a bug, enhancement, or feature request? Bug. #### Briefly describe your proposal. #### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.) v 0.26.0-rc.1 #### If this is a bug, please provide steps for reproducing it. Open Platform ==> Link component ==> Scroll to **Example of using image file as link** example. #### Please provide relevant source code if applicable. #### Is there anything else we should know? 
![link image](https://user-images.githubusercontent.com/5969492/102598229-6095f300-4124-11eb-87fd-256454adf76c.png)",1,bug link platform example of using image file as link has broken image is this a bug enhancement or feature request bug briefly describe your proposal which versions of angular and fundamental library for angular are affected if this is a feature request use current version v rc if this is a bug please provide steps for reproducing it open platform link component scroll to example of using image file as link example please provide relevant source code if applicable is there anything else we should know ,1 373313,26049117984.0,IssuesEvent,2022-12-22 16:53:39,transparencia-mg/handbook,https://api.github.com/repos/transparencia-mg/handbook,closed,Fazer post sobre instalação docker local,documentation,"Referências - [ckan-demo-data](https://github.com/ckan/ckan-demo-data) - [Instalação Augusto Herrmann](https://herrmann.tech/en/blog/2020/09/30/how-to-install-and-configure-ckan-2-9-0-using-docker.html) Descobrir erro ou abrir pergunta para comunidade: ``` ➜ ckan-docker git:(master) docker exec -it ckan bash OCI runtime exec failed: exec failed: unable to start container process: exec: ""bash"": executable file not found in $PATH: unknown ➜ ckan-docker git:(master) docker exec -it ckan bin/bash OCI runtime exec failed: exec failed: unable to start container process: exec: ""bin/bash"": stat bin/bash: no such file or directory: unknown ```",1.0,"Fazer post sobre instalação docker local - Referências - [ckan-demo-data](https://github.com/ckan/ckan-demo-data) - [Instalação Augusto Herrmann](https://herrmann.tech/en/blog/2020/09/30/how-to-install-and-configure-ckan-2-9-0-using-docker.html) Descobrir erro ou abrir pergunta para comunidade: ``` ➜ ckan-docker git:(master) docker exec -it ckan bash OCI runtime exec failed: exec failed: unable to start container process: exec: ""bash"": executable file not found in $PATH: unknown ➜ ckan-docker git:(master) docker exec -it ckan bin/bash OCI runtime exec failed: exec failed: unable to start container process: exec: ""bin/bash"": stat bin/bash: no such file or directory: unknown ```",0,fazer post sobre instalação docker local referências descobrir erro ou abrir pergunta para comunidade ➜ ckan docker git master docker exec it ckan bash oci runtime exec failed exec failed unable to start container process exec bash executable file not found in path unknown ➜ ckan docker git master docker exec it ckan bin bash oci runtime exec failed exec failed unable to start container process exec bin bash stat bin bash no such file or directory unknown ,0 472167,13617584175.0,IssuesEvent,2020-09-23 17:13:23,kubernetes/website,https://api.github.com/repos/kubernetes/website,closed,Improvement for k8s.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/,priority/awaiting-more-evidence sig/cluster-lifecycle,Maybe mention that deployments require changing the api attribute when going from major version to major version.,1.0,Improvement for k8s.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/ - Maybe mention that deployments require changing the api attribute when going from major version to major version.,0,improvement for io docs tasks administer cluster kubeadm kubeadm upgrade maybe mention that deployments require changing the api attribute when going from major version to major version ,0 591348,17837659944.0,IssuesEvent,2021-09-03 05:11:17,Alexhuszagh/rust-lexical,https://api.github.com/repos/Alexhuszagh/rust-lexical,closed,[FEATURE] Add 
support for numbers with different radices in different components.,enhancement normal priority,"## Problem By default, we assume the radix is the same for the entire number. That is, the radix for the mantissa digits, the exponent base, and the radix for the exponent digits is the same. ## Solution Provide in `ParseFloatOptions` 2 additional fields: - `exponent_radix`, the radix for the exponent digit encoding - `exponent_base`, the numerical base for the exponent These should both be limited to valid radices as well. ## Additional Context C++ hexadecimal float literals, and hexadecimal float representations demonstrate this issue: ```c++ // 0xa.b, which is 10.6875 in hex notation // p specifies an exponent base of 2 // The exponent is never optional for literals // The exponent is optional for strings // 10 is a decimal-encoded integer // So, the float is identical to 10.6875 * 2^10 const float = 0xa.bp10 ``` ",1.0,"[FEATURE] Add support for numbers with different radices in different components. - ## Problem By default, we assume the radix is the same for the entire number. That is, the radix for the mantissa digits, the exponent base, and the radix for the exponent digits is the same. ## Solution Provide in `ParseFloatOptions` 2 additional fields: - `exponent_radix`, the radix for the exponent digit encoding - `exponent_base`, the numerical base for the exponent These should both be limited to valid radices as well. ## Additional Context C++ hexadecimal float literals, and hexadecimal float representations demonstrate this issue: ```c++ // 0xa.b, which is 10.6875 in hex notation // p specifies an exponent base of 2 // The exponent is never optional for literals // The exponent is optional for strings // 10 is a decimal-encoded integer // So, the float is identical to 10.6875 * 2^10 const float = 0xa.bp10 ``` ",0, add support for numbers with different radices in different components problem by default we assume the radix is the same for the entire number that is the radix for the mantissa digits the exponent base and the radix for the exponent digits is the same solution provide in parsefloatoptions additional fields exponent radix the radix for the exponent digit encoding exponent base the numerical base for the exponent these should both be limited to valid radices as well additional context c hexadecimal float literals and hexadecimal float representations demonstrate this issue c b which is in hex notation p specifies an exponent base of the exponent is never optional for literals the exponent is optional for strings is a decimal encoded integer so the float is identical to const float ,0 499,6757372967.0,IssuesEvent,2017-10-24 10:33:04,zero-os/0-orchestrator,https://api.github.com/repos/zero-os/0-orchestrator,closed,cover delete storage cluster which a has vdiskstorage,qa_automation state_inprogress,"- create storagecluster - create vdisk storage - try to delete the storagecluster, should fail",1.0,"cover delete storage cluster which a has vdiskstorage - - create storagecluster - create vdisk storage - try to delete the storagecluster, should fail",1,cover delete storage cluster which a has vdiskstorage create storagecluster create vdisk storage try to delete the storagecluster should fail,1 7356,24702912005.0,IssuesEvent,2022-10-19 16:35:43,keycloak/keycloak-benchmark,https://api.github.com/repos/keycloak/keycloak-benchmark,opened,Keycloak Pod fails to come up with a strict HTTPS check,bug provision automation,"### Describe the bug Due to the below error and the series of events 
which are leading up to a missing set of keys/certificates, the Keycloak Pod keeps going into a CrashLoopBackOff situation and the cluster doesnt come up fully. ``` ""ERROR"",""message"":""ERROR: Key material not provided to setup HTTPS. Please configure your keys/certificates or start the server in development mode."" ``` ### Version keycloak-benchmark latest main ### Expected behavior When you run the `task` command from within the `provision/minikube` directory, the cluster should come back up within a sane amount of time. ### Actual behavior The cluster fails to start up with the below logs coming up in the keycloak server ``` {""timestamp"":""2022-10-19T16:22:05.386Z"",""sequence"":40,""loggerClassName"":""org.jboss.logging.Logger"",""loggerName"":""org.keycloak.common.Profile"",""level"":""WARN"",""message"":""Experimental feature enabled: map_storage"",""threadName"":""main"",""threadId"":1,""mdc"":{},""ndc"":"""",""hostName"":""keycloak-0"",""processName"":""QuarkusEntryPoint"",""processId"":1} {""timestamp"":""2022-10-19T16:22:05.4Z"",""sequence"":41,""loggerClassName"":""org.jboss.logging.Logger"",""loggerName"":""org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider"",""level"":""INFO"",""message"":""Hostname settings: Base URL: , Hostname: keycloak.192.168.39.249.nip.io, Strict HTTPS: true, Path: , Strict BackChannel: false, Admin URL: , Admin: , Port: -1, Proxied: true"",""threadName"":""main"",""threadId"":1,""mdc"":{},""ndc"":"""",""hostName"":""keycloak-0"",""processName"":""QuarkusEntryPoint"",""processId"":1} {""timestamp"":""2022-10-19T16:22:07.138Z"",""sequence"":4839,""loggerClassName"":""org.jboss.logging.Logger"",""loggerName"":""org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler"",""level"":""ERROR"",""message"":""ERROR: Failed to start server in (production) mode"",""threadName"":""main"",""threadId"":1,""mdc"":{},""ndc"":"""",""hostName"":""keycloak-0"",""processName"":""QuarkusEntryPoint"",""processId"":1} {""timestamp"":""2022-10-19T16:22:07.138Z"",""sequence"":4840,""loggerClassName"":""org.jboss.logging.Logger"",""loggerName"":""org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler"",""level"":""ERROR"",""message"":""ERROR: Key material not provided to setup HTTPS. Please configure your keys/certificates or start the server in development mode."",""threadName"":""main"",""threadId"":1,""mdc"":{},""ndc"":"""",""hostName"":""keycloak-0"",""processName"":""QuarkusEntryPoint"",""processId"":1} {""timestamp"":""2022-10-19T16:22:07.139Z"",""sequence"":4841,""loggerClassName"":""org.jboss.logging.Logger"",""loggerName"":""org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler"",""level"":""ERROR"",""message"":""For more details run the same command passing the '--verbose' option. 
Also you can use '--help' to see the details about the usage of the particular command."",""threadName"":""main"",""threadId"":1,""mdc"":{},""ndc"":"""",""hostName"":""keycloak-0"",""processName"":""QuarkusEntryPoint"",""processId"":1} ``` Cluster state: ``` $ kubectl get pods -A -w NAMESPACE NAME READY STATUS RESTARTS AGE cadvisor cadvisor-6kzc2 1/1 Running 0 104m ingress-nginx ingress-nginx-admission-create-rmchb 0/1 Completed 0 106m ingress-nginx ingress-nginx-admission-patch-62mbp 0/1 Completed 0 106m ingress-nginx ingress-nginx-controller-5959f988fd-4tjbj 1/1 Running 0 106m keycloak cryostat-5cd8d9cddb-mkhbg 3/3 Running 0 103m keycloak keycloak-0 0/1 Running 3 (40s ago) 105s keycloak keycloak-operator-6bc8cd9849-b4lpf 1/1 Running 0 103m keycloak postgres-7bf755846c-tkqp6 1/1 Running 0 103m keycloak postgres-exporter-7f9c9dc98b-6tl8j 1/1 Running 0 103m keycloak sqlpad-74cdc455d7-fbrb4 1/1 Running 0 103m kube-system coredns-565d847f94-qp6h4 1/1 Running 0 106m kube-system etcd-minikube 1/1 Running 0 106m kube-system kube-apiserver-minikube 1/1 Running 0 106m kube-system kube-controller-manager-minikube 1/1 Running 0 106m kube-system kube-proxy-d78sn 1/1 Running 0 106m kube-system kube-scheduler-minikube 1/1 Running 0 106m kube-system storage-provisioner 1/1 Running 0 106m kubebox kubebox-698f46bdcd-88m9t 1/1 Running 0 104m monitoring alertmanager-prometheus-kube-prometheus-alertmanager-0 2/2 Running 0 104m monitoring graphite-exporter-5686cd9d-8g9qc 1/1 Running 0 104m monitoring jaeger-65879b795c-mngxt 1/1 Running 0 115s monitoring loki-0 1/1 Running 0 104m monitoring loki-grafana-agent-operator-684b478b77-n8svm 1/1 Running 0 104m monitoring loki-logs-4ws22 2/2 Running 0 102m monitoring prometheus-grafana-f79c75b6-8v62j 2/2 Running 0 104m monitoring prometheus-kube-prometheus-operator-6f5798cb9c-25vj5 1/1 Running 0 104m monitoring prometheus-kube-state-metrics-9bbb8b774-9jcxz 1/1 Running 0 104m monitoring prometheus-prometheus-kube-prometheus-prometheus-0 2/2 Running 0 104m monitoring prometheus-prometheus-node-exporter-v4pf7 1/1 Running 0 104m monitoring promtail-vv5rq 1/1 Running 0 104m keycloak keycloak-0 0/1 Error 3 (44s ago) 109s keycloak keycloak-0 0/1 CrashLoopBackOff 3 (2s ago) 111s ``` ### How to Reproduce? On an existing or newly provisioned minikube vm (aka after running ./rebuild.sh script), run the `task` command and wait for the cluster to come up. And then observe the pods using the command `kubectl get pods -A -w` and monitor the server logs either on grafana or the kubectl directly ### Anything else? This was working very recently as early as Oct 17th, 2022. I suspect this [commit](https://github.com/keycloak/keycloak/commit/19ee00ff545d0c6cb68079849b5f18188f38928c) from the keycloak operator might be causing a stricter check on non-dev deployment to have the HTTPS keys/certs to be present.",1.0,"Keycloak Pod fails to come up with a strict HTTPS check - ### Describe the bug Due to the below error and the series of events which are leading up to a missing set of keys/certificates, the Keycloak Pod keeps going into a CrashLoopBackOff situation and the cluster doesnt come up fully. ``` ""ERROR"",""message"":""ERROR: Key material not provided to setup HTTPS. Please configure your keys/certificates or start the server in development mode."" ``` ### Version keycloak-benchmark latest main ### Expected behavior When you run the `task` command from within the `provision/minikube` directory, the cluster should come back up within a sane amount of time. 
### Actual behavior The cluster fails to start up with the below logs coming up in the keycloak server ``` {""timestamp"":""2022-10-19T16:22:05.386Z"",""sequence"":40,""loggerClassName"":""org.jboss.logging.Logger"",""loggerName"":""org.keycloak.common.Profile"",""level"":""WARN"",""message"":""Experimental feature enabled: map_storage"",""threadName"":""main"",""threadId"":1,""mdc"":{},""ndc"":"""",""hostName"":""keycloak-0"",""processName"":""QuarkusEntryPoint"",""processId"":1} {""timestamp"":""2022-10-19T16:22:05.4Z"",""sequence"":41,""loggerClassName"":""org.jboss.logging.Logger"",""loggerName"":""org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider"",""level"":""INFO"",""message"":""Hostname settings: Base URL: , Hostname: keycloak.192.168.39.249.nip.io, Strict HTTPS: true, Path: , Strict BackChannel: false, Admin URL: , Admin: , Port: -1, Proxied: true"",""threadName"":""main"",""threadId"":1,""mdc"":{},""ndc"":"""",""hostName"":""keycloak-0"",""processName"":""QuarkusEntryPoint"",""processId"":1} {""timestamp"":""2022-10-19T16:22:07.138Z"",""sequence"":4839,""loggerClassName"":""org.jboss.logging.Logger"",""loggerName"":""org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler"",""level"":""ERROR"",""message"":""ERROR: Failed to start server in (production) mode"",""threadName"":""main"",""threadId"":1,""mdc"":{},""ndc"":"""",""hostName"":""keycloak-0"",""processName"":""QuarkusEntryPoint"",""processId"":1} {""timestamp"":""2022-10-19T16:22:07.138Z"",""sequence"":4840,""loggerClassName"":""org.jboss.logging.Logger"",""loggerName"":""org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler"",""level"":""ERROR"",""message"":""ERROR: Key material not provided to setup HTTPS. Please configure your keys/certificates or start the server in development mode."",""threadName"":""main"",""threadId"":1,""mdc"":{},""ndc"":"""",""hostName"":""keycloak-0"",""processName"":""QuarkusEntryPoint"",""processId"":1} {""timestamp"":""2022-10-19T16:22:07.139Z"",""sequence"":4841,""loggerClassName"":""org.jboss.logging.Logger"",""loggerName"":""org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler"",""level"":""ERROR"",""message"":""For more details run the same command passing the '--verbose' option. 
Also you can use '--help' to see the details about the usage of the particular command."",""threadName"":""main"",""threadId"":1,""mdc"":{},""ndc"":"""",""hostName"":""keycloak-0"",""processName"":""QuarkusEntryPoint"",""processId"":1} ``` Cluster state: ``` $ kubectl get pods -A -w NAMESPACE NAME READY STATUS RESTARTS AGE cadvisor cadvisor-6kzc2 1/1 Running 0 104m ingress-nginx ingress-nginx-admission-create-rmchb 0/1 Completed 0 106m ingress-nginx ingress-nginx-admission-patch-62mbp 0/1 Completed 0 106m ingress-nginx ingress-nginx-controller-5959f988fd-4tjbj 1/1 Running 0 106m keycloak cryostat-5cd8d9cddb-mkhbg 3/3 Running 0 103m keycloak keycloak-0 0/1 Running 3 (40s ago) 105s keycloak keycloak-operator-6bc8cd9849-b4lpf 1/1 Running 0 103m keycloak postgres-7bf755846c-tkqp6 1/1 Running 0 103m keycloak postgres-exporter-7f9c9dc98b-6tl8j 1/1 Running 0 103m keycloak sqlpad-74cdc455d7-fbrb4 1/1 Running 0 103m kube-system coredns-565d847f94-qp6h4 1/1 Running 0 106m kube-system etcd-minikube 1/1 Running 0 106m kube-system kube-apiserver-minikube 1/1 Running 0 106m kube-system kube-controller-manager-minikube 1/1 Running 0 106m kube-system kube-proxy-d78sn 1/1 Running 0 106m kube-system kube-scheduler-minikube 1/1 Running 0 106m kube-system storage-provisioner 1/1 Running 0 106m kubebox kubebox-698f46bdcd-88m9t 1/1 Running 0 104m monitoring alertmanager-prometheus-kube-prometheus-alertmanager-0 2/2 Running 0 104m monitoring graphite-exporter-5686cd9d-8g9qc 1/1 Running 0 104m monitoring jaeger-65879b795c-mngxt 1/1 Running 0 115s monitoring loki-0 1/1 Running 0 104m monitoring loki-grafana-agent-operator-684b478b77-n8svm 1/1 Running 0 104m monitoring loki-logs-4ws22 2/2 Running 0 102m monitoring prometheus-grafana-f79c75b6-8v62j 2/2 Running 0 104m monitoring prometheus-kube-prometheus-operator-6f5798cb9c-25vj5 1/1 Running 0 104m monitoring prometheus-kube-state-metrics-9bbb8b774-9jcxz 1/1 Running 0 104m monitoring prometheus-prometheus-kube-prometheus-prometheus-0 2/2 Running 0 104m monitoring prometheus-prometheus-node-exporter-v4pf7 1/1 Running 0 104m monitoring promtail-vv5rq 1/1 Running 0 104m keycloak keycloak-0 0/1 Error 3 (44s ago) 109s keycloak keycloak-0 0/1 CrashLoopBackOff 3 (2s ago) 111s ``` ### How to Reproduce? On an existing or newly provisioned minikube vm (aka after running ./rebuild.sh script), run the `task` command and wait for the cluster to come up. And then observe the pods using the command `kubectl get pods -A -w` and monitor the server logs either on grafana or the kubectl directly ### Anything else? This was working very recently as early as Oct 17th, 2022. 
I suspect this [commit](https://github.com/keycloak/keycloak/commit/19ee00ff545d0c6cb68079849b5f18188f38928c) from the keycloak operator might be causing a stricter check on non-dev deployment to have the HTTPS keys/certs to be present.",1,keycloak pod fails to come up with a strict https check describe the bug due to the below error and the series of events which are leading up to a missing set of keys certificates the keycloak pod keeps going into a crashloopbackoff situation and the cluster doesnt come up fully error message error key material not provided to setup https please configure your keys certificates or start the server in development mode version keycloak benchmark latest main expected behavior when you run the task command from within the provision minikube directory the cluster should come back up within a sane amount of time actual behavior the cluster fails to start up with the below logs coming up in the keycloak server timestamp sequence loggerclassname org jboss logging logger loggername org keycloak common profile level warn message experimental feature enabled map storage threadname main threadid mdc ndc hostname keycloak processname quarkusentrypoint processid timestamp sequence loggerclassname org jboss logging logger loggername org keycloak quarkus runtime hostname defaulthostnameprovider level info message hostname settings base url hostname keycloak nip io strict https true path strict backchannel false admin url admin port proxied true threadname main threadid mdc ndc hostname keycloak processname quarkusentrypoint processid timestamp sequence loggerclassname org jboss logging logger loggername org keycloak quarkus runtime cli executionexceptionhandler level error message error failed to start server in production mode threadname main threadid mdc ndc hostname keycloak processname quarkusentrypoint processid timestamp sequence loggerclassname org jboss logging logger loggername org keycloak quarkus runtime cli executionexceptionhandler level error message error key material not provided to setup https please configure your keys certificates or start the server in development mode threadname main threadid mdc ndc hostname keycloak processname quarkusentrypoint processid timestamp sequence loggerclassname org jboss logging logger loggername org keycloak quarkus runtime cli executionexceptionhandler level error message for more details run the same command passing the verbose option also you can use help to see the details about the usage of the particular command threadname main threadid mdc ndc hostname keycloak processname quarkusentrypoint processid cluster state kubectl get pods a w namespace name ready status restarts age cadvisor cadvisor running ingress nginx ingress nginx admission create rmchb completed ingress nginx ingress nginx admission patch completed ingress nginx ingress nginx controller running keycloak cryostat mkhbg running keycloak keycloak running ago keycloak keycloak operator running keycloak postgres running keycloak postgres exporter running keycloak sqlpad running kube system coredns running kube system etcd minikube running kube system kube apiserver minikube running kube system kube controller manager minikube running kube system kube proxy running kube system kube scheduler minikube running kube system storage provisioner running kubebox kubebox running monitoring alertmanager prometheus kube prometheus alertmanager running monitoring graphite exporter running monitoring jaeger mngxt running monitoring loki running monitoring loki 
grafana agent operator running monitoring loki logs running monitoring prometheus grafana running monitoring prometheus kube prometheus operator running monitoring prometheus kube state metrics running monitoring prometheus prometheus kube prometheus prometheus running monitoring prometheus prometheus node exporter running monitoring promtail running keycloak keycloak error ago keycloak keycloak crashloopbackoff ago how to reproduce on an existing or newly provisioned minikube vm aka after running rebuild sh script run the task command and wait for the cluster to come up and then observe the pods using the command kubectl get pods a w and monitor the server logs either on grafana or the kubectl directly anything else this was working very recently as early as oct i suspect this from the keycloak operator might be causing a stricter check on non dev deployment to have the https keys certs to be present ,1 348139,10439320340.0,IssuesEvent,2019-09-18 05:51:06,openmsupply/mobile,https://api.github.com/repos/openmsupply/mobile,closed,Refactor StockPage to use new DataTable,DataTable Docs: not needed Effort: small Priority: High Refactor,"## EPIC: #1043 ## Is your feature request related to a problem? Please describe. Need to use the new `DataTable` in `StockPage`. ## Describe the solution you'd like - A more or less complete refactor, using the first refactored page `CustomerInvoicePage` as a guide. - This will require using hooks and the new DataTable `component`. ## Describe alternatives you've considered N/A ## Additional context This is a placeholder issue. Please investigate and compare the page with `CustomerInvoicePage`. Each page being refactored may have their own quirks. The tissue is encouraged to describe any additional problems and differences with the particular page here, to find a solution which encourages further extensibility and code-reuse with other pages, if possible.",1.0,"Refactor StockPage to use new DataTable - ## EPIC: #1043 ## Is your feature request related to a problem? Please describe. Need to use the new `DataTable` in `StockPage`. ## Describe the solution you'd like - A more or less complete refactor, using the first refactored page `CustomerInvoicePage` as a guide. - This will require using hooks and the new DataTable `component`. ## Describe alternatives you've considered N/A ## Additional context This is a placeholder issue. Please investigate and compare the page with `CustomerInvoicePage`. Each page being refactored may have their own quirks. 
The tissue is encouraged to describe any additional problems and differences with the particular page here, to find a solution which encourages further extensibility and code-reuse with other pages, if possible.",0,refactor stockpage to use new datatable epic is your feature request related to a problem please describe need to use the new datatable in stockpage describe the solution you d like a more or less complete refactor using the first refactored page customerinvoicepage as a guide this will require using hooks and the new datatable component describe alternatives you ve considered n a additional context this is a placeholder issue please investigate and compare the page with customerinvoicepage each page being refactored may have their own quirks the tissue is encouraged to describe any additional problems and differences with the particular page here to find a solution which encourages further extensibility and code reuse with other pages if possible ,0 239875,7800105402.0,IssuesEvent,2018-06-09 04:53:17,tine20/Tine-2.0-Open-Source-Groupware-and-CRM,https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM,closed,"0007324: Configurable maxLoginFailures",Admin Feature Request Mantis high priority,"**Reported by mmuehlfeld on 30 Oct 2012 08:13** **Version:** Joey (2012.10.1) Some users here use Lighning for accessing Tine. But when they have to change their network password (Tine20 is authenticating against the same base), then retries of Lightning to login with the old passwort locks the account. I saw that there's a workaround in the forum: http://www.tine20.org/forum/viewtopic.php?f=8&t=11202 But it would be nice, and I guess it's not much work, to have this configurable or defeatable. ",1.0,"0007324: Configurable maxLoginFailures - **Reported by mmuehlfeld on 30 Oct 2012 08:13** **Version:** Joey (2012.10.1) Some users here use Lighning for accessing Tine. But when they have to change their network password (Tine20 is authenticating against the same base), then retries of Lightning to login with the old passwort locks the account. I saw that there's a workaround in the forum: http://www.tine20.org/forum/viewtopic.php?f=8&t=11202 But it would be nice, and I guess it's not much work, to have this configurable or defeatable. ",0, configurable maxloginfailures reported by mmuehlfeld on oct version joey some users here use lighning for accessing tine but when they have to change their network password is authenticating against the same base then retries of lightning to login with the old passwort locks the account i saw that there s a workaround in the forum but it would be nice and i guess it s not much work to have this configurable or defeatable ,0 361396,10708077990.0,IssuesEvent,2019-10-24 18:52:37,microsoft/terminal,https://api.github.com/repos/microsoft/terminal,closed,"ConPTY: Extended Attributes only ever get turned on once, and are forever after turned off",Area-Rendering In-PR Issue-Bug Priority-3 Product-Conpty,"When you `printf ""\e[3;5;9mWhatever\e[m"", repeatedly, what you get is this. ``` \e[3m\e[5m\e[9mwhatever\e[23m\e[25m\e[29m \e[23m\e[25m\e[29mwhatever\e[23m\e[25m\e[29m \e[23m\e[25m\e[29mwhatever\e[23m\e[25m\e[29m ``` ",1.0,"ConPTY: Extended Attributes only ever get turned on once, and are forever after turned off - When you `printf ""\e[3;5;9mWhatever\e[m"", repeatedly, what you get is this. 
``` \e[3m\e[5m\e[9mwhatever\e[23m\e[25m\e[29m \e[23m\e[25m\e[29mwhatever\e[23m\e[25m\e[29m \e[23m\e[25m\e[29mwhatever\e[23m\e[25m\e[29m ``` ",0,conpty extended attributes only ever get turned on once and are forever after turned off when you printf e e m repeatedly what you get is this e e e e e e e e e e e e e e e e e e ,0 2766,12541233265.0,IssuesEvent,2020-06-05 11:57:04,input-output-hk/cardano-node,https://api.github.com/repos/input-output-hk/cardano-node,opened,[QA] - De-register the stake key used for the pledged of 1 of the pools,e2e automation,"De-register the stake key used for the pledged of 1 of the pools. Check the pool status.",1.0,"[QA] - De-register the stake key used for the pledged of 1 of the pools - De-register the stake key used for the pledged of 1 of the pools. Check the pool status.",1, de register the stake key used for the pledged of of the pools de register the stake key used for the pledged of of the pools check the pool status ,1 6016,21899072217.0,IssuesEvent,2022-05-20 11:37:32,Tanden-Garage/Hoku-Navi-Beta,https://api.github.com/repos/Tanden-Garage/Hoku-Navi-Beta,closed,各定例会のブランチ・PRも自動生成できるように,automation,"## 🎉 Goal - Issue作成のタイミングで`mtg/#number`ブランチとIssueと同じ名前のPRが作成される ## 💪 Motivation - 怠慢 ## 📖 Reference (optional) (参考リンクなどあれば) ## 📎 Task children - [ ] 自動ブランチ作成を調べて書く - [ ] 自動PRが作成を調べて書く ",1.0,"各定例会のブランチ・PRも自動生成できるように - ## 🎉 Goal - Issue作成のタイミングで`mtg/#number`ブランチとIssueと同じ名前のPRが作成される ## 💪 Motivation - 怠慢 ## 📖 Reference (optional) (参考リンクなどあれば) ## 📎 Task children - [ ] 自動ブランチ作成を調べて書く - [ ] 自動PRが作成を調べて書く ",1,各定例会のブランチ・prも自動生成できるように 🎉 goal issue作成のタイミングで mtg number ブランチとissueと同じ名前のprが作成される 💪 motivation 怠慢 📖 reference optional 参考リンクなどあれば 📎 task children 自動ブランチ作成を調べて書く 自動prが作成を調べて書く ,1 6654,23660787844.0,IssuesEvent,2022-08-26 15:22:26,prisma/docs,https://api.github.com/repos/prisma/docs,closed,Docs for CI/CD workflows,docs docs: candidate topic: automation,"This issues persists a docs requests from [Slack](https://prisma.slack.com/archives/CKQTGR6T0/p1574868511482700?thread_ts=1574796772.460900&cid=CKQTGR6T0): > In Prisma 1 there is not a lot of information about deployment. Specifically for creating a CI/CD workflow. For instance creating a staging and production environments and rolling out incremental updates. I would love to see more information around production best practices with Prisma 2. As well as more options/info about deployments.",1.0,"Docs for CI/CD workflows - This issues persists a docs requests from [Slack](https://prisma.slack.com/archives/CKQTGR6T0/p1574868511482700?thread_ts=1574796772.460900&cid=CKQTGR6T0): > In Prisma 1 there is not a lot of information about deployment. Specifically for creating a CI/CD workflow. For instance creating a staging and production environments and rolling out incremental updates. I would love to see more information around production best practices with Prisma 2. 
As well as more options/info about deployments.",1,docs for ci cd workflows this issues persists a docs requests from in prisma there is not a lot of information about deployment specifically for creating a ci cd workflow for instance creating a staging and production environments and rolling out incremental updates i would love to see more information around production best practices with prisma as well as more options info about deployments ,1 121110,25929679166.0,IssuesEvent,2022-12-16 08:54:26,Regalis11/Barotrauma,https://api.github.com/repos/Regalis11/Barotrauma,closed,[Factions] Dugong caught clipping through the Clown District,Bug Code,"### Disclaimers - [X] I have searched the issue tracker to check if the issue has already been reported. - [ ] My issue happened while using mods. ### What happened? We entered a new outpost with the Dugong and the crew quickly discovered it had merged with the Clown District. Players could clip into the rooms and submarine as if they were one entity, though sometimes walls would act like walls again and trap players. Here's the seed: ![image](https://user-images.githubusercontent.com/105495756/205387379-11e479d5-59df-4778-b059-2a21594fe9ca.png) (Maughanasilly Fossa) Here's the compressed save file: [Submarine x Outpost.zip](https://github.com/Regalis11/Barotrauma/files/10144367/Submarine.x.Outpost.zip) And here are some screenshots of the wacky event! ![image](https://user-images.githubusercontent.com/105495756/205387344-afa61f8f-9b90-4e3e-8606-992dd74f1006.png) ![image](https://user-images.githubusercontent.com/105495756/205387359-b910f78e-16fe-4239-9b03-d59b50b0cfec.png) ![image](https://user-images.githubusercontent.com/105495756/205387400-bff8e867-f592-46f1-80ee-f427aaf06c55.png) ![image](https://user-images.githubusercontent.com/105495756/205387418-ddc34ddf-607f-4e21-8f20-013f08e78e5c.png) ![image](https://user-images.githubusercontent.com/105495756/205387437-de153155-93df-4cee-8c55-f980826e66af.png) ![image](https://user-images.githubusercontent.com/105495756/205387473-1a65daf6-2822-435e-aa0e-a143f78522b3.png) ![image](https://user-images.githubusercontent.com/105495756/205387804-3fe4f1ec-12a3-4251-93ed-6b5b4677caf4.png) ### Reproduction steps _No response_ ### Bug prevalence Just once ### Version Faction/endgame test branch ### - _No response_ ### Which operating system did you encounter this bug on? Windows ### Relevant error messages and crash reports _No response_",1.0,"[Factions] Dugong caught clipping through the Clown District - ### Disclaimers - [X] I have searched the issue tracker to check if the issue has already been reported. - [ ] My issue happened while using mods. ### What happened? We entered a new outpost with the Dugong and the crew quickly discovered it had merged with the Clown District. Players could clip into the rooms and submarine as if they were one entity, though sometimes walls would act like walls again and trap players. Here's the seed: ![image](https://user-images.githubusercontent.com/105495756/205387379-11e479d5-59df-4778-b059-2a21594fe9ca.png) (Maughanasilly Fossa) Here's the compressed save file: [Submarine x Outpost.zip](https://github.com/Regalis11/Barotrauma/files/10144367/Submarine.x.Outpost.zip) And here are some screenshots of the wacky event! 
![image](https://user-images.githubusercontent.com/105495756/205387344-afa61f8f-9b90-4e3e-8606-992dd74f1006.png) ![image](https://user-images.githubusercontent.com/105495756/205387359-b910f78e-16fe-4239-9b03-d59b50b0cfec.png) ![image](https://user-images.githubusercontent.com/105495756/205387400-bff8e867-f592-46f1-80ee-f427aaf06c55.png) ![image](https://user-images.githubusercontent.com/105495756/205387418-ddc34ddf-607f-4e21-8f20-013f08e78e5c.png) ![image](https://user-images.githubusercontent.com/105495756/205387437-de153155-93df-4cee-8c55-f980826e66af.png) ![image](https://user-images.githubusercontent.com/105495756/205387473-1a65daf6-2822-435e-aa0e-a143f78522b3.png) ![image](https://user-images.githubusercontent.com/105495756/205387804-3fe4f1ec-12a3-4251-93ed-6b5b4677caf4.png) ### Reproduction steps _No response_ ### Bug prevalence Just once ### Version Faction/endgame test branch ### - _No response_ ### Which operating system did you encounter this bug on? Windows ### Relevant error messages and crash reports _No response_",0, dugong caught clipping through the clown district disclaimers i have searched the issue tracker to check if the issue has already been reported my issue happened while using mods what happened we entered a new outpost with the dugong and the crew quickly discovered it had merged with the clown district players could clip into the rooms and submarine as if they were one entity though sometimes walls would act like walls again and trap players here s the seed maughanasilly fossa here s the compressed save file and here are some screenshots of the wacky event reproduction steps no response bug prevalence just once version faction endgame test branch no response which operating system did you encounter this bug on windows relevant error messages and crash reports no response ,0 309252,26659124662.0,IssuesEvent,2023-01-25 19:24:52,MPMG-DCC-UFMG/F01,https://api.github.com/repos/MPMG-DCC-UFMG/F01,closed,Generalization test for the tag Acesso à Informação - Informações - Itabirito,generalization test development tag - Acesso à Informação template - ABO (21) subtag - Informações,DoD: Run the generalization test of the validator for the tag Acesso à Informação - Informações for the municipality of Itabirito.,1.0,Generalization test for the tag Acesso à Informação - Informações - Itabirito - DoD: Run the generalization test of the validator for the tag Acesso à Informação - Informações for the municipality of Itabirito.,0,generalization test for the tag acesso à informação informações itabirito dod run the generalization test of the validator for the tag acesso à informação informações for the municipality of itabirito ,0 4029,15213013473.0,IssuesEvent,2021-02-17 11:11:40,nf-core/tools,https://api.github.com/repos/nf-core/tools,closed,Custom pipeline files incorrectly deleted in automated sync,automation bug,"Ahoy :boat: Every time I sync MHCquant all of the scripts in /bin get deleted. We know why it happens, but maybe there's some way of protecting them? See: https://github.com/nf-core/mhcquant/pull/99/files Maybe we can copy scripts in /bin (except version scraping) in an intermediate folder and then just copy them over after the fresh template was created and add them to the TEMPLATE PR?",1.0,"Custom pipeline files incorrectly deleted in automated sync - Ahoy :boat: Every time I sync MHCquant all of the scripts in /bin get deleted. We know why it happens, but maybe there's some way of protecting them? 
See: https://github.com/nf-core/mhcquant/pull/99/files Maybe we can copy scripts in /bin (except version scraping) in an intermediate folder and then just copy them over after the fresh template was created and add them to the TEMPLATE PR?",1,custom pipeline files incorrectly deleted in automated sync ahoy boat everytime i sync mhcquant all of the scripts in bin get deleted we know why it happens but maybe there s some way of protecting them see maybe we can copy scripts in bin except version scraping in an intermediate folder and then just copy them over after the fresh template was created and add them to the template pr ,1 5364,19324291853.0,IssuesEvent,2021-12-14 09:41:51,pingcap/ticdc,https://api.github.com/repos/pingcap/ticdc,closed,"[ticdc 5.3.0] tikv_scale_plan_tidb case fail: 1. cdc oom, 2. data inconsistency, 3. changefeed stuck",type/bug subject/correctness severity/major found/automation area/ticdc,"### What did you do? // create 1 changefeed, 3 cdc, 3 tikv // run tpcc 100 warehouse prepare // run tpcc run, meanwhile, scale cluster tikv from 3 -> 7, after 15m, scale in to 3 // create table ""finishmark"" // wait table ""finishmark"" sync to downstream // check data consistency ### What did you expect to see? pass ### What did you see instead? - [x] 1. cdc oom - [x] 2. The two tables have data inconsistency: warehouse, order_line - [x] 3. changefeed stuck at around the time table ""finishmark"" was created testbed saved for 24 hours. 
- upstream grafana: http://172.16.4.180:30687/ - testbed: cdc-testbed--tps-450050-1-772 - conprof: http://172.16.4.180:30857/?query=%7Bnamespace%3D%22cdc-testbed--tps-450050-1-772%22%2C+job%3D%22ticdc%22%2C+profile_path%3D%22%2Fdebug%2Fpprof%2Fheap%22%7D&from=1637135181536&to=1637135672266&now=false - test case log: http://172.16.4.180:31714/workflows/testground/plan-exec-450050?tab=workflow&nodeId=plan-exec-450050-3513432500&sidePanel=logs ### Versions of the cluster Upstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client): ```console TiKV Release Version: 5.3.0 Edition: Community Git Commit Hash: d514230a40974393297050645c223bcf1db9aedc Git Commit Branch: heads/refs/tags/v5.3.0 UTC Build Time: 2021-11-16 12:18:25 Rust Version: rustc 1.56.0-nightly (2faabf579 2021-07-27) Enable Features: jemalloc mem-profiling portable sse protobuf-codec test-engines-rocksdb cloud-aws cloud-gcp Profile: dist_release ``` TiCDC version (execute `cdc version`): ```console /cdc version Release Version: v5.3.0 Git Commit Hash: f847b331572379527bf37a7f19be20448a74b2c2 Git Branch: heads/refs/tags/v5.3.0 UTC Build Time: 2021-11-16 11:54:34 Go Version: go version go1.16.4 linux/amd64 Failpoint Build: false ```",1.0,"[ticdc 5.3.0] tikv_scale_plan_tidb case fail: 1. cdc oom, 2. data inconsistency, 3. changefeed stuck - ### What did you do? // create 1 changefeed, 3 cdc, 3 tikv // run tpcc 100 warehouse prepare // run tpcc run, meanwhile, scale cluster tikv from 3 -> 7, after 15m, scale in to 3 // create table ""finishmark"" // wait table ""finishmark"" sync to downstream //check data consistency ### What did you expect to see? pass ### What did you see instead? - [x] 1. cdc oom - [x] 2. The two table has data inconsistency: warehouse,order_line - [x] 3. changefeed stuck at almost the time create table ""finishmark"" testbed saved for 24 hours. 
- upstream grafana: http://172.16.4.180:30687/ - testbed: cdc-testbed--tps-450050-1-772 - conprof: http://172.16.4.180:30857/?query=%7Bnamespace%3D%22cdc-testbed--tps-450050-1-772%22%2C+job%3D%22ticdc%22%2C+profile_path%3D%22%2Fdebug%2Fpprof%2Fheap%22%7D&from=1637135181536&to=1637135672266&now=false - test case log: http://172.16.4.180:31714/workflows/testground/plan-exec-450050?tab=workflow&nodeId=plan-exec-450050-3513432500&sidePanel=logs ### Versions of the cluster Upstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client): ```console TiKV Release Version: 5.3.0 Edition: Community Git Commit Hash: d514230a40974393297050645c223bcf1db9aedc Git Commit Branch: heads/refs/tags/v5.3.0 UTC Build Time: 2021-11-16 12:18:25 Rust Version: rustc 1.56.0-nightly (2faabf579 2021-07-27) Enable Features: jemalloc mem-profiling portable sse protobuf-codec test-engines-rocksdb cloud-aws cloud-gcp Profile: dist_release ``` TiCDC version (execute `cdc version`): ```console /cdc version Release Version: v5.3.0 Git Commit Hash: f847b331572379527bf37a7f19be20448a74b2c2 Git Branch: heads/refs/tags/v5.3.0 UTC Build Time: 2021-11-16 11:54:34 Go Version: go version go1.16.4 linux/amd64 Failpoint Build: false ```",1, tikv scale plan tidb case fail cdc oom data inconsistency changefeed stuck what did you do create changefeed cdc tikv run tpcc warehouse prepare run tpcc run meanwhile scale cluster tikv from after scale in to create table finishmark wait table finishmark sync to downstream check data consistency what did you expect to see pass what did you see instead cdc oom the two table has data inconsistency warehouse,order line changefeed stuck at almost the time create table finishmark testbed saved for hours upstream grafana testbed cdc testbed tps conprof test case log versions of the cluster upstream tidb cluster version execute select tidb version in a mysql client console tikv release version edition community git commit hash git commit branch heads refs tags utc build time rust version rustc nightly enable features jemalloc mem profiling portable sse protobuf codec test engines rocksdb cloud aws cloud gcp profile dist release ticdc version execute cdc version console cdc version release version git commit hash git branch heads refs tags utc build time go version go version linux failpoint build false ,1 8745,27172198870.0,IssuesEvent,2023-02-17 20:32:45,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,downloadUrl not available when searching,area:Docs automation:Closed,"## Category - [ ] Question - [ ] Documentation issue - [x] Bug #### Expected or Desired Behavior It seems that @microsoft.graph.downloadUrl is not available from the graph **when searching**... Something like this only gives the name: `https://graph.microsoft.com/v1.0/drive/root/search(q='mp4')?select=name,@microsoft.graph.downloadUrl` whereas the desired property is available on a specific file. e.g `https://graph.microsoft.com/v1.0/drive/items/{itemID}?select=name,@microsoft.graph.downloadUrl`",1.0,"downloadUrl not available when searching - ## Category - [ ] Question - [ ] Documentation issue - [x] Bug #### Expected or Desired Behavior It seems that @microsoft.graph.downloadUrl is not available from the graph **when searching**... Something like this only gives the name: `https://graph.microsoft.com/v1.0/drive/root/search(q='mp4')?select=name,@microsoft.graph.downloadUrl` whereas the desired property is available on a specific file. 
e.g `https://graph.microsoft.com/v1.0/drive/items/{itemID}?select=name,@microsoft.graph.downloadUrl`",1,downloadurl not available when searching category question documentation issue bug expected or desired behavior it seems that microsoft graph downloadurl is not available from the graph when searching something like this only gives the name whereas the desired property is available on a specific file e g ,1 223233,24711708612.0,IssuesEvent,2022-10-20 01:41:28,VeeVee450/swiper-website,https://api.github.com/repos/VeeVee450/swiper-website,opened,"CVE-2021-23440 (High) detected in set-value-0.4.3.tgz, set-value-2.0.0.tgz",security vulnerability,"## CVE-2021-23440 - High Severity Vulnerability
Vulnerable Libraries - set-value-0.4.3.tgz, set-value-2.0.0.tgz

set-value-0.4.3.tgz

Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.

Library home page: https://registry.npmjs.org/set-value/-/set-value-0.4.3.tgz

Path to dependency file: /swiper-website/package.json

Path to vulnerable library: /node_modules/union-value/node_modules/set-value/package.json

Dependency Hierarchy:
- gulp-3.9.1.tgz (Root Library)
  - liftoff-2.5.0.tgz
    - findup-sync-2.0.0.tgz
      - micromatch-3.1.5.tgz
        - snapdragon-0.8.1.tgz
          - base-0.11.2.tgz
            - cache-base-1.0.1.tgz
              - union-value-1.0.0.tgz
                - :x: **set-value-0.4.3.tgz** (Vulnerable Library)

set-value-2.0.0.tgz

Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.

Library home page: https://registry.npmjs.org/set-value/-/set-value-2.0.0.tgz

Path to dependency file: /swiper-website/package.json

Path to vulnerable library: /node_modules/set-value/package.json

Dependency Hierarchy:
- gulp-3.9.1.tgz (Root Library)
  - liftoff-2.5.0.tgz
    - findup-sync-2.0.0.tgz
      - micromatch-3.1.5.tgz
        - snapdragon-0.8.1.tgz
          - base-0.11.2.tgz
            - cache-base-1.0.1.tgz
              - :x: **set-value-2.0.0.tgz** (Vulnerable Library)

Vulnerability Details

This affects the package set-value before <2.0.1, >=3.0.0 <4.0.1. A type confusion vulnerability can lead to a bypass of CVE-2019-10747 when the user-provided keys used in the path parameter are arrays.
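
Illustratively, the bypass class being described looks something like the following minimal sketch (assuming a vulnerable set-value release; the property name here is only for demonstration):

```ts
import set from 'set-value'; // assume a vulnerable release, e.g. 2.0.0

const target: Record<string, unknown> = {};
// The CVE-2019-10747 fix rejected dangerous *string* paths such as
// '__proto__.polluted'; supplying the keys as an array is reported to
// sidestep that check, so the write lands on Object.prototype instead.
set(target, ['__proto__', 'polluted'], true);
console.log(({} as any).polluted); // true on a vulnerable version
```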

Publish Date: 2021-09-12

URL: CVE-2021-23440

CVSS 3 Score Details (9.8)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23440

Release Date: 2021-09-12

Fix Resolution (set-value): 2.0.1

Direct dependency fix Resolution (gulp): 4.0.0
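
As a hedged aside (not part of the advisory): because both vulnerable copies above are transitive dependencies pulled in through gulp, one common way to apply the fix without waiting on the root library is a package-manager pin, for example Yarn's resolutions field in package.json (npm 8+ offers a similar overrides field):

```json
{
  ""resolutions"": {
    ""set-value"": ""2.0.1""
  }
}
```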

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2021-23440 (High) detected in set-value-0.4.3.tgz, set-value-2.0.0.tgz - ## CVE-2021-23440 - High Severity Vulnerability
Vulnerable Libraries - set-value-0.4.3.tgz, set-value-2.0.0.tgz

set-value-0.4.3.tgz

Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.

Library home page: https://registry.npmjs.org/set-value/-/set-value-0.4.3.tgz

Path to dependency file: /swiper-website/package.json

Path to vulnerable library: /node_modules/union-value/node_modules/set-value/package.json

Dependency Hierarchy:
- gulp-3.9.1.tgz (Root Library)
  - liftoff-2.5.0.tgz
    - findup-sync-2.0.0.tgz
      - micromatch-3.1.5.tgz
        - snapdragon-0.8.1.tgz
          - base-0.11.2.tgz
            - cache-base-1.0.1.tgz
              - union-value-1.0.0.tgz
                - :x: **set-value-0.4.3.tgz** (Vulnerable Library)

set-value-2.0.0.tgz

Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.

Library home page: https://registry.npmjs.org/set-value/-/set-value-2.0.0.tgz

Path to dependency file: /swiper-website/package.json

Path to vulnerable library: /node_modules/set-value/package.json

Dependency Hierarchy:
- gulp-3.9.1.tgz (Root Library)
  - liftoff-2.5.0.tgz
    - findup-sync-2.0.0.tgz
      - micromatch-3.1.5.tgz
        - snapdragon-0.8.1.tgz
          - base-0.11.2.tgz
            - cache-base-1.0.1.tgz
              - :x: **set-value-2.0.0.tgz** (Vulnerable Library)

Vulnerability Details

This affects the package set-value before <2.0.1, >=3.0.0 <4.0.1. A type confusion vulnerability can lead to a bypass of CVE-2019-10747 when the user-provided keys used in the path parameter are arrays.

Publish Date: 2021-09-12

URL: CVE-2021-23440

CVSS 3 Score Details (9.8)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23440

Release Date: 2021-09-12

Fix Resolution (set-value): 2.0.1

Direct dependency fix Resolution (gulp): 4.0.0

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in set value tgz set value tgz cve high severity vulnerability vulnerable libraries set value tgz set value tgz set value tgz create nested values and any intermediaries using dot notation a b c paths library home page a href path to dependency file swiper website package json path to vulnerable library node modules union value node modules set value package json dependency hierarchy gulp tgz root library liftoff tgz findup sync tgz micromatch tgz snapdragon tgz base tgz cache base tgz union value tgz x set value tgz vulnerable library set value tgz create nested values and any intermediaries using dot notation a b c paths library home page a href path to dependency file swiper website package json path to vulnerable library node modules set value package json dependency hierarchy gulp tgz root library liftoff tgz findup sync tgz micromatch tgz snapdragon tgz base tgz cache base tgz x set value tgz vulnerable library vulnerability details this affects the package set value before a type confusion vulnerability can lead to a bypass of cve when the user provided keys used in the path parameter are arrays publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution set value direct dependency fix resolution gulp fix resolution set value direct dependency fix resolution gulp step up your open source security game with mend ,0 132592,10760320555.0,IssuesEvent,2019-10-31 18:21:02,rancher/rio,https://api.github.com/repos/rancher/rio,closed,rio attach not showing pod stdout,[zube]: To Test bug to-test,"**Describe the bug** doing a `rio attach ` is not showing the container stdout as it did in 0.5.0 **To Reproduce** Steps to reproduce the behavior: 1. Deploy a new service `rio run --name attachtest izaac/attachtest:v1` 2. `rio attach attachtest` Nothing is shown after the service is ready, this container shows the `date` every 5 seconds to stdout. **Expected behavior** Attach should show output from the pod, in this case output of the running bash script running `date` every 5 seconds. **Kubernetes version & type (GKE, on-prem)**: `kubectl version` ``` v1.14.7-gke.10 ``` Type: **Rio version**: `rio info` ``` Rio Version: v0.6.0-alpha.1 (7678d67d) Rio CLI Version: v0.6.0-alpha.1 (7678d67d) ``` ",2.0,"rio attach not showing pod stdout - **Describe the bug** doing a `rio attach ` is not showing the container stdout as it did in 0.5.0 **To Reproduce** Steps to reproduce the behavior: 1. Deploy a new service `rio run --name attachtest izaac/attachtest:v1` 2. `rio attach attachtest` Nothing is shown after the service is ready, this container shows the `date` every 5 seconds to stdout. **Expected behavior** Attach should show output from the pod, in this case output of the running bash script running `date` every 5 seconds. 
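For reference, the attach behavior can be exercised with any image that writes to stdout on an interval. A minimal Node stand-in for what the issue says the `izaac/attachtest:v1` container does (the actual script inside the image is assumed) would be:

```ts
// Hypothetical stand-in for the attachtest container: emit a timestamp
// to stdout every 5 seconds, which `rio attach` should then display.
setInterval(() => {
    console.log(new Date().toISOString());
}, 5000);
```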
**Kubernetes version & type (GKE, on-prem)**: `kubectl version` ``` v1.14.7-gke.10 ``` Type: **Rio version**: `rio info` ``` Rio Version: v0.6.0-alpha.1 (7678d67d) Rio CLI Version: v0.6.0-alpha.1 (7678d67d) ``` ",0,rio attach now showing pod stdout describe the bug doing a rio attach is now showing the container stdout as in to reproduce steps to reproduce the behavior deploy a new service rio run name attachtest izaac attachtest rio attach attachtest nothing is shown after the service is ready this container shows the date every seconds to stdout expected behavior attach should show output from the pod in this case output of the running bash script running date every seconds kubernetes version type gke on prem kubectl version gke type rio version rio info rio version alpha rio cli version alpha ,0 66989,12857066185.0,IssuesEvent,2020-07-09 08:42:17,hzi-braunschweig/SORMAS-Project,https://api.github.com/repos/hzi-braunschweig/SORMAS-Project,opened,Update dependencies (1.45.0),Code Quality change refine," ### Feature Description ### Problem Description Dependencies need updates at least to include bugfixes. ### Proposed Change 1. Hibernate 5 latest 2. Vaadin 8 latest 3. ... ### Possible Alternatives ### Additional Information ",1.0,"Update dependencies (1.45.0) - ### Feature Description ### Problem Description Dependencies need updates at least to include bugfixes. ### Proposed Change 1. Hibernate 5 latest 2. Vaadin 8 latest 3. ... ### Possible Alternatives ### Additional Information ",0,update dependencies if you ve never submitted an issue to the sormas repository before or this is your first time using this template please read the contributing guidelines accessible in the right sidebar for an explanation about the information we d like you to provide feature description problem description dependencies need updates at least to include bugfixes proposed change hibernate latest vaadin latest possible alternatives additional information ,0 6935,24042105977.0,IssuesEvent,2022-09-16 03:33:35,AdamXweb/awesome-aussie,https://api.github.com/repos/AdamXweb/awesome-aussie,closed,[ADDITION] Airtasker,Added to Airtable Automation from Airtable,"### Category Marketplace ### Software to be added Airtasker ### Supporting Material URL: https://www.airtasker.com/ Description: Airtasker is an online and mobile marketplace that connects people and businesses with local community members. Size: HQ: Sydney LinkedIn: https://www.linkedin.com/company/airtasker/ #### See Record on Airtable: https://airtable.com/app0Ox7pXdrBUIn23/tblYbuZoILuVA0X3L/recppqAzdZYQCdKqx",1.0,"[ADDITION] Airtasker - ### Category Marketplace ### Software to be added Airtasker ### Supporting Material URL: https://www.airtasker.com/ Description: Airtasker is an online and mobile marketplace that connects people and businesses with local community members. 
Size: HQ: Sydney LinkedIn: https://www.linkedin.com/company/airtasker/ #### See Record on Airtable: https://airtable.com/app0Ox7pXdrBUIn23/tblYbuZoILuVA0X3L/recppqAzdZYQCdKqx",1, airtasker category marketplace software to be added airtasker supporting material url description airtasker is an online and mobile marketplace that connects people and businesses with local community members size hq sydney linkedin see record on airtable ,1 9116,27578693073.0,IssuesEvent,2023-03-08 14:48:32,Budibase/budibase,https://api.github.com/repos/Budibase/budibase,closed,[BUDI-6690] Automation - Create row with relationships errors,bug automations relationships linear,"### Discussed in https://github.com/Budibase/budibase/discussions/9894
Originally posted by **andyburgessmd** March 6, 2023 Hoping someone can help with an issue I am having with automation. The automation is triggered by a row creation (table a) and it then needs to create a row in another table (table b) using some data from the triggered row (table a); some of the fields/columns in table b are relationships. What is the format of the data that should be set in the create row fields for table b where it is relational? Is it just the _id or does it require more? For example, if I want to use the 'Created By' user from the trigger.row, do I use just the _id from trigger.row.""Created By"" or the whole ""Created By"" object? Or do I need to make an object like {""_id"":""ro_ta_xxxxxx""} with the _id value?
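For anyone landing on the same question, the Context below pins down the two shapes the automation actions were seeing; a minimal sketch (the field name and object layout are illustrative, the ID value is copied from the question):

```ts
// Shape the Update Row action receives for an existing relationship:
// always an array, even for a singleton relationship.
const updateRowRelationship = [{ _id: 'ro_ta_xxxxxx' }];

// Shape the Create Row action provides: the relationship ID directly
// as a plain string, not wrapped in an array.
const createRowRelationship = 'ro_ta_xxxxxx';
```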
--- **Context**: I made a change to the `cleanInputValues` to fix a bug relating to the **Update Row** action. It was erroring because existing relationships are always within an array (even singleton relationships). https://github.com/Budibase/budibase/pull/9468 What I didn't account for was the **Create Row** action which provides the relationship ID directly as a string (not in an array). This is fixed with a try/catch on the JSON parse. [BUDI-6690](https://linear.app/budibase/issue/BUDI-6690/automation-create-row-with-relationships-errors)",1.0,"[BUDI-6690] Automation - Create row with relationships errors - ### Discussed in https://github.com/Budibase/budibase/discussions/9894
Originally posted by **andyburgessmd** March 6, 2023 Hoping someone can help with an issue I am having with automation. The automation is triggered by a row creation (table a) and it then needs to create a row in another table (table b) using some data from the triggered row (table a); some of the fields/columns in table b are relationships. What is the format of the data that should be set in the create row fields for table b where it is relational? Is it just the _id or does it require more? For example, if I want to use the 'Created By' user from the trigger.row, do I use just the _id from trigger.row.""Created By"" or the whole ""Created By"" object? Or do I need to make an object like {""_id"":""ro_ta_xxxxxx""} with the _id value?
--- **Context**: I made a change to the `cleanInputValues` to fix a bug relating to the **Update Row** action. It was erroring because existing relationships are always within an array (even singleton relationships). https://github.com/Budibase/budibase/pull/9468 What I didn't account for was the **Create Row** action which provides the relationship ID directly as a string (not in an array). This is fixed with a try/catch on the JSON parse. [BUDI-6690](https://linear.app/budibase/issue/BUDI-6690/automation-create-row-with-relationships-errors)",1, automation create row with relationships errors discussed in originally posted by andyburgessmd march hoping someone can help with an issues i am having with automation the automation is triggered by a row creation table a and it then needs to create a row in another table table b using some data from the triggered row table a and some of the fields columns in table b are relationships what is the format of the data that should be set in the create row fields for table b where it is relational is it just the id or does it require more example if i want to use the created by user from the trigger row do i use just the id from trigger row created by or the whole created by object or do i need to make an object like id ro ta xxxxxx with the id value context i made a change to the cleaninputvalues to fix a bug relating to the update row action it was erroring because existing relationships are always within an array even singleton relationships what i didn t account for was the create row action which provides the relationship id directly as a string not in an array this is fixed with a try catch on the json parse ,1 10260,32058356859.0,IssuesEvent,2023-09-24 11:10:03,kotools/types,https://api.github.com/repos/kotools/types,opened,Reducing Gradle tasks list,automation,"## Description We would like to customize the `gradlew :tasks` Gradle task for the root and the `library` projects for reducing the number of tasks displayed when running it. ## Checklist - [ ] Implement. - [ ] Test. - [ ] Refactor. - [ ] Add the selected tasks to the [contribution guidelines](https://github.com/kotools/types/blob/259c1b809382f5c911362b18826588638cd077f4/CONTRIBUTING.md). ",1.0,"Reducing Gradle tasks list - ## Description We would like to customize the `gradlew :tasks` Gradle task for the root and the `library` projects for reducing the number of tasks displayed when running it. ## Checklist - [ ] Implement. - [ ] Test. - [ ] Refactor. - [ ] Add the selected tasks to the [contribution guidelines](https://github.com/kotools/types/blob/259c1b809382f5c911362b18826588638cd077f4/CONTRIBUTING.md). 
",1,reducing gradle tasks list description we would like to customize the gradlew tasks gradle task for the root and the library projects for reducing the number of tasks displayed when running it checklist implement test refactor add the selected tasks to the ,1 90435,26095974303.0,IssuesEvent,2022-12-26 19:48:30,Twon/Morpheus,https://api.github.com/repos/Twon/Morpheus,opened,Include-What-You-Use support in build,enhancement build,"Find module to locate program, work on the version and set common CMake variables: https://gitlab.kitware.com/cmake/cmake/-/issues/18926",1.0,"Include-What-You-Use support in build - Find module to locate program, work on the version and set common CMake variables: https://gitlab.kitware.com/cmake/cmake/-/issues/18926",0,include what you use support in build find module to locate program work on the version and set common cmake variables ,0 41,2982517532.0,IssuesEvent,2015-07-17 11:53:32,MISP/MISP,https://api.github.com/repos/MISP/MISP,opened,Scheduled tasks - next run time,automation enhancement usability,"For scheduled tasks, if the next run time is in the past, update it to be in the future so that execution is not skipped. I don't know where it would be best to add that (scheduler worker startup? other place?).",1.0,"Scheduled tasks - next run time - For scheduled tasks, if the next run time is in the past, update it to be in the future so that execution is not skipped. I don't know where it would be best to add that (scheduler worker startup? other place?).",1,scheduled tasks next run time for scheduled tasks if the next run time is in the past update it to be in the future so that execution is not skipped i don t know where it would be best to add that scheduler worker startup other place ,1 5745,20958295560.0,IssuesEvent,2022-03-27 12:20:34,spacemeshos/smapp,https://api.github.com/repos/spacemeshos/smapp,closed,Basic Installs Stats,automation,"We would like to get a basic sense of the total number of downloads of each installer from the labs endpoint initiated from the guide. Do we get these stats from aws? Is there something we need to do to enable them on aws? @yhaspel @zalmen @beckmani @IlyaVi ",1.0,"Basic Installs Stats - We would like to get a basic sense of the total number of downloads of each installer from the labs endpoint initiated from the guide. Do we get these stats from aws? Is there something we need to do to enable them on aws? @yhaspel @zalmen @beckmani @IlyaVi ",1,basic installs stats we would like to get a basic sense of the total number of downloads of each installer from the labs endpoint initiated from the guide do we get these stats from aws is there something we need to do to enable them on aws yhaspel zalmen beckmani ilyavi ,1 2523,12197895705.0,IssuesEvent,2020-04-29 21:39:50,lee-dohm/close-matching-issues,https://api.github.com/repos/lee-dohm/close-matching-issues,opened,Automate building and tagging workflow,automation,"Right now, this all has to be done relatively manually. This means that I have to: 1. Merge the latest `master` to the latest `releases/v*` branch 1. Create the latest semver tag 1. Delete the local `v*` tag 1. Delete the remote `v*` tag 1. Create the local `v*` tag pointing to the same commit 1. Push the commits 1. Push the tags 1. Create the GitHub release object for the latest semver tag and check the box to publish to the Marketplace With the tools mentioned in https://github.com/JasonEtco/create-an-issue/issues/55, I should be able to reduce this to: 1. 
Merge the latest `master` to the latest `releases/v*` branch 1. Create the latest semver tag 1. Push the commits 1. Push the tags 1. Create the GitHub release object for the latest semver tag and check the box to publish to the Marketplace And I probably don't even need to worry about the `releases/v*` branch anymore ",1.0,"Automate building and tagging workflow - Right now, this all has to be done relatively manually. This means that I have to: 1. Merge the latest `master` to the latest `releases/v*` branch 1. Create the latest semver tag 1. Delete the local `v*` tag 1. Delete the remote `v*` tag 1. Create the local `v*` tag pointing to the same commit 1. Push the commits 1. Push the tags 1. Create the GitHub release object for the latest semver tag and check the box to publish to the Marketplace With the tools mentioned in https://github.com/JasonEtco/create-an-issue/issues/55, I should be able to reduce this to: 1. Merge the latest `master` to the latest `releases/v*` branch 1. Create the latest semver tag 1. Push the commits 1. Push the tags 1. Create the GitHub release object for the latest semver tag and check the box to publish to the Marketplace And I probably don't even need to worry about the `releases/v*` branch anymore ",1,automate building and tagging workflow right now this all has to be done relatively manually this means that i have to merge the latest master to the latest releases v branch create the latest semver tag delete the local v tag delete the remote v tag create the local v tag pointing to the same commit push the commits push the tags create the github release object for the latest semver tag and check the box to publish to the marketplace with the tools mentioned in i should be able to reduce this to merge the latest master to the latest releases v branch create the latest semver tag push the commits push the tags create the github release object for the latest semver tag and check the box to publish to the marketplace and i probably don t even need to worry about the releases v branch anymore ,1 82965,10316490183.0,IssuesEvent,2019-08-30 10:07:16,fortanix/rust-sgx,https://api.github.com/repos/fortanix/rust-sgx,closed,GDB: Instruction for adding setting to Cargo.toml should be on top,documentation enhancement,"In https://edp.fortanix.com/docs/tasks/debugging/ I feel that the point number 9 ``` In Cargo.toml you can configure panics to abort rather than unwind as follows, to preserve the call stack for the debugger ``` should be on top. Maybe as an optional point. ",1.0,"GDB: Instruction for adding setting to Cargo.toml should be on top - In https://edp.fortanix.com/docs/tasks/debugging/ I feel that the point number 9 ``` In Cargo.toml you can configure panics to abort rather than unwind as follows, to preserve the call stack for the debugger ``` should be on top. Maybe as an optional point. 
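For context on the setting the point refers to: to the best of my reading this is the Cargo profile-level panic strategy, which keeps the call stack intact for the debugger by aborting instead of unwinding. A minimal sketch (the profile choice is assumed):

```toml
# Assumed sketch of the Cargo.toml setting referenced above: abort on
# panic so the debugger sees the original call stack.
[profile.dev]
panic = 'abort'
```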
",0,gdb instruction for adding setting to cargo toml should be on top in i feel that the point number in cargo toml you can configure panics to abort rather than unwind as follows to preserve the call stack for the debugger should be on top maybe as an optional point ,0 119,3835665963.0,IssuesEvent,2016-04-01 15:07:35,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,opened,Active iframe window doesn't consider itself as active if it was reloaded,AREA: client SYSTEM: automations TYPE: bug,"When an iframe is reloaded by changing location or submitting a from, it's `contentWindow` will be reinitialized, but it will be the same window object from top page view (`iframe.contentWindow` before reload === `iframe.contentWindow` after reload). So after iframe reloading it will be still considered as active by top window, but doesn't be considered as active from inside. It leads to infinite activation loop when trying to perform a click in this iframe.",1.0,"Active iframe window doesn't consider itself as active if it was reloaded - When an iframe is reloaded by changing location or submitting a from, it's `contentWindow` will be reinitialized, but it will be the same window object from top page view (`iframe.contentWindow` before reload === `iframe.contentWindow` after reload). So after iframe reloading it will be still considered as active by top window, but doesn't be considered as active from inside. It leads to infinite activation loop when trying to perform a click in this iframe.",1,active iframe window doesn t consider itself as active if it was reloaded when an iframe is reloaded by changing location or submitting a from it s contentwindow will be reinitialized but it will be the same window object from top page view iframe contentwindow before reload iframe contentwindow after reload so after iframe reloading it will be still considered as active by top window but doesn t be considered as active from inside it leads to infinite activation loop when trying to perform a click in this iframe ,1 344992,30779821130.0,IssuesEvent,2023-07-31 09:14:46,wazuh/wazuh,https://api.github.com/repos/wazuh/wazuh,closed,Release 4.5.0 - Alpha 1 - Workload benchmarks metrics,type/test level/subtask,"The following issue aims to run all `workload benchmarks` for the current release candidate, report the results, and open new issues for any encountered errors. ## Workload benchmarks metrics information | | | |-----------------------------------------------|--------------------------------------------| | **Main release candidate issue** |https://github.com/wazuh/wazuh/issues/18058| | **Version** |4.5.0| | **Release candidate #** |Alpha 1| | **Tag** |[v4.5.0-alpha1](https://github.com/wazuh/wazuh/tree/v4.5.0-alpha1)| | **Previous Workload benchmarks metrics issue**|https://github.com/wazuh/wazuh/issues/16975| ## Test configuration All tests will be run and workload performance metrics will be obtained for the following clustered environment configurations: | | | |-------------------|--------------------| | **# Agents** | **# Worker nodes** | |50000|25| ## Test report procedure All individual test checks must be marked as: | | | |---------------------------------|--------------------------------------------| | Pass | The test ran successfully. | | Xfail | The test was expected to fail and it failed. It must be properly justified and reported in an issue. | | Skip | The test was not run. It must be properly justified and reported in an issue. | | Fail | The test failed. 
A new issue must be opened to evaluate and address the problem. | All test results must have one the following statuses: | | | |---------------------------------|--------------------------------------------| | :green_circle: | All checks passed. | | :red_circle: | There is at least one failed check. | | :yellow_circle: | There is at least one expected fail or skipped test and no failures. | Any failing test must be properly addressed with a new issue, detailing the error and the possible cause. It must be included in the `Fixes` section of the current release candidate main issue. Any expected fail or skipped test must have an issue justifying the reason. All auditors must validate the justification for an expected fail or skipped test. An extended report of the test results must be attached as a zip or txt. This report can be used by the auditors to dig deeper into any possible failures and details. ## Conclusions All tests have been executed and the results can be found [here](https://github.com/wazuh/wazuh/issues/18113#issuecomment-1657894781). All tests have passed and the fails have been reported or justified. I therefore conclude that this issue is finished and OK for this release candidate. ## Auditors validation The definition of done for this one is the validation of the conclusions and the test results from all auditors. All checks from below must be accepted in order to close this issue. - [x] @Selutario ",1.0,"Release 4.5.0 - Alpha 1 - Workload benchmarks metrics - The following issue aims to run all `workload benchmarks` for the current release candidate, report the results, and open new issues for any encountered errors. ## Workload benchmarks metrics information | | | |-----------------------------------------------|--------------------------------------------| | **Main release candidate issue** |https://github.com/wazuh/wazuh/issues/18058| | **Version** |4.5.0| | **Release candidate #** |Alpha 1| | **Tag** |[v4.5.0-alpha1](https://github.com/wazuh/wazuh/tree/v4.5.0-alpha1)| | **Previous Workload benchmarks metrics issue**|https://github.com/wazuh/wazuh/issues/16975| ## Test configuration All tests will be run and workload performance metrics will be obtained for the following clustered environment configurations: | | | |-------------------|--------------------| | **# Agents** | **# Worker nodes** | |50000|25| ## Test report procedure All individual test checks must be marked as: | | | |---------------------------------|--------------------------------------------| | Pass | The test ran successfully. | | Xfail | The test was expected to fail and it failed. It must be properly justified and reported in an issue. | | Skip | The test was not run. It must be properly justified and reported in an issue. | | Fail | The test failed. A new issue must be opened to evaluate and address the problem. | All test results must have one the following statuses: | | | |---------------------------------|--------------------------------------------| | :green_circle: | All checks passed. | | :red_circle: | There is at least one failed check. | | :yellow_circle: | There is at least one expected fail or skipped test and no failures. | Any failing test must be properly addressed with a new issue, detailing the error and the possible cause. It must be included in the `Fixes` section of the current release candidate main issue. Any expected fail or skipped test must have an issue justifying the reason. All auditors must validate the justification for an expected fail or skipped test. 
An extended report of the test results must be attached as a zip or txt. This report can be used by the auditors to dig deeper into any possible failures and details. ## Conclusions All tests have been executed and the results can be found [here](https://github.com/wazuh/wazuh/issues/18113#issuecomment-1657894781). All tests have passed and the fails have been reported or justified. I therefore conclude that this issue is finished and OK for this release candidate. ## Auditors validation The definition of done for this one is the validation of the conclusions and the test results from all auditors. All checks from below must be accepted in order to close this issue. - [x] @Selutario ",0,release alpha workload benchmarks metrics the following issue aims to run all workload benchmarks for the current release candidate report the results and open new issues for any encountered errors workload benchmarks metrics information main release candidate issue version release candidate alpha tag previous workload benchmarks metrics issue test configuration all tests will be run and workload performance metrics will be obtained for the following clustered environment configurations agents worker nodes test report procedure all individual test checks must be marked as pass the test ran successfully xfail the test was expected to fail and it failed it must be properly justified and reported in an issue skip the test was not run it must be properly justified and reported in an issue fail the test failed a new issue must be opened to evaluate and address the problem all test results must have one the following statuses green circle all checks passed red circle there is at least one failed check yellow circle there is at least one expected fail or skipped test and no failures any failing test must be properly addressed with a new issue detailing the error and the possible cause it must be included in the fixes section of the current release candidate main issue any expected fail or skipped test must have an issue justifying the reason all auditors must validate the justification for an expected fail or skipped test an extended report of the test results must be attached as a zip or txt this report can be used by the auditors to dig deeper into any possible failures and details conclusions all tests have been executed and the results can be found all tests have passed and the fails have been reported or justified i therefore conclude that this issue is finished and ok for this release candidate auditors validation the definition of done for this one is the validation of the conclusions and the test results from all auditors all checks from below must be accepted in order to close this issue selutario ,0 6006,21880639301.0,IssuesEvent,2022-05-19 14:04:22,gergelytakacs/AutomationShield,https://api.github.com/repos/gergelytakacs/AutomationShield,closed,LQ and more: LINPACK benchmark,AutomationShield common software,A LINPACK benchmark should be added to the timing and memory evaluation. See[ this ](https://en.wikipedia.org/wiki/LINPACK_benchmarks) for a start. Essentially it is about the solution of the set of linear equations given in the matrix form by Ax=b.,1.0,LQ and more: LINPACK benchmark - A LINPACK benchmark should be added to the timing and memory evaluation. See[ this ](https://en.wikipedia.org/wiki/LINPACK_benchmarks) for a start. 
Essentially it is about the solution of the set of linear equations given in the matrix form by Ax=b.,1,lq and more linpack benchmark a linpack benchmark should be added to the timing and memory evaluation see for a start essentially it is about the solution of the set of linear equations given in the matrix form by ax b ,1 4692,17259371555.0,IssuesEvent,2021-07-22 04:14:06,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,closed,[Automation API] Crosswalk API Gateway Fails,area/automation-api kind/bug resolution/fixed," Hey! 👋 When creating an AWS API Gateway using the automation API and the Pulumi Crosswalk library, the provisioning process fails. This behavior _does not_ occur when attempting to run the same code outside of the automation API. ## Expected Behavior The automation API should be able to create an AWS API Gateway using the Pulumi Crosswalk (`@pulumi/awsx`) library. ## Current Behavior While running the script, I get this error: ``` CommandError: code: 255 stdout: Updating (my-company/development-blake-api-gateway) View Live: https://app.pulumi.com/my-company/my-company-foundations-kafka/development-blake-api-gateway/updates/2 + pulumi:pulumi:Stack my-company-foundations-kafka-development-blake-api-gateway creating + aws:apigateway:x:API hello-world creating + aws:iam:Role hello-world1923ec7f creating + aws:iam:Role hello-world1923ec7f created + aws:lambda:Function hello-world1923ec7f creating + aws:iam:RolePolicyAttachment hello-world1923ec7f-32be53a2 creating + aws:iam:RolePolicyAttachment hello-world1923ec7f-32be53a2 created + aws:lambda:Function hello-world1923ec7f created + aws:apigateway:RestApi hello-world creating + aws:apigateway:RestApi hello-world creating error: 1 error occurred: + aws:apigateway:RestApi hello-world **creating failed** error: 1 error occurred: + pulumi:pulumi:Stack my-company-foundations-kafka-development-blake-api-gateway creating error: update failed + pulumi:pulumi:Stack my-company-foundations-kafka-development-blake-api-gateway **creating failed** 1 error Diagnostics: pulumi:pulumi:Stack (my-company-foundations-kafka-development-blake-api-gateway): error: update failed aws:apigateway:RestApi (hello-world): error: 1 error occurred: * creating urn:pulumi:development-blake-api-gateway::my-company-foundations-kafka::aws:apigateway:x:API$aws:apigateway/restApi:RestApi::hello-world: 1 error occurred: * error creating API Gateway specification: BadRequestException: Errors found during import: Unable to put integration on 'GET' for resource at path '/health': Invalid ARN specified in the request Resources: + 5 created Duration: 22s stderr: err?: at Object.createCommandError (/Users/blake/dev/my-company/my-company-foundations-kafka/node_modules/@pulumi/pulumi/x/automation/errors.js:71:17) at ChildProcess. 
(/Users/blake/dev/my-company/my-company-foundations-kafka/node_modules/@pulumi/pulumi/x/automation/cmd.js:63:40) at ChildProcess.emit (events.js:315:20) at ChildProcess.EventEmitter.emit (domain.js:485:12) at Process.ChildProcess._handle.onexit (internal/child_process.js:276:12) { commandResult: CommandResult { stdout: 'Updating (my-company/development-blake-api-gateway)\n' + '\n' + 'View Live: https://app.pulumi.com/my-company/my-company-foundations-kafka/development-blake-api-gateway/updates/2\n' + '\n' + '\n' + ' + pulumi:pulumi:Stack my-company-foundations-kafka-development-blake-api-gateway creating \n' + ' + aws:apigateway:x:API hello-world creating \n' + ' + aws:iam:Role hello-world1923ec7f creating \n' + ' + aws:iam:Role hello-world1923ec7f created \n' + ' + aws:lambda:Function hello-world1923ec7f creating \n' + ' + aws:iam:RolePolicyAttachment hello-world1923ec7f-32be53a2 creating \n' + ' + aws:iam:RolePolicyAttachment hello-world1923ec7f-32be53a2 created \n' + ' + aws:lambda:Function hello-world1923ec7f created \n' + ' + aws:apigateway:RestApi hello-world creating \n' + ' + aws:apigateway:RestApi hello-world creating error: 1 error occurred:\n' + ' + aws:apigateway:RestApi hello-world **creating failed** error: 1 error occurred:\n' + ' + pulumi:pulumi:Stack my-company-foundations-kafka-development-blake-api-gateway creating error: update failed\n' + ' + pulumi:pulumi:Stack my-company-foundations-kafka-development-blake-api-gateway **creating failed** 1 error\n' + ' \n' + 'Diagnostics:\n' + ' pulumi:pulumi:Stack (my-company-foundations-kafka-development-blake-api-gateway):\n' + ' error: update failed\n' + ' \n' + ' aws:apigateway:RestApi (hello-world):\n' + ' error: 1 error occurred:\n' + ' \t* creating urn:pulumi:development-blake-api-gateway::my-company-foundations-kafka::aws:apigateway:x:API$aws:apigateway/restApi:RestApi::hello-world: 1 error occurred:\n' + ' \t* error creating API Gateway specification: BadRequestException: Errors found during import:\n' + "" \tUnable to put integration on 'GET' for resource at path '/health': Invalid ARN specified in the request\n"" + ' \n' + 'Resources:\n' + ' + 5 created\n' + '\n' + 'Duration: 22s\n' + '\n', stderr: '', code: 255, err: undefined } } ``` After this fails, occasionally a `pulumi destroy` would also fail. I couldn't recreate the error this morning, figured I'd document it just in case it's helpful in any way. 
After looking into the error a bit more, it seemed like one of the updates would get stuck in the pending state, and I'd have to: - `pulumi stack export >> stack.json` - Remove the pending operation - Manually delete the API Gateway via the AWS CLI - `pulumi stack import --file stack.json` - `pulumi refresh` - `pulumi destroy` ## Steps to Reproduce Here's a code sample that fails: ```ts import { LocalWorkspace } from '@pulumi/pulumi/x/automation'; import * as awsx from '@pulumi/awsx'; async function createInfra(): Promise<{ endpointUrl: string; }> { const pulumiProgram = async () => { const endpoint = new awsx.apigateway.API('hello-world', { routes: [ { path: '/health', method: 'GET', eventHandler: async () => { return { statusCode: 200, body: 'Hello world' }; } } ] }); return { endpointUrl: endpoint.url }; }; const args = { stackName: 'my-company/development-blake-api-gateway', projectName: 'my-company-foundations-kafka', program: pulumiProgram }; const stack = await LocalWorkspace.createOrSelectStack(args); await stack.workspace.installPlugin('aws', 'v3.6.1'); await stack.setConfig('aws:region', { value: 'us-east-2' }); await stack.refresh({ onOutput: console.info }); await stack.up({ onOutput: console.info }); const stackOutputs = await stack.outputs(); return { endpointUrl: stackOutputs.endpointUrl.value }; } createInfra().catch((err) => console.error(err)); ``` I'm running the above using a command like: `npx ts-node file.ts` ## Context (Environment) We have been using the automation API as a way to stand up infra for integration tests. As of now, we're unable to do this with an API Gateway using the Pulumi Crosswalk library. Environment details: - `@pulumi/aws` v3.23.0 - `@pulumi/awsx` v0.22.0 - `@pulumi/pulumi` v2.17.1 - `typescript` v4.1.3 - macOS Catalina v10.15.17 *Update* After chatting in Slack with @pierskarsenbarg, he suggested switching to using a `LocalProgram` ([example](https://github.com/pulumi/automation-api-examples/tree/main/nodejs/localProgram-tsnode-mochatests)) instead of an `InlineProgram`. Doing this, we were able to get the API Gateway up and running. However, we would still like to use the `InlineProgram` in the future for similar scenarios if possible.",1.0,"[Automation API] Crosswalk API Gateway Fails - Hey! 👋 When creating an AWS API Gateway using the automation API and the Pulumi Crosswalk library, the provisioning process fails. This behavior _does not_ occur when attempting to run the same code outside of the automation API. ## Expected Behavior The automation API should be able to create an AWS API Gateway using the Pulumi Crosswalk (`@pulumi/awsx`) library. 
## Current Behavior While running the script, I get this error: ``` CommandError: code: 255 stdout: Updating (my-company/development-blake-api-gateway) View Live: https://app.pulumi.com/my-company/my-company-foundations-kafka/development-blake-api-gateway/updates/2 + pulumi:pulumi:Stack my-company-foundations-kafka-development-blake-api-gateway creating + aws:apigateway:x:API hello-world creating + aws:iam:Role hello-world1923ec7f creating + aws:iam:Role hello-world1923ec7f created + aws:lambda:Function hello-world1923ec7f creating + aws:iam:RolePolicyAttachment hello-world1923ec7f-32be53a2 creating + aws:iam:RolePolicyAttachment hello-world1923ec7f-32be53a2 created + aws:lambda:Function hello-world1923ec7f created + aws:apigateway:RestApi hello-world creating + aws:apigateway:RestApi hello-world creating error: 1 error occurred: + aws:apigateway:RestApi hello-world **creating failed** error: 1 error occurred: + pulumi:pulumi:Stack my-company-foundations-kafka-development-blake-api-gateway creating error: update failed + pulumi:pulumi:Stack my-company-foundations-kafka-development-blake-api-gateway **creating failed** 1 error Diagnostics: pulumi:pulumi:Stack (my-company-foundations-kafka-development-blake-api-gateway): error: update failed aws:apigateway:RestApi (hello-world): error: 1 error occurred: * creating urn:pulumi:development-blake-api-gateway::my-company-foundations-kafka::aws:apigateway:x:API$aws:apigateway/restApi:RestApi::hello-world: 1 error occurred: * error creating API Gateway specification: BadRequestException: Errors found during import: Unable to put integration on 'GET' for resource at path '/health': Invalid ARN specified in the request Resources: + 5 created Duration: 22s stderr: err?: at Object.createCommandError (/Users/blake/dev/my-company/my-company-foundations-kafka/node_modules/@pulumi/pulumi/x/automation/errors.js:71:17) at ChildProcess. 
(/Users/blake/dev/my-company/my-company-foundations-kafka/node_modules/@pulumi/pulumi/x/automation/cmd.js:63:40) at ChildProcess.emit (events.js:315:20) at ChildProcess.EventEmitter.emit (domain.js:485:12) at Process.ChildProcess._handle.onexit (internal/child_process.js:276:12) { commandResult: CommandResult { stdout: 'Updating (my-company/development-blake-api-gateway)\n' + '\n' + 'View Live: https://app.pulumi.com/my-company/my-company-foundations-kafka/development-blake-api-gateway/updates/2\n' + '\n' + '\n' + ' + pulumi:pulumi:Stack my-company-foundations-kafka-development-blake-api-gateway creating \n' + ' + aws:apigateway:x:API hello-world creating \n' + ' + aws:iam:Role hello-world1923ec7f creating \n' + ' + aws:iam:Role hello-world1923ec7f created \n' + ' + aws:lambda:Function hello-world1923ec7f creating \n' + ' + aws:iam:RolePolicyAttachment hello-world1923ec7f-32be53a2 creating \n' + ' + aws:iam:RolePolicyAttachment hello-world1923ec7f-32be53a2 created \n' + ' + aws:lambda:Function hello-world1923ec7f created \n' + ' + aws:apigateway:RestApi hello-world creating \n' + ' + aws:apigateway:RestApi hello-world creating error: 1 error occurred:\n' + ' + aws:apigateway:RestApi hello-world **creating failed** error: 1 error occurred:\n' + ' + pulumi:pulumi:Stack my-company-foundations-kafka-development-blake-api-gateway creating error: update failed\n' + ' + pulumi:pulumi:Stack my-company-foundations-kafka-development-blake-api-gateway **creating failed** 1 error\n' + ' \n' + 'Diagnostics:\n' + ' pulumi:pulumi:Stack (my-company-foundations-kafka-development-blake-api-gateway):\n' + ' error: update failed\n' + ' \n' + ' aws:apigateway:RestApi (hello-world):\n' + ' error: 1 error occurred:\n' + ' \t* creating urn:pulumi:development-blake-api-gateway::my-company-foundations-kafka::aws:apigateway:x:API$aws:apigateway/restApi:RestApi::hello-world: 1 error occurred:\n' + ' \t* error creating API Gateway specification: BadRequestException: Errors found during import:\n' + "" \tUnable to put integration on 'GET' for resource at path '/health': Invalid ARN specified in the request\n"" + ' \n' + 'Resources:\n' + ' + 5 created\n' + '\n' + 'Duration: 22s\n' + '\n', stderr: '', code: 255, err: undefined } } ``` After this fails, occasionally a `pulumi destroy` would also fail. I couldn't recreate the error this morning, figured I'd document it just in case it's helpful in any way. 
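For anyone hitting the same stuck state, the manual cleanup described in the next paragraph can be scripted roughly like this. This is only a sketch: the `jq` filter over `deployment.pending_operations` and the `<api-id>` placeholder are my assumptions, not part of the steps actually run.

```sh
# Rough sketch of the recovery flow, assuming the stuck operation is
# recorded under deployment.pending_operations in the exported state
pulumi stack export > stack.json
jq 'del(.deployment.pending_operations)' stack.json > stack.clean.json
aws apigateway delete-rest-api --rest-api-id <api-id>   # remove the orphaned gateway by hand
pulumi stack import --file stack.clean.json
pulumi refresh --yes
pulumi destroy --yes
```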
After looking into the error a bit more, it seemed like one of the updates would get stuck in the pending state, and I'd have to: - `pulumi stack export >> stack.json` - Remove the pending operation - Manually delete the API Gateway via the AWS CLI - `pulumi stack import --file stack.json` - `pulumi refresh` - `pulumi destroy` ## Steps to Reproduce Here's a code sample that fails: ```ts import { LocalWorkspace } from '@pulumi/pulumi/x/automation'; import * as awsx from '@pulumi/awsx'; async function createInfra(): Promise<{ endpointUrl: string; }> { const pulumiProgram = async () => { const endpoint = new awsx.apigateway.API('hello-world', { routes: [ { path: '/health', method: 'GET', eventHandler: async () => { return { statusCode: 200, body: 'Hello world' }; } } ] }); return { endpointUrl: endpoint.url }; }; const args = { stackName: 'my-company/development-blake-api-gateway', projectName: 'my-company-foundations-kafka', program: pulumiProgram }; const stack = await LocalWorkspace.createOrSelectStack(args); await stack.workspace.installPlugin('aws', 'v3.6.1'); await stack.setConfig('aws:region', { value: 'us-east-2' }); await stack.refresh({ onOutput: console.info }); await stack.up({ onOutput: console.info }); const stackOutputs = await stack.outputs(); return { endpointUrl: stackOutputs.endpointUrl.value }; } createInfra().catch((err) => console.error(err)); ``` I'm running the above using a command like: `npx ts-node file.ts` ## Context (Environment) We have been using the automation API as a way to stand up infra for integration tests. As of now, we're unable to do this with an API Gateway using the Pulumi Crosswalk library. Environment details: - `@pulumi/aws` v3.23.0 - `@pulumi/awsx` v0.22.0 - `@pulumi/pulumi` v2.17.1 - `typescript` v4.1.3 - macOS Catalina v10.15.17 *Update* After chatting in Slack with @pierskarsenbarg, he suggested switching to using a `LocalProgram` ([example](https://github.com/pulumi/automation-api-examples/tree/main/nodejs/localProgram-tsnode-mochatests)) instead of an `InlineProgram`. Doing this, we were able to get the API Gateway up and running. 
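For reference, a minimal sketch of that `LocalProgram` variant follows. The `workDir` value is hypothetical and just needs to point at a directory containing a regular Pulumi project (`Pulumi.yaml` plus the program); the rest mirrors the inline version above:

```ts
import { LocalWorkspace } from '@pulumi/pulumi/x/automation';

async function createInfraFromLocalProgram(): Promise<string> {
  // Point the workspace at an on-disk Pulumi project instead of
  // passing an inline `program` function.
  const stack = await LocalWorkspace.createOrSelectStack({
    stackName: 'my-company/development-blake-api-gateway',
    workDir: './infra' // hypothetical path to the local Pulumi project
  });
  await stack.setConfig('aws:region', { value: 'us-east-2' });
  const result = await stack.up({ onOutput: console.info });
  return result.outputs.endpointUrl.value;
}
```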
However, we would still like to use the `InlineProgram` in the future for similar scenarios if possible.",1, crosswalk api gateway fails hey 👋 when creating an aws api gateway using the automation api and the pulumi crosswalk library the provisioning process fails this behavior does not occur when attempting to run the same code outside of the automation api expected behavior the automation api should be able to create an aws api gateway using the pulumi crosswalk pulumi awsx library current behavior while running the script i get this error commanderror code stdout updating my company development blake api gateway view live pulumi pulumi stack my company foundations kafka development blake api gateway creating aws apigateway x api hello world creating aws iam role hello creating aws iam role hello created aws lambda function hello creating aws iam rolepolicyattachment hello creating aws iam rolepolicyattachment hello created aws lambda function hello created aws apigateway restapi hello world creating aws apigateway restapi hello world creating error error occurred aws apigateway restapi hello world creating failed error error occurred pulumi pulumi stack my company foundations kafka development blake api gateway creating error update failed pulumi pulumi stack my company foundations kafka development blake api gateway creating failed error diagnostics pulumi pulumi stack my company foundations kafka development blake api gateway error update failed aws apigateway restapi hello world error error occurred creating urn pulumi development blake api gateway my company foundations kafka aws apigateway x api aws apigateway restapi restapi hello world error occurred error creating api gateway specification badrequestexception errors found during import unable to put integration on get for resource at path health invalid arn specified in the request resources created duration stderr err at object createcommanderror users blake dev my company my company foundations kafka node modules pulumi pulumi x automation errors js at childprocess users blake dev my company my company foundations kafka node modules pulumi pulumi x automation cmd js at childprocess emit events js at childprocess eventemitter emit domain js at process childprocess handle onexit internal child process js commandresult commandresult stdout updating my company development blake api gateway n n view live n n pulumi pulumi stack my company foundations kafka development blake api gateway creating n aws apigateway x api hello world creating n aws iam role hello creating n aws iam role hello created n aws lambda function hello creating n aws iam rolepolicyattachment hello creating n aws iam rolepolicyattachment hello created n aws lambda function hello created n aws apigateway restapi hello world creating n aws apigateway restapi hello world creating error error occurred n aws apigateway restapi hello world creating failed error error occurred n pulumi pulumi stack my company foundations kafka development blake api gateway creating error update failed n pulumi pulumi stack my company foundations kafka development blake api gateway creating failed error n n diagnostics n pulumi pulumi stack my company foundations kafka development blake api gateway n error update failed n n aws apigateway restapi hello world n error error occurred n t creating urn pulumi development blake api gateway my company foundations kafka aws apigateway x api aws apigateway restapi restapi hello world error occurred n t error creating api gateway specification 
badrequestexception errors found during import n tunable to put integration on get for resource at path health invalid arn specified in the request n n resources n created n n duration n n stderr code err undefined after this fails occasionally a pulumi destroy would also fail i couldn t recreate the error this morning figured i d document it just in case it s helpful in any way after looking into the error a bit more it seemed like one of the updates would get stuck in the pending state and i d have to pulumi stack export stack json remove the pending operation manually delete the api gateway via the aws cli pulumi stack import file stack json pulumi refresh pulumi destroy steps to reproduce here s a code sample that fails ts import localworkspace from pulumi pulumi x automation import as awsx from pulumi awsx async function createinfra promise endpointurl string const pulumiprogram async const endpoint new awsx apigateway api hello world routes path health method get eventhandler async return statuscode body hello world return endpointurl endpoint url const args stackname my company development blake api gateway projectname my company foundations kafka program pulumiprogram const stack await localworkspace createorselectstack args await stack workspace installplugin aws await stack setconfig aws region value us east await stack refresh onoutput console info await stack up onoutput console info const stackoutputs await stack outputs return endpointurl stackoutputs endpointurl value createinfra catch err console error err i m running the above using a command like npx ts node file ts context environment we have been using the automation api as a way to stand up infra for integration tests as of now we re unable to do this with an api gateway using the pulumi crosswalk library environment details pulumi aws pulumi awsx pulumi pulumi typescript macos catalina update after chatting in slack with pierskarsenbarg he suggested switching to using a localprogram instead of an inlineprogram doing this we were able to get the api gateway up and running however we would still like to use the inlineprogram in the future for similar scenarios if possible ,1 191314,14593789321.0,IssuesEvent,2020-12-20 01:06:19,github-vet/rangeloop-pointer-findings,https://api.github.com/repos/github-vet/rangeloop-pointer-findings,closed,kubernetes/perf-tests: clusterloader2/pkg/measurement/common/slos/api_responsiveness_prometheus_test.go; 7 LoC,fresh test tiny," Found a possible issue in [kubernetes/perf-tests](https://www.github.com/kubernetes/perf-tests) at [clusterloader2/pkg/measurement/common/slos/api_responsiveness_prometheus_test.go](https://github.com/kubernetes/perf-tests/blob/8180a946b47f1460bcdd7dc2d2a1ee1d2a9dba38/clusterloader2/pkg/measurement/common/slos/api_responsiveness_prometheus_test.go#L437-L443) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > reference to item is reassigned at line 442 [Click here to see the code in its original context.](https://github.com/kubernetes/perf-tests/blob/8180a946b47f1460bcdd7dc2d2a1ee1d2a9dba38/clusterloader2/pkg/measurement/common/slos/api_responsiveness_prometheus_test.go#L437-L443)
Click here to show the 7 line(s) of Go which triggered the analyzer. ```go for _, item := range perfData.DataItems { items[toKey( item.Labels[""Resource""], item.Labels[""Subresource""], item.Labels[""Verb""], item.Labels[""Scope""])] = &item } ```
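For context, this is the classic Go range-variable aliasing pattern: `&item` takes the address of the single loop variable, so every map entry can end up pointing at the same (last) element. A typical mitigation, shown here only as an illustrative sketch rather than the project's actual fix, is to copy the loop variable before taking its address:

```go
for _, item := range perfData.DataItems {
    item := item // copy, so each map value points at a distinct variable
    items[toKey(
        item.Labels[""Resource""],
        item.Labels[""Subresource""],
        item.Labels[""Verb""],
        item.Labels[""Scope""])] = &item
}
```

(Go 1.22 later changed `for` loops so that each iteration gets a fresh variable, which addresses this class of finding at the language level.)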
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 8180a946b47f1460bcdd7dc2d2a1ee1d2a9dba38 ",1.0,"kubernetes/perf-tests: clusterloader2/pkg/measurement/common/slos/api_responsiveness_prometheus_test.go; 7 LoC - Found a possible issue in [kubernetes/perf-tests](https://www.github.com/kubernetes/perf-tests) at [clusterloader2/pkg/measurement/common/slos/api_responsiveness_prometheus_test.go](https://github.com/kubernetes/perf-tests/blob/8180a946b47f1460bcdd7dc2d2a1ee1d2a9dba38/clusterloader2/pkg/measurement/common/slos/api_responsiveness_prometheus_test.go#L437-L443) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > reference to item is reassigned at line 442 [Click here to see the code in its original context.](https://github.com/kubernetes/perf-tests/blob/8180a946b47f1460bcdd7dc2d2a1ee1d2a9dba38/clusterloader2/pkg/measurement/common/slos/api_responsiveness_prometheus_test.go#L437-L443)
Click here to show the 7 line(s) of Go which triggered the analyzer. ```go for _, item := range perfData.DataItems { items[toKey( item.Labels[""Resource""], item.Labels[""Subresource""], item.Labels[""Verb""], item.Labels[""Scope""])] = &item } ```
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 8180a946b47f1460bcdd7dc2d2a1ee1d2a9dba38 ",0,kubernetes perf tests pkg measurement common slos api responsiveness prometheus test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message reference to item is reassigned at line click here to show the line s of go which triggered the analyzer go for item range perfdata dataitems items tokey item labels item labels item labels item labels item leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id ,0 9401,28211831713.0,IssuesEvent,2023-04-05 05:27:47,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,closed,[CDCSDK] Out of memory error with packed columns + CDC (enabled on xcluster source),kind/bug priority/medium area/cdcsdk status/awaiting-triage qa_automation,"Jira Link: [DB-5681](https://yugabyte.atlassian.net/browse/DB-5681) ### Description http://stress.dev.yugabyte.com/stress_test/26d285e5-5570-463c-a107-e17b4e37d17a ![Profile](https://user-images.githubusercontent.com/109518123/221798523-b4ad7ac7-fe8c-417e-956a-95b96261877f.png) [AllLogs.zip](https://github.com/yugabyte/yugabyte-db/files/10848120/AllLogs.zip) ``` 99062 2023-02-26 20:51:18,242 INFO YugabyteDB|db_cdc|streaming|3 Received throwable to check for retry: {} [io.debezium.connector.yugabytedb.YugabyteDBErrorHandler] 99063 java.lang.OutOfMemoryError: Java heap space 99064 at java.base/java.util.Arrays.copyOf(Arrays.java:3745) 99065 at java.base/java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:172) 99066 at java.base/java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:538) 99067 at java.base/java.lang.StringBuffer.append(StringBuffer.java:317) 99068 at org.apache.log4j.helpers.PatternParser$LiteralPatternConverter.format(PatternParser.java:389) 99069 at org.apache.log4j.PatternLayout.format(PatternLayout.java:510) 99070 at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:303) 99071 at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:353) 99072 at org.apache.log4j.WriterAppender.append(WriterAppender.java:156) 99073 at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:232) 99074 at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:65) 99075 at org.apache.log4j.Category.callAppenders(Category.java:206) 99076 at org.apache.log4j.Category.forcedLog(Category.java:384) 99077 at org.apache.log4j.Category.log(Category.java:810) 99078 at org.slf4j.impl.Reload4jLoggerAdapter.debug(Reload4jLoggerAdapter.java:209) 99079 at io.debezium.connector.yugabytedb.YugabyteDBStreamingChangeEventSource.getChanges2(YugabyteDBStreamingChangeEventSource.java:505) 99080 at io.debezium.connector.yugabytedb.YugabyteDBStreamingChangeEventSource.execute(YugabyteDBStreamingChangeEventSource.java:138) 99081 at 
io.debezium.connector.yugabytedb.YugabyteDBStreamingChangeEventSource.execute(YugabyteDBStreamingChangeEventSource.java:47) 99082 at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:174) 99083 at io.debezium.connector.yugabytedb.YugabyteDBChangeEventSourceCoordinator.executeChangeEventSources(YugabyteDBChangeEventSourceCoordinator.java:135) 99084 at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:109) 99085 at io.debezium.pipeline.ChangeEventSourceCoordinator$$Lambda$1102/0x000000084098f840.run(Unknown Source) 99086 at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) 99087 at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) 99088 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 99089 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 99090 at java.base/java.lang.Thread.run(Thread.java:829) 99091 2023-02-26 20:51:12,171 ERROR || [Consumer clientId=connector-consumer-jdbc-sink-test_cdc_8474a0-0, groupId=connect-jdbc-sink-test_cdc_8474a0] Heartbeat thread failed due to unexpected error [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 99092 java.lang.OutOfMemoryError: Java heap space 99093 at java.base/java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:61) 99094 at java.base/java.nio.ByteBuffer.allocate(ByteBuffer.java:348) 99095 at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30) 99096 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:113) 99097 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 99098 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 99099 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 99100 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 99101 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 99102 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 99103 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265) 99104 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:306) 99105 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1433) ``` Also: ``` 99312 2023-02-26 20:51:34,153 ERROR || WorkerSourceTask{id=ybconnector_cdc_066b50_test_cdc_8474a0-0} failed to send record to db_cdc.public.test_cdc_8474a0: [org.apache.kafka.connect.runtime.WorkerSourceTask] 99313 org.apache.kafka.common.KafkaException: Producer is closed forcefully. 99314 at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:781) 99315 at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:768) 99316 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:283) 99317 at java.base/java.lang.Thread.run(Thread.java:829) ``` As per discussion with @vaibhav-yb, raising this to track it and check whether something is not getting cleaned up. Could there be another kind of leak?
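As a generic way to gather evidence for the suspected leak (standard JDK and Kafka tooling, nothing specific to this connector; the paths, heap size, and `<connect-pid>` below are placeholders):

```sh
# Capture a live-object heap dump from the running Connect worker
jmap -dump:live,format=b,file=/tmp/connect-worker.hprof <connect-pid>

# Or have the JVM write a dump automatically on the next OOM
export KAFKA_HEAP_OPTS=""-Xmx4g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/connect-oom.hprof""
```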
We hadn’t seen this case fail for a long time apart from 2 recent runs ### Source connector version quay.io/yugabyte/debezium-connector:latest v1.9.5.y.16 ### Connector configuration ``` adding yb connector stream_id='349e045d05a94c6eb50541a553e59431' db_name='cdc_066b50' connector_host='172.151.31.240' table_list=['test_cdc_8474a0'] 2023-02-26 19:21:04,123:DEBUG: add connector connector_name='ybconnector_cdc_066b50_test_cdc_8474a0' stream_id='349e045d05a94c6eb50541a553e59431' db_name='cdc_066b50' connector_host='172.151.31.240' table_list=['test_cdc_8474a0'] {'name': 'ybconnector_cdc_066b50_test_cdc_8474a0', 'config': {'connector.class': 'io.debezium.connector.yugabytedb.YugabyteDBConnector', 'database.hostname': '172.151.16.206', 'database.master.addresses': '172.151.24.64:7100,172.151.22.133:7100,172.151.16.206:7100', 'database.port': 5433, 'database.masterhost': '172.151.16.206', 'database.masterport': '7100', 'database.user': 'yugabyte', 'database.password': 'yugabyte', 'database.dbname': 'cdc_066b50', 'database.server.name': 'db_cdc', 'database.streamid': '349e045d05a94c6eb50541a553e59431', 'snapshot.mode': 'never', 'admin.operation.timeout.ms': 600000, 'socket.read.timeout.ms': 600000, 'max.connector.retries': '10', 'operation.timeout.ms': 600000, 'topic.creation.default.compression.type': 'lz4', 'topic.creation.default.cleanup.policy': 'delete', 'topic.creation.default.partitions': '2', 'topic.creation.default.replication.factor': '1', 'tasks.max': '10', 'table.include.list': 'public.test_cdc_8474a0'}} ``` ### YugabyteDB version 2.17.3.0-b6 [DB-5681]: https://yugabyte.atlassian.net/browse/DB-5681?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ",1.0,"[CDCSDK] Out of memory error with packed columns + CDC (enabled on xcluster source) - Jira Link: [DB-5681](https://yugabyte.atlassian.net/browse/DB-5681) ### Description http://stress.dev.yugabyte.com/stress_test/26d285e5-5570-463c-a107-e17b4e37d17a ![Profile](https://user-images.githubusercontent.com/109518123/221798523-b4ad7ac7-fe8c-417e-956a-95b96261877f.png) [AllLogs.zip](https://github.com/yugabyte/yugabyte-db/files/10848120/AllLogs.zip) ``` 99062 2023-02-26 20:51:18,242 INFO YugabyteDB|db_cdc|streaming|3 Received throwable to check for retry: {} [io.debezium.connector.yugabytedb.YugabyteDBErrorHandler] 99063 java.lang.OutOfMemoryError: Java heap space 99064 at java.base/java.util.Arrays.copyOf(Arrays.java:3745) 99065 at java.base/java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:172) 99066 at java.base/java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:538) 99067 at java.base/java.lang.StringBuffer.append(StringBuffer.java:317) 99068 at org.apache.log4j.helpers.PatternParser$LiteralPatternConverter.format(PatternParser.java:389) 99069 at org.apache.log4j.PatternLayout.format(PatternLayout.java:510) 99070 at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:303) 99071 at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:353) 99072 at org.apache.log4j.WriterAppender.append(WriterAppender.java:156) 99073 at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:232) 99074 at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:65) 99075 at org.apache.log4j.Category.callAppenders(Category.java:206) 99076 at org.apache.log4j.Category.forcedLog(Category.java:384) 99077 at org.apache.log4j.Category.log(Category.java:810) 99078 at 
org.slf4j.impl.Reload4jLoggerAdapter.debug(Reload4jLoggerAdapter.java:209) 99079 at io.debezium.connector.yugabytedb.YugabyteDBStreamingChangeEventSource.getChanges2(YugabyteDBStreamingChangeEventSource.java:505) 99080 at io.debezium.connector.yugabytedb.YugabyteDBStreamingChangeEventSource.execute(YugabyteDBStreamingChangeEventSource.java:138) 99081 at io.debezium.connector.yugabytedb.YugabyteDBStreamingChangeEventSource.execute(YugabyteDBStreamingChangeEventSource.java:47) 99082 at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:174) 99083 at io.debezium.connector.yugabytedb.YugabyteDBChangeEventSourceCoordinator.executeChangeEventSources(YugabyteDBChangeEventSourceCoordinator.java:135) 99084 at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:109) 99085 at io.debezium.pipeline.ChangeEventSourceCoordinator$$Lambda$1102/0x000000084098f840.run(Unknown Source) 99086 at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) 99087 at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) 99088 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 99089 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 99090 at java.base/java.lang.Thread.run(Thread.java:829) 99091 2023-02-26 20:51:12,171 ERROR || [Consumer clientId=connector-consumer-jdbc-sink-test_cdc_8474a0-0, groupId=connect-jdbc-sink-test_cdc_8474a0] Heartbeat thread failed due to unexpected error [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 99092 java.lang.OutOfMemoryError: Java heap space 99093 at java.base/java.nio.HeapByteBuffer.(HeapByteBuffer.java:61) 99094 at java.base/java.nio.ByteBuffer.allocate(ByteBuffer.java:348) 99095 at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30) 99096 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:113) 99097 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 99098 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 99099 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 99100 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 99101 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 99102 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 99103 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265) 99104 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:306) 99105 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1433) ``` Also: ``` 99312 2023-02-26 20:51:34,153 ERROR || WorkerSourceTask{id=ybconnector_cdc_066b50_test_cdc_8474a0-0} failed to send record to db_cdc.public.test_cdc_8474a0: [org.apache.kafka.connect.runtime.WorkerSourceTask] 99313 org.apache.kafka.common.KafkaException: Producer is closed forcefully. 
99314 at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:781) 99315 at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:768) 99316 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:283) 99317 at java.base/java.lang.Thread.run(Thread.java:829) ``` As per discussion with @vaibhav-yb , raising this to track to look to check if something is not getting cleaned up Like any other leak possible ? We hadn’t seen this case fail for a long time apart from 2 recent runs ### Source connector version quay.io/yugabyte/debezium-connector:latest v1.9.5.y.16 ### Connector configuration ``` adding yb connector stream_id='349e045d05a94c6eb50541a553e59431' db_name='cdc_066b50' connector_host='172.151.31.240' table_list=['test_cdc_8474a0'] 2023-02-26 19:21:04,123:DEBUG: add connector connector_name='ybconnector_cdc_066b50_test_cdc_8474a0' stream_id='349e045d05a94c6eb50541a553e59431' db_name='cdc_066b50' connector_host='172.151.31.240' table_list=['test_cdc_8474a0'] {'name': 'ybconnector_cdc_066b50_test_cdc_8474a0', 'config': {'connector.class': 'io.debezium.connector.yugabytedb.YugabyteDBConnector', 'database.hostname': '172.151.16.206', 'database.master.addresses': '172.151.24.64:7100,172.151.22.133:7100,172.151.16.206:7100', 'database.port': 5433, 'database.masterhost': '172.151.16.206', 'database.masterport': '7100', 'database.user': 'yugabyte', 'database.password': 'yugabyte', 'database.dbname': 'cdc_066b50', 'database.server.name': 'db_cdc', 'database.streamid': '349e045d05a94c6eb50541a553e59431', 'snapshot.mode': 'never', 'admin.operation.timeout.ms': 600000, 'socket.read.timeout.ms': 600000, 'max.connector.retries': '10', 'operation.timeout.ms': 600000, 'topic.creation.default.compression.type': 'lz4', 'topic.creation.default.cleanup.policy': 'delete', 'topic.creation.default.partitions': '2', 'topic.creation.default.replication.factor': '1', 'tasks.max': '10', 'table.include.list': 'public.test_cdc_8474a0'}} ``` ### YugabyteDB version 2.17.3.0-b6 [DB-5681]: https://yugabyte.atlassian.net/browse/DB-5681?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ",1, out of memory error with packed columns cdc enabled on xcluster source jira link description info yugabytedb db cdc streaming received throwable to check for retry java lang outofmemoryerror java heap space at java base java util arrays copyof arrays java at java base java lang abstractstringbuilder ensurecapacityinternal abstractstringbuilder java at java base java lang abstractstringbuilder append abstractstringbuilder java at java base java lang stringbuffer append stringbuffer java at org apache helpers patternparser literalpatternconverter format patternparser java at org apache patternlayout format patternlayout java at org apache writerappender subappend writerappender java at org apache dailyrollingfileappender subappend dailyrollingfileappender java at org apache writerappender append writerappender java at org apache appenderskeleton doappend appenderskeleton java at org apache helpers appenderattachableimpl appendlooponappenders appenderattachableimpl java at org apache category callappenders category java at org apache category forcedlog category java at org apache category log category java at org impl debug java at io debezium connector yugabytedb yugabytedbstreamingchangeeventsource yugabytedbstreamingchangeeventsource java at io debezium connector yugabytedb 
yugabytedbstreamingchangeeventsource execute yugabytedbstreamingchangeeventsource java at io debezium connector yugabytedb yugabytedbstreamingchangeeventsource execute yugabytedbstreamingchangeeventsource java at io debezium pipeline changeeventsourcecoordinator streamevents changeeventsourcecoordinator java at io debezium connector yugabytedb yugabytedbchangeeventsourcecoordinator executechangeeventsources yugabytedbchangeeventsourcecoordinator java at io debezium pipeline changeeventsourcecoordinator lambda start changeeventsourcecoordinator java at io debezium pipeline changeeventsourcecoordinator lambda run unknown source at java base java util concurrent executors runnableadapter call executors java at java base java util concurrent futuretask run futuretask java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java error heartbeat thread failed due to unexpected error java lang outofmemoryerror java heap space at java base java nio heapbytebuffer heapbytebuffer java at java base java nio bytebuffer allocate bytebuffer java at org apache kafka common memory memorypool tryallocate memorypool java at org apache kafka common network networkreceive readfrom networkreceive java at org apache kafka common network kafkachannel receive kafkachannel java at org apache kafka common network kafkachannel read kafkachannel java at org apache kafka common network selector attemptread selector java at org apache kafka common network selector pollselectionkeys selector java at org apache kafka common network selector poll selector java at org apache kafka clients networkclient poll networkclient java at org apache kafka clients consumer internals consumernetworkclient poll consumernetworkclient java at org apache kafka clients consumer internals consumernetworkclient pollnowakeup consumernetworkclient java at org apache kafka clients consumer internals abstractcoordinator heartbeatthread run abstractcoordinator java also error workersourcetask id ybconnector cdc test cdc failed to send record to db cdc public test cdc org apache kafka common kafkaexception producer is closed forcefully at org apache kafka clients producer internals recordaccumulator abortbatches recordaccumulator java at org apache kafka clients producer internals recordaccumulator abortincompletebatches recordaccumulator java at org apache kafka clients producer internals sender run sender java at java base java lang thread run thread java as per discussion with vaibhav yb raising this to track to look to check if something is not getting cleaned up like any other leak possible we hadn’t seen this case fail for a long time apart from recent runs img width alt screenshot at pm src source connector version quay io yugabyte debezium connector latest y connector configuration adding yb connector stream id db name cdc connector host table list debug add connector connector name ybconnector cdc test cdc stream id db name cdc connector host table list name ybconnector cdc test cdc config connector class io debezium connector yugabytedb yugabytedbconnector database hostname database master addresses database port database masterhost database masterport database user yugabyte database password yugabyte database dbname cdc database server name db cdc database streamid snapshot mode never admin operation timeout ms socket read timeout ms max connector retries operation timeout ms 
topic creation default compression type topic creation default cleanup policy delete topic creation default partitions topic creation default replication factor tasks max table include list public test cdc yugabytedb version ,1 2462,12034782393.0,IssuesEvent,2020-04-13 16:40:42,home-assistant/core,https://api.github.com/repos/home-assistant/core,closed,Configuration-Automations missing after upgrade to 108.0,integration: automation," ## The problem The automations configuration is missing. Also getting notifications about invalid config for automations and default_config, but I didn't change them and my config checker comes back OK. The following integrations and platforms could not be set up: automation, default_config. ## Environment - Home Assistant Core release with the issue: 108.0 - Last working Home Assistant Core release (if known): 107.7 - Operating environment (Home Assistant/Supervised/Docker/venv): arch | armv7l -- | -- dev | false docker | true hassio | true os_name | Linux os_version | 4.19.75-v7l+ python_version | 3.7.7 timezone | America/New_York version | 0.108.0 virtualenv | false - Integration causing this issue: automations and default_config - Link to integration documentation on our website: ## Problem-relevant `configuration.yaml` ```yaml # Configure a default setup of Home Assistant (frontend, api, etc) default_config: automation: !include automations.yaml ``` ## Traceback/Error logs 2020-04-08 19:01:22 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for hacs which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant. 2020-04-08 19:01:22 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for alexa_media which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant. ```txt ``` ## Additional information ",1.0,"Configuration-Automations missing after upgrade to 108.0 - ## The problem The automations configuration is missing. Also getting notifications about invalid config for automations and default_config, but I didn't change them and my config checker comes back OK. The following integrations and platforms could not be set up: automation, default_config. ## Environment - Home Assistant Core release with the issue: 108.0 - Last working Home Assistant Core release (if known): 107.7 - Operating environment (Home Assistant/Supervised/Docker/venv): arch | armv7l -- | -- dev | false docker | true hassio | true os_name | Linux os_version | 4.19.75-v7l+ python_version | 3.7.7 timezone | America/New_York version | 0.108.0 virtualenv | false - Integration causing this issue: automations and default_config - Link to integration documentation on our website: ## Problem-relevant `configuration.yaml` ```yaml # Configure a default setup of Home Assistant (frontend, api, etc) default_config: automation: !include automations.yaml ``` ## Traceback/Error logs 2020-04-08 19:01:22 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for hacs which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant. 2020-04-08 19:01:22 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for alexa_media which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant. 
```txt ``` ## Additional information ",1,configuration automations missing after upgrade to read this first if you need additional help with this template please refer to make sure you are running the latest version of home assistant before reporting an issue do not report issues for integrations if you are using custom components or integrations provide as many details as possible paste logs configuration samples and code into the backticks do not delete any text from this template otherwise your issue may be closed without comment the problem the automations configuration is missing also getting notifications about invalid config for automations and default config but i didn t change them and my config checker comes back ok the following integrations and platforms could not be set up automation default config environment provide details about the versions you are using which helps us to reproduce and find the issue quicker version information is found in the home assistant frontend developer tools info home assistant core release with the issue last working home assistant core release if known operating environment home assistant supervised docker venv arch dev false docker true hassio true os name linux os version python version timezone america new york version virtualenv false integration causing this issue automations and default config link to integration documentation on our website problem relevant configuration yaml an example configuration that caused the problem for you fill this out even if it seems unimportant to you please be sure to remove personal information like passwords private urls and other credentials yaml configure a default setup of home assistant frontend api etc default config automation include automations yaml traceback error logs if you come across any trace or error logs please provide them warning mainthread you are using a custom integration for hacs which has not been tested by home assistant this component might cause stability problems be sure to disable it if you experience issues with home assistant warning mainthread you are using a custom integration for alexa media which has not been tested by home assistant this component might cause stability problems be sure to disable it if you experience issues with home assistant txt additional information ,1 4092,15393041835.0,IssuesEvent,2021-03-03 16:16:47,spacemeshos/go-spacemesh,https://api.github.com/repos/spacemeshos/go-spacemesh,opened,Automation Code Improvement ,Automation,"## Description The automation code has many code duplications and general code disorder. Rewriting and giving the automation a ""facelift"".",1.0,"Automation Code Improvement - ## Description The automation code has many code duplications and general code disorder. Rewriting and giving the automation a ""facelift"".",1,automation code improvement description the automation code has many code duplications and general code disorder rewriting and giving the automation a facelift ,1 2216,11592816185.0,IssuesEvent,2020-02-24 12:20:08,big-neon/bn-web,https://api.github.com/repos/big-neon/bn-web,opened,Automation: Big Neon: Test: Allow Past Events to Remain On Site: Search Using URL,Automation,"**Pre-conditions:** 1. User should have admin access to Big Neon 2. User should be logged into Big Neon 3. User should have an event that has passed **Steps:** 1. Add the URL to view the event that has passed 2. View event page loads successfully 3. 
Verify the button ""Purchase Tickets"" is now displayed as ""This Event Is Now Over"" with a tear emoji 4. Try to select the above button 5. User should be unable to select the button test pad link: https://big-neon.ontestpad.com/script/194#11//",1.0,"Automation: Big Neon: Test: Allow Past Events to Remain On Site: Search Using URL - **Pre-conditions:** 1. User should have admin access to Big Neon 2. User should be logged into Big Neon 3. User should have an event that has past **Steps:** 1. Add the URL to view the event that has past 2. View event page loads successfully 3. Verify the button ""Purchase Tickets"" is now displayed as ""This Event Is Now Over"" with a tear emoji 4. Try to select the above button 5. User should be unable to select the button test pad link: https://big-neon.ontestpad.com/script/194#11//",1,automation big neon test allow past events to remain on site search using url pre conditions user should have admin access to big neon user should be logged into big neon user should have an event that has past steps add the url to view the event that has past view event page loads successfully verify the button purchase tickets is now displayed as this event is now over with a tear emoji try to select the above button user should be unable to select the button test pad link ,1 53261,6306486066.0,IssuesEvent,2017-07-21 21:08:32,rancher/rancher,https://api.github.com/repos/rancher/rancher,closed,Can't change Server/TLS info for Active Directory,area/access-control kind/bug status/resolved status/to-test,"**Rancher versions:** v1.6.6-rc1 **Steps to Reproduce:** 1. Log into AD using tad.rancher.io info with TLS on 2. Disable access 3. Change the server to the IP and turn off TLS **Results:** The info comes back as hostname with TLS on ",1.0,"Can't change Server/TLS info for Active Directory - **Rancher versions:** v1.6.6-rc1 **Steps to Reproduce:** 1. Log into AD using tad.rancher.io info with TLS on 2. Disable access 3. Change the server to the IP and turn off TLS **Results:** The info comes back as hostname with TLS on ",0,can t change server tls info for active directory rancher versions steps to reproduce log into ad using tad rancher io info with tls on disable access change the server to the ip and turn off tls results the info comes back as hostname with tls on ,0 146116,19393857274.0,IssuesEvent,2021-12-18 01:18:31,nealkumar/Morning-Brew,https://api.github.com/repos/nealkumar/Morning-Brew,opened,CVE-2021-42550 (Medium) detected in logback-classic-1.2.3.jar,security vulnerability,"## CVE-2021-42550 - Medium Severity Vulnerability
Vulnerable Library - logback-classic-1.2.3.jar

logback-classic module

Library home page: http://logback.qos.ch

Path to dependency file: /tmp/ws-scm/MorningBuddy/morning-buddy-weather-service/pom.xml

Path to vulnerable library: /root/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar

Dependency Hierarchy: - spring-boot-starter-web-2.1.8.RELEASE.jar (Root Library) - spring-boot-starter-2.1.8.RELEASE.jar - spring-boot-starter-logging-2.1.8.RELEASE.jar - :x: **logback-classic-1.2.3.jar** (Vulnerable Library)

Vulnerability Details

In logback version 1.2.7 and prior versions, an attacker with the required privileges to edit configurations files could craft a malicious configuration allowing to execute arbitrary code loaded from LDAP servers.

Publish Date: 2021-12-16

URL: CVE-2021-42550

CVSS 3 Score Details (6.6)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: http://logback.qos.ch/news.html

Release Date: 2021-12-16

Fix Resolution: ch.qos.logback:logback-classic:1.2.8

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2021-42550 (Medium) detected in logback-classic-1.2.3.jar - ## CVE-2021-42550 - Medium Severity Vulnerability
Vulnerable Library - logback-classic-1.2.3.jar

logback-classic module

Library home page: http://logback.qos.ch

Path to dependency file: /tmp/ws-scm/MorningBuddy/morning-buddy-weather-service/pom.xml

Path to vulnerable library: /root/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar

Dependency Hierarchy: - spring-boot-starter-web-2.1.8.RELEASE.jar (Root Library) - spring-boot-starter-2.1.8.RELEASE.jar - spring-boot-starter-logging-2.1.8.RELEASE.jar - :x: **logback-classic-1.2.3.jar** (Vulnerable Library)

Vulnerability Details

In logback version 1.2.7 and prior versions, an attacker with the required privileges to edit configurations files could craft a malicious configuration allowing to execute arbitrary code loaded from LDAP servers.

Publish Date: 2021-12-16

URL: CVE-2021-42550

CVSS 3 Score Details (6.6)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: http://logback.qos.ch/news.html

Release Date: 2021-12-16

Fix Resolution: ch.qos.logback:logback-classic:1.2.8

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in logback classic jar cve medium severity vulnerability vulnerable library logback classic jar logback classic module library home page a href path to dependency file tmp ws scm morningbuddy morning buddy weather service pom xml path to vulnerable library root repository ch qos logback logback classic logback classic jar dependency hierarchy spring boot starter web release jar root library spring boot starter release jar spring boot starter logging release jar x logback classic jar vulnerable library vulnerability details in logback version and prior versions an attacker with the required privileges to edit configurations files could craft a malicious configuration allowing to execute arbitrary code loaded from ldap servers publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ch qos logback logback classic step up your open source security game with whitesource ,0 9098,27540719591.0,IssuesEvent,2023-03-07 08:21:35,elastic/apm-pipeline-library,https://api.github.com/repos/elastic/apm-pipeline-library,closed,[Automation] ITs for the library,enhancement automation ci,"This might help to add more coverage similar to what we do in production. See https://github.com/elastic/observability-dev/issues/271 There are certain initiatives to support ITs within a shared library. See: - https://github.com/jenkins-infra/pipeline-library/pull/78 Which are using the JenkinsRunner and the Test Framework: - https://github.com/jenkinsci/jenkinsfile-runner - https://github.com/jenkinsci/jenkinsfile-runner-test-framework Let's see whether we can add more functional/integration testing on our end to help with not just the stability but with the documentation. cc @elastic/observablt-robots ",1.0,"[Automation] ITs for the library - This might help to add more coverage similar to what we do in production. See https://github.com/elastic/observability-dev/issues/271 There are certain initiatives to support ITs within a shared library. See: - https://github.com/jenkins-infra/pipeline-library/pull/78 Which are using the JenkinsRunner and the Test Framework: - https://github.com/jenkinsci/jenkinsfile-runner - https://github.com/jenkinsci/jenkinsfile-runner-test-framework Let's see whether we can add more functional/integration testing on our end to help with not just the stability but with the documentation. 
cc @elastic/observablt-robots ",1, its for the library this might help to add more coverage similar to what we do in production see there are certain initiatives to support its within a shared library see which are using the jenkinsrunner and the test framework let s see whether we can add more functional integration testing on our end to help with not just the stability but with the documentation cc elastic observablt robots ,1 9997,31023019398.0,IssuesEvent,2023-08-10 07:11:44,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,Testcafe fails to find selector after performing action on it,TYPE: bug FREQUENCY: level 1 SYSTEM: native automation,"### What is your Scenario? Have updated to 3.1.0-rc.1 and am running in native automation. When running the example below, notice that TestCafe fails to find a selector that it has already interacted with and should not be clicking anymore. ### What is the Current behavior? TestCafe fails to find a selector that it has just performed an action on. ### What is the Expected behavior? Expect TestCafe to not fail and to move on to the next action. ### What is your public website URL? (or attach your complete example) stage.economist.com ### What is your TestCafe test code? 
// cookie-consent.js // ``` import { RequestMock } from 'testcafe'; // Mock Evidon cookie consent to avoid interacting with the module on each new session export function mockEvidonCookieConsent() { return RequestMock() .onRequestTo(/evidon\.com\//) .respond('', 200); } export function mockSourcepointCookieConsent() { return RequestMock() .onRequestTo(/cmp-cdn\.p\.aws\.economist\.com\/latest\/cmp\.min\.js/) .respond('', 200); } ``` // example.js // ``` import { ClientFunction, fixture , Selector, t } from 'testcafe'; import { xpathSelector } from './helpers'; import { mockEvidonCookieConsent, mockSourcepointCookieConsent, } from './cookie-consent'; fixture `example fixture` .page('stage.economist.com') .requestHooks([mockEvidonCookieConsent(), mockSourcepointCookieConsent()]) const loginLink = Selector('.ds-masthead').find('a').withText('Log in') const subscribeLink = Selector('header a').withText('Subscribe'); const emailField = xpathSelector('//*[@type=""text""]'); const passwordField = xpathSelector('//*[@type=""password""]'); test('example', async t => { console.log('it exists') await t.click(loginLink) console.log('clicked') await t.typeText(emailField, 'email@example.com') await t.typeText(passwordField, 'password') console.log('password entered') }); test('example 2', async t => { console.log('it exists') await t.click(loginLink) console.log('clicked') await t.typeText(emailField, 'email@example.com') console.log('email entered') await t.typeText(passwordField, 'password') console.log('password entered') }); test('example 3', async t => { console.log('it exists') await t.click(loginLink) console.log('clicked') await t.typeText(emailField, 'email@example.com') console.log('email entered') await t.typeText(passwordField, 'password') console.log('password entered') }); test('example 3', async t => { console.log('it exists') await t.click(subscribeLink); console.log('clicked') const getURL = await ClientFunction(() => window.location.href)(); await t.expect(getURL).contains('subscribe'); }); ``` helpers.js ``` import { Selector } from 'testcafe'; /** * Retrieves all elements that match a given xpath. * @param {string} xpath - xpath to search with * @return {Object} found elements */ const getElementsByXPath = Selector(xpath => { const iterator = document.evaluate( xpath, document, null, XPathResult.UNORDERED_NODE_ITERATOR_TYPE, null, ); const items = []; let item = iterator.iterateNext(); while (item) { items.push(item); item = iterator.iterateNext(); } return items; }); /** * Create a selector based on a xpath. Testcafe does not natively support xpath * selectors, hence this function. * @param {string} xpath - xpath to search with * @returns {Selector} returns a selector */ export const xpathSelector = xpath => { return Selector(getElementsByXPath(xpath)); }; ``` ### Your complete configuration file _No response_ ### Your complete test report ``` example fixture it exists ✖ example 1) The specified selector does not match any element in the DOM tree. 
> | Selector('.ds-masthead') | .find('a') | .withText('Log in') Browser: Chrome 114.0.0.0 / Ventura 13 14 |const emailField = xpathSelector('//*[@type=""text""]'); 15 |const passwordField = xpathSelector('//*[@type=""password""]'); 16 | 17 |test('example', async t => { 18 | console.log('it exists') > 19 | await t.click(loginLink) 20 | console.log('clicked') 21 | await t.typeText(emailField, 'email@example.com') 22 | await t.typeText(passwordField, 'password') 23 | console.log('password entered') 24 |}); at (/Users/tomgrisley/Desktop/example/example.js:19:11) at asyncGeneratorStep (/Users/tomgrisley/Desktop/example/example.js:3:150) at _next (/Users/tomgrisley/Desktop/example/example.js:3:488) at (/Users/tomgrisley/Desktop/example/example.js:3:653) at (/Users/tomgrisley/Desktop/example/example.js:3:394) at (/Users/tomgrisley/Desktop/example/example.js:17:5) ``` ### Screenshots ### Steps to Reproduce 1. testcafe chrome example.js --skip-js-errors 2. notice that it clicks on login link on homepage and takes user to login page 3. notice that testcafe fails as it cannot find login link on homepage, even though it has just performed this action so shouldn't need to check it? ### TestCafe version 3.1.0-rc.1 ### Node.js version _No response_ ### Command-line arguments testcafe chrome example.js --skip-js-errors ### Browser name(s) and version(s) Chrome 114 ### Platform(s) and version(s) macOS 13.3 ### Other _No response_",1.0,"Testcafe fails to find selector after performing action on it - ### What is your Scenario? Have updated to 3.1.0-rc.1 and running in native automation, when running example below notice that TestCafe fails to find a selector that it has already interacted with and should not be clicking anymore. ### What is the Current behavior? TestCafe fails to find selector that it has just performed action on. ### What is the Expected behavior? Expect TestCafe to not fail and move onto next action. ### What is your public website URL? (or attach your complete example) stage.economist.com ### What is your TestCafe test code? 
// cookie-consent.js // ``` import { RequestMock } from 'testcafe'; // Mock Evidon cookie consent to avoid interacting with the module on each new session export function mockEvidonCookieConsent() { return RequestMock() .onRequestTo(/evidon\.com\//) .respond('', 200); } export function mockSourcepointCookieConsent() { return RequestMock() .onRequestTo(/cmp-cdn\.p\.aws\.economist\.com\/latest\/cmp\.min\.js/) .respond('', 200); } ``` // example.js // ``` import { ClientFunction, fixture , Selector, t } from 'testcafe'; import { xpathSelector } from './helpers'; import { mockEvidonCookieConsent, mockSourcepointCookieConsent, } from './cookie-consent'; fixture `example fixture` .page('stage.economist.com') .requestHooks([mockEvidonCookieConsent(), mockSourcepointCookieConsent()]) const loginLink = Selector('.ds-masthead').find('a').withText('Log in') const subscribeLink = Selector('header a').withText('Subscribe'); const emailField = xpathSelector('//*[@type=""text""]'); const passwordField = xpathSelector('//*[@type=""password""]'); test('example', async t => { console.log('it exists') await t.click(loginLink) console.log('clicked') await t.typeText(emailField, 'email@example.com') await t.typeText(passwordField, 'password') console.log('password entered') }); test('example 2', async t => { console.log('it exists') await t.click(loginLink) console.log('clicked') await t.typeText(emailField, 'email@example.com') console.log('email entered') await t.typeText(passwordField, 'password') console.log('password entered') }); test('example 3', async t => { console.log('it exists') await t.click(loginLink) console.log('clicked') await t.typeText(emailField, 'email@example.com') console.log('email entered') await t.typeText(passwordField, 'password') console.log('password entered') }); test('example 3', async t => { console.log('it exists') await t.click(subscribeLink); console.log('clicked') const getURL = await ClientFunction(() => window.location.href)(); await t.expect(getURL).contains('subscribe'); }); ``` helpers.js ``` import { Selector } from 'testcafe'; /** * Retrieves all elements that match a given xpath. * @param {string} xpath - xpath to search with * @return {Object} found elements */ const getElementsByXPath = Selector(xpath => { const iterator = document.evaluate( xpath, document, null, XPathResult.UNORDERED_NODE_ITERATOR_TYPE, null, ); const items = []; let item = iterator.iterateNext(); while (item) { items.push(item); item = iterator.iterateNext(); } return items; }); /** * Create a selector based on a xpath. Testcafe does not natively support xpath * selectors, hence this function. * @param {string} xpath - xpath to search with * @returns {Selector} returns a selector */ export const xpathSelector = xpath => { return Selector(getElementsByXPath(xpath)); }; ``` ### Your complete configuration file _No response_ ### Your complete test report ``` example fixture it exists ✖ example 1) The specified selector does not match any element in the DOM tree. 
> | Selector('.ds-masthead') | .find('a') | .withText('Log in') Browser: Chrome 114.0.0.0 / Ventura 13 ``` ",1
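A plausible workaround while this regression stands, not confirmed as the project's recommended fix, is to rebuild the selector chain inside each test so no cached reference survives the navigation to the login page, or to fall back to the older proxy mode with `--disable-native-automation` if the build supports that flag. A minimal sketch, reusing the selector and URL from the report above; whether module-scope reuse actually matters here is unverified:

```js
// workaround-sketch.js -- assumes the stale-selector theory from the report above.
// The selector chain and the staging URL mirror the reporter's example.
import { Selector } from 'testcafe';

// Build the chain lazily inside a function instead of at module scope,
// so each test evaluates a fresh Selector against the current page state.
const loginLink = () => Selector('.ds-masthead').find('a').withText('Log in');

fixture`example fixture`.page('https://stage.economist.com');

test('login link resolves per test', async t => {
    await t.click(loginLink()); // fresh lookup, no reference carried over from a previous test
    await t.typeText('input[type="text"]', 'email@example.com');
    await t.typeText('input[type="password"]', 'password');
});
```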
299872,25933082416.0,IssuesEvent,2022-12-16 11:44:54,DucTrann1310/FeedbackOnline,https://api.github.com/repos/DucTrann1310/FeedbackOnline,opened,[BugID_31]_FUNC_Quản lý học viên_Xóa học viên_Xóa học viên thành công khi học viên đã đánh feedback,bug Open Integration Test Fun_Wrong Business logic Priority_Medium Severity_Medium,"(Title, translated: [BugID_31]_FUNC_Student management_Delete student_Student is deleted successfully even though the student has already given feedback) Precondition: - admin is on the Student Management (Quản lí học viên) screen - The student with account 'HocVienB' has already given feedback Steps: 1. Click the [Xóa] (Delete) button on the 'HocVienB' record 2. The ""Vô hiệu hóa"" (Deactivate) confirmation popup is displayed 3. Click the [Có] (Yes) button Actual output: 1. The selected record is updated in the DB with status = 0 and removed from the grid 2. A green toast message 'Xóa học viên thành công' (Student deleted successfully) is displayed Expected output: 1. The selected record is not updated in the DB at all and is not removed from the grid 2. The error message ""Học viên này đã tham gia đánh feedback nên không thể xóa được!"" (This student has already given feedback, so they cannot be deleted!) is displayed -------------- TestcaseID = 43 ![image](https://user-images.githubusercontent.com/118715011/208091217-57cf7c50-ad39-4a80-960d-b998d6ad4393.png) ",0
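The expected behaviour in this report amounts to a guard clause before the soft delete. A minimal sketch of that check in an Express-style handler; the route shape, table names and column names (`feedback`, `student_id`, `status`) are all hypothetical, since the report does not show the backend:

```js
// delete-student.js -- hypothetical guard for the behaviour the tester expects.
// `db.query` stands in for whatever query helper the real app uses.
async function deleteStudent(req, res, db) {
    const { studentId } = req.params;

    // Guard: a student who has already given feedback must not be deletable.
    const [{ count }] = await db.query(
        'SELECT COUNT(*) AS count FROM feedback WHERE student_id = ?', [studentId]);

    if (count > 0) {
        // Mirror the error message the bug report expects.
        return res.status(409).json({
            error: 'Học viên này đã tham gia đánh feedback nên không thể xóa được!'
        });
    }

    // Only now perform the soft delete (status = 0) that the grid reflects.
    await db.query('UPDATE student SET status = 0 WHERE id = ?', [studentId]);
    return res.json({ message: 'Xóa học viên thành công' });
}
```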
786214,27638959904.0,IssuesEvent,2023-03-10 16:29:00,linkerd/linkerd2,https://api.github.com/repos/linkerd/linkerd2,closed,linkerd2 (2.11) control plane pod failure on k8s 1.21,help wanted priority/triage bug env/eks,"### What is the issue? When installing linkerd2 (version 2.11) on k8s 1.21 (EKS running on AWS), the control plane services fail to come up. ### How can it be reproduced? I'm installing linkerd2 via helm, passing in the manually generated cert/keys as flags to helm. The same setup has worked for us when running linkerd2 version 2.9 on k8s 1.18 and 1.19. ### Logs, error output, etc ``` ; k logs pods/linkerd-destination-6b4bfb9f87-hpvg4 -n linkerd linkerd-proxy time=""2022-01-28T18:13:19Z"" level=info msg=""Found pre-existing key: /var/run/linkerd/identity/end-entity/key.p8"" time=""2022-01-28T18:13:19Z"" level=info msg=""Found pre-existing CSR: /var/run/linkerd/identity/end-entity/csr.der"" [ 0.001141s] ERROR ThreadId(01) linkerd_app::env: Could not read LINKERD2_PROXY_IDENTITY_TOKEN_FILE: Permission denied (os error 13) [ 0.001176s] ERROR ThreadId(01) linkerd_app::env: LINKERD2_PROXY_IDENTITY_TOKEN_FILE=""/var/run/secrets/kubernetes.io/serviceaccount/token"" is not valid: InvalidTokenSource Invalid configuration: invalid environment variable ``` ### output of `linkerd check -o short` ``` Linkerd core checks =================== linkerd-existence ----------------- \ pod/linkerd-destination-6b4bfb9f87-hpvg4 container sp-validator is not ready ``` ### Environment Kubernetes: 1.21 Host Env: EKS/AWS Linkerd version: 2.11 HostOs: Amazon Linux2 ### Possible solution _No response_ ### Additional context _No response_ ### Would you like to work on fixing this bug? _No response_",0
912,8685340429.0,IssuesEvent,2018-12-03 07:20:53,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,Wrong events is sequence simulated during test execution,AREA: client SYSTEM: automations TYPE: bug,"Reproducing: 1) page ```html ``` 2) test ```js fixture `fix` .page `http://localhost/testcafe`; test('test', async t => { await t .typeText('#test', 'aaa') .click('#anotherInput') .debug(); //check events sequence in console (focus in `doTest()` handler doesn't have an effect) }); ``` After these test actions, without TestCafe the `input` with `id === 'test'` is `document.activeElement` and it has `selectionStart = 0, selectionEnd = 3`. But with TestCafe only the selection is correct. ",1
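The page markup in this report was lost in extraction; the normalized copy of the issue only shows that the page had two inputs and a capture-phase listener that logs each event. A speculative reconstruction of that instrumentation script, where the event list and the exact log lines are guesses rather than the author's original:

```js
// instrumentation-sketch.js -- speculative reconstruction of the lost page script.
// Assumes the page contains <input id="test"> and <input id="anotherInput">.
function handler (e) {
    console.log(e.type);
    console.log(e.target);
    console.log(document.activeElement);
}

// The original array's contents did not survive extraction; this list is a guess.
const checkedEvents = ['focus', 'blur', 'input', 'change', 'click'];

for (const event of checkedEvents)
    document.addEventListener(event, handler, true); // capture phase, per the surviving text

function doTest () {
    const input = document.getElementById('test');

    if (input.value === 'aaa') {
        input.focus();
        input.setSelectionRange(0, 3); // matches the reported selectionStart = 0, selectionEnd = 3
    }
}
```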
920,8697378278.0,IssuesEvent,2018-12-04 20:05:49,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Authentication not working when trying to setup new Source Control,automation/svc cxp product-issue triaged,"As other issues have reported, the Authenticate button is not working. The error message ""The authorization process could not be completed."" shows up. I've seen the Personal Access Token Permissions, but it's not clear how they need to be configured. Even if I manually create a Personal Access Token, there is no place to enter it on the Source Control page. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 83c90e64-b615-711f-a53d-fc76606e2ecd * Version Independent ID: 2d164036-6886-4440-50f7-369f99f41cea * Content: [Source Control integration in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/source-control-integration) * Content Source: [articles/automation/source-control-integration.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/source-control-integration.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1
445346,31235458590.0,IssuesEvent,2023-08-20 08:05:08,garcialo/OATAudit,https://api.github.com/repos/garcialo/OATAudit,opened,Add Issue Instances,documentation IssueTable,"- Currently: Only one issue instance can exist for each check. - Future: Multiple instances of an issue can be created for each check. ### Ideas: Add button to each row to ""create another instance for this check""? Maybe too many buttons/tab stops. Select dropdown of issues + ""create"" button? Probably too many issues. Add ""actions"" button to each row and embed ""create another instance of this check"" in there? 
",1.0,"Add Issue Instances - - Currently: Only one issue instance can exist for each check. - Future: Multiple instances of an issue can be created for each check. ### Ideas: Add button to each row to ""create another instance for this check""? Maybe too many buttons/tab stops. Select dropdown of issues + ""create"" button? Probably too many issues. Add ""actions"" button to each row and embed ""create another instance of this check"" in there? ",0,add issue instances currently only one issue instance can exist for each check future multiple instances of an issue can be created for each check ideas add button to each row to create another instance for this check maybe too many buttons tab stops select dropdown of issues create button probably too many issues add actions button to each row and embed create another instance of this check in there ,0 2720,12481154816.0,IssuesEvent,2020-05-29 21:49:04,amazeeio/lagoon,https://api.github.com/repos/amazeeio/lagoon,closed,Deal with poorly encoded webhook payloads smarter,8-automation-helpers,"**Describe the bug** When you configure a webhook from Github to Lagoon, and say fail to read the docs 100% perfect, you might skip over setting ![image](https://user-images.githubusercontent.com/10605846/81551750-7d23bd00-93d6-11ea-9449-40ec97c0b340.png) This in turn leads to very cryptic error messages in the webhook logs in Github: ```json {""error"":""request body is not parsable as JSON: SyntaxError: Unexpected token p in JSON at position 0""} ``` **To Reproduce** Steps to reproduce the behavior: 1. Go to Github 2. Configure webhook, but use the default of `application/x-www-form-urlencoded` 3. Do a commit 4. See cryptic error message **Expected behavior** Lagoon responds with a message saying ""You must select `application/json` for the payload in the webhook handler"" or similar **Screenshots** ![image](https://user-images.githubusercontent.com/10605846/81552097-f28f8d80-93d6-11ea-8786-dbb7032645f2.png) ",1.0,"Deal with poorly encoded webhook payloads smarter - **Describe the bug** When you configure a webhook from Github to Lagoon, and say fail to read the docs 100% perfect, you might skip over setting ![image](https://user-images.githubusercontent.com/10605846/81551750-7d23bd00-93d6-11ea-9449-40ec97c0b340.png) This in turn leads to very cryptic error messages in the webhook logs in Github: ```json {""error"":""request body is not parsable as JSON: SyntaxError: Unexpected token p in JSON at position 0""} ``` **To Reproduce** Steps to reproduce the behavior: 1. Go to Github 2. Configure webhook, but use the default of `application/x-www-form-urlencoded` 3. Do a commit 4. 
See cryptic error message **Expected behavior** Lagoon responds with a message saying ""You must select `application/json` for the payload in the webhook handler"" or similar **Screenshots** ![image](https://user-images.githubusercontent.com/10605846/81552097-f28f8d80-93d6-11ea-8786-dbb7032645f2.png) ",1,deal with poorly encoded webhook payloads smarter describe the bug when you configure a webhook from github to lagoon and say fail to read the docs perfect you might skip over setting this in turn leads to very cryptic error messages in the webhook logs in github json error request body is not parsable as json syntaxerror unexpected token p in json at position to reproduce steps to reproduce the behavior go to github configure webhook but use the default of application x www form urlencoded do a commit see cryptic error message expected behavior lagoon responds with a message saying you must select application json for the payload in the webhook handler or similar screenshots ,1 143424,11563496434.0,IssuesEvent,2020-02-20 06:13:41,ballerina-platform/ballerina-lang,https://api.github.com/repos/ballerina-platform/ballerina-lang,closed,Tests written for testerina are disabled,Area/TestFramework Priority/High,"**Description:** Unit tests for testerina are disabled due to incompatibility. These should be re-written to match 1.1.x. ",1.0,"Tests written for testerina are disabled - **Description:** Unit tests for testerina are disabled due to incompatibility. These should be re-written to match 1.1.x. ",0,tests written for testerina are disabled description unit tests for testerina are disabled due to incompatibility these should be re written to match x ,0 1675,10574707563.0,IssuesEvent,2019-10-07 14:28:24,perfsonar/project,https://api.github.com/repos/perfsonar/project,opened,Ansible Deployment troubleshooting for psconfig publishers,Automation enhancement,"Make sure that testpoints, toolkits, and dashboards that consume schedules from psconfig publishers can successfully retrieve the schedules via curl after deployment.",1.0,"Ansible Deployment troubleshooting for psconfig publishers - Make sure that testpoints, toolkits, and dashboards that consume schedules from psconfig publishers can successfully retrieve the schedules via curl after deployment.",1,ansible deployment troubleshooting for psconfig publishers make sure that testpoints toolkits and dashboards that consume schedules from psconfig publishers can successfully retrieve the schedules via curl after deployment ,1 780,8115073942.0,IssuesEvent,2018-08-15 04:38:45,oilshell/oil,https://api.github.com/repos/oilshell/oil,closed,"Release spec test HTML should show the real _bin/osh binary, not byterun",release-automation,"This is because we run the spec tests twice in `scripts/release.sh`, and only the second HTML tree is published. This shows `osh-byterun`: http://www.oilshell.org/release/0.5.0/test/spec.wwz/append.html ",1.0,"Release spec test HTML should show the real _bin/osh binary, not byterun - This is because we run the spec tests twice in `scripts/release.sh`, and only the second HTML tree is published. 
This shows `osh-byterun`: http://www.oilshell.org/release/0.5.0/test/spec.wwz/append.html ",1,release spec test html should show the real bin osh binary not byterun this is because we run the spec tests twice in scripts release sh and only the second html tree is published this shows osh byterun ,1 602652,18492603718.0,IssuesEvent,2021-10-19 03:37:32,AY2122S1-CS2103T-F12-2/tp,https://api.github.com/repos/AY2122S1-CS2103T-F12-2/tp,closed,As an HR I would like to delete a filtered list of applicants all in one go,priority.Medium,"Builds on top of delete where all applicants on the observable list is deleted ",1.0,"As an HR I would like to delete a filtered list of applicants all in one go - Builds on top of delete where all applicants on the observable list is deleted ",0,as an hr i would like to delete a filtered list of applicants all in one go builds on top of delete where all applicants on the observable list is deleted ,0 40685,5253103529.0,IssuesEvent,2017-02-02 08:21:00,cgstudiomap/cgstudiomap,https://api.github.com/repos/cgstudiomap/cgstudiomap,closed,"En tant que Webdesigner, je dois corriger la navbar seulement pour les appareils mobiles et de petites résolution.",4 - Done bug design,"- [x] changer la hauteur - [x] Taille Texte placeholder recherche - [x] Largeur Barre de recherche Responsive - [x] Meilleure UX pour du TOUCH (identifier les zones d'interactions) - [x] Revoir les espaces. ",1.0,"En tant que Webdesigner, je dois corriger la navbar seulement pour les appareils mobiles et de petites résolution. - - [x] changer la hauteur - [x] Taille Texte placeholder recherche - [x] Largeur Barre de recherche Responsive - [x] Meilleure UX pour du TOUCH (identifier les zones d'interactions) - [x] Revoir les espaces. ",0,en tant que webdesigner je dois corriger la navbar seulement pour les appareils mobiles et de petites résolution changer la hauteur taille texte placeholder recherche largeur barre de recherche responsive meilleure ux pour du touch identifier les zones d interactions revoir les espaces huboard order milestone order custom state ready ,0 1412,10081979540.0,IssuesEvent,2019-07-25 10:02:45,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,closed,Add pre-commit stage,[zube]: In Progress automation ci enhancement,"we've configured pre-commit in the repository so we can add a pre-commit stage to make the same checks related PR https://github.com/elastic/apm-integration-testing/pull/547",1.0,"Add pre-commit stage - we've configured pre-commit in the repository so we can add a pre-commit stage to make the same checks related PR https://github.com/elastic/apm-integration-testing/pull/547",1,add pre commit stage we ve configured pre commit in the repository so we can add a pre commit stage to make the same checks related pr ,1 1518,10265653006.0,IssuesEvent,2019-08-22 19:23:52,askmench/mench-web-app,https://api.github.com/repos/askmench/mench-web-app,closed,Trainers App V1,Bot/Chat-Automation Company Tools Increase Engagements Inputs/Forms Major New Feature,"After reviewing TripleByte #2423 #2421 #2395 I believe we need a more customized solution for partner companies so their HR team can easily manage their candidate funnel for each Mench job posting. Imagine a combination of Mench + Trello + HubSpot that would enable company HR staff to login and seamlessly manage their end-to-end hiring funnel. 
Mench Company App Entity: https://mench.com/entities/7370 ### Funnel Manager App Features This is our 3rd product for Company partners, and will include: - **Dashboard**, which lists the latest updates from all users/jobs/candidates for each company, and also provides general stats on active Cities, Job Postings & Successful Hires. - **Jobs**, which will be [similar to a Trello Board](https://trello.com/b/A1zZKFBt/get-hired-as-a-full-stack-developer-at-airbnb-in-new-york): one board per job posting, with a funnel that the HR team can easily manage and automate. They can even add messages for each step of the funnel to inform users of what they need to do (like Pitch Call preparation), which basically makes them miners with an easy-to-use UI. Initially, we can start with Candidate Management APIs #2443 - **Company Settings** to manage payments (for unlocking Mench leads not referred by them), hiring cities, team members, etc... - **My Account** (we already have this) to allow each HR staff member to manage their own account, notifications, etc... - **Logout** (we already have this) ### Company Onboarding Workflow 1. Partner companies sign up through instant login via Facebook, Google or email/password 2. They land at the Job Funnels tab, where they can add their first job 3. They modify the job settings to define its position, location, skills, etc... through an easy-to-use UI 4. They get a Messenger and/or landing page URL that they can share with their candidates 5. Candidates click on the link to start their screening process 6. If passed, they would be placed into the job Funnel to be reviewed by the HR team and advance through that funnel TODO - [ ] Company HR Staff Permissions V1 #2332 ",1
6204,22496744030.0,IssuesEvent,2022-06-23 08:16:00,hackforla/website,https://api.github.com/repos/hackforla/website,closed,GitHub Actions: Bot adding and removing the same label,role: back end/devOps Size: Large Feature: Board/GitHub Maintenance automation size: 1pt,"### Overview As a developer, we have to ensure that our kanban board is organized for all teams so that productivity is high. For this issue, we want to know why the GitHub Actions bot is adding and removing the same label, and change the code logic so it doesn't do that. ### Action Items - [x] Please go through the wiki article on [Hack for LA's GitHub Actions](https://github.com/hackforla/website/wiki/Hack-for-LA's-GitHub-Actions) - [x] Review the [set-pr-label.js](https://github.com/hackforla/website/blob/gh-pages/github-actions/trigger-pr/set-pr-labels.js) file, to understand how labels are applied on the PR - [x] Understand why the comment adds and removes the same label. Helpful links from PR #2621: [Link1](https://github.com/hackforla/website/runs/4611929579?check_suite_focus=true), [Link2](https://github.com/hackforla/website/runs/4611935048?check_suite_focus=true) - [x] Change the code logic to fix this error - [x] Make sure all labels still get added correctly ### Checks - [x] Test in your local environment that it works - [x] Only correct labels should be mentioned in the GitHub bot comment ### Resources/Instructions
Screenshot of pr #2621 showing github-actions adding and removing labels ""role: back end/devOps"" and ""Size: Large""
Never done GitHub actions? [Start here!](https://docs.github.com/en/actions) [GitHub Complex Workflows doc](https://docs.github.com/en/actions/learn-github-actions/managing-complex-workflows) [GitHub Actions Workflow Directory](https://github.com/hackforla/website/tree/gh-pages/.github/workflows) [Events that trigger workflows](https://docs.github.com/en/actions/reference/events-that-trigger-workflows) [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions) [actions/github-script](https://github.com/actions/github-script) [GitHub RESTAPI](https://docs.github.com/en/rest) #### Architecture Notes The idea behind the refactor is to organize our GitHub Actions so that developers can easily maintain and understand them. Currently, we want our GitHub Actions to be structured like so, based on this [proposal](https://docs.google.com/spreadsheets/d/12NcZQoyGYlHlMQtJE2IM8xLYpHN75agb/edit#gid=1231634015): - Schedules (military time) - Schedule Friday 0700 - Schedule Thursday 1100 - Schedule Daily 1100 - Linters - Lint SCSS - PR Trigger - Add Linked Issue Labels to Pull Request - Add Pull Request Instructions - Issue Trigger - Add Missing Labels To Issues - WR - PR Trigger - WR Add Linked Issue Labels to Pull Request - WR Add Pull Request Instructions - WR - Issue Trigger Actions with the same triggers (excluding linters, which will be their own category) will live in the same GitHub Action file. Scheduled actions will live in the same file if they trigger on the same schedule (i.e. all files that trigger every day at 11am will live in one file, while files that trigger on Friday at 7am will be in a separate file). That said, this structure is not set in stone. If any part of it feels strange, or you have questions, feel free to bring it up with the team so we can evolve this format!",1
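For the fix itself, the usual shape, not necessarily what set-pr-labels.js ends up doing, is to diff the desired label set against the labels already on the PR, so the bot never adds and removes the same label in one run. A sketch for use with actions/github-script, where `wantedLabels` is a hypothetical input computed elsewhere:

```js
// label-diff-sketch.js -- runs inside actions/github-script, which injects
// `github` (an Octokit client) and `context`. `wantedLabels` is hypothetical.
module.exports = async ({ github, context, wantedLabels }) => {
    const { owner, repo } = context.repo;
    const issue_number = context.issue.number;

    const { data: current } = await github.rest.issues.listLabelsOnIssue(
        { owner, repo, issue_number });
    const currentNames = current.map(label => label.name);

    // Only add labels the PR doesn't already have...
    const toAdd = wantedLabels.filter(name => !currentNames.includes(name));
    // ...and only remove labels that are present but no longer wanted.
    const toRemove = currentNames.filter(name => !wantedLabels.includes(name));

    if (toAdd.length)
        await github.rest.issues.addLabels({ owner, repo, issue_number, labels: toAdd });

    for (const name of toRemove)
        await github.rest.issues.removeLabel({ owner, repo, issue_number, name });
};
```

With this shape, a label that is both present and wanted is never touched, so the bot comment can no longer report adding and removing the same label.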
131517,10697861168.0,IssuesEvent,2019-10-23 17:26:17,pandas-dev/pandas,https://api.github.com/repos/pandas-dev/pandas,closed,Series of integers mod behavior,Needs Tests Numeric good first issue,"#### Code Sample ```python import pandas as pd s=pd.Series(range(1,10)) s1=pd.Series(""foo"") s1%s 0 foo 1 [nan, nan, nan, nan, nan, nan, nan, nan, nan] 2 [nan, nan, nan, nan, nan, nan, nan, nan, nan] 3 [nan, nan, nan, nan, nan, nan, nan, nan, nan] 4 [nan, nan, nan, nan, nan, nan, nan, nan, nan] 5 [nan, nan, nan, nan, nan, nan, nan, nan, nan] 6 [nan, nan, nan, nan, nan, nan, nan, nan, nan] 7 [nan, nan, nan, nan, nan, nan, nan, nan, nan] 8 [nan, nan, nan, nan, nan, nan, nan, nan, nan] dtype: object import pandas as pd s=pd.Series(range(1,10)) s2=pd.Series(""foo"", index=s.index) s2%s 0 foo 1 foo 2 foo 3 foo 4 foo 5 foo 6 foo 7 foo 8 foo dtype: object ``` #### Problem description I was expecting a TypeError. I think this is breaking a pint pandas interface test, which checks that an arithmetic op between s2 and a Series(PintArray...) raises some exception. (It is failing on __rmod__.) ```python import pandas as pd import numpy as np import pint from pint.pandas_interface import PintArray ureg = pint.UnitRegistry() Q_ = ureg.Quantity torque = PintArray(Q_([1, 2, 2, 3, 4, 5], ""lbf ft"")) angular_velocity = PintArray(Q_([1000, 2000, 2000, 3000, 3000, 3000], ""rpm"")) df = pd.DataFrame({""torque"": torque, ""angular_velocity"": angular_velocity}) pd.Series(""foo"", index=df.index)%df.torque 0 foo 1 foo 2 foo 3 foo 4 foo 5 foo dtype: object ``` #### Output of ``pd.show_versions()``
INSTALLED VERSIONS ------------------ commit: None python: 3.6.6.final.0 python-bits: 64 OS: Linux OS-release: 4.4.0-53-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_GB.UTF-8 LOCALE: en_GB.UTF-8 pandas: 0.24.0.dev0+1340.g8c58817bd pytest: 3.7.1 pip: 10.0.1 setuptools: 40.0.0 Cython: 0.28.5 numpy: 1.15.0 scipy: None pyarrow: None xarray: None IPython: 6.5.0 sphinx: None patsy: None dateutil: 2.7.3 pytz: 2018.5 blosc: None bottleneck: None tables: None numexpr: None feather: None matplotlib: 2.1.2 openpyxl: None xlrd: 1.1.0 xlwt: None xlsxwriter: 1.0.2 lxml.etree: None bs4: 4.6.0 html5lib: None sqlalchemy: None pymysql: None psycopg2: None jinja2: None s3fs: None fastparquet: None pandas_gbq: None pandas_datareader: None gcsfs: None
",1.0,"Series of integers mod behavior - #### Code Sample ```python import pandas as pd s=pd.Series(range(1,10)) s1=pd.Series(""foo"") s1%s 0 foo 1 [nan, nan, nan, nan, nan, nan, nan, nan, nan] 2 [nan, nan, nan, nan, nan, nan, nan, nan, nan] 3 [nan, nan, nan, nan, nan, nan, nan, nan, nan] 4 [nan, nan, nan, nan, nan, nan, nan, nan, nan] 5 [nan, nan, nan, nan, nan, nan, nan, nan, nan] 6 [nan, nan, nan, nan, nan, nan, nan, nan, nan] 7 [nan, nan, nan, nan, nan, nan, nan, nan, nan] 8 [nan, nan, nan, nan, nan, nan, nan, nan, nan] dtype: object import pandas as pd s=pd.Series(range(1,10)) s2=pd.Series(""foo"", index=s.index) s2%s 0 foo 1 foo 2 foo 3 foo 4 foo 5 foo 6 foo 7 foo 8 foo dtype: object ``` #### Problem description Was expecting a TypeError. I think this is breaking a pint pandas interface test which tests s2 arithmetic_op Series(PintArray... raises some exception. (Which is failing on __rmod__ ) ```python import pandas as pd import numpy as np import pint from pint.pandas_interface import PintArray ureg = pint.UnitRegistry() Q_ = ureg.Quantity torque = PintArray(Q_([1, 2, 2, 3, 4, 5], ""lbf ft"")) angular_velocity = PintArray(Q_([1000, 2000, 2000, 3000, 3000, 3000], ""rpm"")) df = pd.DataFrame({""torque"": torque, ""angular_velocity"": angular_velocity}) pd.Series(""foo"", index=df.index)%df.torque 0 foo 1 foo 2 foo 3 foo 4 foo 5 foo dtype: object ``` #### Output of ``pd.show_versions()``
INSTALLED VERSIONS ------------------ commit: None python: 3.6.6.final.0 python-bits: 64 OS: Linux OS-release: 4.4.0-53-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_GB.UTF-8 LOCALE: en_GB.UTF-8 pandas: 0.24.0.dev0+1340.g8c58817bd pytest: 3.7.1 pip: 10.0.1 setuptools: 40.0.0 Cython: 0.28.5 numpy: 1.15.0 scipy: None pyarrow: None xarray: None IPython: 6.5.0 sphinx: None patsy: None dateutil: 2.7.3 pytz: 2018.5 blosc: None bottleneck: None tables: None numexpr: None feather: None matplotlib: 2.1.2 openpyxl: None xlrd: 1.1.0 xlwt: None xlsxwriter: 1.0.2 lxml.etree: None bs4: 4.6.0 html5lib: None sqlalchemy: None pymysql: None psycopg2: None jinja2: None s3fs: None fastparquet: None pandas_gbq: None pandas_datareader: None gcsfs: None
",0,series of integers mod behavior code sample python import pandas as pd s pd series range pd series foo s foo dtype object import pandas as pd s pd series range pd series foo index s index s foo foo foo foo foo foo foo foo foo dtype object problem description was expecting a typeerror i think this is breaking a pint pandas interface test which tests arithmetic op series pintarray raises some exception which is failing on rmod python import pandas as pd import numpy as np import pint from pint pandas interface import pintarray ureg pint unitregistry q ureg quantity torque pintarray q lbf ft angular velocity pintarray q rpm df pd dataframe torque torque angular velocity angular velocity pd series foo index df index df torque foo foo foo foo foo foo dtype object output of pd show versions installed versions commit none python final python bits os linux os release generic machine processor byteorder little lc all none lang en gb utf locale en gb utf pandas pytest pip setuptools cython numpy scipy none pyarrow none xarray none ipython sphinx none patsy none dateutil pytz blosc none bottleneck none tables none numexpr none feather none matplotlib openpyxl none xlrd xlwt none xlsxwriter lxml etree none none sqlalchemy none pymysql none none none none fastparquet none pandas gbq none pandas datareader none gcsfs none ,0 181403,30681836561.0,IssuesEvent,2023-07-26 09:39:50,consta-design-system/uikit,https://api.github.com/repos/consta-design-system/uikit,opened,Loader: добавить размер XS,feature design 🔥🔥 priority,"возникают ситуации, когда размер S слишком крупный, а XS — нету например, в кнопках: надо добавить",1.0,"Loader: добавить размер XS - возникают ситуации, когда размер S слишком крупный, а XS — нету например, в кнопках: надо добавить",0,loader добавить размер xs возникают ситуации когда размер s слишком крупный а xs — нету например в кнопках img width alt image src надо добавить,0 4173,15725000811.0,IssuesEvent,2021-03-29 09:28:14,elastic/apm-server,https://api.github.com/repos/elastic/apm-server,closed,Create a Python Docker container for run check-changelogs.sh,[zube]: Done automation enhancement subtask,"We use a docker container with some libs installed that we would build in advance daily/weekly, then put it on the packer cache for direct use on `script/jenkins/check-changelogs.sh` ",1.0,"Create a Python Docker container for run check-changelogs.sh - We use a docker container with some libs installed that we would build in advance daily/weekly, then put it on the packer cache for direct use on `script/jenkins/check-changelogs.sh` ",1,create a python docker container for run check changelogs sh we use a docker container with some libs installed that we would build in advance daily weekly then put it on the packer cache for direct use on script jenkins check changelogs sh ,1 662,7739418612.0,IssuesEvent,2018-05-28 15:39:51,Shopify/quilt,https://api.github.com/repos/Shopify/quilt,closed,Consider switching to Travis CI,automation developer experience difficulty: easy,We currently have a limit on how many projects can use circle CI per organization. Travis builds should not be subject to that restriction.,1.0,Consider switching to Travis CI - We currently have a limit on how many projects can use circle CI per organization. 
Travis builds should not be subject to that restriction.,1,consider switching to travis ci we currently have a limit on how many projects can use circle ci per organization travis builds should not be subject to that restriction ,1 3377,13617590483.0,IssuesEvent,2020-09-23 17:13:59,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Problem on line 56 link to update-mgmt-view-logs.md,Pri2 automation/svc update-management/subsvc," On line 56, there is a parenthesis missing when mentioning (or on an [Azure query]update-mgmt-view-logs.md). I would try to fix it, but I am not sure what ""_update-mgmt-view-logs.md_"" is referring to as it does not exist in the _update-management_ folder. Is it supposed to be _update-mgmt-query-logs.md_ or referring to a file in another folder? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 52d72bbe-319c-0378-b36d-c9eb0c0527b3 * Version Independent ID: a7b6206d-e7af-f7df-3442-204990c71afc * Content: [Azure Automation Update Management overview](https://docs.microsoft.com/en-us/azure/automation/update-management/update-mgmt-overview) * Content Source: [articles/automation/update-management/update-mgmt-overview.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/update-management/update-mgmt-overview.md) * Service: **automation** * Sub-service: **update-management** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**",1.0,"Problem on line 56 link to update-mgmt-view-logs.md - On line 56, there is a parenthesis missing when mentioning (or on an [Azure query]update-mgmt-view-logs.md). I would try to fix it, but I am not sure what ""_update-mgmt-view-logs.md_"" is referring to as it does not exist in the _update-management_ folder. Is it supposed to be _update-mgmt-query-logs.md_ or referring to a file in another folder? --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 52d72bbe-319c-0378-b36d-c9eb0c0527b3 * Version Independent ID: a7b6206d-e7af-f7df-3442-204990c71afc * Content: [Azure Automation Update Management overview](https://docs.microsoft.com/en-us/azure/automation/update-management/update-mgmt-overview) * Content Source: [articles/automation/update-management/update-mgmt-overview.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/update-management/update-mgmt-overview.md) * Service: **automation** * Sub-service: **update-management** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**",1,problem on line link to update mgmt view logs md on line there is a parenthesis missing when mentioning or on an update mgmt view logs md i would try to fix it but i am not sure what update mgmt view logs md is referring to as it does not exist in the update management folder is it supposed to be update mgmt query logs md or referring to a file in another folder document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service update management github login mgoedtel microsoft alias magoedte ,1 790165,27817739122.0,IssuesEvent,2023-03-18 21:57:10,apcountryman/picolibrary-microchip-megaavr0,https://api.github.com/repos/apcountryman/picolibrary-microchip-megaavr0,opened,Add Microchip megaAVR 0-series EVSYS peripheral,priority-normal status-awaiting_development type-feature,"Add Microchip megaAVR 0-series EVSYS peripheral (`::picolibrary::Microchip::megaAVR0::Peripheral::EVSYS`). - [ ] The `EVSYS` class should be defined in the `include/picolibrary/microchip/megaavr0/peripheral/evsys.h`/`source/picolibrary/microchip/megaavr0/peripheral/evsys.cc` header/source file pair - [ ] The following `EVSYS` class instances should be defined in the `include/picolibrary/microchip/megaavr0/peripheral.h`/`source/picolibrary/microchip/megaavr0/peripheral.cc` header/source file pair - [ ] `::picolibrary::Microchip::megaAVR0::Peripheral::EVSYS0` - [ ] Update documentation",1.0,"Add Microchip megaAVR 0-series EVSYS peripheral - Add Microchip megaAVR 0-series EVSYS peripheral (`::picolibrary::Microchip::megaAVR0::Peripheral::EVSYS`). 
- [ ] The `EVSYS` class should be defined in the `include/picolibrary/microchip/megaavr0/peripheral/evsys.h`/`source/picolibrary/microchip/megaavr0/peripheral/evsys.cc` header/source file pair - [ ] The following `EVSYS` class instances should be defined in the `include/picolibrary/microchip/megaavr0/peripheral.h`/`source/picolibrary/microchip/megaavr0/peripheral.cc` header/source file pair - [ ] `::picolibrary::Microchip::megaAVR0::Peripheral::EVSYS0` - [ ] Update documentation",0,add microchip megaavr series evsys peripheral add microchip megaavr series evsys peripheral picolibrary microchip peripheral evsys the evsys class should be defined in the include picolibrary microchip peripheral evsys h source picolibrary microchip peripheral evsys cc header source file pair the following evsys class instances should be defined in the include picolibrary microchip peripheral h source picolibrary microchip peripheral cc header source file pair picolibrary microchip peripheral update documentation,0 8293,26635303179.0,IssuesEvent,2023-01-24 21:22:57,shellebusch2/DSAllo,https://api.github.com/repos/shellebusch2/DSAllo,closed,Assigning Urgency,feature-automation,"When a complaint comes in, urgency level is automatically assigned by decided analysis. Urgency will likely be tied to the complaint type Urgency levels: * urgent * Non-urgent A/C: When complaint is submitted through test form, the correct urgency level is assigned. MAKE SOLUTION: 1. First module checks for new entries in the database 2. Searches complaint type database for the complaint type and market of the new entry 3. Update new entry with complaint type found",1.0,"Assigning Urgency - When a complaint comes in, urgency level is automatically assigned by decided analysis. Urgency will likely be tied to the complaint type Urgency levels: * urgent * Non-urgent A/C: When complaint is submitted through test form, the correct urgency level is assigned. MAKE SOLUTION: 1. First module checks for new entries in the database 2. Searches complaint type database for the complaint type and market of the new entry 3. Update new entry with complaint type found",1,assigning urgency when a complaint comes in urgency level is automatically assigned by decided analysis urgency will likely be tied to the complaint type urgency levels urgent non urgent a c when complaint is submitted through test form the correct urgency level is assigned make solution first module checks for new entries in the database searches complaint type database for the complaint type and market of the new entry update new entry with complaint type found,1 3753,14501405254.0,IssuesEvent,2020-12-11 19:27:47,BCDevOps/developer-experience,https://api.github.com/repos/BCDevOps/developer-experience,closed,Sysdig-Agent deployment to Silver,Sysdig automation monitoring ops,"**Describe the issue** DXC team to deploy the Sysdig Agent to the silver cluster using CCM. **Additional context** Sysdig-agent CCM configuration has been developed and deployed successfully to KLAB. This is the followup handoff to DXC for a production deployment. **Definition of done** - [x] Sysdig-Agent running successfully in Silver cluster (deployed by DXC)",1.0,"Sysdig-Agent deployment to Silver - **Describe the issue** DXC team to deploy the Sysdig Agent to the silver cluster using CCM. **Additional context** Sysdig-agent CCM configuration has been developed and deployed successfully to KLAB. This is the followup handoff to DXC for a production deployment. 
**Definition of done** - [x] Sysdig-Agent running successfully in Silver cluster (deployed by DXC)",1,sysdig agent deployment to silver describe the issue dxc team to deploy the sysdig agent to the silver cluster using ccm additional context sysdig agent ccm configuration has been developed and deployed successfully to klab this is the followup handoff to dxc for a production deployment definition of done sysdig agent running successfully in silver cluster deployed by dxc ,1 260125,22593843940.0,IssuesEvent,2022-06-28 23:10:37,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,`CscCompile_WithSourceCodeRedirectedViaStandardInput_ProducesLibrary` fails locally,Area-Compilers 4 - In Review Test,"``` Assert.Null() Failure Expected: (null) Actual: Byte[] [] Stack trace: at Microsoft.CodeAnalysis.CSharp.CommandLine.UnitTests.CommandLineTests.CscCompile_WithSourceCodeRedirectedViaStandardInput_ProducesLibrary() in C:\Users\PC\Desktop\Roslyn\src\Compilers\CSharp\Test\CommandLine\CommandLineTests.cs:line 5839 ``` I ran tests using `./build.cmd -testCompilerOnly -testCoreClr`",1.0,"`CscCompile_WithSourceCodeRedirectedViaStandardInput_ProducesLibrary` fails locally - ``` Assert.Null() Failure Expected: (null) Actual: Byte[] [] Stack trace: at Microsoft.CodeAnalysis.CSharp.CommandLine.UnitTests.CommandLineTests.CscCompile_WithSourceCodeRedirectedViaStandardInput_ProducesLibrary() in C:\Users\PC\Desktop\Roslyn\src\Compilers\CSharp\Test\CommandLine\CommandLineTests.cs:line 5839 ``` I ran tests using `./build.cmd -testCompilerOnly -testCoreClr`",0, csccompile withsourcecoderedirectedviastandardinput produceslibrary fails locally assert null failure expected null actual byte stack trace at microsoft codeanalysis csharp commandline unittests commandlinetests csccompile withsourcecoderedirectedviastandardinput produceslibrary in c users pc desktop roslyn src compilers csharp test commandline commandlinetests cs line i ran tests using build cmd testcompileronly testcoreclr ,0 6422,23128328742.0,IssuesEvent,2022-07-28 08:08:23,mozilla-mobile/firefox-ios,https://api.github.com/repos/mozilla-mobile/firefox-ios,opened,[XCUITests] Failure happening in several tests: Application org.mozilla.ios.Fennec is not running ,eng:automation,"We used to face that issue in the first test running for each test plan. Now I'm seeing this failure also when running other tests. Look at this commit 2dc1f3e59a9ca5dc3fc28db952413e998a6ac527 as an example: - https://github.com/mozilla-mobile/firefox-ios/runs/7555553311 It may be good to add the solution we applied to those tests to all the tests we see failing, see for example: https://github.com/mozilla-mobile/firefox-ios/blob/2dc1f3e59a9ca5dc3fc28db952413e998a6ac527/Tests/XCUITests/ActivityStreamTest.swift#L53 ",1.0,"[XCUITests] Failure happening in several tests: Application org.mozilla.ios.Fennec is not running - We used to face that issue in the first test running for each test plan. Now I'm seeing this failure also when running other tests. 
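The Roslyn record above exercises compiling C# source redirected via standard input. As a rough illustration of that invocation style, here is a hedged Python sketch that pipes source text to a compiler process; the `-` argument standing for "read from stdin" and the exact csc flags are assumptions, not taken from the report.

```python
# Sketch: pipe C# source to the compiler over stdin and inspect the result.
import subprocess

SOURCE = "public class Hello { public static int Answer() => 42; }"

result = subprocess.run(
    ["csc", "/nologo", "/target:library", "/out:hello.dll", "-"],  # "-" assumed
    input=SOURCE,          # source code delivered via standard input
    capture_output=True,
    text=True,
)
# The failing assertion in the report compared produced bytes against null,
# so unexpected compiler output here would be the analogue of that failure.
print(result.returncode, result.stdout, result.stderr)
```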
Look at this commit 2dc1f3e59a9ca5dc3fc28db952413e998a6ac527 as an example: - https://github.com/mozilla-mobile/firefox-ios/runs/7555553311 It may be good to add the solution we applied to those tests to all the tests we see failing, see for example: https://github.com/mozilla-mobile/firefox-ios/blob/2dc1f3e59a9ca5dc3fc28db952413e998a6ac527/Tests/XCUITests/ActivityStreamTest.swift#L53 ",1, failure happening in several tests application org mozilla ios fennec is not running we used to face that issue in the first test running for each test plan now i m seeing this failure also when running other tests look at this commit as an example it may be good to add the solution we applied to those tests to all the tests we see failing see for example ,1 29407,5632713058.0,IssuesEvent,2017-04-05 17:12:41,sunpy/sunpy,https://api.github.com/repos/sunpy/sunpy,closed,Update the glymur / openjpeg installation instructions,Documentation,"Anaconda installations don't come with glymur, and glymur itself relies on the installation of openjpeg. A user (thanks!) sent the description below on how he managed to install openjpeg to the mailing list. These are included below for reference to make it easy to update the sunpy docs. Okay. After a couple false starts I was able to install the latest openjpeg library and get glymur to find it. In case this comes up again, what I did was: - download openjpeg-2.1.0.tar.gz, untar it (tar xvzf ..., of course) - in the openjpeg-2.1.0 directory I then did: - cmake . -DCMAKE_INSTALL_PREFIX='/export/slavin/python/anaconda' (note the dot after cmake) so setting the install prefix to the root of my anaconda distribution (I found that setting the DESTDIR environment variable to /export/slavin/python/anaconda resulted in an install under /export/slavin/python/anaconda/usr/local). - make - make install I then needed to create a configuration file for glymur to ensure that it could find the library. So I created a /home/jslavin/.config/glymur directory and a file in it named glymurrc into which I put: [library] openjp2:/export/slavin/python/anaconda/lib/libopenjp2.so ",1.0,"Update the glymur / openjpeg installation instructions - Anaconda installations don't come with glymur, and glymur itself relies on the installation of openjpeg. A user (thanks!) sent the description below on how he managed to install openjpeg to the mailing list. These are included below for reference to make it easy to update the sunpy docs. Okay. After a couple false starts I was able to install the latest openjpeg library and get glymur to find it. In case this comes up again, what I did was: - download openjpeg-2.1.0.tar.gz, untar it (tar xvzf ..., of course) - in the openjpeg-2.1.0 directory I then did: - cmake . -DCMAKE_INSTALL_PREFIX='/export/slavin/python/anaconda' (note the dot after cmake) so setting the install prefix to the root of my anaconda distribution (I found that setting the DESTDIR environment variable to /export/slavin/python/anaconda resulted in an install under /export/slavin/python/anaconda/usr/local). - make - make install I then needed to create a configuration file for glymur to ensure that it could find the library. 
So I created a /home/jslavin/.config/glymur directory and a file in it named glymurrc into which I put: [library] openjp2:/export/slavin/python/anaconda/lib/libopenjp2.so ",0,update the glymur openjpeg installation instructions anaconda installations don t come with glymur and glymur itself relies on the installation of openjpeg a user thanks sent the description below on how he managed to install openjpeg to the mailing list these are included below for reference to make it easy to update the sunpy docs okay after a couple false starts i was able to install the latest openjpeg library and get glymur to find it in case this comes up again what i did was download openjpeg tar gz untar it tar xvzf of course in the openjpeg directory i then did cmake dcmake install prefix export slavin python anaconda note the dot after cmake so setting the install prefix to the root of my anaconda distribution i found that setting the destdir environment variable to export slavin python anaconda resulted in an install under export slavin python anaconda usr local make make install i then needed to create a configuration file for glymur to ensure that it could find the library so i created a home jslavin config glymur directory and a file in it named glymurrc into which i put export slavin python anaconda lib so ,0 541,7078435496.0,IssuesEvent,2018-01-10 03:49:03,hfagerlund/elections-carousel,https://api.github.com/repos/hfagerlund/elections-carousel,closed,Automate code formatting,Category: automation Type: enhancement,"## Benefits: * improve workflow; * standardize code by adhering to style guide. ",1.0,"Automate code formatting - ## Benefits: * improve workflow; * standardize code by adhering to style guide. ",1,automate code formatting benefits improve workflow standardize code by adhering to style guide ,1 13776,16502931273.0,IssuesEvent,2021-05-25 15:59:40,Geolykt/EnchantmentsPlus,https://api.github.com/repos/Geolykt/EnchantmentsPlus,closed,Spartan throws NullPointerExceptions in connection to the Blizzard Enchantment,bug:confirmed compatibility:anticheats compatibility:plugins,"I am getting ton of warns on the console... ![Screenshot_1](https://user-images.githubusercontent.com/84740054/119395007-98a8ef80-bcdb-11eb-925d-52ffb83d64f0.png) ",True,"Spartan throws NullPointerExceptions in connection to the Blizzard Enchantment - I am getting ton of warns on the console... ![Screenshot_1](https://user-images.githubusercontent.com/84740054/119395007-98a8ef80-bcdb-11eb-925d-52ffb83d64f0.png) ",0,spartan throws nullpointerexceptions in connection to the blizzard enchantment i am getting ton of warns on the console ,0 4702,17270027531.0,IssuesEvent,2021-07-22 18:29:09,newrelic/docs-website,https://api.github.com/repos/newrelic/docs-website,closed,Update workflow that saves page slugs (add slugs -> add slugs to a pending job),automation eng localization sp:3,"## Description Once we have a new database and utility functions for reading / writing to it (#3074), we need to update [the workflow](https://github.com/newrelic/docs-website/blob/69c8529bdec2baa2750561a4481d6e32fd354c8e/.github/workflows/get-slugs-to-translate.yml) that saves new & modified files to be translated. 
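The sunpy record above ends with hand-writing a glymurrc so glymur can locate the freshly built OpenJPEG. A minimal sketch of scripting that step, assuming the rc file lives at ~/.config/glymur/glymurrc and is ConfigParser-compatible; the library path is the one from the report and will differ per machine.

```python
# Write the glymurrc described in the report, then check what glymur sees.
import configparser
from pathlib import Path

rc_dir = Path.home() / ".config" / "glymur"
rc_dir.mkdir(parents=True, exist_ok=True)

config = configparser.ConfigParser()
config["library"] = {
    "openjp2": "/export/slavin/python/anaconda/lib/libopenjp2.so",
}
with open(rc_dir / "glymurrc", "w") as rc_file:
    config.write(rc_file)

# glymur reads its configuration at import time, so this check is only
# meaningful in a session started after the file was written.
import glymur
print(glymur.version.openjpeg_version)  # attribute assumed present
```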
## Acceptance Criteria * [ ] New & modified to-be-translated pages will be added to the pages table (if they don't already exist) * [ ] New & modified to-be-translated pages will be added to the pending job _for each locale_ (staging) * [ ] Removed should be removed from pending job, the pages table, and any additional logic that currently happens should run",1.0,"Update workflow that saves page slugs (add slugs -> add slugs to a pending job) - ## Description Once we have a new database and utility functions for reading / writing to it (#3074), we need to update [the workflow](https://github.com/newrelic/docs-website/blob/69c8529bdec2baa2750561a4481d6e32fd354c8e/.github/workflows/get-slugs-to-translate.yml) that saves new & modified files to be translated. ## Acceptance Criteria * [ ] New & modified to-be-translated pages will be added to the pages table (if they don't already exist) * [ ] New & modified to-be-translated pages will be added to the pending job _for each locale_ (staging) * [ ] Removed should be removed from pending job, the pages table, and any additional logic that currently happens should run",1,update workflow that saves page slugs add slugs add slugs to a pending job description once we have a new database and utility functions for reading writing to it we need to update that saves new modified files to be translated acceptance criteria new modified to be translated pages will be added to the pages table if they don t already exist new modified to be translated pages will be added to the pending job for each locale staging removed should be removed from pending job the pages table and any additional logic that currently happens should run,1 7336,24661821828.0,IssuesEvent,2022-10-18 07:20:38,keycloak/keycloak-benchmark,https://api.github.com/repos/keycloak/keycloak-benchmark,closed,enhance kcb-ci-runner.sh to support authentication scenarios,enhancement automation,"### Description enhance kcb-ci-runner.sh to support authentication scenarios ### Discussion _No response_ ### Motivation enhance kcb-ci-runner.sh to support authentication scenarios, so that we can run LoginUserPassword, ClientSecret and AuthorizationCode scenarios from a CI system ### Details _No response_",1.0,"enhance kcb-ci-runner.sh to support authentication scenarios - ### Description enhance kcb-ci-runner.sh to support authentication scenarios ### Discussion _No response_ ### Motivation enhance kcb-ci-runner.sh to support authentication scenarios, so that we can run LoginUserPassword, ClientSecret and AuthorizationCode scenarios from a CI system ### Details _No response_",1,enhance kcb ci runner sh to support authentication scenarios description enhance kcb ci runner sh to support authentication scenarios discussion no response motivation enhance kcb ci runner sh to support authentication scenarios so that we can run loginuserpassword clientsecret and authorizationcode scenarios from a ci system details no response ,1 238398,19716876062.0,IssuesEvent,2022-01-13 11:53:54,elastic/elasticsearch,https://api.github.com/repos/elastic/elasticsearch,closed,[CI] MultiVersionRepositoryAccessIT failures ,:Distributed/Snapshot/Restore >test-failure Team:Distributed,"Execution failed for task `:qa:repository-multi-version:v7.11.2#Step1OldClusterTest`. 
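A hedged sketch of the acceptance criteria in the docs-website record above: upsert changed pages into a pages table, attach each to the pending job once per locale, and purge removed pages from both. The SQLite layout, table names, and locale list are illustrative assumptions; the real read/write utilities are the ones referenced from #3074.

```python
# Illustrative only: assumes UNIQUE constraints on pages(slug) and
# pending_job(slug, locale) so INSERT OR IGNORE behaves as an upsert.
import sqlite3

LOCALES = ["ja-JP", "ko-KR"]  # assumed locale list

def queue_pages_for_translation(conn: sqlite3.Connection,
                                changed_slugs, removed_slugs) -> None:
    for slug in changed_slugs:
        # Add the page to the pages table if it doesn't already exist.
        conn.execute("INSERT OR IGNORE INTO pages (slug) VALUES (?)", (slug,))
        # Add the page to the pending job for each locale.
        for locale in LOCALES:
            conn.execute(
                "INSERT OR IGNORE INTO pending_job (slug, locale) VALUES (?, ?)",
                (slug, locale),
            )
    for slug in removed_slugs:
        # Removed pages leave the pending job and the pages table.
        conn.execute("DELETE FROM pending_job WHERE slug = ?", (slug,))
        conn.execute("DELETE FROM pages WHERE slug = ?", (slug,))
    conn.commit()
```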
The following tests fail for `org.elasticsearch.upgrades.MultiVersionRepositoryAccessIT`: - `testReadOnlyRepo` - `testUpgradeMovesRepoToNewMetaVersion` - `testCreateAndRestoreSnapshot` **Build scan**: https://gradle-enterprise.elastic.co/s/wqdizvweltvmi https://gradle-enterprise.elastic.co/s/k4jys7bayx6cs **Repro line**: ``` REPRODUCE WITH: ./gradlew ':qa:repository-multi-version:v7.7.0#Step1OldClusterTest' \ -Dtests.class=""org.elasticsearch.upgrades.MultiVersionRepositoryAccessIT"" \ -Dtests.method=""testReadOnlyRepo"" \ -Dtests.seed=4F0038EE588A6063 \ -Dtests.bwc=true \ -Dtests.locale=sl \ -Dtests.timezone=Etc/UCT \ -Druntime.java=17 REPRODUCE WITH: ./gradlew ':qa:repository-multi-version:v7.7.0#Step1OldClusterTest' \ -Dtests.class=""org.elasticsearch.upgrades.MultiVersionRepositoryAccessIT"" \ -Dtests.method=""testUpgradeMovesRepoToNewMetaVersion"" \ -Dtests.seed=4F0038EE588A6063 \ -Dtests.bwc=true \ -Dtests.locale=sl \ -Dtests.timezone=Etc/UCT \ -Druntime.java=17 REPRODUCE WITH: ./gradlew ':qa:repository-multi-version:v7.7.0#Step1OldClusterTest' \ -Dtests.class=""org.elasticsearch.upgrades.MultiVersionRepositoryAccessIT"" \ -Dtests.method=""testCreateAndRestoreSnapshot"" \ -Dtests.seed=4F0038EE588A6063 \ -Dtests.bwc=true \ -Dtests.locale=sl \ -Dtests.timezone=Etc/UCT \ -Druntime.java=17 ``` **Reproduces locally?**: No **Applicable branches**: master **Failure history**: https://build-stats.elastic.co/app/kibana#/discover?_g=(refreshInterval:(pause:!t,value:0),time:(from:'2021-08-31T21:00:00.000Z',mode:absolute,to:'2021-10-29T13:46:50.119Z'))&_a=(columns:!(_source),index:e58bf320-7efd-11e8-bf69-63c8ef516157,interval:auto,query:(language:lucene,query:'class:%22org.elasticsearch.upgrades.MultiVersionRepositoryAccessIT%22'),sort:!(time,desc)) https://gradle-enterprise.elastic.co/scans/tests?search.startTimeMax=1635515071899&search.startTimeMin=1634850000000&search.timeZoneId=Europe/Athens&tests.container=org.elasticsearch.upgrades.MultiVersionRepositoryAccessIT&tests.sortField=FAILED&tests.unstableOnly=true **Failure excerpt**: ``` org.elasticsearch.upgrades.MultiVersionRepositoryAccessIT > testReadOnlyRepo FAILED org.elasticsearch.client.ResponseException: method [GET], host [http://[::1]:41741], URI [_index_template], status line [HTTP/1.1 400 Bad Request] {""error"":{""root_cause"":[{""type"":""invalid_index_name_exception"",""reason"":""Invalid index name [_index_template], must not start with '_'."",""index_uuid"":""_na_"",""index"":""_index_template""}],""type"":""invalid_index_name_exception"",""reason"":""Invalid index name [_index_template], must not start with '_'."",""index_uuid"":""_na_"",""index"":""_index_template""},""status"":400} at __randomizedtesting.SeedInfo.seed([4F0038EE588A6063:D0C4727D8CF71D9]:0) at app//org.elasticsearch.client.RestClient.convertResponse(RestClient.java:335) at app//org.elasticsearch.client.RestClient.performRequest(RestClient.java:301) at app//org.elasticsearch.client.RestClient.performRequest(RestClient.java:276) at app//org.elasticsearch.test.rest.ESRestTestCase.getAllUnexpectedTemplates(ESRestTestCase.java:855) at app//org.elasticsearch.test.rest.ESRestTestCase.checkForUnexpectedlyRecreatedObjects(ESRestTestCase.java:817) at app//org.elasticsearch.test.rest.ESRestTestCase.cleanUpCluster(ESRestTestCase.java:385) at java.base@17.0.1/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base@17.0.1/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at 
java.base@17.0.1/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base@17.0.1/java.lang.reflect.Method.invoke(Method.java:568) at app//com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758) at app//com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:1004) at app//com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at app//org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44) at app//org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at app//org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45) at app//org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at app//org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at app//com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at app//com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375) at app//com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824) at app//com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475) at app//com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955) at app//com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840) at app//com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891) at app//com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902) at app//org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at app//com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at app//org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38) at app//com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at app//com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at app//com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at app//com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at app//org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at app//org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at app//org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at app//org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at app//org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47) at app//com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at app//com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375) at app//com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831) at java.base@17.0.1/java.lang.Thread.run(Thread.java:833) ``` Also, some tests failed with the following error: ``` 
org.elasticsearch.upgrades.MultiVersionRepositoryAccessIT > testCreateAndRestoreSnapshot FAILED java.lang.AssertionError: Expected no templates after deletions, but found .logstash-management at __randomizedtesting.SeedInfo.seed([8C28F35E378D8CA1:A90D100AA8CED1C]:0) at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.elasticsearch.test.rest.ESRestTestCase.checkForUnexpectedlyRecreatedObjects(ESRestTestCase.java:818) at org.elasticsearch.test.rest.ESRestTestCase.cleanUpCluster(ESRestTestCase.java:385) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:1004) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375) at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831) at java.base/java.lang.Thread.run(Thread.java:833) ``` ",1.0,"[CI] MultiVersionRepositoryAccessIT failures - Execution failed for task `:qa:repository-multi-version:v7.11.2#Step1OldClusterTest`. The following tests fail for `org.elasticsearch.upgrades.MultiVersionRepositoryAccessIT`: - `testReadOnlyRepo` - `testUpgradeMovesRepoToNewMetaVersion` - `testCreateAndRestoreSnapshot` **Build scan**: https://gradle-enterprise.elastic.co/s/wqdizvweltvmi https://gradle-enterprise.elastic.co/s/k4jys7bayx6cs **Repro line**: ``` REPRODUCE WITH: ./gradlew ':qa:repository-multi-version:v7.7.0#Step1OldClusterTest' \ -Dtests.class=""org.elasticsearch.upgrades.MultiVersionRepositoryAccessIT"" \ -Dtests.method=""testReadOnlyRepo"" \ -Dtests.seed=4F0038EE588A6063 \ -Dtests.bwc=true \ -Dtests.locale=sl \ -Dtests.timezone=Etc/UCT \ -Druntime.java=17 REPRODUCE WITH: ./gradlew ':qa:repository-multi-version:v7.7.0#Step1OldClusterTest' \ -Dtests.class=""org.elasticsearch.upgrades.MultiVersionRepositoryAccessIT"" \ -Dtests.method=""testUpgradeMovesRepoToNewMetaVersion"" \ -Dtests.seed=4F0038EE588A6063 \ -Dtests.bwc=true \ -Dtests.locale=sl \ -Dtests.timezone=Etc/UCT \ -Druntime.java=17 REPRODUCE WITH: ./gradlew ':qa:repository-multi-version:v7.7.0#Step1OldClusterTest' \ -Dtests.class=""org.elasticsearch.upgrades.MultiVersionRepositoryAccessIT"" \ -Dtests.method=""testCreateAndRestoreSnapshot"" \ -Dtests.seed=4F0038EE588A6063 \ -Dtests.bwc=true \ -Dtests.locale=sl \ -Dtests.timezone=Etc/UCT \ -Druntime.java=17 ``` **Reproduces locally?**: No **Applicable branches**: master **Failure history**: https://build-stats.elastic.co/app/kibana#/discover?_g=(refreshInterval:(pause:!t,value:0),time:(from:'2021-08-31T21:00:00.000Z',mode:absolute,to:'2021-10-29T13:46:50.119Z'))&_a=(columns:!(_source),index:e58bf320-7efd-11e8-bf69-63c8ef516157,interval:auto,query:(language:lucene,query:'class:%22org.elasticsearch.upgrades.MultiVersionRepositoryAccessIT%22'),sort:!(time,desc)) https://gradle-enterprise.elastic.co/scans/tests?search.startTimeMax=1635515071899&search.startTimeMin=1634850000000&search.timeZoneId=Europe/Athens&tests.container=org.elasticsearch.upgrades.MultiVersionRepositoryAccessIT&tests.sortField=FAILED&tests.unstableOnly=true **Failure excerpt**: ``` org.elasticsearch.upgrades.MultiVersionRepositoryAccessIT > testReadOnlyRepo FAILED org.elasticsearch.client.ResponseException: method [GET], host [http://[::1]:41741], URI [_index_template], status line [HTTP/1.1 400 Bad Request] {""error"":{""root_cause"":[{""type"":""invalid_index_name_exception"",""reason"":""Invalid index name [_index_template], must not start with '_'."",""index_uuid"":""_na_"",""index"":""_index_template""}],""type"":""invalid_index_name_exception"",""reason"":""Invalid index name [_index_template], must not start with '_'."",""index_uuid"":""_na_"",""index"":""_index_template""},""status"":400} at __randomizedtesting.SeedInfo.seed([4F0038EE588A6063:D0C4727D8CF71D9]:0) at 
app//org.elasticsearch.client.RestClient.convertResponse(RestClient.java:335) at app//org.elasticsearch.client.RestClient.performRequest(RestClient.java:301) at app//org.elasticsearch.client.RestClient.performRequest(RestClient.java:276) at app//org.elasticsearch.test.rest.ESRestTestCase.getAllUnexpectedTemplates(ESRestTestCase.java:855) at app//org.elasticsearch.test.rest.ESRestTestCase.checkForUnexpectedlyRecreatedObjects(ESRestTestCase.java:817) at app//org.elasticsearch.test.rest.ESRestTestCase.cleanUpCluster(ESRestTestCase.java:385) at java.base@17.0.1/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base@17.0.1/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base@17.0.1/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base@17.0.1/java.lang.reflect.Method.invoke(Method.java:568) at app//com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758) at app//com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:1004) at app//com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at app//org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44) at app//org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at app//org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45) at app//org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at app//org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at app//com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at app//com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375) at app//com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824) at app//com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475) at app//com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955) at app//com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840) at app//com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891) at app//com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902) at app//org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at app//com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at app//org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38) at app//com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at app//com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at app//com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at app//com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at app//org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at app//org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at 
app//org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at app//org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at app//org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47) at app//com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at app//com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375) at app//com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831) at java.base@17.0.1/java.lang.Thread.run(Thread.java:833) ``` Also, some tests failed with the following error: ``` org.elasticsearch.upgrades.MultiVersionRepositoryAccessIT > testCreateAndRestoreSnapshot FAILED java.lang.AssertionError: Expected no templates after deletions, but found .logstash-management at __randomizedtesting.SeedInfo.seed([8C28F35E378D8CA1:A90D100AA8CED1C]:0) at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.elasticsearch.test.rest.ESRestTestCase.checkForUnexpectedlyRecreatedObjects(ESRestTestCase.java:818) at org.elasticsearch.test.rest.ESRestTestCase.cleanUpCluster(ESRestTestCase.java:385) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:1004) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375) at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831) at java.base/java.lang.Thread.run(Thread.java:833) ``` ",0, multiversionrepositoryaccessit failures execution failed for task qa repository multi version the following tests fail for org elasticsearch upgrades multiversionrepositoryaccessit testreadonlyrepo testupgrademovesrepotonewmetaversion testcreateandrestoresnapshot build scan repro line reproduce with gradlew qa repository multi version dtests class org elasticsearch upgrades multiversionrepositoryaccessit dtests method testreadonlyrepo dtests seed dtests bwc true dtests locale sl dtests timezone etc uct druntime java reproduce with gradlew qa repository multi version dtests class org elasticsearch upgrades multiversionrepositoryaccessit dtests method testupgrademovesrepotonewmetaversion dtests seed dtests bwc true dtests locale sl dtests timezone etc uct druntime java reproduce with gradlew qa repository multi version dtests class org elasticsearch upgrades multiversionrepositoryaccessit dtests method testcreateandrestoresnapshot dtests seed dtests bwc true dtests locale sl dtests timezone etc uct druntime java reproduces locally no applicable branches master failure history failure excerpt org elasticsearch upgrades multiversionrepositoryaccessit testreadonlyrepo failed org elasticsearch client responseexception method host uri status line error root cause must not start with index uuid na index index template type invalid index name exception reason invalid index name must not start with index uuid na index index template status at randomizedtesting seedinfo seed at app org elasticsearch client restclient convertresponse restclient java at app org elasticsearch client restclient performrequest restclient java at app org elasticsearch client restclient performrequest restclient java at app org elasticsearch test rest esresttestcase getallunexpectedtemplates esresttestcase java at app org elasticsearch test rest esresttestcase checkforunexpectedlyrecreatedobjects esresttestcase java at app org elasticsearch test rest esresttestcase cleanupcluster esresttestcase java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl 
invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at app com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at app com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at app com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at app org apache lucene util testrulesetupteardownchained evaluate testrulesetupteardownchained java at app org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at app org apache lucene util testrulethreadandtestname evaluate testrulethreadandtestname java at app org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at app org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at app com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at app com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at app com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at app com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at app com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at app com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at app com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at app com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at app org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at app com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at app org apache lucene util testrulestoreclassname evaluate testrulestoreclassname java at app com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at app com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at app com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at app com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at app org apache lucene util testruleassertionsrequired evaluate testruleassertionsrequired java at app org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at app org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at app org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at app org apache lucene util testruleignoretestsuites evaluate testruleignoretestsuites java at app com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at app com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at app com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java base java lang thread run thread java also some tests failed with the following error org elasticsearch upgrades multiversionrepositoryaccessit testcreateandrestoresnapshot failed java lang assertionerror expected no templates after deletions but found logstash management at randomizedtesting seedinfo seed at org junit assert fail assert java at org junit assert asserttrue assert java at org elasticsearch test 
rest esresttestcase checkforunexpectedlyrecreatedobjects esresttestcase java at org elasticsearch test rest esresttestcase cleanupcluster esresttestcase java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java base java lang thread run thread java ,0 32449,12127361209.0,IssuesEvent,2020-04-22 
18:35:29,lgmorand/aks-checklist,https://api.github.com/repos/lgmorand/aks-checklist,closed,"CVE-2020-7598 (High) detected in minimist-0.0.8.tgz, minimist-1.2.0.tgz",security vulnerability,"## CVE-2020-7598 - High Severity Vulnerability
Vulnerable Libraries - minimist-0.0.8.tgz, minimist-1.2.0.tgz

minimist-0.0.8.tgz

parse argument options

Library home page: https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz

Path to dependency file: /tmp/ws-scm/aks-checklist/package.json

Path to vulnerable library: /tmp/ws-scm/aks-checklist/node_modules/minimist/package.json

Dependency Hierarchy:
- gulp-webpack-1.5.0.tgz (Root Library)
  - webpack-1.15.0.tgz
    - optimist-0.6.1.tgz
      - :x: **minimist-0.0.8.tgz** (Vulnerable Library)

minimist-1.2.0.tgz

parse argument options

Library home page: https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz

Path to dependency file: /tmp/ws-scm/aks-checklist/package.json

Path to vulnerable library: /tmp/ws-scm/aks-checklist/node_modules/gulp-util/node_modules/minimist/package.json

Dependency Hierarchy:
- gulp-util-3.0.8.tgz (Root Library)
  - :x: **minimist-1.2.0.tgz** (Vulnerable Library)

Found in HEAD commit: 926dbe40d781fa9d6192ead52fce59376cdc9666

Vulnerability Details

minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a ""constructor"" or ""__proto__"" payload.

Publish Date: 2020-03-11

URL: CVE-2020-7598

CVSS 3 Score Details (9.8)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94

Release Date: 2020-03-11

Fix Resolution: minimist - 0.2.1, 1.2.3
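The vulnerability above is JavaScript prototype pollution, so no payload is reproduced here; instead, a hedged Python helper that walks node_modules and flags minimist installs older than the fix versions listed in this report (0.2.1 on the 0.x line, 1.2.3 on the 1.x line). Plain numeric versions are assumed; pre-release suffixes would need extra parsing.

```python
# Flag vulnerable minimist copies under node_modules (CVE-2020-7598).
import json
from pathlib import Path

FIXED = {0: (0, 2, 1), 1: (1, 2, 3)}  # first fixed release per major line

def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split(".")[:3])

def vulnerable(version: str) -> bool:
    v = parse(version)
    fixed = FIXED.get(v[0])
    return fixed is not None and v < fixed

for pkg in Path("node_modules").rglob("minimist/package.json"):
    version = json.loads(pkg.read_text())["version"]
    if vulnerable(version):
        print(f"{pkg.parent}: minimist {version} needs upgrading")
```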

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2020-7598 (High) detected in minimist-0.0.8.tgz, minimist-1.2.0.tgz - ## CVE-2020-7598 - High Severity Vulnerability
Vulnerable Libraries - minimist-0.0.8.tgz, minimist-1.2.0.tgz

minimist-0.0.8.tgz

parse argument options

Library home page: https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz

Path to dependency file: /tmp/ws-scm/aks-checklist/package.json

Path to vulnerable library: /tmp/ws-scm/aks-checklist/node_modules/minimist/package.json

Dependency Hierarchy:
- gulp-webpack-1.5.0.tgz (Root Library)
  - webpack-1.15.0.tgz
    - optimist-0.6.1.tgz
      - :x: **minimist-0.0.8.tgz** (Vulnerable Library)

minimist-1.2.0.tgz

parse argument options

Library home page: https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz

Path to dependency file: /tmp/ws-scm/aks-checklist/package.json

Path to vulnerable library: /tmp/ws-scm/aks-checklist/node_modules/gulp-util/node_modules/minimist/package.json

Dependency Hierarchy:
- gulp-util-3.0.8.tgz (Root Library)
  - :x: **minimist-1.2.0.tgz** (Vulnerable Library)

Found in HEAD commit: 926dbe40d781fa9d6192ead52fce59376cdc9666

Vulnerability Details

minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a ""constructor"" or ""__proto__"" payload.

Publish Date: 2020-03-11

URL: CVE-2020-7598

CVSS 3 Score Details (9.8)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94

Release Date: 2020-03-11

Fix Resolution: minimist - 0.2.1, 1.2.3

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in minimist tgz minimist tgz cve high severity vulnerability vulnerable libraries minimist tgz minimist tgz minimist tgz parse argument options library home page a href path to dependency file tmp ws scm aks checklist package json path to vulnerable library tmp ws scm aks checklist node modules minimist package json dependency hierarchy gulp webpack tgz root library webpack tgz optimist tgz x minimist tgz vulnerable library minimist tgz parse argument options library home page a href path to dependency file tmp ws scm aks checklist package json path to vulnerable library tmp ws scm aks checklist node modules gulp util node modules minimist package json dependency hierarchy gulp util tgz root library x minimist tgz vulnerable library found in head commit a href vulnerability details minimist before could be tricked into adding or modifying properties of object prototype using a constructor or proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution minimist step up your open source security game with whitesource ,0 502,6776697666.0,IssuesEvent,2017-10-27 18:54:10,rancher/rancher,https://api.github.com/repos/rancher/rancher,closed,"Container fails to start with error -Conflict. The name """" is already in use by container",kind/bug setup/automation,"Rancher-server version - v1.3.0 RHEL 7.3 - docker version - 1.10.3 Following automation run fails with - ```Conflict. 
The name """" is already in use by container``` ``` admin_client = client = , rancher_compose_container = None socat_containers = None def test_rancher_compose_service_option_2(admin_client, client, rancher_compose_container, socat_containers): hosts = client.list_host(kind='docker', removed_null=True, state=""active"") cpu_shares = 400 ulimit = {""hard"": 1024, ""name"": ""cpu"", ""soft"": 1024} ulimit_inspect = {""Hard"": 1024, ""Name"": ""cpu"", ""Soft"": 1024} ipcMode = ""host"" sysctls = {""net.ipv4.ip_forward"": ""1""} dev_opts = { '/dev/null': { 'readIops': 2000, 'writeIops': 3000, 'readBps': 4000, 'writeBps': 200, } } cpu_shares = 400 blkio_weight = 1000 cpu_period = 10000 cpu_quota = 20000 cpu_set = ""0"" cpu_setmems = ""0"" dns_opt = [""abc""] group_add = [""root""] kernel_memory = 6000000 memory_reservation = 5000000 memory_swap = -1 memory_swappiness = 100 oom_killdisable = True oom_scoreadj = 100 read_only = True shm_size = 1024 stop_signal = ""SIGTERM"" uts = ""host"" memory = 8000000 dev_opts_inspect = {u""Path"": ""/dev/null"", u""Rate"": 400} cgroup_parent = ""xyz"" extraHosts = [""host1:10.1.1.1"", ""host2:10.2.2.2""] tmp_fs = {""/tmp"": ""rw""} security_opt = [""label=user:USER"", ""label=role:ROLE""] launch_config = {""imageUuid"": TEST_SERVICE_OPT_IMAGE_UUID, ""extraHosts"": extraHosts, ""privileged"": True, ""cpuShares"": cpu_shares, ""blkioWeight"": blkio_weight, ""blkioDeviceOptions"": dev_opts, ""cgroupParent"": cgroup_parent, ""cpuShares"": cpu_shares, ""cpuPeriod"": cpu_period, ""cpuQuota"": cpu_quota, ""cpuSet"": cpu_set, ""cpuSetMems"": cpu_setmems, ""dnsOpt"": dns_opt, ""groupAdd"": group_add, ""kernelMemory"": kernel_memory, ""memory"": memory, ""memoryReservation"": memory_reservation, ""memorySwap"": memory_swap, ""memorySwappiness"": memory_swappiness, ""oomKillDisable"": oom_killdisable, ""oomScoreAdj"": oom_scoreadj, ""readOnly"": read_only, ""securityOpt"": security_opt, ""shmSize"": shm_size, ""stopSignal"": stop_signal, ""sysctls"": sysctls, ""tmpfs"": tmp_fs, ""ulimits"": [ulimit], ""ipcMode"": ipcMode, ""uts"": uts, ""requestedHostId"": hosts[0].id } scale = 2 service, env = create_env_and_svc(client, launch_config, scale) launch_rancher_compose(client, env) rancher_envs = client.list_stack(name=env.name+""rancher"") assert len(rancher_envs) == 1 rancher_env = rancher_envs[0] rancher_service = get_rancher_compose_service( > client, rancher_env.id, service) /Users/sangeethahariharan1/cattle/validation/validation-tests/tests/v2_validation/cattlevalidationtest/core/test_rancher_compose.py:212: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /Users/sangeethahariharan1/cattle/validation/validation-tests/tests/v2_validation/cattlevalidationtest/core/test_rancher_compose.py:832: in get_rancher_compose_service rancher_service = client.wait_success(rancher_service, 120) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = obj = {'createdTS': 1483748893000, 'launchConfig': {'tty': False, 'blkioWeight': 1000, 'labels': {'io.rancher.service.hash':...13Z', 'scalePolicy': None, 'fqdn': None, 'createIndex': 2, 'instanceIds': [u'1i7278', u'1i7279'], 'selectorLink': None} timeout = 120 def wait_success(self, obj, timeout=DEFAULT_TIMEOUT): obj = self.wait_transitioning(obj, timeout) if obj.transitioning != 'no': > raise gdapi.ClientApiError(obj.transitioningMessage) E ClientApiError: Expected state running but got error: Error response from daemon: Conflict. 
The name ""/r-test341701rancher-test825055-1-371a1483"" is already in use by container 999782634ff22c4d977d7f6aea39f44e3bb9eefeef9f19277a46ee19b20cee7f. You have to remove (or rename) that container to be able to reuse that name. ```",1.0,"Container fails to start with error -Conflict. The name """" is already in use by container - Rancher-server version - v1.3.0 RHEL 7.3 - docker version - 1.10.3 Following automation run fails with - ```Conflict. The name """" is already in use by container``` ``` admin_client = client = , rancher_compose_container = None socat_containers = None def test_rancher_compose_service_option_2(admin_client, client, rancher_compose_container, socat_containers): hosts = client.list_host(kind='docker', removed_null=True, state=""active"") cpu_shares = 400 ulimit = {""hard"": 1024, ""name"": ""cpu"", ""soft"": 1024} ulimit_inspect = {""Hard"": 1024, ""Name"": ""cpu"", ""Soft"": 1024} ipcMode = ""host"" sysctls = {""net.ipv4.ip_forward"": ""1""} dev_opts = { '/dev/null': { 'readIops': 2000, 'writeIops': 3000, 'readBps': 4000, 'writeBps': 200, } } cpu_shares = 400 blkio_weight = 1000 cpu_period = 10000 cpu_quota = 20000 cpu_set = ""0"" cpu_setmems = ""0"" dns_opt = [""abc""] group_add = [""root""] kernel_memory = 6000000 memory_reservation = 5000000 memory_swap = -1 memory_swappiness = 100 oom_killdisable = True oom_scoreadj = 100 read_only = True shm_size = 1024 stop_signal = ""SIGTERM"" uts = ""host"" memory = 8000000 dev_opts_inspect = {u""Path"": ""/dev/null"", u""Rate"": 400} cgroup_parent = ""xyz"" extraHosts = [""host1:10.1.1.1"", ""host2:10.2.2.2""] tmp_fs = {""/tmp"": ""rw""} security_opt = [""label=user:USER"", ""label=role:ROLE""] launch_config = {""imageUuid"": TEST_SERVICE_OPT_IMAGE_UUID, ""extraHosts"": extraHosts, ""privileged"": True, ""cpuShares"": cpu_shares, ""blkioWeight"": blkio_weight, ""blkioDeviceOptions"": dev_opts, ""cgroupParent"": cgroup_parent, ""cpuShares"": cpu_shares, ""cpuPeriod"": cpu_period, ""cpuQuota"": cpu_quota, ""cpuSet"": cpu_set, ""cpuSetMems"": cpu_setmems, ""dnsOpt"": dns_opt, ""groupAdd"": group_add, ""kernelMemory"": kernel_memory, ""memory"": memory, ""memoryReservation"": memory_reservation, ""memorySwap"": memory_swap, ""memorySwappiness"": memory_swappiness, ""oomKillDisable"": oom_killdisable, ""oomScoreAdj"": oom_scoreadj, ""readOnly"": read_only, ""securityOpt"": security_opt, ""shmSize"": shm_size, ""stopSignal"": stop_signal, ""sysctls"": sysctls, ""tmpfs"": tmp_fs, ""ulimits"": [ulimit], ""ipcMode"": ipcMode, ""uts"": uts, ""requestedHostId"": hosts[0].id } scale = 2 service, env = create_env_and_svc(client, launch_config, scale) launch_rancher_compose(client, env) rancher_envs = client.list_stack(name=env.name+""rancher"") assert len(rancher_envs) == 1 rancher_env = rancher_envs[0] rancher_service = get_rancher_compose_service( > client, rancher_env.id, service) /Users/sangeethahariharan1/cattle/validation/validation-tests/tests/v2_validation/cattlevalidationtest/core/test_rancher_compose.py:212: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /Users/sangeethahariharan1/cattle/validation/validation-tests/tests/v2_validation/cattlevalidationtest/core/test_rancher_compose.py:832: in get_rancher_compose_service rancher_service = client.wait_success(rancher_service, 120) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = obj = {'createdTS': 1483748893000, 'launchConfig': {'tty': False, 'blkioWeight': 1000, 'labels': 
{'io.rancher.service.hash':...13Z', 'scalePolicy': None, 'fqdn': None, 'createIndex': 2, 'instanceIds': [u'1i7278', u'1i7279'], 'selectorLink': None} timeout = 120 def wait_success(self, obj, timeout=DEFAULT_TIMEOUT): obj = self.wait_transitioning(obj, timeout) if obj.transitioning != 'no': > raise gdapi.ClientApiError(obj.transitioningMessage) E ClientApiError: Expected state running but got error: Error response from daemon: Conflict. The name ""/r-test341701rancher-test825055-1-371a1483"" is already in use by container 999782634ff22c4d977d7f6aea39f44e3bb9eefeef9f19277a46ee19b20cee7f. You have to remove (or rename) that container to be able to reuse that name. ```",1,container fails to start with error conflict the name is already in use by container rancher server version rhel docker version following automation run fails with conflict the name is already in use by container admin client client rancher compose container none socat containers none def test rancher compose service option admin client client rancher compose container socat containers hosts client list host kind docker removed null true state active cpu shares ulimit hard name cpu soft ulimit inspect hard name cpu soft ipcmode host sysctls net ip forward dev opts dev null readiops writeiops readbps writebps cpu shares blkio weight cpu period cpu quota cpu set cpu setmems dns opt group add kernel memory memory reservation memory swap memory swappiness oom killdisable true oom scoreadj read only true shm size stop signal sigterm uts host memory dev opts inspect u path dev null u rate cgroup parent xyz extrahosts tmp fs tmp rw security opt launch config imageuuid test service opt image uuid extrahosts extrahosts privileged true cpushares cpu shares blkioweight blkio weight blkiodeviceoptions dev opts cgroupparent cgroup parent cpushares cpu shares cpuperiod cpu period cpuquota cpu quota cpuset cpu set cpusetmems cpu setmems dnsopt dns opt groupadd group add kernelmemory kernel memory memory memory memoryreservation memory reservation memoryswap memory swap memoryswappiness memory swappiness oomkilldisable oom killdisable oomscoreadj oom scoreadj readonly read only securityopt security opt shmsize shm size stopsignal stop signal sysctls sysctls tmpfs tmp fs ulimits ipcmode ipcmode uts uts requestedhostid hosts id scale service env create env and svc client launch config scale launch rancher compose client env rancher envs client list stack name env name rancher assert len rancher envs rancher env rancher envs rancher service get rancher compose service client rancher env id service users cattle validation validation tests tests validation cattlevalidationtest core test rancher compose py users cattle validation validation tests tests validation cattlevalidationtest core test rancher compose py in get rancher compose service rancher service client wait success rancher service self obj createdts launchconfig tty false blkioweight labels io rancher service hash scalepolicy none fqdn none createindex instanceids selectorlink none timeout def wait success self obj timeout default timeout obj self wait transitioning obj timeout if obj transitioning no raise gdapi clientapierror obj transitioningmessage e clientapierror expected state running but got error error response from daemon conflict the name r is already in use by container you have to remove or rename that container to be able to reuse that name ,1 2220,11592911116.0,IssuesEvent,2020-02-24 
12:32:46,spacemeshos/go-spacemesh,https://api.github.com/repos/spacemeshos/go-spacemesh,closed,Validate block/ATX creation while adding/removing miners,ATX automation block,"# Overview / Motivation create a test checking the effect of adding and removing miners in the aspect of blocks per epoch and ATX publication. # The Test epoch i: - start with x miners - wait an epoch epoch i+1 - validate total miner generated Tavg/x (floored) - add a miner - validate miner created an ATX - wait an epoch epoch i+2: - validate total miner generated Tavg/x+1 (floored) create same test only with removing a miner and validate for Tavg/x-1 ",1.0,"Validate block/ATX creation while adding/removing miners - # Overview / Motivation create a test checking the effect of adding and removing miners in the aspect of blocks per epoch and ATX publication. # The Test epoch i: - start with x miners - wait an epoch epoch i+1 - validate total miner generated Tavg/x (floored) - add a miner - validate miner created an ATX - wait an epoch epoch i+2: - validate total miner generated Tavg/x+1 (floored) create same test only with removing a miner and validate for Tavg/x-1 ",1,validate block atx creation while adding removing miners overview motivation create a test checking the effect of adding and removing miners in the aspect of blocks per epoch and atx publication the test epoch i start with x miners wait an epoch epoch i validate total miner generated tavg x floored add a miner validate miner created an atx wait an epoch epoch i validate total miner generated tavg x floored create same test only with removing a miner and validate for tavg x ,1 1143,2698792789.0,IssuesEvent,2015-04-03 11:07:20,neovim/neovim,https://api.github.com/repos/neovim/neovim,closed,travis: test on OSX too,buildsystem,"Travis is getting awesomer and awesomer. We once again see the possibility of augmenting our test capability: travis seems to support OSX now: **NOTE:** I haven't been able to find if it's now possible to specify all that is necessary in one `.travis.yml` file. This used to be impossible in earlier betas. Information seems scarce. - https://github.com/travis-ci/travis-ci/issues/216 (people usually link to this when they have a PR for their project, so we can look at examples there) - http://docs.travis-ci.com/user/osx-ci-environment/ - https://github.com/citra-emu/citra/pull/7 (this seems like a good example to follow) It seems one needs the `os` key: ```yaml os: - osx - linux ``` I don't see a need of building all our current lines on OSX though, just one should be enough. Ideally there would also come a FreeBSD build with time. Travis [doesn't support FreeBSD at the moment, though](https://github.com/travis-ci/travis-ci/issues/1818). --- Want to back this issue? **[Place a bounty on it!](https://www.bountysource.com/issues/2345752-travis-test-on-osx-too?utm_campaign=plugin&utm_content=tracker%2F461131&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F461131&utm_medium=issues&utm_source=github). ",1.0,"travis: test on OSX too - Travis is getting awesomer and awesomer. We once again see the possibility of augmenting our test capability: travis seems to support OSX now: **NOTE:** I haven't been able to find if it's now possible to specify all that is necessary in one `.travis.yml` file. This used to be impossible in earlier betas. Information seems scarce. 
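The spacemesh block-count assertions above come down to integer division: with x active miners and a per-epoch target of Tavg blocks, each miner should produce floor(Tavg/x) blocks. A minimal sketch of that check; all names are hypothetical and this is not the go-spacemesh test harness:
```python
# Illustrative sketch of the per-epoch check in the spacemesh test plan above;
# names are hypothetical and this is not the go-spacemesh test harness.
def expected_blocks_per_miner(tavg: int, miners: int) -> int:
    return tavg // miners  # integer division matches "Tavg/x (floored)"

def validate_epoch(per_miner_counts, tavg: int) -> bool:
    expected = expected_blocks_per_miner(tavg, len(per_miner_counts))
    return all(count == expected for count in per_miner_counts)

# Adding a miner moves the target from Tavg/x to Tavg/(x+1):
assert expected_blocks_per_miner(200, 4) == 50
assert expected_blocks_per_miner(200, 5) == 40
```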
- https://github.com/travis-ci/travis-ci/issues/216 (people usually link to this when they have a PR for their project, so we can look at examples there) - http://docs.travis-ci.com/user/osx-ci-environment/ - https://github.com/citra-emu/citra/pull/7 (this seems like a good example to follow) It seems one needs the `os` key: ```yaml os: - osx - linux ``` I don't see a need of building all our current lines on OSX though, just one should be enough. Ideally there would also come a FreeBSD build with time. Travis [doesn't support FreeBSD at the moment, though](https://github.com/travis-ci/travis-ci/issues/1818). --- Want to back this issue? **[Place a bounty on it!](https://www.bountysource.com/issues/2345752-travis-test-on-osx-too?utm_campaign=plugin&utm_content=tracker%2F461131&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F461131&utm_medium=issues&utm_source=github). ",0,travis test on osx too travis is getting awesomer and awesomer we once again see the possibility of augmenting our test capability travis seems to support osx now note i haven t been able to find if it s now possible to specify all that is necessary in one travis yml file this used to be impossible in earlier betas information seems scarce people usually link to this when they have a pr for their project so we can look at examples there this seems like a good example to follow it seems one needs the os key yaml os osx linux i don t see a need of building all our current lines on osx though just one should be enough ideally there would also come a freebsd build with time travis want to back this issue we accept bounties via ,0 286,5171182056.0,IssuesEvent,2017-01-18 09:36:05,eclipse/smarthome,https://api.github.com/repos/eclipse/smarthome,opened,Module outputs are not available in scripts,Automation bug,"The values of outputs of other modules are made available to script, which is also directly supported by the rule editor: ![screen shot 2017-01-18 at 10 15 23](https://cloud.githubusercontent.com/assets/3244965/22057952/507b7474-dd67-11e6-88b9-7b4b69045d65.png) When trying to access those outputs, an exception is thrown. The problem is that in the rule context, they are made available as e.g. ""1.newState"", which is provided as a map 1->newState in within the rule execution context. It is probably rather a bad idea to define ""1"" as a variable name... ",1.0,"Module outputs are not available in scripts - The values of outputs of other modules are made available to script, which is also directly supported by the rule editor: ![screen shot 2017-01-18 at 10 15 23](https://cloud.githubusercontent.com/assets/3244965/22057952/507b7474-dd67-11e6-88b9-7b4b69045d65.png) When trying to access those outputs, an exception is thrown. The problem is that in the rule context, they are made available as e.g. ""1.newState"", which is provided as a map 1->newState in within the rule execution context. It is probably rather a bad idea to define ""1"" as a variable name... 
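The problem described in the module-outputs report above is easy to demonstrate outside the rule engine: an output keyed by "1" can only ever be reached through an explicit map lookup, because "1" is not a legal identifier. A small plain-Python illustration (not the actual scripting implementation):
```python
# Plain-Python illustration (not the rule-engine code): outputs provided as a
# map keyed by module id, e.g. "1" -> {"newState": ...}, as described above.
outputs = {"1": {"newState": "ON"}}

print(outputs["1"]["newState"])  # works: explicit dict lookup

# Binding the same value to a variable literally named "1" is impossible:
# `1 = {"newState": "ON"}` is a SyntaxError, since identifiers cannot start
# with a digit. So "1.newState" can never resolve as variable access.
```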
",1,module outputs are not available in scripts the values of outputs of other modules are made available to script which is also directly supported by the rule editor when trying to access those outputs an exception is thrown the problem is that in the rule context they are made available as e g newstate which is provided as a map newstate in within the rule execution context it is probably rather a bad idea to define as a variable name ,1 7962,25922384803.0,IssuesEvent,2022-12-15 23:33:24,Carbonable/carbonable-dapp,https://api.github.com/repos/Carbonable/carbonable-dapp,opened,Implement Apibara Indexer to listen starknet events and store data which come from on-chain,Type: Feature Complexity: Complex Readiness: Approved Component: Backend Epic: Infra Automation,"**Description** --- Implement Apibara Indexer to listen starknet events and store data which come from on-chain in a backend. **Acceptance Criteria** --- Features needed to be implement : - [ ] Integration and configuration of Indexer SDK - [ ] Storage into database - [ ] Exposing APIs to access data **Additional context** --- Apibara - what does it offer? An SDK (in Python and TypeScrypt) that allows you to listen to the events of the Starknet blockchain and store them in the database. You have to configure the SDK with the address of the smart contract and the name of the events (function) it has to listen to. Once configured and the backend that integrates the SDK launched/started, the events are ""streamed"" through the gRPC protocol and the backend needs to manage them as it wants (processing, storage, etc...). This Indexer SDK must be integrated into a backend developed by the business/enterprise/architecture that wishes to use it. It is also necessary to configure the storage of these events (Prisma + Postrgesql for example) and to implement the APIs which must be exposed by this backend. Roadmap : - Currently only two client SDKs: - **Python**: the most used and most battletested but it will not be maintained by Apibara in the futur (only community). - **TypeScrypt**: the most recent and least battletested, but it is this one that Apibara will maintain and improve. - New release during the week of december 19th to 23rd (either SDK TypeScrypt or both, I didn't ask the question). -> It is recommended to wait for this new release before implementing our Indexer. - In 2 to 4 months (or even 6 months), a serverless service, in the form of a lambda, will be proposed to consume the starknet events without having to host a backend/server. Proof of Concept : It is recommended, if we want to make the implementation in TypeScrypt, to use the workshop made in this repo and replace the database connector by Prisma : - Workshop : https://github.com/apibara/encode-club-workshop-nov22/tree/04-api-server (For the moment, there is only a template in Python but not yet in TypeScrypt). ",1.0,"Implement Apibara Indexer to listen starknet events and store data which come from on-chain - **Description** --- Implement Apibara Indexer to listen starknet events and store data which come from on-chain in a backend. **Acceptance Criteria** --- Features needed to be implement : - [ ] Integration and configuration of Indexer SDK - [ ] Storage into database - [ ] Exposing APIs to access data **Additional context** --- Apibara - what does it offer? An SDK (in Python and TypeScrypt) that allows you to listen to the events of the Starknet blockchain and store them in the database. 
You have to configure the SDK with the address of the smart contract and the name of the events (function) it has to listen to. Once configured and the backend that integrates the SDK launched/started, the events are ""streamed"" through the gRPC protocol and the backend needs to manage them as it wants (processing, storage, etc...). This Indexer SDK must be integrated into a backend developed by the business/enterprise/architecture that wishes to use it. It is also necessary to configure the storage of these events (Prisma + Postrgesql for example) and to implement the APIs which must be exposed by this backend. Roadmap : - Currently only two client SDKs: - **Python**: the most used and most battletested but it will not be maintained by Apibara in the futur (only community). - **TypeScrypt**: the most recent and least battletested, but it is this one that Apibara will maintain and improve. - New release during the week of december 19th to 23rd (either SDK TypeScrypt or both, I didn't ask the question). -> It is recommended to wait for this new release before implementing our Indexer. - In 2 to 4 months (or even 6 months), a serverless service, in the form of a lambda, will be proposed to consume the starknet events without having to host a backend/server. Proof of Concept : It is recommended, if we want to make the implementation in TypeScrypt, to use the workshop made in this repo and replace the database connector by Prisma : - Workshop : https://github.com/apibara/encode-club-workshop-nov22/tree/04-api-server (For the moment, there is only a template in Python but not yet in TypeScrypt). ",1,implement apibara indexer to listen starknet events and store data which come from on chain description implement apibara indexer to listen starknet events and store data which come from on chain in a backend acceptance criteria features needed to be implement integration and configuration of indexer sdk storage into database exposing apis to access data additional context apibara what does it offer an sdk in python and typescrypt that allows you to listen to the events of the starknet blockchain and store them in the database you have to configure the sdk with the address of the smart contract and the name of the events function it has to listen to once configured and the backend that integrates the sdk launched started the events are streamed through the grpc protocol and the backend needs to manage them as it wants processing storage etc this indexer sdk must be integrated into a backend developed by the business enterprise architecture that wishes to use it it is also necessary to configure the storage of these events prisma postrgesql for example and to implement the apis which must be exposed by this backend roadmap currently only two client sdks python the most used and most battletested but it will not be maintained by apibara in the futur only community typescrypt the most recent and least battletested but it is this one that apibara will maintain and improve new release during the week of december to either sdk typescrypt or both i didn t ask the question it is recommended to wait for this new release before implementing our indexer in to months or even months a serverless service in the form of a lambda will be proposed to consume the starknet events without having to host a backend server proof of concept it is recommended if we want to make the implementation in typescrypt to use the workshop made in this repo and replace the database connector by prisma workshop for the moment there 
is only a template in python but not yet in typescrypt ,1 7445,24895322162.0,IssuesEvent,2022-10-28 15:18:47,astropy/astropy,https://api.github.com/repos/astropy/astropy,opened,Automated notification on when to bump minversion of required dependencies as per APE 18,Feature Request Release dev-automation,"Would be nice to have automated notice to implement what was laid out in APE 18 so we do not suddenly realize we have to do it and then drop a few old numpy versions all at once. APE 18: https://github.com/astropy/astropy-APEs/blob/main/APE18.rst tl;dr *Astropy core will support:* * *All minor versions of Python released 42 months prior to a non-bugfix, and at minimum the two latest minor versions.* * *All minor versions of numpy released in the 24 months prior to a non-bugfix, and at minimum the last three minor versions.* * *Versions of other runtime dependencies released 24 months prior to a non-bugfix release.*",1.0,"Automated notification on when to bump minversion of required dependencies as per APE 18 - Would be nice to have automated notice to implement what was laid out in APE 18 so we do not suddenly realize we have to do it and then drop a few old numpy versions all at once. APE 18: https://github.com/astropy/astropy-APEs/blob/main/APE18.rst tl;dr *Astropy core will support:* * *All minor versions of Python released 42 months prior to a non-bugfix, and at minimum the two latest minor versions.* * *All minor versions of numpy released in the 24 months prior to a non-bugfix, and at minimum the last three minor versions.* * *Versions of other runtime dependencies released 24 months prior to a non-bugfix release.*",1,automated notification on when to bump minversion of required dependencies as per ape would be nice to have automated notice to implement what was laid out in ape so we do not suddenly realize we have to do it and then drop a few old numpy versions all at once ape tl dr astropy core will support all minor versions of python released months prior to a non bugfix and at minimum the two latest minor versions all minor versions of numpy released in the months prior to a non bugfix and at minimum the last three minor versions versions of other runtime dependencies released months prior to a non bugfix release ,1 428044,29924697265.0,IssuesEvent,2023-06-22 03:47:55,Xithrius/twitch-tui,https://api.github.com/repos/Xithrius/twitch-tui,closed,Put emotes into a feature flag,area: documentation type: enhancement area: dependencies,"Not all terminals support Kitty's graphics protocol, and a bunch of dependencies are pulled just to make this implementation work. Of course, document in the README how to get emotes to work so if a user wants them, they can have them.",1.0,"Put emotes into a feature flag - Not all terminals support Kitty's graphics protocol, and a bunch of dependencies are pulled just to make this implementation work. Of course, document in the README how to get emotes to work so if a user wants them, they can have them.",0,put emotes into a feature flag not all terminals support kitty s graphics protocol and a bunch of dependencies are pulled just to make this implementation work of course document in the readme how to get emotes to work so if a user wants them they can have them ,0 330233,24251915060.0,IssuesEvent,2022-09-27 14:46:28,castor-software/deptrim,https://api.github.com/repos/castor-software/deptrim,closed,Draft README,documentation,"Today it is not clear what the purpose of DepTrim is. 
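Looking back at the Apibara description above, the whole pipeline is: configure a contract address and event names, consume the gRPC event stream, persist each event, and expose the stored rows through an API. A schematic sketch of that flow; the `stream` and `db` interfaces below are invented stand-ins, not the real Apibara SDK:
```python
# Schematic sketch of the indexer flow from the Apibara issue above. The
# stream/db objects are invented stand-ins, NOT real Apibara SDK APIs; only
# the overall shape (filter, consume a gRPC stream, persist) follows the text.
CONTRACT_ADDRESS = "0x0123abc"        # placeholder: address the SDK is configured with
EVENT_NAMES = ("Minted", "Transfer")  # placeholder: event names to listen for

def run_indexer(stream, db) -> None:
    # Events arrive over gRPC; the backend decides how to process and store
    # them (the issue suggests Prisma + PostgreSQL for storage).
    for event in stream.subscribe(address=CONTRACT_ADDRESS, events=EVENT_NAMES):
        db.insert("starknet_events", event)  # persisted rows back an API layer
```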
Therefore, it is necessary to decide on: - Input/Output - Architecture - Usage ",1.0,"Draft README - Today it is not clear what the purpose of DepTrim is. Therefore, it is necessary to decide on: - Input/Output - Architecture - Usage ",0,draft readme today it is not clear what the purpose of deptrim is therefore it is necessary to decide on input output architecture usage ,0 291177,25127409594.0,IssuesEvent,2022-11-09 12:49:49,Uuvana-Studios/longvinter-windows-client,https://api.github.com/repos/Uuvana-Studios/longvinter-windows-client,closed,Achievements still broken,Bug Not Tested,"### _**Description**_ _Same issue(s) as #1286, #691, #620, #776 and exactly as #421_ - Cannot get 5 broken achievements. (Tested all my remaining locked aside from 10 and 100 kills) - I was in a team with my better half and every time I cut down a tree they got a steam achievement progress update popup adding +1 to their 100 trees cut achievement, not me. - Partner achieved 100 trees cut, I never did. Aside from that one, we are missing the same amount of bugged achievements. - I have tried collecting all plants on EUW Uuvana 2, as well as NA Uuvana 2 and Uuvana 5. No achievement. - Attempted on ""Game Version 1.0.8"" **as well as** the current ""Game Version 1.0.9b"" ![bug1](https://user-images.githubusercontent.com/74375454/199508755-67199538-d940-4b6f-9f7a-8da2dd3e74cd.jpg) ![bug 2](https://user-images.githubusercontent.com/74375454/199509099-314f722c-1243-48bc-99d7-162ab610dc82.png) ### **_To Reproduce_** 1. Get all fish in your first 10 hours of gameplay. 2. Chop down 100 trees, complete all other compendium tabs, heal yourself to full health after having 1 health point. 3. Steam Achievements don't update. ### **_Expected behaviour_** Able to achieve Steam Achievements. ### _**Desktop:**_ OS: Windows 10 (64 bit) Game Version: 1.0.9b Steam Version: Latest (1666144119) ![steam vers](https://user-images.githubusercontent.com/74375454/199513247-f4ce840a-fb68-4e80-88e0-57a37b69d940.png) ### **_Additional Context_** I've chopped down about 200 trees, completed the fishing compendium tab twice on 2 separate NA servers, completed the plants compendium tab 3 times including an EU server. Tried these all months ago in June, and again this month. Reinstalled, deleted ALL registry items and folders using RevoUninstaller and **THEN** reinstalled yet another time. Still no luck. If I can give anymore information that might help then please let me know, I know this has been reported as fixed in the past but I still see many reports of this problem coming up again, not sure what else to say because I've tried everything, please help, thank you very much and have a good day! <3",1.0,"Achievements still broken - ### _**Description**_ _Same issue(s) as #1286, #691, #620, #776 and exactly as #421_ - Cannot get 5 broken achievements. (Tested all my remaining locked aside from 10 and 100 kills) - I was in a team with my better half and every time I cut down a tree they got a steam achievement progress update popup adding +1 to their 100 trees cut achievement, not me. - Partner achieved 100 trees cut, I never did. Aside from that one, we are missing the same amount of bugged achievements. - I have tried collecting all plants on EUW Uuvana 2, as well as NA Uuvana 2 and Uuvana 5. No achievement. 
- Attempted on ""Game Version 1.0.8"" **as well as** the current ""Game Version 1.0.9b"" ![bug1](https://user-images.githubusercontent.com/74375454/199508755-67199538-d940-4b6f-9f7a-8da2dd3e74cd.jpg) ![bug 2](https://user-images.githubusercontent.com/74375454/199509099-314f722c-1243-48bc-99d7-162ab610dc82.png) ### **_To Reproduce_** 1. Get all fish in your first 10 hours of gameplay. 2. Chop down 100 trees, complete all other compendium tabs, heal yourself to full health after having 1 health point. 3. Steam Achievements don't update. ### **_Expected behaviour_** Able to achieve Steam Achievements. ### _**Desktop:**_ OS: Windows 10 (64 bit) Game Version: 1.0.9b Steam Version: Latest (1666144119) ![steam vers](https://user-images.githubusercontent.com/74375454/199513247-f4ce840a-fb68-4e80-88e0-57a37b69d940.png) ### **_Additional Context_** I've chopped down about 200 trees, completed the fishing compendium tab twice on 2 separate NA servers, completed the plants compendium tab 3 times including an EU server. Tried these all months ago in June, and again this month. Reinstalled, deleted ALL registry items and folders using RevoUninstaller and **THEN** reinstalled yet another time. Still no luck. If I can give anymore information that might help then please let me know, I know this has been reported as fixed in the past but I still see many reports of this problem coming up again, not sure what else to say because I've tried everything, please help, thank you very much and have a good day! <3",0,achievements still broken description same issue s as and exactly as cannot get broken achievements tested all my remaining locked aside from and kills i was in a team with my better half and every time i cut down a tree they got a steam achievement progress update popup adding to their trees cut achievement not me partner achieved trees cut i never did aside from that one we are missing the same amount of bugged achievements i have tried collecting all plants on euw uuvana as well as na uuvana and uuvana no achievement attempted on game version as well as the current game version to reproduce get all fish in your first hours of gameplay chop down trees complete all other compendium tabs heal yourself to full health after having health point steam achievements don t update expected behaviour able to achieve steam achievements desktop os windows bit game version steam version latest additional context i ve chopped down about trees completed the fishing compendium tab twice on separate na servers completed the plants compendium tab times including an eu server tried these all months ago in june and again this month reinstalled deleted all registry items and folders using revouninstaller and then reinstalled yet another time still no luck if i can give anymore information that might help then please let me know i know this has been reported as fixed in the past but i still see many reports of this problem coming up again not sure what else to say because i ve tried everything please help thank you very much and have a good day ,0 157305,5997133912.0,IssuesEvent,2017-06-03 20:48:38,robag524/brk,https://api.github.com/repos/robag524/brk,closed,Navigation issues,bug high priority,"# Tasklist - [x] 1 The mobile -> desktop -> mobile bug has been checked and repaired. - [x] 2 The desktop view line has been solved. - [x] 3 The mobile view navigation slip has been solved. ## 1 There is a holy shit bug in the navigation: In mobile view, open hamburger menu then the ""Archívum"" submenu. 
Change to desktop view and change back... Try to close and reopen the navigation. ## 2 There is a line in desktop view: ![image](https://cloud.githubusercontent.com/assets/15779059/26756329/0fb9f16e-48a0-11e7-86f6-e98907793db1.png) ## 3 And this is the mobile view... ![image](https://cloud.githubusercontent.com/assets/15779059/26756334/5137e470-48a0-11e7-92e1-caaabb557ea9.png) ",1.0,"Navigation issues - # Tasklist - [x] 1 The mobile -> desktop -> mobile bug has been checked and repaired. - [x] 2 The desktop view line has been solved. - [x] 3 The mobile view navigation slip has been solved. ## 1 There is a holy shit bug in the navigation: In mobile view, open hamburger menu then the ""Archívum"" submenu. Change to desktop view and change back... Try to close and reopen the navigation. ## 2 There is a line in desktop view: ![image](https://cloud.githubusercontent.com/assets/15779059/26756329/0fb9f16e-48a0-11e7-86f6-e98907793db1.png) ## 3 And this is the mobile view... ![image](https://cloud.githubusercontent.com/assets/15779059/26756334/5137e470-48a0-11e7-92e1-caaabb557ea9.png) ",0,navigation issues tasklist the mobile desktop mobile bug has been checked and repaired the desktop view line has been solved the mobile view navigation slip has been solved there is a holy shit bug in the navigation in mobile view open hamburger menu then the archívum submenu change to desktop view and change back try to close and reopen the navigation there is a line in desktop view and this is the mobile view ,0 15078,2611059262.0,IssuesEvent,2015-02-27 00:27:17,alistairreilly/andors-trail,https://api.github.com/repos/alistairreilly/andors-trail,closed,New-Mountain-map,auto-migrated Milestone-0.6.9 Priority-Low Type-Enhancement,"``` hi, so i´ve made 3 maps... so if you like, take a look at it and tell me if this hits the sense of andor´s trail. and if you want me so, i would really like to build a complete mountain-area.... ``` Original issue reported on code.google.com by `michisch...@web.de` on 2 Mar 2011 at 6:28 Attachments: * [blackwater_mountain1.tmx](https://storage.googleapis.com/google-code-attachments/andors-trail/issue-167/comment-0/blackwater_mountain1.tmx) * [blackwater_mountain2.tmx](https://storage.googleapis.com/google-code-attachments/andors-trail/issue-167/comment-0/blackwater_mountain2.tmx) * [blackwater_mountain3.tmx](https://storage.googleapis.com/google-code-attachments/andors-trail/issue-167/comment-0/blackwater_mountain3.tmx) ",1.0,"New-Mountain-map - ``` hi, so i´ve made 3 maps... so if you like, take a look at it and tell me if this hits the sense of andor´s trail. and if you want me so, i would really like to build a complete mountain-area.... 
``` Original issue reported on code.google.com by `michisch...@web.de` on 2 Mar 2011 at 6:28 Attachments: * [blackwater_mountain1.tmx](https://storage.googleapis.com/google-code-attachments/andors-trail/issue-167/comment-0/blackwater_mountain1.tmx) * [blackwater_mountain2.tmx](https://storage.googleapis.com/google-code-attachments/andors-trail/issue-167/comment-0/blackwater_mountain2.tmx) * [blackwater_mountain3.tmx](https://storage.googleapis.com/google-code-attachments/andors-trail/issue-167/comment-0/blackwater_mountain3.tmx) ",0,new mountain map hi so i´ve made maps so if you like take a look at it and tell me if this hits the sense of andor´s trail and if you want me so i would really like to build a complete mountain area original issue reported on code google com by michisch web de on mar at attachments ,0 162064,12609715146.0,IssuesEvent,2020-06-12 02:32:26,microsoft/azure-tools-for-java,https://api.github.com/repos/microsoft/azure-tools-for-java,closed,[IntelliJ] Highligh two options when you choose to create scala application without scala plugin,HDInsight IntelliJ Internal Test fixed,"![image](https://user-images.githubusercontent.com/20041888/27727349-b5c35184-5db0-11e7-9f87-fda92c147536.png) ",1.0,"[IntelliJ] Highligh two options when you choose to create scala application without scala plugin - ![image](https://user-images.githubusercontent.com/20041888/27727349-b5c35184-5db0-11e7-9f87-fda92c147536.png) ",0, highligh two options when you choose to create scala application without scala plugin ,0 5395,19474994478.0,IssuesEvent,2021-12-24 10:23:15,davidkel/fabric-chaos-testing,https://api.github.com/repos/davidkel/fabric-chaos-testing,closed,[EPIC] Define and deploy solution as an automated service,epic automation,"The current proposal for fabric test is 2 build runs - bring down and up a set of peers which should never cause an issue - complete chaos where everything goes down and up which will cause failures but client always recovers without being restarted",1.0,"[EPIC] Define and deploy solution as an automated service - The current proposal for fabric test is 2 build runs - bring down and up a set of peers which should never cause an issue - complete chaos where everything goes down and up which will cause failures but client always recovers without being restarted",1, define and deploy solution as an automated service the current proposal for fabric test is build runs bring down and up a set of peers which should never cause an issue complete chaos where everything goes down and up which will cause failures but client always recovers without being restarted,1 4774,17405930091.0,IssuesEvent,2021-08-03 05:51:41,appsmithorg/appsmith,https://api.github.com/repos/appsmithorg/appsmith,opened,[Bug] GenerateCRUDPage_Spec.js is failing randomly,Automation Bug,"## Description GenerateCRUDPage_Spec.js is failing randomly ### Steps to reproduce the behavior: Run GenerateCRUDPage_Spec.js ### Important Details ",1.0,"[Bug] GenerateCRUDPage_Spec.js is failing randomly - ## Description GenerateCRUDPage_Spec.js is failing randomly ### Steps to reproduce the behavior: Run GenerateCRUDPage_Spec.js ### Important Details ",1, generatecrudpage spec js is failing randomly description generatecrudpage spec js is failing randomly steps to reproduce the behavior run generatecrudpage spec js important details ,1 6733,23812105332.0,IssuesEvent,2022-09-04 22:40:50,smcnab1/op-question-mark,https://api.github.com/repos/smcnab1/op-question-mark,closed,[BUG] Node-Red Rebuild,Status: Confirmed Type: Bug 
Priority: Critical For: Automations,"Node-Red Flows missing. Find and re-build https://github.com/dortamur/ha-node-red-flows **Flows to Create** - [x] Hall Lighting - [x] Bathroom Lighting - [x] Living Room Lighting - [x] Bedroom Lighting - [x] Bed Sensor Activation - [x] Office Lighting (XBOX, Meeting) - [x] Actionable Notifications",1.0,"[BUG] Node-Red Rebuild - Node-Red Flows missing. Find and re-build https://github.com/dortamur/ha-node-red-flows **Flows to Create** - [x] Hall Lighting - [x] Bathroom Lighting - [x] Living Room Lighting - [x] Bedroom Lighting - [x] Bed Sensor Activation - [x] Office Lighting (XBOX, Meeting) - [x] Actionable Notifications",1, node red rebuild node red flows missing find and re build flows to create hall lighting bathroom lighting living room lighting bedroom lighting bed sensor activation office lighting xbox meeting actionable notifications,1 6155,22327248548.0,IssuesEvent,2022-06-14 11:46:26,netlify/build,https://api.github.com/repos/netlify/build,closed,chore(release): look into release-please manifest releaser node-workspace plugin,theme/automation theme/ci,"In one of the recent [release PRs](https://github.com/netlify/build/pull/3948), we encountered an issue where we needed to do a breaking change in both `@netlify/build` and `@netlify/run-utils`. The generated release PR was missing the version bump of `@netlify/run-utils` inside `@netlify/build`'s `package.json`. To fix this, we released the PR, and quickly followed up with [another release to fix it](https://github.com/netlify/build/pull/3949). Going forward, we would like the release PR to bump the version in the dependent package automatically. Seems like this is possible via a plugin https://github.com/googleapis/release-please/blob/d8bb7caddfa14aabd3bfa19008c10ed911638a66/docs/manifest-releaser.md#node-workspace ",1.0,"chore(release): look into release-please manifest releaser node-workspace plugin - In one of the recent [release PRs](https://github.com/netlify/build/pull/3948), we encountered an issue where we needed to do a breaking change in both `@netlify/build` and `@netlify/run-utils`. The generated release PR was missing the version bump of `@netlify/run-utils` inside `@netlify/build`'s `package.json`. To fix this, we released the PR, and quickly followed up with [another release to fix it](https://github.com/netlify/build/pull/3949). Going forward, we would like the release PR to bump the version in the dependent package automatically. Seems like this is possible via a plugin https://github.com/googleapis/release-please/blob/d8bb7caddfa14aabd3bfa19008c10ed911638a66/docs/manifest-releaser.md#node-workspace ",1,chore release look into release please manifest releaser node workspace plugin in one of the recent we encountered an issue where we needed to do a breaking change in both netlify build and netlify run utils the generated release pr was missing the version bump of netlify run utils inside netlify build s package json to fix this we released the pr and quickly followed up with going forward we would like the release pr to bump the version in the dependent package automatically seems like this is possible via a plugin ,1 6107,13748916774.0,IssuesEvent,2020-10-06 09:43:45,owncloud/android,https://api.github.com/repos/owncloud/android,closed,[New arch] Save file/folder in database ,Estimation - 5 (L) New architecture Sprint,"Currently there are some operations that need to save file info into the database. 
To achieve that, we can simply create a usecase to keep the current behaviour. ### TASKS - [x] Research (if needed) - [x] Create branch new_arch/save_file_in_db - [x] Development tasks - [x] Create saveFileUseCase (add it to UseCaseModule) - [x] Add saveFile to FileRepository - [x] Add saveFile to FileLocalDataSource - [x] Add saveFile to FileDao if it is not added yet - [x] Add domain unit tests - [x] Add data unit tests - [x] Code review and apply changes requested - [ ] Design test plan - [ ] QA - [x] Merge branch new_arch/save_file_in_db into new_arch/synchronization ### PR - App: https://github.com/owncloud/android/pull/2972 ",1.0,"[New arch] Save file/folder in database - Currently there are some operations that need to save file info into the database. To achieve that, we can simply create a usecase to keep the current behaviour. ### TASKS - [x] Research (if needed) - [x] Create branch new_arch/save_file_in_db - [x] Development tasks - [x] Create saveFileUseCase (add it to UseCaseModule) - [x] Add saveFile to FileRepository - [x] Add saveFile to FileLocalDataSource - [x] Add saveFile to FileDao if it is not added yet - [x] Add domain unit tests - [x] Add data unit tests - [x] Code review and apply changes requested - [ ] Design test plan - [ ] QA - [x] Merge branch new_arch/save_file_in_db into new_arch/synchronization ### PR - App: https://github.com/owncloud/android/pull/2972 ",0, save file folder in database currently there are some operations that need to save file info into the database to achieve that we can simply create a usecase to keep the current behaviour tasks research if needed create branch new arch save file in db development tasks create savefileusecase add it to usecasemodule add savefile to filerepository add savefile to filelocaldatasource add savefile to filedao if it is not added yet add domain unit tests add data unit tests code review and apply changes requested design test plan qa merge branch new arch save file in db into new arch synchronization pr app ,0 246052,20819427515.0,IssuesEvent,2022-03-18 13:59:12,ampproject/amphtml,https://api.github.com/repos/ampproject/amphtml,opened,🍱 ✅ [bento-date-picker:1.0] Improve test coverage,Type: Testing WG: bento,"### High-Level Requirements Improve test coverage for bento-date-picker ### Feature Checklist n/a ### Migration Notes n/a ### Open Tasks ### Unit Tests - [x] `Preact unit tests` - [ ] `Stand-alone unit tests` ### E2E Tests - [x] `Stand-alone e2e tests` - [ ] `Preact/React e2e tests` ### Notifications /cc @ampproject/wg-bento",1.0,"🍱 ✅ [bento-date-picker:1.0] Improve test coverage - ### High-Level Requirements Improve test coverage for bento-date-picker ### Feature Checklist n/a ### Migration Notes n/a ### Open Tasks ### Unit Tests - [x] `Preact unit tests` - [ ] `Stand-alone unit tests` ### E2E Tests - [x] `Stand-alone e2e tests` - [ ] `Preact/React e2e tests` ### Notifications /cc @ampproject/wg-bento",0,🍱 ✅ improve test coverage high level requirements improve test coverage for bento date picker feature checklist n a migration notes n a open tasks unit tests preact unit tests stand alone unit tests tests stand alone tests preact react tests notifications cc ampproject wg bento,0 7627,9885032284.0,IssuesEvent,2019-06-25 00:38:37,sass/libsass,https://api.github.com/repos/sass/libsass,opened,unquote() should reject non-string arguments,Compatibility - P2,"In `LibSass`, the `unquote()` function currently passes through non-string arguments unchanged. 
This has produced a deprecation warning for a while, but should be fully disallowed. See also https://github.com/sass/sass/issues/1583.",True,"unquote() should reject non-string arguments - In `LibSass`, the `unquote()` function currently passes through non-string arguments unchanged. This has produced a deprecation warning for a while, but should be fully disallowed. See also https://github.com/sass/sass/issues/1583.",0,unquote should reject non string arguments in libsass the unquote function currently passes through non string arguments unchanged this has produced a deprecation warning for a while but should be fully disallowed see also ,0 248576,26794061817.0,IssuesEvent,2023-02-01 10:33:49,elastic/beats,https://api.github.com/repos/elastic/beats,closed,[Agent]: Microsoft Module (ATP) keeps showing permission errors,Filebeat Stalled Team:Security-External Integrations,"Filebeat 7.11, Microsoft Module OS: Ubuntu Steps to Reproduce: - Enabled Microsoft module after doing the configuration - Per the documentation the permissions required have been given - All permissions added to confirm if this is the issue - Verifying through Journalctl reveals that it still requires permissions even though it has all of them ![image](https://user-images.githubusercontent.com/37713049/117864094-467ebd80-b249-11eb-883d-82fc2d0f4a50.png) ![image](https://user-images.githubusercontent.com/37713049/117864571-cf95f480-b249-11eb-87d4-5a23d9e708c3.png) ",True,"[Agent]: Microsoft Module (ATP) keeps showing permission errors - Filebeat 7.11, Microsoft Module OS: Ubuntu Steps to Reproduce: - Enabled Microsoft module after doing the configuration - Per the documentation the permissions required have been given - All permissions added to confirm if this is the issue - Verifying through Journalctl reveals that it still requires permissions even though it has all of them ![image](https://user-images.githubusercontent.com/37713049/117864094-467ebd80-b249-11eb-883d-82fc2d0f4a50.png) ![image](https://user-images.githubusercontent.com/37713049/117864571-cf95f480-b249-11eb-87d4-5a23d9e708c3.png) ",0, microsoft module atp keeps showing permission errors filebeat microsoft module os ubuntu steps to reproduce enabled microsoft module after doing the configuration per the documentation the permissions required have been given all permissions added to confirm if this is the issue verifying through journalctl reveals that it still requires permissions even though it has all of them ,0 105206,11435556037.0,IssuesEvent,2020-02-04 19:38:34,8051Enthusiast/at51,https://api.github.com/repos/8051Enthusiast/at51,closed,Additional documentation please,documentation enhancement,"Hi! I'm very pleased to see this tool. Thank you for making it. Unfortunately I am unclear on how to interpret the results. For example, the `stat` command says it ""Shows statistical information about 8051 instruction frequency"". There is also something in the README that explains that lower numbers are more indicative of 8051. But I'm still not sure what the two columns mean. I believe the first is an opcode, and maybe the second gives the chi-square value vs. some ""expected"" distribution? What sort of distributions would make you feel confident it was 8051 code? An example might help. I have a similar concern about the libfind subcommand, where I am getting some output that I _assume_ means you've detected calls to library functions in the .LIB files I supplied. But I don't have the experience to determine whether it's accidental... maybe an example there? 
And/or add call counts - I think if I saw `(MAIN)` being called once but `?C?ULCMP` multiple times I might assume it was legitimate 8051 code and not noise... Thank you!",1.0,"Additional documentation please - Hi! I'm very pleased to see this tool. Thank you for making it. Unfortunately I am unclear on how to interpret the results. For example, the `stat` command says it ""Shows statistical information about 8051 instruction frequency"". There is also something in the README that explains that lower numbers are more indicative of 8051. But I'm still not sure what the two columns mean. I believe the first is an opcode, and maybe the second gives the chi-square value vs. some ""expected"" distribution? What sort of distributions would make you feel confident it was 8051 code? An example might help. I have a similar concern about the libfind subcommand, where I am getting some output that I _assume_ means you've detected calls to library functions in the .LIB files I supplied. But I don't have the experience to determine whether it's accidental... maybe an example there? And/or add call counts - I think if I saw `(MAIN)` being called once but `?C?ULCMP` multiple times I might assume it was legitimate 8051 code and not noise... Thank you!",0,additional documentation please hi i m very pleased to see this tool thank you for making it unfortunately i am unclear on how to interpret the results for example the stat command says it shows statistical information about instruction frequency there is also something in the readme that explains that lower numbers are more indicative of but i m still not sure what the two columns mean i believe the first is an opcode and maybe the second gives the chi square value vs some expected distribution what sort of distributions would make you feel confident it was code an example might help i have a similar concern about the libfind subcommand where i am getting some output that i assume means you ve detected calls to library functions in the lib files i supplied but i don t have the experience to determine whether it s accidental maybe an example there and or add call counts i think if i saw main being called once but c ulcmp multiple times i might assume it was legitimate code and not noise thank you ,0 236378,19536250920.0,IssuesEvent,2021-12-31 07:44:22,astropy/astropy,https://api.github.com/repos/astropy/astropy,opened,MNT: Remove deprecated deprecations as exceptions and friends,testing io.ascii utils Refactoring,"Remove the things deprecated in #12633 . Deprecation started in v5.1. So, maybe we should remove them in v6.0 or after?",1.0,"MNT: Remove deprecated deprecations as exceptions and friends - Remove the things deprecated in #12633 . Deprecation started in v5.1. 
So, maybe we should remove them in v6.0 or after?",0,mnt remove deprecated deprecations as exceptions and friends remove the things deprecated in deprecation started in so maybe we should remove them in or after ,0 5527,19974586461.0,IssuesEvent,2022-01-29 00:05:26,freebsd/poudriere,https://api.github.com/repos/freebsd/poudriere,closed,"A command to run ""exp-runs""",Feature_Request portmgr/automation,"A nice feature would be to have poudriere test downstream users in tree using a command / switch Example: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=251512#c10 ",1.0,"A command to run ""exp-runs"" - A nice feature would be to have poudriere test downstream users in tree using a command / switch Example: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=251512#c10 ",1,a command to run exp runs a nice feature would be to have poudriere test downstream users in tree using a command switch example ,1 2027,11275268381.0,IssuesEvent,2020-01-14 20:23:15,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,PowerShell DSC Broken Link,Pri2 assigned-to-author automation/svc doc-bug dsc/subsvc triaged,"There is a broken link (404) on the left hand menu. State Configuration (DSC) > Concepts > PowerShell DSC links to https://docs.microsoft.com/en-us/powershell/dsc/overview/overview which does not appear to exist. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 96a48d43-dc10-b334-3774-b72ea10efdb7 * Version Independent ID: e56beb8a-da04-b799-8a04-4bed0b094836 * Content: [Getting started with Azure Automation State Configuration](https://docs.microsoft.com/en-us/azure/automation/automation-dsc-getting-started#feedback) * Content Source: [articles/automation/automation-dsc-getting-started.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-dsc-getting-started.md) * Service: **automation** * Sub-service: **dsc** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**",1.0,"PowerShell DSC Broken Link - There is a broken link (404) on the left hand menu. State Configuration (DSC) > Concepts > PowerShell DSC links to https://docs.microsoft.com/en-us/powershell/dsc/overview/overview which does not appear to exist. --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 96a48d43-dc10-b334-3774-b72ea10efdb7 * Version Independent ID: e56beb8a-da04-b799-8a04-4bed0b094836 * Content: [Getting started with Azure Automation State Configuration](https://docs.microsoft.com/en-us/azure/automation/automation-dsc-getting-started#feedback) * Content Source: [articles/automation/automation-dsc-getting-started.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-dsc-getting-started.md) * Service: **automation** * Sub-service: **dsc** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**",1,powershell dsc broken link there is a broken link on the left hand menu state configuration dsc gt concepts gt powershell dsc links to which does not appear to exist document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service dsc github login mgoedtel microsoft alias magoedte ,1 21044,32038387407.0,IssuesEvent,2023-09-22 17:09:26,ClickHouse/ClickHouse,https://api.github.com/repos/ClickHouse/ClickHouse,closed,Create projection for MergeTree,backward compatibility comp-projections,"**Describe the issue** Creating projections for a MergeTree table was working well in the version `v23.3.8.21-lts` but now it is returning the error ``` Sorting key contains nullable columns, but merge tree setting `allow_nullable_key` is disabled ``` **How to reproduce** ```sql CREATE TABLE sales ( DATE_SOLD DateTime64(3, 'UTC'), PRODUCT_ID Nullable(String), ) Engine MergeTree() PARTITION BY toYYYYMM(DATE_SOLD) ORDER BY DATE_SOLD ``` then create the projection ```sql ALTER TABLE sales ADD PROJECTION test ( SELECT toInt64(COUNT(*)) GROUP BY PRODUCT_ID, DATE_SOLD ) ``` It should create the projection, but instead is returning an error * Which ClickHouse server versions are incompatible Working versions: `23.3.8.21` and `23.5.1` Broken in the version `23.8.2.7` **Error message and/or stacktrace** ``` Received exception from server (version 23.8.2): Code: 44. DB::Exception: Received from localhost:9000. DB::Exception: Sorting key contains nullable columns, but merge tree setting `allow_nullable_key` is disabled. (ILLEGAL_COLUMN) ``` ",True,"Create projection for MergeTree - **Describe the issue** Creating projections for a MergeTree table was working well in the version `v23.3.8.21-lts` but now it is returning the error ``` Sorting key contains nullable columns, but merge tree setting `allow_nullable_key` is disabled ``` **How to reproduce** ```sql CREATE TABLE sales ( DATE_SOLD DateTime64(3, 'UTC'), PRODUCT_ID Nullable(String), ) Engine MergeTree() PARTITION BY toYYYYMM(DATE_SOLD) ORDER BY DATE_SOLD ``` then create the projection ```sql ALTER TABLE sales ADD PROJECTION test ( SELECT toInt64(COUNT(*)) GROUP BY PRODUCT_ID, DATE_SOLD ) ``` It should create the projection, but instead is returning an error * Which ClickHouse server versions are incompatible Working versions: `23.3.8.21` and `23.5.1` Broken in the version `23.8.2.7` **Error message and/or stacktrace** ``` Received exception from server (version 23.8.2): Code: 44. DB::Exception: Received from localhost:9000. DB::Exception: Sorting key contains nullable columns, but merge tree setting `allow_nullable_key` is disabled. 
(ILLEGAL_COLUMN) ``` ",0,create projection for mergetree describe the issue creating projections for a mergetree table was working well in the version lts but now it is returning the error sorting key contains nullable columns but merge tree setting allow nullable key is disabled how to reproduce sql create table sales date sold utc product id nullable string engine mergetree partition by toyyyymm date sold order by date sold then create the projection sql alter table sales add projection test select count group by product id date sold it should create the projection but instead is returning an error which clickhouse server versions are incompatible working versions and broken in the version error message and or stacktrace received exception from server version code db exception received from localhost db exception sorting key contains nullable columns but merge tree setting allow nullable key is disabled illegal column ,0 9448,28352570517.0,IssuesEvent,2023-04-12 04:14:32,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,closed,[FEATURE] CIFS Backup Store Support,kind/enhancement highlight priority/0 require/automation-e2e require/doc require/lep area/volume-backup-restore,"We have setup a k8s cluster running with longhorn. I'm now trying to figure out how to manage the backup of our longhorn volumes. I saw we can add an NFS provisioner and configure longhorn `backupTarget` to an NFS share... but in our company, we don't have NFS servers but Windows Shares that can be mounted using CIFS. Is there any plans to support CIFS as a backup target, like it already exists for NFS/S3 ? ",1.0,"[FEATURE] CIFS Backup Store Support - We have setup a k8s cluster running with longhorn. I'm now trying to figure out how to manage the backup of our longhorn volumes. I saw we can add an NFS provisioner and configure longhorn `backupTarget` to an NFS share... but in our company, we don't have NFS servers but Windows Shares that can be mounted using CIFS. Is there any plans to support CIFS as a backup target, like it already exists for NFS/S3 ? ",1, cifs backup store support we have setup a cluster running with longhorn i m now trying to figure out how to manage the backup of our longhorn volumes i saw we can add an nfs provisioner and configure longhorn backuptarget to an nfs share but in our company we don t have nfs servers but windows shares that can be mounted using cifs is there any plans to support cifs as a backup target like it already exists for nfs ,1 6824,23950307008.0,IssuesEvent,2022-09-12 10:54:14,smcnab1/op-question-mark,https://api.github.com/repos/smcnab1/op-question-mark,closed,[BUG] Fix Off Trigger for living room lights post TV off,Status: Confirmed Type: Bug Priority: High For: Automations,"**Describe the bug** Living Room TV not triggering lights off when turned off **To Reproduce** Steps to reproduce the behavior: ? Scene manually toggled effecting ? Node red malfunction **Expected behavior** Living room lights that are currently on to turn off when tv turned off",1.0,"[BUG] Fix Off Trigger for living room lights post TV off - **Describe the bug** Living Room TV not triggering lights off when turned off **To Reproduce** Steps to reproduce the behavior: ? Scene manually toggled effecting ? 
Node red malfunction **Expected behavior** Living room lights that are currently on to turn off when tv turned off",1, fix off trigger for living room lights post tv off describe the bug living room tv not triggering lights off when turned off to reproduce steps to reproduce the behavior scene manually toggled effecting node red malfunction expected behavior living room lights that are currently on to turn off when tv turned off,1 14892,8700041400.0,IssuesEvent,2018-12-05 07:17:11,nolanlawson/pinafore,https://api.github.com/repos/nolanlawson/pinafore,closed,Use HTMLImageElement#decode to reduce UI jank,enhancement performance,[img.decode](https://www.chromestatus.com/feature/5637156160667648) is supported in Chrome and Safari now. Could be useful for media images or avatar images in a timeline.,True,Use HTMLImageElement#decode to reduce UI jank - [img.decode](https://www.chromestatus.com/feature/5637156160667648) is supported in Chrome and Safari now. Could be useful for media images or avatar images in a timeline.,0,use htmlimageelement decode to reduce ui jank is supported in chrome and safari now could be useful for media images or avatar images in a timeline ,0 69340,30243475591.0,IssuesEvent,2023-07-06 14:54:26,cityofaustin/atd-data-tech,https://api.github.com/repos/cityofaustin/atd-data-tech,closed,Assist ATSD Users with AGOL Access,Workgroup: ATSD Type: IT Support Service: Geo,"Three ATSD users (Yvonne, Romani and Matt) are needing access to the applications listed below: - SBO Coordination Map: https://austin.maps.arcgis.com/apps/webappviewer/index.html?id=6b5c7e64420b4f9792c254ec07c4bf60 - iMOPED: https://austin.maps.arcgis.com/apps/webappviewer/index.html?id=d6194f783b114453a70eb769853e390c ",1.0,"Assist ATSD Users with AGOL Access - Three ATSD users (Yvonne, Romani and Matt) are needing access to the applications listed below: - SBO Coordination Map: https://austin.maps.arcgis.com/apps/webappviewer/index.html?id=6b5c7e64420b4f9792c254ec07c4bf60 - iMOPED: https://austin.maps.arcgis.com/apps/webappviewer/index.html?id=d6194f783b114453a70eb769853e390c ",0,assist atsd users with agol access three atsd users yvonne romani and matt are needing access to the applications listed below sbo coordination map imoped ,0 9153,27635918466.0,IssuesEvent,2023-03-10 14:26:25,awslabs/aws-lambda-powertools-typescript,https://api.github.com/repos/awslabs/aws-lambda-powertools-typescript,closed,Maintenance: improve husky pre-commit hook,area/automation type/internal status/confirmed,"### Summary As reported by @shdq on Discord, the current iteration of the `pre-commit` hook defined is not applying the `eslint` fixes in the correct order, and instead it's doing so in the files already committed. ### Why is this needed? The expected behavior should be to lint the staged files instead and fix them before they are committed. ### Which area does this relate to? Automation ### Solution _No response_ ### Acknowledgment - [X] This request meets [Lambda Powertools Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets) - [ ] Should this be considered in other Lambda Powertools languages? i.e. 
[Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/), and [.NET](https://github.com/awslabs/aws-lambda-powertools-dotnet/) ### Future readers Please react with 👍 and your use case to help us understand customer demand.",1.0,"Maintenance: improve husky pre-commit hook - ### Summary As reported by @shdq on Discord, the current iteration of the `pre-commit` hook defined is not applying the `eslint` fixes in the correct order, and instead it's doing so in the files already committed. ### Why is this needed? The expected behavior should be to lint the staged files instead and fix them before they are committed. ### Which area does this relate to? Automation ### Solution _No response_ ### Acknowledgment - [X] This request meets [Lambda Powertools Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets) - [ ] Should this be considered in other Lambda Powertools languages? i.e. [Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/), and [.NET](https://github.com/awslabs/aws-lambda-powertools-dotnet/) ### Future readers Please react with 👍 and your use case to help us understand customer demand.",1,maintenance improve husky pre commit hook summary as reported by shdq on discord the current iteration of the pre commit hook defined is not applying the eslint fixes in the correct order and instead it s doing so in the files already committed why is this needed the expected behavior should be to lint the staged files instead and fix them before they are committed which area does this relate to automation solution no response acknowledgment this request meets should this be considered in other lambda powertools languages i e and future readers please react with 👍 and your use case to help us understand customer demand ,1 668363,22580623729.0,IssuesEvent,2022-06-28 11:17:51,ros-controls/ros2_control,https://api.github.com/repos/ros-controls/ros2_control,closed,Refactor enforceLimits in joint_limits_interface,high-priority,"We are using snake case in this codebase _Originally posted by @bmagyar in https://github.com/ros-controls/ros2_control/pull/181#discussion_r502761169_",1.0,"Refactor enforceLimits in joint_limits_interface - We are using snake case in this codebase _Originally posted by @bmagyar in https://github.com/ros-controls/ros2_control/pull/181#discussion_r502761169_",0,refactor enforcelimits in joint limits interface we are using snake case in this codebase originally posted by bmagyar in ,0 1946,2678634751.0,IssuesEvent,2015-03-26 12:18:43,stan-dev/stan,https://api.github.com/repos/stan-dev/stan,closed,fix types in check_equal so clients don't need to cast,Code cleanup Feature,There's a problem with issue #1405 triggering compiler style warnings that are due to a problem in `check_equal`. Those calls to `check_equal` should not be triggering warnings. ,1.0,fix types in check_equal so clients don't need to cast - There's a problem with issue #1405 triggering compiler style warnings that are due to a problem in `check_equal`. Those calls to `check_equal` should not be triggering warnings. 
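A sketch of the staged-files-only mechanic that the aws-lambda-powertools-typescript pre-commit item above is asking for, written in Python purely for illustration; the project would more plausibly adopt `lint-staged`, and `npx eslint` is assumed to be available on PATH.

```python
# Illustrative pre-commit mechanic: lint and fix only *staged* files, then
# re-stage them so the fixes land in the same commit.
import subprocess

def lint_staged() -> None:
    # Names of files that are staged (added/copied/modified) for this commit.
    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    targets = [f for f in staged if f.endswith((".ts", ".js"))]
    if targets:
        subprocess.run(["npx", "eslint", "--fix", *targets], check=True)
        subprocess.run(["git", "add", *targets], check=True)

if __name__ == "__main__":
    lint_staged()
```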
,0,fix types in check equal so clients don t need to cast there s a problem with issue triggering compiler style warnings that are due to a problem in check equal those calls to check equal should not be triggering warnings ,0 73399,15253673509.0,IssuesEvent,2021-02-20 08:45:41,gsylvie/madness,https://api.github.com/repos/gsylvie/madness,closed,CVE-2016-5725 (Medium) detected in jsch-0.1.53.jar - autoclosed,security vulnerability,"## CVE-2016-5725 - Medium Severity Vulnerability
Vulnerable Library - jsch-0.1.53.jar

JSch is a pure Java implementation of SSH2

Library home page: http://www.jcraft.com/jsch/

Path to dependency file: madness/sub1/pom.xml

Path to vulnerable library: canner/.m2/repository/com/jcraft/jsch/0.1.53/jsch-0.1.53.jar,madness/sub1/target/madness-sub1-2019.02.01/WEB-INF/lib/jsch-0.1.53.jar

Dependency Hierarchy:
- :x: **jsch-0.1.53.jar** (Vulnerable Library)

Found in HEAD commit: 032e0bc50a6a45a60e9aed1a5aae9530ad02548a

Vulnerability Details

Directory traversal vulnerability in JCraft JSch before 0.1.54 on Windows, when the mode is ChannelSftp.OVERWRITE, allows remote SFTP servers to write to arbitrary files via a ..\ (dot dot backslash) in a response to a recursive GET command.
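
The fix shipped in JSch 0.1.54 is Java, but the shape of the defense is worth seeing. This is an illustrative Python sketch of the same check, rejecting any server-supplied name that resolves outside the download root; it is not JSch's actual patch.

```python
# Illustrative sketch (not JSch's Java patch): the core defense against this
# kind of traversal is to resolve each remote-supplied file name against the
# download root and refuse anything that escapes it.
import os

def safe_target(download_root: str, remote_name: str) -> str:
    # Normalize both separators; a malicious server may send '..\' on Windows.
    candidate = os.path.normpath(
        os.path.join(download_root, remote_name.replace("\\", "/"))
    )
    root = os.path.abspath(download_root)
    if os.path.commonpath([root, os.path.abspath(candidate)]) != root:
        raise ValueError(f"refusing traversal outside {root}: {remote_name!r}")
    return candidate

# safe_target("downloads", "sub/file.txt")  -> "downloads/sub/file.txt"
# safe_target("downloads", "..\\evil.bat")  -> ValueError
```

The real remedy remains the upgrade to 0.1.54 listed under Suggested Fix below.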

Publish Date: 2017-01-19

URL: CVE-2016-5725

CVSS 3 Score Details (5.9)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: High
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: High
  - Availability Impact: None

For more information on CVSS3 Scores, see the CVSS v3 specification at https://www.first.org/cvss/.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-5725

Release Date: 2017-01-19

Fix Resolution: 0.1.54

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2016-5725 (Medium) detected in jsch-0.1.53.jar - autoclosed - ## CVE-2016-5725 - Medium Severity Vulnerability
Vulnerable Library - jsch-0.1.53.jar

JSch is a pure Java implementation of SSH2

Library home page: http://www.jcraft.com/jsch/

Path to dependency file: madness/sub1/pom.xml

Path to vulnerable library: canner/.m2/repository/com/jcraft/jsch/0.1.53/jsch-0.1.53.jar,madness/sub1/target/madness-sub1-2019.02.01/WEB-INF/lib/jsch-0.1.53.jar

Dependency Hierarchy:
- :x: **jsch-0.1.53.jar** (Vulnerable Library)

Found in HEAD commit: 032e0bc50a6a45a60e9aed1a5aae9530ad02548a

Vulnerability Details

Directory traversal vulnerability in JCraft JSch before 0.1.54 on Windows, when the mode is ChannelSftp.OVERWRITE, allows remote SFTP servers to write to arbitrary files via a ..\ (dot dot backslash) in a response to a recursive GET command.

Publish Date: 2017-01-19

URL: CVE-2016-5725

CVSS 3 Score Details (5.9)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: High
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: High
  - Availability Impact: None

For more information on CVSS3 Scores, see the CVSS v3 specification at https://www.first.org/cvss/.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-5725

Release Date: 2017-01-19

Fix Resolution: 0.1.54

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in jsch jar autoclosed cve medium severity vulnerability vulnerable library jsch jar jsch is a pure java implementation of library home page a href path to dependency file madness pom xml path to vulnerable library canner repository com jcraft jsch jsch jar madness target madness web inf lib jsch jar dependency hierarchy x jsch jar vulnerable library found in head commit a href vulnerability details directory traversal vulnerability in jcraft jsch before on windows when the mode is channelsftp overwrite allows remote sftp servers to write to arbitrary files via a dot dot backslash in a response to a recursive get command publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0 163406,25808666451.0,IssuesEvent,2022-12-11 16:46:15,TeamGoodGood/RelayNovel-iOS,https://api.github.com/repos/TeamGoodGood/RelayNovel-iOS,opened,[Feat] 마이페이지 디자인 UI 변경,🎨Design,"**Issue : Feature** 마이페이지의 UI 디자인을 수정합니다 **Description** 마이페이지의 UI 디자인을 수정합니다 **Todo** 작업해야 하는 투두리스트를 작성해주세요. - [ ] 알림 디자인 수정 - [ ] 시작한 릴레이, 참여한 릴레이의 수 색상 수정 - [ ] 나의 활동 캐릭터 수정 - [ ] 나의 활동 폰트, 색상 수정 **Reference(Optional)** 작업에 대해 참고하거나 알아야 할 기타사항이 있다면 작성해주세요. ",1.0,"[Feat] 마이페이지 디자인 UI 변경 - **Issue : Feature** 마이페이지의 UI 디자인을 수정합니다 **Description** 마이페이지의 UI 디자인을 수정합니다 **Todo** 작업해야 하는 투두리스트를 작성해주세요. - [ ] 알림 디자인 수정 - [ ] 시작한 릴레이, 참여한 릴레이의 수 색상 수정 - [ ] 나의 활동 캐릭터 수정 - [ ] 나의 활동 폰트, 색상 수정 **Reference(Optional)** 작업에 대해 참고하거나 알아야 할 기타사항이 있다면 작성해주세요. 
",0, 마이페이지 디자인 ui 변경 issue feature 마이페이지의 ui 디자인을 수정합니다 description 마이페이지의 ui 디자인을 수정합니다 todo 작업해야 하는 투두리스트를 작성해주세요 알림 디자인 수정 시작한 릴레이 참여한 릴레이의 수 색상 수정 나의 활동 캐릭터 수정 나의 활동 폰트 색상 수정 reference optional 작업에 대해 참고하거나 알아야 할 기타사항이 있다면 작성해주세요 ,0 3734,14443301865.0,IssuesEvent,2020-12-07 19:26:34,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,reopened,Malformed credentials file,area/automation-api area/cli good-first-issue,"Running Pulumi CLI commands in CI is failing with the following error: ``` error: could not get cloud url: unmarshalling credentials file: unexpected end of JSON input ``` The following scripts are representative of what was running when the error happened: ```bash # prepare.sh curl https://sdk.cloud.google.com | bash > /dev/null export PATH=$PATH:/root/google-cloud-sdk/bin KEY_FILE=""$(mktemp)"" echo ""$GCP_SERVICE_ACCOUNT"" > ""$KEY_FILE"" gcloud auth activate-service-account --key-file ""$KEY_FILE"" curl -fsSL https://get.pulumi.com/ | bash export PATH=$PATH:$HOME/.pulumi/bin pulumi login ``` ```bash # deploy.sh pulumi stack init ""$PULUMI_STACK"" || true if [[ ""$PULUMI_STACK"" != ""production"" ]] || [[ ""$PULUMI_STACK"" != ""staging"" ]]; then pulumi config refresh --stack development || true json=""$(pulumi config --stack development --show-secrets --json)"" keys=""$(echo $json | jq -r 'keys[]')"" for key in $(echo $keys | tr '\n' ' ') do value=""$(echo $json | jq -r '.[""'""$key""'""].value')"" is_secret=""$(echo $json | jq -r '.[""'""$key""'""].secret')"" pulumi config --stack ""$PULUMI_STACK"" set ""$key"" ""$value"" $(if [ $is_secret = 'true' ]; then echo '--secret'; fi) done fi ``` Some things to keep in mind: - In some instances, the error doesn't happen at all. - The `prepare.sh` bash script is run once before updating each stack. - The `deploy.sh` bash script above executes in parallel for multiple Pulumi projects. In our case there are around 9 projects. - The scripts were modified in some runs to output the Google key file and well as the Pulumi credentials file (`$HOME/.pulumi/credentials.json`) before running the scripts and they seemed to always be valid JSON. - The scripts were modified to trace each command and the error seemed to happen during the `pulumi config set` call in `deploy.sh`. - There is a conversation in the Pulumi Slack in the channel `#gcp` that precludes this issue. Evaluating all the information throughout the conversation and presented above a potential cause for the issue could be concurrent use of Pulumi commands in the CLI. 
If the commands are writing to the credentials file while others read it, it's possible that if the writes are not atomic, the reads would be in a transient state that is malformed.",1.0,"Malformed credentials file - Running Pulumi CLI commands in CI is failing with the following error: ``` error: could not get cloud url: unmarshalling credentials file: unexpected end of JSON input ``` The following scripts are representative of what was running when the error happened: ```bash # prepare.sh curl https://sdk.cloud.google.com | bash > /dev/null export PATH=$PATH:/root/google-cloud-sdk/bin KEY_FILE=""$(mktemp)"" echo ""$GCP_SERVICE_ACCOUNT"" > ""$KEY_FILE"" gcloud auth activate-service-account --key-file ""$KEY_FILE"" curl -fsSL https://get.pulumi.com/ | bash export PATH=$PATH:$HOME/.pulumi/bin pulumi login ``` ```bash # deploy.sh pulumi stack init ""$PULUMI_STACK"" || true if [[ ""$PULUMI_STACK"" != ""production"" ]] || [[ ""$PULUMI_STACK"" != ""staging"" ]]; then pulumi config refresh --stack development || true json=""$(pulumi config --stack development --show-secrets --json)"" keys=""$(echo $json | jq -r 'keys[]')"" for key in $(echo $keys | tr '\n' ' ') do value=""$(echo $json | jq -r '.[""'""$key""'""].value')"" is_secret=""$(echo $json | jq -r '.[""'""$key""'""].secret')"" pulumi config --stack ""$PULUMI_STACK"" set ""$key"" ""$value"" $(if [ $is_secret = 'true' ]; then echo '--secret'; fi) done fi ``` Some things to keep in mind: - In some instances, the error doesn't happen at all. - The `prepare.sh` bash script is run once before updating each stack. - The `deploy.sh` bash script above executes in parallel for multiple Pulumi projects. In our case there are around 9 projects. - The scripts were modified in some runs to output the Google key file and well as the Pulumi credentials file (`$HOME/.pulumi/credentials.json`) before running the scripts and they seemed to always be valid JSON. - The scripts were modified to trace each command and the error seemed to happen during the `pulumi config set` call in `deploy.sh`. - There is a conversation in the Pulumi Slack in the channel `#gcp` that precludes this issue. Evaluating all the information throughout the conversation and presented above a potential cause for the issue could be concurrent use of Pulumi commands in the CLI. 
If the commands are writing to the credentials file while others read it, it's possible that if the writes are not atomic, the reads would be in a transient state that is malformed.",1,malformed credentials file running pulumi cli commands in ci is failing with the following error error could not get cloud url unmarshalling credentials file unexpected end of json input the following scripts are representative of what was running when the error happened bash prepare sh curl bash dev null export path path root google cloud sdk bin key file mktemp echo gcp service account key file gcloud auth activate service account key file key file curl fssl bash export path path home pulumi bin pulumi login bash deploy sh pulumi stack init pulumi stack true if then pulumi config refresh stack development true json pulumi config stack development show secrets json keys echo json jq r keys for key in echo keys tr n do value echo json jq r value is secret echo json jq r secret pulumi config stack pulumi stack set key value if then echo secret fi done fi some things to keep in mind in some instances the error doesn t happen at all the prepare sh bash script is run once before updating each stack the deploy sh bash script above executes in parallel for multiple pulumi projects in our case there are around projects the scripts were modified in some runs to output the google key file and well as the pulumi credentials file home pulumi credentials json before running the scripts and they seemed to always be valid json the scripts were modified to trace each command and the error seemed to happen during the pulumi config set call in deploy sh there is a conversation in the pulumi slack in the channel gcp that precludes this issue evaluating all the information throughout the conversation and presented above a potential cause for the issue could be concurrent use of pulumi commands in the cli if the commands are writing to the credentials file while others read it it s possible that if the writes are not atomic the reads would be in a transient state that is malformed ,1 249548,18858208418.0,IssuesEvent,2021-11-12 09:30:22,lethiciars/pe,https://api.github.com/repos/lethiciars/pe,opened,Inconsistent image size used ,severity.VeryLow type.DocumentationBug,"The second diagram could be made smaller to be consistent with the font size of the DG and the size of other diarams. ![Screenshot 2021-11-12 at 5.28.31 PM.png](https://raw.githubusercontent.com/lethiciars/pe/main/files/76f74ee0-fb6e-40cf-9349-7ef543e442cb.png) ",1.0,"Inconsistent image size used - The second diagram could be made smaller to be consistent with the font size of the DG and the size of other diarams. 
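Back to the Pulumi credentials report above: if the root cause is a non-atomic write racing concurrent readers of `~/.pulumi/credentials.json`, the standard mitigation is write-to-temp-then-rename. Pulumi's CLI is Go, so this Python sketch only illustrates the pattern, and the payload shown is a placeholder.

```python
# Sketch of the write-temp-then-rename pattern that prevents readers from ever
# seeing a half-written credentials.json (not Pulumi's actual implementation).
import json, os, tempfile

def atomic_write_json(path: str, payload: dict) -> None:
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(payload, f)
            f.flush()
            os.fsync(f.fileno())
        # Atomic on POSIX: readers see the old file or the new one, never a partial write.
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise

atomic_write_json(os.path.expanduser("~/.pulumi/credentials.json"),
                  {"current": "https://api.pulumi.com"})  # placeholder payload
```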
![Screenshot 2021-11-12 at 5.28.31 PM.png](https://raw.githubusercontent.com/lethiciars/pe/main/files/76f74ee0-fb6e-40cf-9349-7ef543e442cb.png) ",0,inconsistent image size used the second diagram could be made smaller to be consistent with the font size of the dg and the size of other diarams ,0 73310,3410725386.0,IssuesEvent,2015-12-04 21:34:32,McStasMcXtrace/McWeb,https://api.github.com/repos/McStasMcXtrace/McWeb,closed,Picture display of configure instrument,enhancement mcweb frontend Priority 1000,In configure.html put a display of the particular instrument chosen to simulate if there is a .png file with the same basename as the simulation itself.,1.0,Picture display of configure instrument - In configure.html put a display of the particular instrument chosen to simulate if there is a .png file with the same basename as the simulation itself.,0,picture display of configure instrument in configure html put a display of the particular instrument chosen to simulate if there is a png file with the same basename as the simulation itself ,0 369215,10889608262.0,IssuesEvent,2019-11-18 18:36:59,craftercms/craftercms,https://api.github.com/repos/craftercms/craftercms,closed,"[studio-ui] Inconsistent naming for ""Global Configuration""",priority: low quality,"## Describe the bug Inconsistent naming for ""Global Configuration"". In the `Main Menu` the global configuration module is displayed as `Global Config` and when you click on it, the screen displays `Global Configuration` ## To Reproduce Steps to reproduce the behavior: 1. Start Studio 2. Click on the `Main Menu` in the context nav 3. Click on `Global Config` from the `Main Menu` and notice the names ![Screen Shot 2019-11-12 at 11 16 35 AM](https://user-images.githubusercontent.com/25483966/68689264-13694a00-053e-11ea-8cfe-b90d92bb2865.png) ## Expected behavior The names should be consistent, say use `Global Configuration` in the `Main Menu` ## Screenshots {{If applicable, add screenshots to help explain your problem.}} ## Logs {{If applicable, attach the logs/stack trace (use https://gist.github.com).}} ## Specs ### Version Studio Version Number: 3.1.4-SNAPSHOT-81e2c9 Build Number: 81e2c94346fac49a59b40fa8b7ccc9fc0a5c02de Build Date/Time: 11-12-2019 09:55:05 -0500 ### OS OS X ### Browser Chrome Browser ## Additional context {{Add any other context about the problem here.}} ",1.0,"[studio-ui] Inconsistent naming for ""Global Configuration"" - ## Describe the bug Inconsistent naming for ""Global Configuration"". In the `Main Menu` the global configuration module is displayed as `Global Config` and when you click on it, the screen displays `Global Configuration` ## To Reproduce Steps to reproduce the behavior: 1. Start Studio 2. Click on the `Main Menu` in the context nav 3. 
Click on `Global Config` from the `Main Menu` and notice the names ![Screen Shot 2019-11-12 at 11 16 35 AM](https://user-images.githubusercontent.com/25483966/68689264-13694a00-053e-11ea-8cfe-b90d92bb2865.png) ## Expected behavior The names should be consistent, say use `Global Configuration` in the `Main Menu` ## Screenshots {{If applicable, add screenshots to help explain your problem.}} ## Logs {{If applicable, attach the logs/stack trace (use https://gist.github.com).}} ## Specs ### Version Studio Version Number: 3.1.4-SNAPSHOT-81e2c9 Build Number: 81e2c94346fac49a59b40fa8b7ccc9fc0a5c02de Build Date/Time: 11-12-2019 09:55:05 -0500 ### OS OS X ### Browser Chrome Browser ## Additional context {{Add any other context about the problem here.}} ",0, inconsistent naming for global configuration describe the bug inconsistent naming for global configuration in the main menu the global configuration module is displayed as global config and when you click on it the screen displays global configuration to reproduce steps to reproduce the behavior start studio click on the main menu in the context nav click on global config from the main menu and notice the names expected behavior the names should be consistent say use global configuration in the main menu screenshots if applicable add screenshots to help explain your problem logs if applicable attach the logs stack trace use specs version studio version number snapshot build number build date time os os x browser chrome browser additional context add any other context about the problem here ,0 138231,5330108067.0,IssuesEvent,2017-02-15 16:20:47,HBHWoolacotts/RPii,https://api.github.com/repos/HBHWoolacotts/RPii,opened,Add-on items being removed still show in the sale balance,Label: General RP Bugs and Support Priority - High,"EXAMPLE SALE: 1958754 Addon item was removed but you can see the value of £99.99 is still included in the sale balance. [nb this sale can be deleted once problem is fixed, as it's now loaded correctly on a different sale on the customer's account] ![image](https://cloud.githubusercontent.com/assets/10868496/22983641/9fe61b10-f39a-11e6-93b3-d6d33abecbd1.png) ",1.0,"Add-on items being removed still show in the sale balance - EXAMPLE SALE: 1958754 Addon item was removed but you can see the value of £99.99 is still included in the sale balance. 
[nb this sale can be deleted once problem is fixed, as it's now loaded correctly on a different sale on the customer's account] ![image](https://cloud.githubusercontent.com/assets/10868496/22983641/9fe61b10-f39a-11e6-93b3-d6d33abecbd1.png) ",0,add on items being removed still show in the sale balance example sale addon item was removed but you can see the value of £ is still included in the sale balance ,0 10258,32056865904.0,IssuesEvent,2023-09-24 07:17:32,dcaribou/transfermarkt-datasets,https://api.github.com/repos/dcaribou/transfermarkt-datasets,closed,Setup public access to DVC assets,enhancement automations,"In order for everyone to be able to access DVC assets and appeareces snapshots, allow public access ~~on the bucket~~ somehow A few things to consider * https://www.reddit.com/r/aws/comments/99ird9/is_it_safe_to_have_an_s3_bucket_with_private * ~~Is S3 throttling a possible solution to avoid incurring into big costs buts still allowing public access?~~ * ~~AWS Bugdet actions seem to be way to go for implementing a security shutdown of the bucket access if traffic increases too much [here](https://aws.amazon.com/about-aws/whats-new/2020/10/announcing-aws-budgets-actions/)~~ * [This](http://help.reserved.ai/en/articles/4210890-why-are-my-aws-tags-not-showing-up-in-the-dashboard#:~:text=To%20rectify%20this%20you%20can,Usage%20Reports%20are%20also%20enabled.) AWS Budget actions does not support triggering a change in the ACL of an S3 bucket ([the way](https://aws.amazon.com/premiumsupport/knowledge-center/read-access-objects-s3-bucket/) to change the status of a bucket from public to private). Explore the option of setting up a simple lambda that subscribes to AWS Budget events to do the thing. Useful resources * Setting up a lambda with Terraform ([link](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_function)) * Setting up a subscription from SNS to a lambda with Terraform ([link](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sns_topic_subscription)) * Lambda deployment packages ([link](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-package.html)) * Lambda development tools ([link](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-tools.html)) * Terraform example ([link](https://github.com/spring-media/terraform-aws-lambda/blob/v5.2.1/modules/event/sns/main.tf))",1.0,"Setup public access to DVC assets - In order for everyone to be able to access DVC assets and appeareces snapshots, allow public access ~~on the bucket~~ somehow A few things to consider * https://www.reddit.com/r/aws/comments/99ird9/is_it_safe_to_have_an_s3_bucket_with_private * ~~Is S3 throttling a possible solution to avoid incurring into big costs buts still allowing public access?~~ * ~~AWS Bugdet actions seem to be way to go for implementing a security shutdown of the bucket access if traffic increases too much [here](https://aws.amazon.com/about-aws/whats-new/2020/10/announcing-aws-budgets-actions/)~~ * [This](http://help.reserved.ai/en/articles/4210890-why-are-my-aws-tags-not-showing-up-in-the-dashboard#:~:text=To%20rectify%20this%20you%20can,Usage%20Reports%20are%20also%20enabled.) AWS Budget actions does not support triggering a change in the ACL of an S3 bucket ([the way](https://aws.amazon.com/premiumsupport/knowledge-center/read-access-objects-s3-bucket/) to change the status of a bucket from public to private). Explore the option of setting up a simple lambda that subscribes to AWS Budget events to do the thing. 
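A hedged sketch of the budget-triggered shutdown Lambda floated just above for the transfermarkt-datasets bucket: on an AWS Budgets alert delivered through SNS, flip on the bucket's public access block. The bucket name here is hypothetical, and the IAM wiring is omitted.

```python
# Hedged sketch of the security-shutdown Lambda: when an AWS Budgets alert
# arrives via SNS, block all public access to the data bucket.
import boto3

BUCKET = "transfermarkt-datasets-public"  # hypothetical name, not the real bucket

def handler(event, context):
    s3 = boto3.client("s3")
    s3.put_public_access_block(
        Bucket=BUCKET,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    print(f"Public access blocked on {BUCKET} after budget alert: {event}")
```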
Useful resources * Setting up a lambda with Terraform ([link](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_function)) * Setting up a subscription from SNS to a lambda with Terraform ([link](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sns_topic_subscription)) * Lambda deployment packages ([link](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-package.html)) * Lambda development tools ([link](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-tools.html)) * Terraform example ([link](https://github.com/spring-media/terraform-aws-lambda/blob/v5.2.1/modules/event/sns/main.tf))",1,setup public access to dvc assets in order for everyone to be able to access dvc assets and appeareces snapshots allow public access on the bucket somehow a few things to consider is throttling a possible solution to avoid incurring into big costs buts still allowing public access aws bugdet actions seem to be way to go for implementing a security shutdown of the bucket access if traffic increases too much aws budget actions does not support triggering a change in the acl of an bucket to change the status of a bucket from public to private explore the option of setting up a simple lambda that subscribes to aws budget events to do the thing useful resources setting up a lambda with terraform setting up a subscription from sns to a lambda with terraform lambda deployment packages lambda development tools terraform example ,1 5802,21184245016.0,IssuesEvent,2022-04-08 11:00:57,home-assistant/home-assistant.io,https://api.github.com/repos/home-assistant/home-assistant.io,closed,Documentation out of date?,Stale automation getting-started,"### Feedback Under ""actions"" HA presents a potential list of targets, but not the opportunity to enter a free-form entity name ""entity_id: all"". Also, ""blueprint"" is an new option when an automation is added. This is not reflected in the documentation. ### URL https://www.home-assistant.io/getting-started/automation/ ### Version 2021.12.10 ### Additional information _No response_",1.0,"Documentation out of date? - ### Feedback Under ""actions"" HA presents a potential list of targets, but not the opportunity to enter a free-form entity name ""entity_id: all"". Also, ""blueprint"" is an new option when an automation is added. This is not reflected in the documentation. 
### URL https://www.home-assistant.io/getting-started/automation/ ### Version 2021.12.10 ### Additional information _No response_",1,documentation out of date feedback under actions ha presents a potential list of targets but not the opportunity to enter a free form entity name entity id all also blueprint is an new option when an automation is added this is not reflected in the documentation url version additional information no response ,1 9920,30755200776.0,IssuesEvent,2023-07-29 01:26:35,Azure/azure-sdk-for-python,https://api.github.com/repos/Azure/azure-sdk-for-python,closed,DeserializationError's across multiple SDK packages,question Automation Network - CDN Network Mgmt customer-reported Data Bricks no-recent-activity needs-author-feedback,"- **Package Name**: databricks, cdn, frontdoor, automation, network - **Package Version**: multiple - **Operating System**: macOS - **Python Version**: 3.8 **Describe the bug** ``` AttributeError: 'str' object has no attribute 'get' File ""msrest/serialization.py"", line 1436, in _deserialize found_value = key_extractor(attr, attr_desc, data) File ""msrest/serialization.py"", line 1180, in rest_key_extractor return working_data.get(key) DeserializationError: (""Unable to deserialize to object: type, AttributeError: 'str' object has no attribute 'get'"", AttributeError(""'str' object has no attribute 'get'"")) (9 additional frame(s) were not displayed) ... File ""msrest/serialization.py"", line 1376, in __call__ return self._deserialize(target_obj, data) File ""msrest/serialization.py"", line 1454, in _deserialize raise_with_traceback(DeserializationError, msg, err) File ""msrest/exceptions.py"", line 51, in raise_with_traceback raise error.with_traceback(exc_traceback) File ""msrest/serialization.py"", line 1436, in _deserialize found_value = key_extractor(attr, attr_desc, data) File ""msrest/serialization.py"", line 1180, in rest_key_extractor return working_data.get(key) ``` I am seeing this bug occur across many azure packages. It seems that Azure is not surfacing error's correctly up through the api calls. I am seeing it in the following calls (but there are likely more): - azure.mgmt.cdn = profiles.list() - azure.mgmt.frontdoor = front_doors.list() - azure.mgmt.automation = source_control.list_by_automation_account() - azure.mgmt.network = ip_groups.list() - azure.mgmt.network = network_watchers.list_all() - azure.mgmt.databricks = workspaces.list_by_subscription() In `msrest/serialization.py` this code is where the error occurs: ``` # that all properties under are None as well # https://github.com/Azure/msrest-for-python/issues/197 return None key = '.'.join(dict_keys[1:]) return working_data.get(key) <----- error here ``` Examples of what `working_data` value is when error occurs: ``` '{""error"":{""code"":""DisallowedOperation"",""message"":""The current subscription type is not permitted to perform operations on any provider namespace. Please use a different subscription.""}}' ``` ``` '{""error"":{""code"":""InvalidResourceType"",""message"":""The resource type could not be found in the namespace \'Microsoft.Network\' for api version \'2020-05-01\'.""}}' ``` As you can see these errors are flat strings, not dictionaries causing the attribute/deserialization error. This is also causing the actual error to be suppressed, meaning I cannot handle it myself. 
In the package/operation `azure.mgmt.databricks` operation `workspaces.list_by_subscription()` `working_data` is: ``` [ { id: 'redacted', location: 'centralus', name: 'redacted', properties: {}, sku: {}, tags: {}, type: 'Microsoft.Databricks... } ] ``` Which causes a similar error: ``` AttributeError: 'list' object has no attribute 'get' File ""msrest/serialization.py"", line 1436, in _deserialize found_value = key_extractor(attr, attr_desc, data) File ""msrest/serialization.py"", line 1180, in rest_key_extractor return working_data.get(key) DeserializationError: (""Unable to deserialize to object: type, AttributeError: 'list' object has no attribute 'get'"", AttributeError(""'list' object has no attribute 'get'"")) (9 additional frame(s) were not displayed) ... File ""msrest/serialization.py"", line 1376, in __call__ return self._deserialize(target_obj, data) File ""msrest/serialization.py"", line 1454, in _deserialize raise_with_traceback(DeserializationError, msg, err) File ""msrest/exceptions.py"", line 51, in raise_with_traceback raise error.with_traceback(exc_traceback) File ""msrest/serialization.py"", line 1436, in _deserialize found_value = key_extractor(attr, attr_desc, data) File ""msrest/serialization.py"", line 1180, in rest_key_extractor return working_data.get(key) ``` ",1.0,"DeserializationError's across multiple SDK packages - - **Package Name**: databricks, cdn, frontdoor, automation, network - **Package Version**: multiple - **Operating System**: macOS - **Python Version**: 3.8 **Describe the bug** ``` AttributeError: 'str' object has no attribute 'get' File ""msrest/serialization.py"", line 1436, in _deserialize found_value = key_extractor(attr, attr_desc, data) File ""msrest/serialization.py"", line 1180, in rest_key_extractor return working_data.get(key) DeserializationError: (""Unable to deserialize to object: type, AttributeError: 'str' object has no attribute 'get'"", AttributeError(""'str' object has no attribute 'get'"")) (9 additional frame(s) were not displayed) ... File ""msrest/serialization.py"", line 1376, in __call__ return self._deserialize(target_obj, data) File ""msrest/serialization.py"", line 1454, in _deserialize raise_with_traceback(DeserializationError, msg, err) File ""msrest/exceptions.py"", line 51, in raise_with_traceback raise error.with_traceback(exc_traceback) File ""msrest/serialization.py"", line 1436, in _deserialize found_value = key_extractor(attr, attr_desc, data) File ""msrest/serialization.py"", line 1180, in rest_key_extractor return working_data.get(key) ``` I am seeing this bug occur across many azure packages. It seems that Azure is not surfacing error's correctly up through the api calls. I am seeing it in the following calls (but there are likely more): - azure.mgmt.cdn = profiles.list() - azure.mgmt.frontdoor = front_doors.list() - azure.mgmt.automation = source_control.list_by_automation_account() - azure.mgmt.network = ip_groups.list() - azure.mgmt.network = network_watchers.list_all() - azure.mgmt.databricks = workspaces.list_by_subscription() In `msrest/serialization.py` this code is where the error occurs: ``` # that all properties under are None as well # https://github.com/Azure/msrest-for-python/issues/197 return None key = '.'.join(dict_keys[1:]) return working_data.get(key) <----- error here ``` Examples of what `working_data` value is when error occurs: ``` '{""error"":{""code"":""DisallowedOperation"",""message"":""The current subscription type is not permitted to perform operations on any provider namespace. 
Please use a different subscription.""}}' ``` ``` '{""error"":{""code"":""InvalidResourceType"",""message"":""The resource type could not be found in the namespace \'Microsoft.Network\' for api version \'2020-05-01\'.""}}' ``` As you can see these errors are flat strings, not dictionaries causing the attribute/deserialization error. This is also causing the actual error to be suppressed, meaning I cannot handle it myself. In the package/operation `azure.mgmt.databricks` operation `workspaces.list_by_subscription()` `working_data` is: ``` [ { id: 'redacted', location: 'centralus', name: 'redacted', properties: {}, sku: {}, tags: {}, type: 'Microsoft.Databricks... } ] ``` Which causes a similar error: ``` AttributeError: 'list' object has no attribute 'get' File ""msrest/serialization.py"", line 1436, in _deserialize found_value = key_extractor(attr, attr_desc, data) File ""msrest/serialization.py"", line 1180, in rest_key_extractor return working_data.get(key) DeserializationError: (""Unable to deserialize to object: type, AttributeError: 'list' object has no attribute 'get'"", AttributeError(""'list' object has no attribute 'get'"")) (9 additional frame(s) were not displayed) ... File ""msrest/serialization.py"", line 1376, in __call__ return self._deserialize(target_obj, data) File ""msrest/serialization.py"", line 1454, in _deserialize raise_with_traceback(DeserializationError, msg, err) File ""msrest/exceptions.py"", line 51, in raise_with_traceback raise error.with_traceback(exc_traceback) File ""msrest/serialization.py"", line 1436, in _deserialize found_value = key_extractor(attr, attr_desc, data) File ""msrest/serialization.py"", line 1180, in rest_key_extractor return working_data.get(key) ``` ",1,deserializationerror s across multiple sdk packages package name databricks cdn frontdoor automation network package version multiple operating system macos python version describe the bug attributeerror str object has no attribute get file msrest serialization py line in deserialize found value key extractor attr attr desc data file msrest serialization py line in rest key extractor return working data get key deserializationerror unable to deserialize to object type attributeerror str object has no attribute get attributeerror str object has no attribute get additional frame s were not displayed file msrest serialization py line in call return self deserialize target obj data file msrest serialization py line in deserialize raise with traceback deserializationerror msg err file msrest exceptions py line in raise with traceback raise error with traceback exc traceback file msrest serialization py line in deserialize found value key extractor attr attr desc data file msrest serialization py line in rest key extractor return working data get key i am seeing this bug occur across many azure packages it seems that azure is not surfacing error s correctly up through the api calls i am seeing it in the following calls but there are likely more azure mgmt cdn profiles list azure mgmt frontdoor front doors list azure mgmt automation source control list by automation account azure mgmt network ip groups list azure mgmt network network watchers list all azure mgmt databricks workspaces list by subscription in msrest serialization py this code is where the error occurs that all properties under are none as well return none key join dict keys return working data get key error here examples of what working data value is when error occurs error code disallowedoperation message the current subscription 
type is not permitted to perform operations on any provider namespace please use a different subscription error code invalidresourcetype message the resource type could not be found in the namespace microsoft network for api version as you can see these errors are flat strings not dictionaries causing the attribute deserialization error this is also causing the actual error to be suppressed meaning i cannot handle it myself in the package operation azure mgmt databricks operation workspaces list by subscription working data is id redacted location centralus name redacted properties sku tags type microsoft databricks which causes a similar error attributeerror list object has no attribute get file msrest serialization py line in deserialize found value key extractor attr attr desc data file msrest serialization py line in rest key extractor return working data get key deserializationerror unable to deserialize to object type attributeerror list object has no attribute get attributeerror list object has no attribute get additional frame s were not displayed file msrest serialization py line in call return self deserialize target obj data file msrest serialization py line in deserialize raise with traceback deserializationerror msg err file msrest exceptions py line in raise with traceback raise error with traceback exc traceback file msrest serialization py line in deserialize found value key extractor attr attr desc data file msrest serialization py line in rest key extractor return working data get key ,1 6827,2595053709.0,IssuesEvent,2015-02-20 11:04:26,handsontable/handsontable,https://api.github.com/repos/handsontable/handsontable,closed,Horizontally expanding input field when use multiple lines,Bug Cell type: base / text / password Guess: < 2 hours Priority: low,"Input field undesirably expands its width when I start entering content in new line of a cell. **Steps to reproduce the problem:** - Go to http://handsontable.com - Enter longer text in any cell - Press ALT+Enter - Start writing in the new line :information_source: > Handsontable v. 0.11.4 > Windows 8.1 > Chrome v. 38.0.2125.111 m ![input-width](https://cloud.githubusercontent.com/assets/8048526/5030765/92c31a3c-6b52-11e4-9aa3-934819c906c9.jpg) ",1.0,"Horizontally expanding input field when use multiple lines - Input field undesirably expands its width when I start entering content in new line of a cell. **Steps to reproduce the problem:** - Go to http://handsontable.com - Enter longer text in any cell - Press ALT+Enter - Start writing in the new line :information_source: > Handsontable v. 0.11.4 > Windows 8.1 > Chrome v. 38.0.2125.111 m ![input-width](https://cloud.githubusercontent.com/assets/8048526/5030765/92c31a3c-6b52-11e4-9aa3-934819c906c9.jpg) ",0,horizontally expanding input field when use multiple lines input field undesirably expands its width when i start entering content in new line of a cell steps to reproduce the problem go to enter longer text in any cell press alt enter start writing in the new line information source handsontable v windows chrome v m ,0 113533,24440302351.0,IssuesEvent,2022-10-06 14:11:41,Onelinerhub/onelinerhub,https://api.github.com/repos/Onelinerhub/onelinerhub,closed,"Short solution needed: ""Get image orientation"" (php-gd)",help wanted good first issue code php-gd,"Please help us write most modern and shortest code solution for this issue: **Get image orientation** (technology: [php-gd](https://onelinerhub.com/php-gd)) ### Fast way Just write the code solution in the comments. 
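For the azure-sdk-for-python DeserializationError report above, a caller-side stopgap until the flat error payloads surface properly: catch `msrest.exceptions.DeserializationError` around the paged call so the underlying service error is at least visible instead of an opaque AttributeError. The client object is assumed to be constructed elsewhere.

```python
# Hedged workaround sketch for the swallowed service errors described above.
# Assumes an already-constructed NetworkManagementClient.
from msrest.exceptions import DeserializationError

def list_ip_groups(network_client):
    try:
        # Deserialization happens lazily while the pager is iterated,
        # so the except clause below catches it.
        return list(network_client.ip_groups.list())
    except DeserializationError as err:
        # err wraps the flat JSON error string the service returned, e.g.
        # '{"error":{"code":"DisallowedOperation",...}}'
        print(f"service returned a non-paged error payload: {err}")
        return []
```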
### Prefered way 1. Create [pull request](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md) with a new code file inside [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox). 2. Don't forget to [use comments](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md#code-file-md-format) explain solution. 3. Link to this issue in comments of pull request.",1.0,"Short solution needed: ""Get image orientation"" (php-gd) - Please help us write most modern and shortest code solution for this issue: **Get image orientation** (technology: [php-gd](https://onelinerhub.com/php-gd)) ### Fast way Just write the code solution in the comments. ### Prefered way 1. Create [pull request](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md) with a new code file inside [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox). 2. Don't forget to [use comments](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md#code-file-md-format) explain solution. 3. Link to this issue in comments of pull request.",0,short solution needed get image orientation php gd please help us write most modern and shortest code solution for this issue get image orientation technology fast way just write the code solution in the comments prefered way create with a new code file inside don t forget to explain solution link to this issue in comments of pull request ,0 294010,9012033252.0,IssuesEvent,2019-02-05 15:59:41,Deepomatic/dmake,https://api.github.com/repos/Deepomatic/dmake,opened,"Generalize DMAKE_SKIP_TESTS: --skip type[:all|dependencies|service/name,]",enhancement medium priority size/L,"Following-up on https://github.com/Deepomatic/dmake/issues/235#issuecomment-455613117 Maybe `--skip type[:all|dependencies|service/name,]` Examples: - `--skip test` == `--skip test:all` == `DMAKE_SKIP_TESTS=1` - `--skip deploy:dependencies` == current `dmake deploy A` behavior - `--skip test:service/web` (Related to https://github.com/Deepomatic/dmake/pull/213#issuecomment-392739655 where I proposed to control which dependencies to work on)",1.0,"Generalize DMAKE_SKIP_TESTS: --skip type[:all|dependencies|service/name,] - Following-up on https://github.com/Deepomatic/dmake/issues/235#issuecomment-455613117 Maybe `--skip type[:all|dependencies|service/name,]` Examples: - `--skip test` == `--skip test:all` == `DMAKE_SKIP_TESTS=1` - `--skip deploy:dependencies` == current `dmake deploy A` behavior - `--skip test:service/web` (Related to https://github.com/Deepomatic/dmake/pull/213#issuecomment-392739655 where I proposed to control which dependencies to work on)",0,generalize dmake skip tests skip type following up on maybe skip type examples skip test skip test all dmake skip tests skip deploy dependencies current dmake deploy a behavior skip test service web related to where i proposed to control which dependencies to work on ,0 464943,13348546491.0,IssuesEvent,2020-08-29 19:06:22,MicrosoftDocs/typography-issues,https://api.github.com/repos/MicrosoftDocs/typography-issues,closed,"""Features"" section",OpenType spec Priority 4,"> Features define the basic functionality of the font. This is a strange way to introduce the topic. What do you mean by ""basic functionality"" here? Better to describe what a feature is: a collection of lookups which specify information to the shaper about what substitutions and positioning adjustments need to be applied in particular circumstances. 
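Not the php-gd one-liner the Onelinerhub issue above asks for, but the underlying check sketched in Python with Pillow so the logic is concrete: consult the EXIF Orientation tag (274) before comparing width to height.

```python
# Not the requested php-gd one-liner, just the shape of the check, sketched
# with Pillow for illustration. EXIF tag 274 is the standard Orientation tag.
from PIL import Image

def get_orientation(path: str) -> str:
    with Image.open(path) as im:
        exif_orientation = im.getexif().get(274)  # 1..8 per the EXIF spec, or None
        if exif_orientation in (5, 6, 7, 8):      # stored rotated/transposed 90 degrees
            width, height = im.height, im.width
        else:
            width, height = im.width, im.height
    return "landscape" if width >= height else "portrait"
```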
> The OpenType Layout feature model provides great flexibility to font developers because features do not have to be predefined by Microsoft Corporation. Instead, font developers can work with application developers to determine useful features for fonts, add such features to OpenType Layout fonts, and enable client applications to support such features. This paragraph is somewhat misleading; was it written before the explosion of features in the features registry? While it is technically true that font developers could create arbitrary features outside the spec and then somehow convince layout engines to turn on processing for them, it's hardly common or best practice. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 5b15ab9e-86c5-f6e0-0145-f9de77c60141 * Version Independent ID: 0ebe3dec-cc24-b6a4-01b8-73bfdb79a71a * Content: [Advanced typographic tables - OpenType Layout - Typography](https://docs.microsoft.com/en-us/typography/opentype/spec/ttochap1) * Content Source: [typographydocs/opentype/spec/ttochap1.md](https://github.com/MicrosoftDocs/typography/blob/live/typographydocs/opentype/spec/ttochap1.md) * Product: **typography** * GitHub Login: @PeterCon * Microsoft Alias: **PeterCon**",1.0,"""Features"" section - > Features define the basic functionality of the font. This is a strange way to introduce the topic. What do you mean by ""basic functionality"" here? Better to describe what a feature is: a collection of lookups which specify information to the shaper about what substitutions and positioning adjustments need to be applied in particular circumstances. > The OpenType Layout feature model provides great flexibility to font developers because features do not have to be predefined by Microsoft Corporation. Instead, font developers can work with application developers to determine useful features for fonts, add such features to OpenType Layout fonts, and enable client applications to support such features. This paragraph is somewhat misleading; was it written before the explosion of features in the features registry? While it is technically true that font developers could create arbitrary features outside the spec and then somehow convince layout engines to turn on processing for them, it's hardly common or best practice. --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 5b15ab9e-86c5-f6e0-0145-f9de77c60141 * Version Independent ID: 0ebe3dec-cc24-b6a4-01b8-73bfdb79a71a * Content: [Advanced typographic tables - OpenType Layout - Typography](https://docs.microsoft.com/en-us/typography/opentype/spec/ttochap1) * Content Source: [typographydocs/opentype/spec/ttochap1.md](https://github.com/MicrosoftDocs/typography/blob/live/typographydocs/opentype/spec/ttochap1.md) * Product: **typography** * GitHub Login: @PeterCon * Microsoft Alias: **PeterCon**",0, features section features define the basic functionality of the font this is a strange way to introduce the topic what do you mean by basic functionality here better to describe what a feature is a collection of lookups which specify information to the shaper about what substitutions and positioning adjustments need to be applied in particular circumstances the opentype layout feature model provides great flexibility to font developers because features do not have to be predefined by microsoft corporation instead font developers can work with application developers to determine useful features for fonts add such features to opentype layout fonts and enable client applications to support such features this paragraph is somewhat misleading was it written before the explosion of features in the features registry while it is technically true that font developers could create arbitrary features outside the spec and then somehow convince layout engines to turn on processing for them it s hardly common or best practice document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product typography github login petercon microsoft alias petercon ,0 139460,20865972618.0,IssuesEvent,2022-03-22 07:11:37,zesty-io/nextjs-website,https://api.github.com/repos/zesty-io/nextjs-website,closed,[MOBILE] Homepage - indentation,High Priority CSS Design,"Please fix odd indentation in screenshot below. Make sure desktop styling isn't affected. ![Screenshot_20220321-091536_Chrome.jpg](https://user-images.githubusercontent.com/101665566/159326268-8def8915-326e-458a-90e6-28c1445b4b5c.jpg)",1.0,"[MOBILE] Homepage - indentation - Please fix odd indentation in screenshot below. Make sure desktop styling isn't affected. ![Screenshot_20220321-091536_Chrome.jpg](https://user-images.githubusercontent.com/101665566/159326268-8def8915-326e-458a-90e6-28c1445b4b5c.jpg)",0, homepage indentation please fix odd indentation in screenshot below make sure desktop styling isn t affected ,0 8321,26692665149.0,IssuesEvent,2023-01-27 07:18:24,hackforla/website,https://api.github.com/repos/hackforla/website,reopened,GitHub Actions: Review ETA and availability,role: back end/devOps Size: Medium Feature: Board/GitHub Maintenance automation size: 1pt,"### Overview Team members reviewing a PR should add their review ETA and availability (in a comment) at the time of assignment so the team is aware of the timeline and the review process becomes more efficient. 
### Action Items - [x] Review the notes below to understand the GitHub actions architecture we are currently moving towards - [x] Please go through the wiki article on [Hack for LA's GitHub Actions](https://github.com/hackforla/website/wiki/Hack-for-LA's-GitHub-Actions) - [x] Analyze and implement if this should be a new job under any of the GitHub action files mentioned in the Architecture notes below or should be in a separate file based on the following: - [ ] This job triggers when any reviewer is added to the PR - [ ] The comment should address the reviewers as @reviewerusername - [ ] The comment should request reviewers to add their review ETA and availability - [ ] Make sure other jobs are not affected by this ### Checks - [ ] Test in your local environment that it works - [ ] If a reviewer is added, the comment should be posted in the PR ### Resources/Instructions Never done GitHub actions? [Start here!](https://docs.github.com/en/actions) [GitHub Complex Workflows doc](https://docs.github.com/en/actions/learn-github-actions/managing-complex-workflows) [GitHub Actions Workflow Directory](https://github.com/hackforla/website/tree/gh-pages/.github/workflows) [Events that trigger workflows](https://docs.github.com/en/actions/reference/events-that-trigger-workflows) [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions) [actions/github-script](https://github.com/actions/github-script) [GitHub RESTAPI](https://docs.github.com/en/rest) #### Architecture Notes The idea behind the refactor is to organize our GitHub Actions so that developers can easily maintain and understand them. Currently, we want our GitHub Actions to be structured like so based on this [proposal](https://docs.google.com/spreadsheets/d/12NcZQoyGYlHlMQtJE2IM8xLYpHN75agb/edit#gid=1231634015): - Schedules (military time) - Schedule Friday 0700 - Schedule Thursday 1100 - Schedule Daily 1100 - Linters - Lint SCSS - PR Trigger - Add Linked Issue Labels to Pull Request - Add Pull Request Instructions - Issue Trigger - Add Missing Labels To Issues - WR - PR Trigger - WR Add Linked Issue Labels to Pull Request - WR Add Pull Request Instructions - WR - Issue Trigger Actions with the same triggers (excluding linters, which will be their own category) will live in the same github action file. Scheduled actions will live in the same file if they trigger on the same schedule (i.e. all files that trigger everyday at 11am will live in one file, while files that trigger on Friday at 7am will be on a separate file). That said, this structure is not set in stone. If any part of it feels strange, or you have questions, feel free to bring it up with the team so we can evolve this format!",1.0,"GitHub Actions: Review ETA and availability - ### Overview Team members reviewing a PR should add their review ETA and availability (in a comment) at the time of assignment so the team is aware of the timeline and the review process becomes more efficient. 
### Action Items - [x] Review the notes below to understand the GitHub actions architecture we are currently moving towards - [x] Please go through the wiki article on [Hack for LA's GitHub Actions](https://github.com/hackforla/website/wiki/Hack-for-LA's-GitHub-Actions) - [x] Analyze and implement if this should be a new job under any of the GitHub action files mentioned in the Architecture notes below or should be in a separate file based on the following: - [ ] This job triggers when any reviewer is added to the PR - [ ] The comment should address the reviewers as @reviewerusername - [ ] The comment should request reviewers to add their review ETA and availability - [ ] Make sure other jobs are not affected by this ### Checks - [ ] Test in your local environment that it works - [ ] If a reviewer is added, the comment should be posted in the PR ### Resources/Instructions Never done GitHub actions? [Start here!](https://docs.github.com/en/actions) [GitHub Complex Workflows doc](https://docs.github.com/en/actions/learn-github-actions/managing-complex-workflows) [GitHub Actions Workflow Directory](https://github.com/hackforla/website/tree/gh-pages/.github/workflows) [Events that trigger workflows](https://docs.github.com/en/actions/reference/events-that-trigger-workflows) [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions) [actions/github-script](https://github.com/actions/github-script) [GitHub RESTAPI](https://docs.github.com/en/rest) #### Architecture Notes The idea behind the refactor is to organize our GitHub Actions so that developers can easily maintain and understand them. Currently, we want our GitHub Actions to be structured like so based on this [proposal](https://docs.google.com/spreadsheets/d/12NcZQoyGYlHlMQtJE2IM8xLYpHN75agb/edit#gid=1231634015): - Schedules (military time) - Schedule Friday 0700 - Schedule Thursday 1100 - Schedule Daily 1100 - Linters - Lint SCSS - PR Trigger - Add Linked Issue Labels to Pull Request - Add Pull Request Instructions - Issue Trigger - Add Missing Labels To Issues - WR - PR Trigger - WR Add Linked Issue Labels to Pull Request - WR Add Pull Request Instructions - WR - Issue Trigger Actions with the same triggers (excluding linters, which will be their own category) will live in the same github action file. Scheduled actions will live in the same file if they trigger on the same schedule (i.e. all files that trigger everyday at 11am will live in one file, while files that trigger on Friday at 7am will be on a separate file). That said, this structure is not set in stone. 
If any part of it feels strange, or you have questions, feel free to bring it up with the team so we can evolve this format!",1,github actions review eta and availability overview team members reviewing a pr should add their review eta and availability in a comment at the time of assignment so the team is aware of the timeline and the review process becomes more efficient action items review the notes below to understand the github actions architecture we are currently moving towards please go through the wiki article on analyze and implement if this should be a new job under any of the github action files mentioned in the architecture notes below or should be in a separate file based on the following this job triggers when any reviewer is added to the pr the comment should address the reviewers as reviewerusername the comment should request reviewers to add their review eta and availability make sure other jobs are not affected by this checks test in your local environment that it works if a reviewer is added the comment should be posted in the pr resources instructions never done github actions architecture notes the idea behind the refactor is to organize our github actions so that developers can easily maintain and understand them currently we want our github actions to be structured like so based on this schedules military time schedule friday schedule thursday schedule daily linters lint scss pr trigger add linked issue labels to pull request add pull request instructions issue trigger add missing labels to issues wr pr trigger wr add linked issue labels to pull request wr add pull request instructions wr issue trigger actions with the same triggers excluding linters which will be their own category will live in the same github action file scheduled actions will live in the same file if they trigger on the same schedule i e all files that trigger everyday at will live in one file while files that trigger on friday at will be on a separate file that said this structure is not set in stone if any part of it feels strange or you have questions feel free to bring it up with the team so we can evolve this format ,1 83407,24055513170.0,IssuesEvent,2022-09-16 16:27:06,gradle/gradle,https://api.github.com/repos/gradle/gradle,closed,Nested composites with cyclic dependencies cause an error,a:bug in:composite-builds stale,"Version: 4.10.nightly Let's say you have the following dependencies between composite builds: ``` projectA --compile--> projectB --testcompile--> projectC --compile--> projectA ``` This does not create a cycle due to the testcompile dependency between projectB and projectC. Now let's say each of these projects is connected with composite build `includedBuild` such that ``` projectA includeBuild '../projectB' projectB includeBuild '../projectC' projectC includeBuild '../projectA' ``` Now if you try to build `cd projectA; ./gradlew build` This will cause the error: ``` Included build in /Users/nicholasdipiazza/projectA has the same root project name 'projectA' as the main build. ``` This is because you are building root project `projectA` but then projectC includes `projectA`. This should just realize that you already included that build and ignore it. The work around is to build this project from a different project root, then it works fine. 
",1.0,"Nested composites with cyclic dependencies cause an error - Version: 4.10.nightly Let's say you have the following dependencies between composite builds: ``` projectA --compile--> projectB --testcompile--> projectC --compile--> projectA ``` This does not create a cycle due to the testcompile dependency between projectB and projectC. Now let's say each of these projects is connected with composite build `includedBuild` such that ``` projectA includeBuild '../projectB' projectB includeBuild '../projectC' projectC includeBuild '../projectA' ``` Now if you try to build `cd projectA; ./gradlew build` This will cause the error: ``` Included build in /Users/nicholasdipiazza/projectA has the same root project name 'projectA' as the main build. ``` This is because you are building root project `projectA` but then projectC includes `projectA`. This should just realize that you already included that build and ignore it. The work around is to build this project from a different project root, then it works fine. ",0,nested composites with cyclic dependencies cause an error version nightly let s say you have the following dependencies between composite builds projecta compile projectb testcompile projectc compile projecta this does not create a cycle due to the testcompile dependency between projectb and projectc now let s say each of these projects is connected with composite build includedbuild such that projecta includebuild projectb projectb includebuild projectc projectc includebuild projecta now if you try to build cd projecta gradlew build this will cause the error included build in users nicholasdipiazza projecta has the same root project name projecta as the main build this is because you are building root project projecta but then projectc includes projecta this should just realize that you already included that build and ignore it the work around is to build this project from a different project root then it works fine ,0 236975,18148992037.0,IssuesEvent,2021-09-26 00:35:27,interactivethings/catalog,https://api.github.com/repos/interactivethings/catalog,closed,How to integrate Catalog into an existing CRA app?,documentation,"I would like to render a homepage / landing page at the root, and then load the catalog UI only in a sub-route (`/catalog`). I have a few ideas but I'm not sure how to proceed: 1. Manually render the Catalog in my app. I have tried to do this, but I am running into Markdown parsing issues: https://github.com/interactivethings/catalog/issues/399 2. Somehow define a ""Skinless page"" as suggested in this issue: https://github.com/interactivethings/catalog/issues/331 3. Run `create-catalog` inside my CRA'd app. I'm have a number of problems with this approach however: https://github.com/nulogy/design-system/tree/catalog-style-cra 1. I have to run 2 different dev serves; one for CRA with `yarn start` that runs on `localhost:3000`, and one for Catalog with `yarn catalog-start` that runs on `localhost:4000`. This makes it difficult to route between the two apps. 2. I have to run 2 different build commands that build into to 2 different directories: CRA's `yarn build` outputs to the `/build` directory, while Catalog's `yarn catalog-build` outputs into the `/catalog/build` directory. Do you have any tips about the best way to proceed? Thanks",1.0,"How to integrate Catalog into an existing CRA app? - I would like to render a homepage / landing page at the root, and then load the catalog UI only in a sub-route (`/catalog`). 
I have a few ideas but I'm not sure how to proceed: 1. Manually render the Catalog in my app. I have tried to do this, but I am running into Markdown parsing issues: https://github.com/interactivethings/catalog/issues/399 2. Somehow define a ""Skinless page"" as suggested in this issue: https://github.com/interactivethings/catalog/issues/331 3. Run `create-catalog` inside my CRA'd app. I have a number of problems with this approach, however: https://github.com/nulogy/design-system/tree/catalog-style-cra 1. I have to run 2 different dev servers; one for CRA with `yarn start` that runs on `localhost:3000`, and one for Catalog with `yarn catalog-start` that runs on `localhost:4000`. This makes it difficult to route between the two apps. 2. I have to run 2 different build commands that build into 2 different directories: CRA's `yarn build` outputs to the `/build` directory, while Catalog's `yarn catalog-build` outputs into the `/catalog/build` directory. Do you have any tips about the best way to proceed? Thanks",1.0,"How to integrate Catalog into an existing CRA app? - I would like to render a homepage / landing page at the root, and then load the catalog UI only in a sub-route (`/catalog`). 
I have a few ideas but I'm not sure how to proceed: 1. Manually render the Catalog in my app. I have tried to do this, but I am running into Markdown parsing issues: https://github.com/interactivethings/catalog/issues/399 2. Somehow define a ""Skinless page"" as suggested in this issue: https://github.com/interactivethings/catalog/issues/331 3. Run `create-catalog` inside my CRA'd app. I have a number of problems with this approach, however: https://github.com/nulogy/design-system/tree/catalog-style-cra 1. I have to run 2 different dev servers; one for CRA with `yarn start` that runs on `localhost:3000`, and one for Catalog with `yarn catalog-start` that runs on `localhost:4000`. This makes it difficult to route between the two apps. 2. I have to run 2 different build commands that build into 2 different directories: CRA's `yarn build` outputs to the `/build` directory, while Catalog's `yarn catalog-build` outputs into the `/catalog/build` directory. Do you have any tips about the best way to proceed? Thanks",0,how to integrate catalog into an existing cra app i would like to render a homepage landing page at the root and then load the catalog ui only in a sub route catalog i have a few ideas but i m not sure how to proceed manually render the catalog in my app i have tried to do this but i am running into markdown parsing issues somehow define a skinless page as suggested in this issue run create catalog inside my cra d app i have a number of problems with this approach however i have to run different dev servers one for cra with yarn start that runs on localhost and one for catalog with yarn catalog start that runs on localhost this makes it difficult to route between the two apps i have to run different build commands that build into different directories cra s yarn build outputs to the build directory while catalog s yarn catalog build outputs into the catalog build directory do you have any tips about the best way to proceed thanks,0 7273,24556865108.0,IssuesEvent,2022-10-12 16:34:10,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,opened,Cypress Test - Keycloak Migration,automation,"- [ ] Update existing test as per the changes to the User name logic - [ ] Create scenario for user migration",1.0,"Cypress Test - Keycloak Migration - - [ ] Update existing test as per the changes to the User name logic - [ ] Create scenario for user migration",1,cypress test keycloak migration update existing test as per the changes to the user name logic create scenario for user migration,1 2014,11269480900.0,IssuesEvent,2020-01-14 09:00:41,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,opened,a8n/web: UI for manual Campaigns displays unnecessary fields,automation bug web,"The UI shows the ""Type"" and ""Arguments"" field for a manual Campaign even though they cannot be edited: The fields also show up when editing the Campaign. 
I propose getting rid of ""Type"" and ""Arguments"" for manual Campaigns and instead pulling the ""Add changeset"" form up to that section.",1, web ui for manual campaigns displays unnecessary fields the ui shows the type and arguments field for a manual campaign even though they cannot be edited img width alt screen shot at src the fields also show up when editing the campaign i propose getting rid of type and arguments for manual campaigns and instead pulling the add changeset form up to that section ,1 6730,23805373666.0,IssuesEvent,2022-09-04 00:22:48,KILTprotocol/sdk-js,https://api.github.com/repos/KILTprotocol/sdk-js,opened,SDK no longer compatible with latest dependencies,bug incompatible dependencies automation,"## Incompatibilities detected A [scheduled test workflow](https://github.com/KILTprotocol/sdk-js/actions/runs/2986240601) using the latest available dependencies matching our semver ranges has failed. We may need to constrain dependency ranges in our `package.json` or introduce fixes to recover compatibility. Below you can find a summary of dependency versions against which these tests were run. _Note: This issue was **automatically created** as a result of scheduled CI tests on 2022-09-04._
Dependency versions ""@commitlint/cli@npm:9.1.2"" ""@commitlint/config-conventional@npm:9.1.2"" ""@kiltprotocol/chain-helpers@workspace:packages/chain-helpers"" ""@kiltprotocol/config@workspace:packages/config"" ""@kiltprotocol/core@workspace:packages/core"" ""@kiltprotocol/did@workspace:packages/did"" ""@kiltprotocol/messaging@workspace:packages/messaging"" ""@kiltprotocol/sdk-js@workspace:packages/sdk-js"" ""@kiltprotocol/testing@workspace:packages/testing"" ""@kiltprotocol/types@workspace:packages/types"" ""@kiltprotocol/utils@workspace:packages/utils"" ""@kiltprotocol/vc-export@workspace:packages/vc-export"" ""@playwright/test@npm:1.25.1"" ""@polkadot/api-augment@npm:8.14.1"" ""@polkadot/api@npm:8.14.1"" ""@polkadot/keyring@npm:10.1.7"" ""@polkadot/types-known@npm:8.14.1"" ""@polkadot/types@npm:8.14.1"" ""@polkadot/util-crypto@npm:10.1.7"" ""@polkadot/util@npm:10.1.7"" ""@types/jest@npm:27.5.2"" ""@types/jsonld@npm:1.5.1"" ""@types/uuid@npm:8.3.4"" ""@typescript-eslint/eslint-plugin@npm:5.36.1"" ""@typescript-eslint/parser@npm:5.36.1"" ""buffer@npm:6.0.3"" ""cbor@npm:8.1.0"" ""crypto-browserify@npm:3.12.0"" ""crypto-ld@npm:3.9.0"" ""eslint-config-airbnb-base@npm:14.2.1"" ""eslint-config-prettier@npm:6.15.0"" ""eslint-plugin-import@npm:2.26.0"" ""eslint-plugin-jsdoc@npm:37.9.7"" ""eslint-plugin-license-header@npm:0.2.1"" ""eslint-plugin-node@npm:11.1.0"" ""eslint-plugin-prettier@npm:3.4.1"" ""eslint@npm:7.32.0"" ""husky@npm:4.3.8"" ""jest-docblock@npm:27.5.1"" ""jest-runner-groups@npm:2.2.0"" ""jest-runner@npm:27.5.1"" ""jest@npm:27.5.1"" ""jsonld-signatures@npm:5.2.0"" ""jsonld@npm:2.0.2"" ""prettier@npm:2.7.1"" ""process@npm:0.11.10"" ""rimraf@npm:3.0.2"" ""root-workspace-0b6124@workspace:."" ""stream-browserify@npm:3.0.0"" ""terser-webpack-plugin@npm:5.3.6"" ""testcontainers@npm:8.13.0"" ""ts-jest-resolver@npm:2.0.0"" ""ts-jest@npm:27.1.5"" ""ts-node@npm:10.9.1"" ""tweetnacl@npm:1.0.3"" ""typedoc@npm:0.22.18"" ""typescript-logging@npm:0.6.4"" ""typescript@patch:typescript@npm%3A4.8.2#~builtin::version=4.8.2&hash=32657b"" ""url@npm:0.11.0"" ""util@npm:0.12.4"" ""uuid@npm:8.3.2"" ""vc-js@npm:0.6.4"" ""webpack-cli@npm:4.10.0"" ""webpack@npm:5.74.0"" ""yargs@npm:16.2.0""
",1.0,"SDK no longer compatible with latest dependencies - ## Incompatibilities detected A [scheduled test workflow](https://github.com/KILTprotocol/sdk-js/actions/runs/2986240601) using the latest available dependencies matching our semver ranges has failed. We may need to constrain dependency ranges in our `package.json` or introduce fixes to recover compatibility. Below you can find a summary of dependency versions against which these tests were run. _Note: This issue was **automatically created** as a result of scheduled CI tests on 2022-09-04._
Dependency versions ""@commitlint/cli@npm:9.1.2"" ""@commitlint/config-conventional@npm:9.1.2"" ""@kiltprotocol/chain-helpers@workspace:packages/chain-helpers"" ""@kiltprotocol/config@workspace:packages/config"" ""@kiltprotocol/core@workspace:packages/core"" ""@kiltprotocol/did@workspace:packages/did"" ""@kiltprotocol/messaging@workspace:packages/messaging"" ""@kiltprotocol/sdk-js@workspace:packages/sdk-js"" ""@kiltprotocol/testing@workspace:packages/testing"" ""@kiltprotocol/types@workspace:packages/types"" ""@kiltprotocol/utils@workspace:packages/utils"" ""@kiltprotocol/vc-export@workspace:packages/vc-export"" ""@playwright/test@npm:1.25.1"" ""@polkadot/api-augment@npm:8.14.1"" ""@polkadot/api@npm:8.14.1"" ""@polkadot/keyring@npm:10.1.7"" ""@polkadot/types-known@npm:8.14.1"" ""@polkadot/types@npm:8.14.1"" ""@polkadot/util-crypto@npm:10.1.7"" ""@polkadot/util@npm:10.1.7"" ""@types/jest@npm:27.5.2"" ""@types/jsonld@npm:1.5.1"" ""@types/uuid@npm:8.3.4"" ""@typescript-eslint/eslint-plugin@npm:5.36.1"" ""@typescript-eslint/parser@npm:5.36.1"" ""buffer@npm:6.0.3"" ""cbor@npm:8.1.0"" ""crypto-browserify@npm:3.12.0"" ""crypto-ld@npm:3.9.0"" ""eslint-config-airbnb-base@npm:14.2.1"" ""eslint-config-prettier@npm:6.15.0"" ""eslint-plugin-import@npm:2.26.0"" ""eslint-plugin-jsdoc@npm:37.9.7"" ""eslint-plugin-license-header@npm:0.2.1"" ""eslint-plugin-node@npm:11.1.0"" ""eslint-plugin-prettier@npm:3.4.1"" ""eslint@npm:7.32.0"" ""husky@npm:4.3.8"" ""jest-docblock@npm:27.5.1"" ""jest-runner-groups@npm:2.2.0"" ""jest-runner@npm:27.5.1"" ""jest@npm:27.5.1"" ""jsonld-signatures@npm:5.2.0"" ""jsonld@npm:2.0.2"" ""prettier@npm:2.7.1"" ""process@npm:0.11.10"" ""rimraf@npm:3.0.2"" ""root-workspace-0b6124@workspace:."" ""stream-browserify@npm:3.0.0"" ""terser-webpack-plugin@npm:5.3.6"" ""testcontainers@npm:8.13.0"" ""ts-jest-resolver@npm:2.0.0"" ""ts-jest@npm:27.1.5"" ""ts-node@npm:10.9.1"" ""tweetnacl@npm:1.0.3"" ""typedoc@npm:0.22.18"" ""typescript-logging@npm:0.6.4"" ""typescript@patch:typescript@npm%3A4.8.2#~builtin::version=4.8.2&hash=32657b"" ""url@npm:0.11.0"" ""util@npm:0.12.4"" ""uuid@npm:8.3.2"" ""vc-js@npm:0.6.4"" ""webpack-cli@npm:4.10.0"" ""webpack@npm:5.74.0"" ""yargs@npm:16.2.0""
",1,sdk no longer compatible with latest dependencies incompatibilities detected a using the latest available dependencies matching our semver ranges has failed we may need to constrain dependency ranges in our package json or introduce fixes to recover compatibility below you can find a summary of dependency versions against which these tests were run note this issue was automatically created as a result of scheduled ci tests on dependency versions commitlint cli npm commitlint config conventional npm kiltprotocol chain helpers workspace packages chain helpers kiltprotocol config workspace packages config kiltprotocol core workspace packages core kiltprotocol did workspace packages did kiltprotocol messaging workspace packages messaging kiltprotocol sdk js workspace packages sdk js kiltprotocol testing workspace packages testing kiltprotocol types workspace packages types kiltprotocol utils workspace packages utils kiltprotocol vc export workspace packages vc export playwright test npm polkadot api augment npm polkadot api npm polkadot keyring npm polkadot types known npm polkadot types npm polkadot util crypto npm polkadot util npm types jest npm types jsonld npm types uuid npm typescript eslint eslint plugin npm typescript eslint parser npm buffer npm cbor npm crypto browserify npm crypto ld npm eslint config airbnb base npm eslint config prettier npm eslint plugin import npm eslint plugin jsdoc npm eslint plugin license header npm eslint plugin node npm eslint plugin prettier npm eslint npm husky npm jest docblock npm jest runner groups npm jest runner npm jest npm jsonld signatures npm jsonld npm prettier npm process npm rimraf npm root workspace workspace stream browserify npm terser webpack plugin npm testcontainers npm ts jest resolver npm ts jest npm ts node npm tweetnacl npm typedoc npm typescript logging npm typescript patch typescript npm builtin version hash url npm util npm uuid npm vc js npm webpack cli npm webpack npm yargs npm ,1 1541,10316693345.0,IssuesEvent,2019-08-30 10:39:09,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Run bash commands on Azure Linux Hybrid Worker VM.,Pri3 assigned-to-author automation/svc product-question triaged,"I want to run a bash script on Azure linux VM. For this I added this VM in the hybrid worker group in automation. I am able to call powershell scripts because Azure Automation supports powershell by default. But I need to run bash script on my Linux woker VM. I tried using the command New-SshSession and using Invoke-SshCommand to run my bash script but I get the below error while trying to connect to a session. `New-SshSession -ComputerName 'ComputerName' -Username 'UserName'` `Unable to create SSH client object: Exception calling "".ctor"" with ""4"" argument(s): ""Could not load type 'System.Security.Cryptography.HMACRIPEMD160' from assembly 'mscorlib, Version=4.0.0.0, Culture=` Is there any way of achieving this by just using Hybrid Worker Group? --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 792f4871-0d07-624f-7daf-02de97121e13 * Version Independent ID: b70d481f-b351-bb38-cb49-1cda63422968 * Content: [Run shell scripts in an Linux VM on Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/run-command#feedback) * Content Source: [articles/virtual-machines/linux/run-command.md](https://github.com/Microsoft/azure-docs/blob/master/articles/virtual-machines/linux/run-command.md) * Service: **automation** * GitHub Login: @bobbytreed * Microsoft Alias: **robreed**",1.0,"Run bash commands on Azure Linux Hybrid Worker VM. - I want to run a bash script on Azure linux VM. For this I added this VM in the hybrid worker group in automation. I am able to call powershell scripts because Azure Automation supports powershell by default. But I need to run bash script on my Linux woker VM. I tried using the command New-SshSession and using Invoke-SshCommand to run my bash script but I get the below error while trying to connect to a session. `New-SshSession -ComputerName 'ComputerName' -Username 'UserName'` `Unable to create SSH client object: Exception calling "".ctor"" with ""4"" argument(s): ""Could not load type 'System.Security.Cryptography.HMACRIPEMD160' from assembly 'mscorlib, Version=4.0.0.0, Culture=` Is there any way of achieving this by just using Hybrid Worker Group? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 792f4871-0d07-624f-7daf-02de97121e13 * Version Independent ID: b70d481f-b351-bb38-cb49-1cda63422968 * Content: [Run shell scripts in an Linux VM on Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/run-command#feedback) * Content Source: [articles/virtual-machines/linux/run-command.md](https://github.com/Microsoft/azure-docs/blob/master/articles/virtual-machines/linux/run-command.md) * Service: **automation** * GitHub Login: @bobbytreed * Microsoft Alias: **robreed**",1,run bash commands on azure linux hybrid worker vm i want to run a bash script on azure linux vm for this i added this vm in the hybrid worker group in automation i am able to call powershell scripts because azure automation supports powershell by default but i need to run bash script on my linux woker vm i tried using the command new sshsession and using invoke sshcommand to run my bash script but i get the below error while trying to connect to a session new sshsession computername computername username username unable to create ssh client object exception calling ctor with argument s could not load type system security cryptography from assembly mscorlib version culture is there any way of achieving this by just using hybrid worker group document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login bobbytreed microsoft alias robreed ,1 7346,24693663232.0,IssuesEvent,2022-10-19 10:23:41,gchq/koryphe,https://api.github.com/repos/gchq/koryphe,closed,Create status badges in README,automation,A badge for CI and code coverage status on develop would be a nice improvement,1.0,Create status badges in README - A badge for CI and code coverage status on develop would be a nice improvement,1,create status badges in readme a badge for ci and code coverage status on develop would be a nice improvement,1 1659,10542546967.0,IssuesEvent,2019-10-02 
13:24:04,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,closed,update apm-server monitoring configuration,[zube]: Ready automation,`xpack.monitoring` is now `monitoring` on 8.0+. Unclear if this will also be supported any time in 7.x,1.0,update apm-server monitoring configuration - `xpack.monitoring` is now `monitoring` on 8.0+. Unclear if this will also be supported any time in 7.x,1,update apm server monitoring configuration xpack monitoring is now monitoring on unclear if this will also be supported any time in x,1 228745,25250029929.0,IssuesEvent,2022-11-15 14:03:50,MatBenfield/news,https://api.github.com/repos/MatBenfield/news,closed,[SecurityWeek] Thales Denies Getting Hacked as Ransomware Gang Releases Gigabytes of Data,SecurityWeek Stale," **French aerospace, defense, and security giant Thales claims to have found no evidence of its IT systems getting breached after a well-known ransomware group published gigabytes of data allegedly stolen from the company.** [read more](https://www.securityweek.com/thales-denies-getting-hacked-ransomware-gang-releases-gigabytes-data) ",True,"[SecurityWeek] Thales Denies Getting Hacked as Ransomware Gang Releases Gigabytes of Data - **French aerospace, defense, and security giant Thales claims to have found no evidence of its IT systems getting breached after a well-known ransomware group published gigabytes of data allegedly stolen from the company.** [read more](https://www.securityweek.com/thales-denies-getting-hacked-ransomware-gang-releases-gigabytes-data) ",0, thales denies getting hacked as ransomware gang releases gigabytes of data french aerospace defense and security giant thales claims to have found no evidence of its it systems getting breached after a well known ransomware group published gigabytes of data allegedly stolen from the company ,0 254006,21720220853.0,IssuesEvent,2022-05-10 22:47:18,hyperledger/cactus,https://api.github.com/repos/hyperledger/cactus,opened,"test(jest): upgrade jest,jest-extended to latest",dependencies Developer_Experience Flaky-Test-Automation Tests P2,"`jest-extended` now has a `2.0.0` stable release instead of the beta that we were using earlier. There also has been a major release of `jest` itself which contains some performance improvements and therefore we should invest a little time into upgrading these now instead of having a higher migration cost incurred later. ",2.0,"test(jest): upgrade jest,jest-extended to latest - `jest-extended` now has a `2.0.0` stable release instead of the beta that we were using earlier. There also has been a major release of `jest` itself which contains some performance improvements and therefore we should invest a little time into upgrading these now instead of having a higher migration cost incurred later. ",0,test jest upgrade jest jest extended to latest jest extended now has a stable release instead of the beta that we were using earlier there also has been a major release of jest itself which contains some performance improvements and therefore we should invest a little time into upgrading these now instead of having a higher migration cost incurred later ,0 734296,25342529976.0,IssuesEvent,2022-11-18 23:31:39,idaholab/raven,https://api.github.com/repos/idaholab/raven,closed,[TASK] Crow RNG serializability,priority_minor task,"-------- Issue Description -------- In RAVEN we directly interact with Crow `RandomClass` for RNG; however, these `swig`ged objects are not serializable. 
Creating a wrapper that extends to serializability would prevent each entity using the engines from duplicating the recreation of the engines during `__getstate__` and `__setstate__`. ---------------- For Change Control Board: Issue Review ---------------- This review should occur before any development is performed as a response to this issue. - [x] 1. Is it tagged with a type: defect or task? - [x] 2. Is it tagged with a priority: critical, normal or minor? - [x] 3. If it will impact requirements or requirements tests, is it tagged with requirements? - [x] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users. - [x] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.) ------- For Change Control Board: Issue Closure ------- This review should occur when the issue is imminently going to be closed. - [ ] 1. If the issue is a defect, is the defect fixed? - [ ] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.) - [ ] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)? - [ ] 4. If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)? - [ ] 5. If the issue is being closed without a pull request, has an explanation of why it is being closed been provided? ",1.0,"[TASK] Crow RNG serializability - -------- Issue Description -------- In RAVEN we directly interact with Crow `RandomClass` for RNG; however, these `swig`ged objects are not serializable. Creating a wrapper that extends to serializability would prevent each entity using the engines from duplicating the recreation of the engines during `__getstate__` and `__setstate__`. ---------------- For Change Control Board: Issue Review ---------------- This review should occur before any development is performed as a response to this issue. - [x] 1. Is it tagged with a type: defect or task? - [x] 2. Is it tagged with a priority: critical, normal or minor? - [x] 3. If it will impact requirements or requirements tests, is it tagged with requirements? - [x] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users. - [x] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.) ------- For Change Control Board: Issue Closure ------- This review should occur when the issue is imminently going to be closed. - [ ] 1. If the issue is a defect, is the defect fixed? - [ ] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.) - [ ] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)? - [ ] 4. If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)? - [ ] 5. If the issue is being closed without a pull request, has an explanation of why it is being closed been provided? 
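A minimal sketch of the proposed wrapper, assuming a hypothetical engine class and seed-only state — RAVEN's actual `RandomClass` API may differ:

```python
# Picklable wrapper around an otherwise unpicklable (SWIG-backed) RNG engine:
# persist only the data needed to rebuild the engine, then rebuild it on load.
import pickle

class CrowEngine:                      # stand-in for the real SWIG object
    def __init__(self, seed):
        self.seed = seed

class SerializableEngine:
    def __init__(self, seed):
        self._seed = seed
        self._engine = CrowEngine(seed)

    def __getstate__(self):
        return {"seed": self._seed}    # drop the SWIG object itself

    def __setstate__(self, state):
        self._seed = state["seed"]
        self._engine = CrowEngine(self._seed)   # recreate on unpickle

clone = pickle.loads(pickle.dumps(SerializableEngine(42)))
print(clone._engine.seed)  # 42 — engine rebuilt transparently
```

Centralizing `__getstate__`/`__setstate__` in one wrapper means entities holding the engine can be pickled without each duplicating the rebuild step, which is exactly the duplication the issue describes.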
",0, crow rng serializability issue description in raven we directly interact with crow randomclass for rng however these swig ged objects are not serializable creating a wrapper that extends to serializability would prevent each entity using the engines from duplicating the recreation of the engines during getstate and setstate for change control board issue review this review should occur before any development is performed as a response to this issue is it tagged with a type defect or task is it tagged with a priority critical normal or minor if it will impact requirements or requirements tests is it tagged with requirements if it is a defect can it cause wrong results for users if so an email needs to be sent to the users is a rationale provided such as explaining why the improvement is needed or why current code is wrong for change control board issue closure this review should occur when the issue is imminently going to be closed if the issue is a defect is the defect fixed if the issue is a defect is the defect tested for in the regression test system if not explain why not if the issue can impact users has an email to the users group been written the email should specify if the defect impacts stable or master if the issue is a defect does it impact the latest release branch if yes is there any issue tagged with release create if needed if the issue is being closed without a pull request has an explanation of why it is being closed been provided ,0 5933,21683080456.0,IssuesEvent,2022-05-09 08:39:58,Azure/azure-sdk-tools,https://api.github.com/repos/Azure/azure-sdk-tools,closed,Consolidate sdk apiViews and swagger Apiviews,SDK Automation,It requires a new bot kebab service to render the consolidated apiView comments.,1.0,Consolidate sdk apiViews and swagger Apiviews - It requires a new bot kebab service to render the consolidated apiView comments.,1,consolidate sdk apiviews and swagger apiviews it requires a new bot kebab service to render the consolidated apiview comments ,1 290321,21875768264.0,IssuesEvent,2022-05-19 09:59:14,appsmithorg/appsmith,https://api.github.com/repos/appsmithorg/appsmith,closed,[Docs] #12531 [Bug]-[1200]:Unable to select previously selected option from dropdown if value outside the options is set - unless other option is selected at-least once,Documentation User Education Pod,"> TODO - [ ] Evaluate if this task is needed. If not add the ""Skip Docs"" label on the parent ticket - [ ] Fill these fields - [ ] Prepare first draft - [ ] Add label: ""Ready for Docs Team"" Field | Details -----|----- **POD** | App Viewers Pod **Parent Ticket** | #12531 Engineer | Release Date | Live Date | First Draft | Auto Assign | Priority | Environment |",1.0,"[Docs] #12531 [Bug]-[1200]:Unable to select previously selected option from dropdown if value outside the options is set - unless other option is selected at-least once - > TODO - [ ] Evaluate if this task is needed. 
If not add the ""Skip Docs"" label on the parent ticket - [ ] Fill these fields - [ ] Prepare first draft - [ ] Add label: ""Ready for Docs Team"" Field | Details -----|----- **POD** | App Viewers Pod **Parent Ticket** | #12531 Engineer | Release Date | Live Date | First Draft | Auto Assign | Priority | Environment |",0, unable to select previously selected option from dropdown if value outside the options is set unless other option is selected at least once todo evaluate if this task is needed if not add the skip docs label on the parent ticket fill these fields prepare first draft add label ready for docs team field details pod app viewers pod parent ticket engineer release date live date first draft auto assign priority environment ,0 45721,2938844454.0,IssuesEvent,2015-07-01 13:24:40,moneymanagerex/android-money-manager-ex,https://api.github.com/repos/moneymanagerex/android-money-manager-ex,closed,"Investigate automatic Dropbox sync, possible cause of exceptions",priority,"The automatic Dropbox sync could be causing the torrent of Illegal State exceptions. Requires detailed investigation. DropboxServiceIntent, method downloadFile.",1.0,"Investigate automatic Dropbox sync, possible cause of exceptions - The automatic Dropbox sync could be causing the torrent of Illegal State exceptions. Requires detailed investigation. DropboxServiceIntent, method downloadFile.",0,investigate automatic dropbox sync possible cause of exceptions the automatic dropbox sync could be causing the torrent of illegal state exceptions requires detailed investigation dropboxserviceintent method downloadfile ,0 3072,13050396753.0,IssuesEvent,2020-07-29 15:26:00,codelittinc/roadmap,https://api.github.com/repos/codelittinc/roadmap,opened,Create an L10 status check that gets the information for the L10 engineering meetings,enhancement new automation roadrunner-rails,"1) number of points of commited 2) number of points delivered 3) number of points delivered by team mate 4) number of bugs sprint to sprint, up or down 5) percentage of sprint spent on bugs 6) number of times pROD server went down 7) number of roadrunner errors week by week ",1.0,"Create an L10 status check that gets the information for the L10 engineering meetings - 1) number of points of commited 2) number of points delivered 3) number of points delivered by team mate 4) number of bugs sprint to sprint, up or down 5) percentage of sprint spent on bugs 6) number of times pROD server went down 7) number of roadrunner errors week by week ",1,create an status check that gets the information for the engineering meetings number of points of commited number of points delivered number of points delivered by team mate number of bugs sprint to sprint up or down percentage of sprint spent on bugs number of times prod server went down number of roadrunner errors week by week ,1 308202,23237572712.0,IssuesEvent,2022-08-03 13:07:47,Dharmik48/meme-generator,https://api.github.com/repos/Dharmik48/meme-generator,closed,Improve README.md,documentation good first issue EddieHub:good-first-issue,"The current README.md is very lame, doesn't have any proper instructions regarding the project. So we need to add those. Some of the sections that need to added are Contribution, Install steps, Description, Screenshots, etc.",1.0,"Improve README.md - The current README.md is very lame, doesn't have any proper instructions regarding the project. So we need to add those. 
Some of the sections that need to added are Contribution, Install steps, Description, Screenshots, etc.",0,improve readme md the current readme md is very lame doesn t have any proper instructions regarding the project so we need to add those some of the sections that need to added are contribution install steps description screenshots etc ,0 9642,30110545880.0,IssuesEvent,2023-06-30 07:22:08,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,Native Automation: TestCafe hangs when it accesses text files.,TYPE: bug FREQUENCY: level 1 SYSTEM: native automation,"### What is your Scenario? Navigating to a text file causes TestCafe to hang. ### What is the Current behavior? After navigating to a text file, TestCafe becomes unresponsive. ### What is the Expected behavior? TestCafe should not hang when navigating to a text file. ### What is your public website URL? (or attach your complete example) Page ```HTML Title
``` Test file: ``` // file.txt Any text here ``` ### What is your TestCafe test code? ```JS fixture`Interact With the Page` .page`./index.html` test('Click', async t => { await t .click('#btn') }); ``` ### Your complete configuration file _No response_ ### Your complete test report _No response_ ### Screenshots _No response_ ### Steps to Reproduce 1. Launch TestCafe in Native Automation mode. 2. TestCafe becomes unresponsive and hangs. ### TestCafe version 3.0.0-rc.1 ### Node.js version _No response_ ### Command-line arguments testcafe chrome test.js ### Browser name(s) and version(s) Chromium-based ### Platform(s) and version(s) _No response_ ### Other _No response_",1.0,"Native Automation: TestCafe hangs when it accesses text files. - ### What is your Scenario? Navigating to a text file causes TestCafe to hang. ### What is the Current behavior? After navigating to a text file, TestCafe becomes unresponsive. ### What is the Expected behavior? TestCafe should not hang when navigating to a text file. ### What is your public website URL? (or attach your complete example) Page ```HTML Title
``` Test file: ``` // file.txt Any text here ``` ### What is your TestCafe test code? ```JS fixture`Interact With the Page` .page`./index.html` test('Click', async t => { await t .click('#btn') }); ``` ### Your complete configuration file _No response_ ### Your complete test report _No response_ ### Screenshots _No response_ ### Steps to Reproduce 1. Launch TestCafe in Native Automation mode. 2. TestCafe becomes unresponsive and hangs. ### TestCafe version 3.0.0-rc.1 ### Node.js version _No response_ ### Command-line arguments testcafe chrome test.js ### Browser name(s) and version(s) Chromium-based ### Platform(s) and version(s) _No response_ ### Other _No response_",1,native automation testcafe hangs when it accesses text files what is your scenario navigating to a text file causes testcafe to hang what is the current behavior after navigating to a text file testcafe becomes unresponsive what is the expected behavior testcafe should not hang when navigating to a text file what is your public website url or attach your complete example page html title go to file test file file txt any text here what is your testcafe test code js fixture interact with the page page index html test click async t await t click btn your complete configuration file no response your complete test report no response screenshots no response steps to reproduce launch testcafe in native automation mode testcafe becomes unresponsive and hangs testcafe version rc node js version no response command line arguments testcafe chrome test js browser name s and version s chromium based platform s and version s no response other no response ,1 10363,3384572868.0,IssuesEvent,2015-11-27 04:16:50,hackinginformation/dotfiles,https://api.github.com/repos/hackinginformation/dotfiles,opened,WIKI: Document Key Bindings,Documentation i3,"Similar to what ive done with TMux, have a document of things ive customized Also document all ""extra"" stuff needed for i3 to work (if someone wants to do what i have)",1.0,"WIKI: Document Key Bindings - Similar to what ive done with TMux, have a document of things ive customized Also document all ""extra"" stuff needed for i3 to work (if someone wants to do what i have)",0,wiki document key bindings similar to what ive done with tmux have a document of things ive customized also document all extra stuff needed for to work if someone wants to do what i have ,0 6608,23510499530.0,IssuesEvent,2022-08-18 16:04:25,rancher/rancher,https://api.github.com/repos/rancher/rancher,closed,Generate new Provisioning Client,[zube]: QA Review area/automation-framework,Generate new provisioning client like how the v3 clients are generated.,1.0,Generate new Provisioning Client - Generate new provisioning client like how the v3 clients are generated.,1,generate new provisioning client generate new provisioning client like how the clients are generated ,1 227897,25132209643.0,IssuesEvent,2022-11-09 15:54:27,mendts-workshop/jjohnstonmend,https://api.github.com/repos/mendts-workshop/jjohnstonmend,opened,jstl-1.2.jar: 1 vulnerabilities (highest severity is: 7.3),security vulnerability,"
Vulnerable Library - jstl-1.2.jar

Path to dependency file: /pom.xml

Path to vulnerable library: /ository/javax/servlet/jstl/1.2/jstl-1.2.jar

Found in HEAD commit: bb49eba49332ee1cf08932b72870ac993461ac00

## Vulnerabilities

| CVE | Severity | CVSS | Dependency | Type | Fixed in (jstl version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2015-0254](https://www.mend.io/vulnerability-database/CVE-2015-0254) | High | 7.3 | jstl-1.2.jar | Direct | org.apache.taglibs:taglibs-standard-impl:1.2.3 | ✅ |

## Details
CVE-2015-0254

### Vulnerable Library - jstl-1.2.jar

Path to dependency file: /pom.xml

Path to vulnerable library: /ository/javax/servlet/jstl/1.2/jstl-1.2.jar

Dependency Hierarchy: - :x: **jstl-1.2.jar** (Vulnerable Library)

Found in HEAD commit: bb49eba49332ee1cf08932b72870ac993461ac00

Found in base branch: easybuggy

### Vulnerability Details

Apache Standard Taglibs before 1.2.3 allows remote attackers to execute arbitrary code or conduct external XML entity (XXE) attacks via a crafted XSLT extension in a (1) `<x:parse>` or (2) `<x:transform>` JSTL XML tag.
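For illustration, the XXE shape behind this class of bug looks like the sketch below. It is written in Python with the third-party `defusedxml` package rather than the affected Java taglib, purely to show the attack document and the safe outcome:

```python
# An XXE payload declares an external entity that a naive parser would
# fetch and expand; a hardened parser refuses the document outright.
from defusedxml import ElementTree             # pip install defusedxml
from defusedxml.common import EntitiesForbidden

malicious = """<?xml version="1.0"?>
<!DOCTYPE data [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<data>&xxe;</data>"""

try:
    ElementTree.fromstring(malicious)
except EntitiesForbidden:
    print("rejected: document declares entities")   # the safe outcome
```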

Publish Date: 2015-03-09

URL: [CVE-2015-0254](https://www.mend.io/vulnerability-database/CVE-2015-0254)

### CVSS 3 Score Details (7.3)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: Low
  - Integrity Impact: Low
  - Availability Impact: Low
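Those metrics correspond to the vector AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:L, and the 7.3 base score can be reproduced from the CVSS v3 base-score formula. A short check (weights taken from the CVSS v3 specification):

```python
# Recompute the CVSS v3 base score from the metrics listed above.
import math

av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85   # Network / Low / None / None
c = i = a = 0.22                          # C / I / A impact: Low

iss = 1 - (1 - c) * (1 - i) * (1 - a)     # impact sub-score base
impact = 6.42 * iss                       # Scope: Unchanged
exploitability = 8.22 * av * ac * pr * ui

def roundup(x):
    return math.ceil(x * 10) / 10         # CVSS rounds up to one decimal

print(roundup(min(impact + exploitability, 10)))  # 7.3
```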

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Origin: https://tomcat.apache.org/taglibs/standard/

Release Date: 2015-03-09

Fix Resolution: org.apache.taglibs:taglibs-standard-impl:1.2.3

:rescue_worker_helmet: Automatic Remediation is available for this issue
***

:rescue_worker_helmet: Automatic Remediation is available for this issue.

",True,"jstl-1.2.jar: 1 vulnerabilities (highest severity is: 7.3) -
Vulnerable Library - jstl-1.2.jar

Path to dependency file: /pom.xml

Path to vulnerable library: /ository/javax/servlet/jstl/1.2/jstl-1.2.jar

Found in HEAD commit: bb49eba49332ee1cf08932b72870ac993461ac00

## Vulnerabilities

| CVE | Severity | CVSS | Dependency | Type | Fixed in (jstl version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2015-0254](https://www.mend.io/vulnerability-database/CVE-2015-0254) | High | 7.3 | jstl-1.2.jar | Direct | org.apache.taglibs:taglibs-standard-impl:1.2.3 | ✅ |

## Details
CVE-2015-0254

### Vulnerable Library - jstl-1.2.jar

Path to dependency file: /pom.xml

Path to vulnerable library: /ository/javax/servlet/jstl/1.2/jstl-1.2.jar

Dependency Hierarchy: - :x: **jstl-1.2.jar** (Vulnerable Library)

Found in HEAD commit: bb49eba49332ee1cf08932b72870ac993461ac00

Found in base branch: easybuggy

### Vulnerability Details

Apache Standard Taglibs before 1.2.3 allows remote attackers to execute arbitrary code or conduct external XML entity (XXE) attacks via a crafted XSLT extension in a (1) `<x:parse>` or (2) `<x:transform>` JSTL XML tag.

Publish Date: 2015-03-09

URL: [CVE-2015-0254](https://www.mend.io/vulnerability-database/CVE-2015-0254)

### CVSS 3 Score Details (7.3)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: Low
  - Integrity Impact: Low
  - Availability Impact: Low

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Origin: https://tomcat.apache.org/taglibs/standard/

Release Date: 2015-03-09

Fix Resolution: org.apache.taglibs:taglibs-standard-impl:1.2.3

:rescue_worker_helmet: Automatic Remediation is available for this issue
***

:rescue_worker_helmet: Automatic Remediation is available for this issue.

",0,jstl jar vulnerabilities highest severity is vulnerable library jstl jar path to dependency file pom xml path to vulnerable library ository javax servlet jstl jstl jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in jstl version remediation available high jstl jar direct org apache taglibs taglibs standard impl details cve vulnerable library jstl jar path to dependency file pom xml path to vulnerable library ository javax servlet jstl jstl jar dependency hierarchy x jstl jar vulnerable library found in head commit a href found in base branch easybuggy vulnerability details apache standard taglibs before allows remote attackers to execute arbitrary code or conduct external xml entity xxe attacks via a crafted xslt extension in a or jstl xml tag publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache taglibs taglibs standard impl rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue ,0 193575,14657293617.0,IssuesEvent,2020-12-28 15:17:28,github-vet/rangeloop-pointer-findings,https://api.github.com/repos/github-vet/rangeloop-pointer-findings,closed,banzaicloud/kafka-operator: pkg/errorfactory/errorfactory_test.go; 11 LoC,fresh small test," Found a possible issue in [banzaicloud/kafka-operator](https://www.github.com/banzaicloud/kafka-operator) at [pkg/errorfactory/errorfactory_test.go](https://github.com/banzaicloud/kafka-operator/blob/7be980c680f05b4cf3d2ada704760097086de6df/pkg/errorfactory/errorfactory_test.go#L44-L54) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > [Click here to see the code in its original context.](https://github.com/banzaicloud/kafka-operator/blob/7be980c680f05b4cf3d2ada704760097086de6df/pkg/errorfactory/errorfactory_test.go#L44-L54)
Click here to show the 11 line(s) of Go which triggered the analyzer.

```go
for _, errType := range errorTypes {
	err := New(errType, errors.New(""test-error""), ""test-message"")
	expected := ""test-message: test-error""
	got := err.Error()
	if got != expected {
		t.Error(""Expected:"", expected, ""got:"", got)
	}
	if !emperrors.As(err, &errType) {
		t.Error(""Expected:"", reflect.TypeOf(errType), ""got:"", reflect.TypeOf(err))
	}
}
```
Click here to show extra information the analyzer produced.

```
No path was found through the callgraph that could lead to a function which writes a pointer argument.
No path was found through the callgraph that could lead to a function which passes a pointer to third-party code.
root signature {As 2} was not found in the callgraph; reference was passed directly to third-party code
```
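The pattern being vetted in the Go snippet above — taking `&errType` where `errType` is the range variable — has a well-known Python analogue, sketched here for readers unfamiliar with the Go form: every closure captures the loop variable itself, not the value it held on that iteration.

```python
# All three closures share the single loop variable and therefore
# report its final value once the loop has finished.
callbacks = []
for err_type in ("timeout", "io", "auth"):
    callbacks.append(lambda: err_type)        # captures the variable

print([cb() for cb in callbacks])             # ['auth', 'auth', 'auth']

# The fix mirrors the Go idiom `errType := errType`: freeze the value.
fixed = []
for err_type in ("timeout", "io", "auth"):
    fixed.append(lambda et=err_type: et)      # default arg binds the value now

print([cb() for cb in fixed])                 # ['timeout', 'io', 'auth']
```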
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 7be980c680f05b4cf3d2ada704760097086de6df ",1.0,"banzaicloud/kafka-operator: pkg/errorfactory/errorfactory_test.go; 11 LoC - Found a possible issue in [banzaicloud/kafka-operator](https://www.github.com/banzaicloud/kafka-operator) at [pkg/errorfactory/errorfactory_test.go](https://github.com/banzaicloud/kafka-operator/blob/7be980c680f05b4cf3d2ada704760097086de6df/pkg/errorfactory/errorfactory_test.go#L44-L54) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > [Click here to see the code in its original context.](https://github.com/banzaicloud/kafka-operator/blob/7be980c680f05b4cf3d2ada704760097086de6df/pkg/errorfactory/errorfactory_test.go#L44-L54)
Click here to show the 11 line(s) of Go which triggered the analyzer.

```go
for _, errType := range errorTypes {
	err := New(errType, errors.New(""test-error""), ""test-message"")
	expected := ""test-message: test-error""
	got := err.Error()
	if got != expected {
		t.Error(""Expected:"", expected, ""got:"", got)
	}
	if !emperrors.As(err, &errType) {
		t.Error(""Expected:"", reflect.TypeOf(errType), ""got:"", reflect.TypeOf(err))
	}
}
```
Click here to show extra information the analyzer produced.

```
No path was found through the callgraph that could lead to a function which writes a pointer argument.
No path was found through the callgraph that could lead to a function which passes a pointer to third-party code.
root signature {As 2} was not found in the callgraph; reference was passed directly to third-party code
```
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 7be980c680f05b4cf3d2ada704760097086de6df ",0,banzaicloud kafka operator pkg errorfactory errorfactory test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message click here to show the line s of go which triggered the analyzer go for errtype range errortypes err new errtype errors new test error test message expected test message test error got err error if got expected t error expected expected got got if emperrors as err errtype t error expected reflect typeof errtype got reflect typeof err click here to show extra information the analyzer produced no path was found through the callgraph that could lead to a function which writes a pointer argument no path was found through the callgraph that could lead to a function which passes a pointer to third party code root signature as was not found in the callgraph reference was passed directly to third party code leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id ,0 771650,27088150466.0,IssuesEvent,2023-02-14 18:37:41,googleapis/google-cloud-ruby,https://api.github.com/repos/googleapis/google-cloud-ruby,closed,[Nightly CI Failures] Failures detected for google-cloud-functions-v2,type: bug priority: p1 nightly failure,"At 2023-02-09 08:59:50 UTC, detected failures in google-cloud-functions-v2 for: rubocop report_key_2d3b4cd136d3249baba5646606298d56",1.0,"[Nightly CI Failures] Failures detected for google-cloud-functions-v2 - At 2023-02-09 08:59:50 UTC, detected failures in google-cloud-functions-v2 for: rubocop report_key_2d3b4cd136d3249baba5646606298d56",0, failures detected for google cloud functions at utc detected failures in google cloud functions for rubocop report key ,0 1085,9420985519.0,IssuesEvent,2019-04-11 04:57:40,openhab/openhab-core,https://api.github.com/repos/openhab/openhab-core,closed,[Automation - Feature Request] Add Jython and Groovy scripted Actions to Paper UI,automation,The drop down currently only has Javascript.,1.0,[Automation - Feature Request] Add Jython and Groovy scripted Actions to Paper UI - The drop down currently only has Javascript.,1, add jython and groovy scripted actions to paper ui the drop down currently only has javascript ,1 7103,24255792592.0,IssuesEvent,2022-09-27 17:39:06,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,opened,[DocDB][LST] Corruption: New operation's index does not follow the previous op's index,area/docdb status/awaiting-triage qa_automation,"### Description ``` $ cd ~/code/yugabyte-db $ git checkout 14a71540ec1634c926c4865c002e945088d119d2 $ ./yb_build.sh release $ bin/yb-ctl --replication_factor 3 create --tserver_flags=enable_deadlock_detection=true,ysql_max_connections=20,ysql_enable_packed_row=true,yb_enable_read_committed_isolation=true,ysql_num_shards_per_tserver=2,enable_stream_compression=true,stream_compression_algo=2,yb_num_shards_per_tserver=2 
--master_flags=yb_enable_read_committed_isolation=true,enable_stream_compression=true,stream_compression_algo=2,enable_automatic_tablet_splitting=true,tablet_split_low_phase_shard_count_per_node=1,tablet_split_high_phase_shard_count_per_node=5,ysql_enable_packed_row=true,enable_deadlock_detection=true $ cd ~/code/yb-long-system-test $ git checkout a396d9540c1837c85c0cb5c23463d2bea46cd0a0 $ ./long_system_test.py --nodes=127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433 --threads=10 --runtime=60 --max-columns=20 --complexity=full --seed=988062 ``` ``` [deen@devp yugabyte-db]$ cat /home/deen/yugabyte-data/node-1/disk-1/yb-data/tserver/logs/yb-tserver.FATAL.details.2022-09-27T17_04_17.pid1453091.txt /home/deen/yugabyte-data/node-1/disk-1/yb-data/tserver/logs/yb-tserver.FATAL.details.2022-09-27T17_04_18.pid1709300.txt F20220927 17:04:17 ../../../../../src/yb/consensus/replica_state.cc:903] Check failed: _s.ok() Bad status: Corruption (yb/consensus/replica_state.cc:1143): New operation's index does not follow the previous op's index. Current: 5.2. Previous: 3.2 @ 0x7f564368b97b google::LogMessage::SendToLog() @ 0x7f564368c39e google::LogMessage::Flush() @ 0x7f564368eaff google::LogMessageFatal::~LogMessageFatal() @ 0x7f5646af02a4 yb::consensus::ReplicaState::ApplyPendingOperationsUnlocked() @ 0x7f5646aef5bb yb::consensus::ReplicaState::AdvanceCommittedOpIdUnlocked() @ 0x7f5646aef0c5 yb::consensus::ReplicaState::UpdateMajorityReplicatedUnlocked() @ 0x7f5646acd8f1 yb::consensus::RaftConsensus::UpdateMajorityReplicated() @ 0x7f5646aa2481 yb::consensus::PeerMessageQueue::NotifyObserversOfMajorityReplOpChangeTask() @ 0x7f56439931ca yb::ThreadPool::DispatchThread() @ 0x7f564398990d yb::Thread::SuperviseThread() @ 0x7f5641fbe694 start_thread @ 0x7f5641d0041d __clone ``` LST logs: [lst_2022-09-27_17:03:57_798299.zip](https://github.com/yugabyte/yugabyte-db/files/9658434/lst_2022-09-27_17.03.57_798299.zip) yugabyte-data directory accessible from within Yugabyte org: ",1.0,"[DocDB][LST] Corruption: New operation's index does not follow the previous op's index - ### Description ``` $ cd ~/code/yugabyte-db $ git checkout 14a71540ec1634c926c4865c002e945088d119d2 $ ./yb_build.sh release $ bin/yb-ctl --replication_factor 3 create --tserver_flags=enable_deadlock_detection=true,ysql_max_connections=20,ysql_enable_packed_row=true,yb_enable_read_committed_isolation=true,ysql_num_shards_per_tserver=2,enable_stream_compression=true,stream_compression_algo=2,yb_num_shards_per_tserver=2 --master_flags=yb_enable_read_committed_isolation=true,enable_stream_compression=true,stream_compression_algo=2,enable_automatic_tablet_splitting=true,tablet_split_low_phase_shard_count_per_node=1,tablet_split_high_phase_shard_count_per_node=5,ysql_enable_packed_row=true,enable_deadlock_detection=true $ cd ~/code/yb-long-system-test $ git checkout a396d9540c1837c85c0cb5c23463d2bea46cd0a0 $ ./long_system_test.py --nodes=127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433 --threads=10 --runtime=60 --max-columns=20 --complexity=full --seed=988062 ``` ``` [deen@devp yugabyte-db]$ cat /home/deen/yugabyte-data/node-1/disk-1/yb-data/tserver/logs/yb-tserver.FATAL.details.2022-09-27T17_04_17.pid1453091.txt /home/deen/yugabyte-data/node-1/disk-1/yb-data/tserver/logs/yb-tserver.FATAL.details.2022-09-27T17_04_18.pid1709300.txt F20220927 17:04:17 ../../../../../src/yb/consensus/replica_state.cc:903] Check failed: _s.ok() Bad status: Corruption (yb/consensus/replica_state.cc:1143): New operation's index does not follow the previous op's index. Current: 5.2. 
Previous: 3.2 @ 0x7f564368b97b google::LogMessage::SendToLog() @ 0x7f564368c39e google::LogMessage::Flush() @ 0x7f564368eaff google::LogMessageFatal::~LogMessageFatal() @ 0x7f5646af02a4 yb::consensus::ReplicaState::ApplyPendingOperationsUnlocked() @ 0x7f5646aef5bb yb::consensus::ReplicaState::AdvanceCommittedOpIdUnlocked() @ 0x7f5646aef0c5 yb::consensus::ReplicaState::UpdateMajorityReplicatedUnlocked() @ 0x7f5646acd8f1 yb::consensus::RaftConsensus::UpdateMajorityReplicated() @ 0x7f5646aa2481 yb::consensus::PeerMessageQueue::NotifyObserversOfMajorityReplOpChangeTask() @ 0x7f56439931ca yb::ThreadPool::DispatchThread() @ 0x7f564398990d yb::Thread::SuperviseThread() @ 0x7f5641fbe694 start_thread @ 0x7f5641d0041d __clone ``` LST logs: [lst_2022-09-27_17:03:57_798299.zip](https://github.com/yugabyte/yugabyte-db/files/9658434/lst_2022-09-27_17.03.57_798299.zip) yugabyte-data directory accessible from within Yugabyte org: ",1, corruption new operation s index does not follow the previous op s index description cd code yugabyte db git checkout yb build sh release bin yb ctl replication factor create tserver flags enable deadlock detection true ysql max connections ysql enable packed row true yb enable read committed isolation true ysql num shards per tserver enable stream compression true stream compression algo yb num shards per tserver master flags yb enable read committed isolation true enable stream compression true stream compression algo enable automatic tablet splitting true tablet split low phase shard count per node tablet split high phase shard count per node ysql enable packed row true enable deadlock detection true cd code yb long system test git checkout long system test py nodes threads runtime max columns complexity full seed cat home deen yugabyte data node disk yb data tserver logs yb tserver fatal details txt home deen yugabyte data node disk yb data tserver logs yb tserver fatal details txt src yb consensus replica state cc check failed s ok bad status corruption yb consensus replica state cc new operation s index does not follow the previous op s index current previous google logmessage sendtolog google logmessage flush google logmessagefatal logmessagefatal yb consensus replicastate applypendingoperationsunlocked yb consensus replicastate advancecommittedopidunlocked yb consensus replicastate updatemajorityreplicatedunlocked yb consensus raftconsensus updatemajorityreplicated yb consensus peermessagequeue notifyobserversofmajorityreplopchangetask yb threadpool dispatchthread yb thread supervisethread start thread clone lst logs yugabyte data directory accessible from within yugabyte org ,1 244050,18737916057.0,IssuesEvent,2021-11-04 10:03:36,kubewarden/docs,https://api.github.com/repos/kubewarden/docs,closed,Specify supported OCI registries,documentation,"On the [Distributing Policies](https://docs.kubewarden.io/distributing-policies.html) section of the book, provide a list of what are the OCI registries that have been tested, and if some registries are giving troubles, give an explicit list as well, so we can get feedback from the users if at some point they are working.",1.0,"Specify supported OCI registries - On the [Distributing Policies](https://docs.kubewarden.io/distributing-policies.html) section of the book, provide a list of what are the OCI registries that have been tested, and if some registries are giving troubles, give an explicit list as well, so we can get feedback from the users if at some point they are working.",0,specify supported oci registries on the 
section of the book provide a list of what are the oci registries that have been tested and if some registries are giving troubles give an explicit list as well so we can get feedback from the users if at some point they are working ,0 4315,3009528312.0,IssuesEvent,2015-07-28 07:10:04,axsh/wakame-vdc,https://api.github.com/repos/axsh/wakame-vdc,closed,Add changelog,Priority : High Type : Code Enhancement,"When we start releasing stable versions we should add a changelog file to the source code. Just like we did for OpenVNet, we should follow the following template: http://keepachangelog.com Tracking changes from older versions will be difficult for Wakame-vdc. In OpenVNet's case, we were releasing the first numbered version so we just had a feature list with the `added` prefix. Wakame-vdc's case is different since it's had numbered version releases before. One idea would be to just acknowledge the existence of older versions and refer to github's history for changes in them.",1.0,"Add changelog - When we start releasing stable versions we should add a changelog file to the source code. Just like we did for OpenVNet, we should follow the following template: http://keepachangelog.com Tracking changes from older versions will be difficult for Wakame-vdc. In OpenVNet's case, we were releasing the first numbered version so we just had a feature list with the `added` prefix. Wakame-vdc's case is different since it's had numbered version releases before. One idea would be to just acknowledge the existence of older versions and refer to github's history for changes in them.",0,add changelog when we start releasing stable versions we should add a changelog file to the source code just like we did for openvnet we should follow the following template tracking changes from older versions will be difficult for wakame vdc in openvnet s case we were releasing the first numbered version so we just had a feature list with the added prefix wakame vdc s case is different since it s had numbered version releases before one idea would be to just acknowledge the existence of older versions and refer to github s history for changes in them ,0 3307,13434274869.0,IssuesEvent,2020-09-07 11:06:27,elastic/beats,https://api.github.com/repos/elastic/beats,closed,[CI] APM beats update job is broken,Team:Automation [zube]: In Review automation bug ci,"It is failing on checkout with a durable task error ``` [INFO] gitCheckout: Checkout SCM master with default customisation from the Item. [Pipeline] echo [INFO] Override default checkout [Pipeline] retry [Pipeline] { [Pipeline] sleep Sleeping for 10 sec > git rev-parse --verify HEAD # timeout=10 Resetting working tree > git reset --hard # timeout=10 > git clean -fdx # timeout=10 [Pipeline] checkout using credential f6c7695a-671e-4f4f-a331-acdce44ff9ba Wiping out workspace first. 
Cloning the remote Git repository Using shallow clone with depth 3 Avoid fetching tags Cloning repository git@github.com:elastic/beats.git > git init /var/lib/jenkins/workspace/Beats_apm-beats-update_master/src/github.com/elastic/beats-local # timeout=10 Fetching upstream changes from git@github.com:elastic/beats.git > git --version # timeout=10 > git --version # 'git version 2.7.4' using GIT_SSH to set credentials GitHub user @elasticmachine SSH key > git fetch --no-tags --progress --depth=3 git@github.com:elastic/beats.git +refs/heads/*:refs/remotes/origin/* # timeout=15 Cleaning workspace Using shallow fetch with depth 3 Pruning obsolete local branches > git config remote.origin.url git@github.com:elastic/beats.git # timeout=10 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 > git config remote.origin.url git@github.com:elastic/beats.git # timeout=10 > git rev-parse --verify HEAD # timeout=10 No valid HEAD. Skipping the resetting > git clean -fdx # timeout=10 Fetching upstream changes from git@github.com:elastic/beats.git using GIT_SSH to set credentials GitHub user @elasticmachine SSH key > git fetch --no-tags --progress --prune --depth=3 git@github.com:elastic/beats.git +refs/heads/master:refs/remotes/origin/master # timeout=15 Checking out Revision 1e0595754c8b2056937ec9d723401a6346d15e83 (master) Commit message: ""docs: update cipher suites (#20697)"" Cleaning workspace > git config core.sparsecheckout # timeout=10 > git checkout -f 1e0595754c8b2056937ec9d723401a6346d15e83 # timeout=15 [Pipeline] } [Pipeline] // retry [Pipeline] isUnix [Pipeline] withCredentials Masking supported pattern matches of $GIT_USERNAME or $GIT_PASSWORD [Pipeline] { [Pipeline] sh (Git fetch) > git rev-parse --verify HEAD # timeout=10 Resetting working tree > git reset --hard # timeout=10 > git clean -fdx # timeout=10 process apparently never started in /var/lib/jenkins/workspace/Beats_apm-beats-update_master/src/github.com/elastic/beats-local@tmp/durable-a95f73fd (running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer) [Pipeline] echo [WARN] gitCmd failed, further details in the archived file 'fetch.log' [Pipeline] archiveArtifacts Archiving artifacts ‘fetch.log’ doesn’t match anything ```",2.0,"[CI] APM beats update job is broken - It is failing on checkout with a durable task error ``` [INFO] gitCheckout: Checkout SCM master with default customisation from the Item. [Pipeline] echo [INFO] Override default checkout [Pipeline] retry [Pipeline] { [Pipeline] sleep Sleeping for 10 sec > git rev-parse --verify HEAD # timeout=10 Resetting working tree > git reset --hard # timeout=10 > git clean -fdx # timeout=10 [Pipeline] checkout using credential f6c7695a-671e-4f4f-a331-acdce44ff9ba Wiping out workspace first. 
Cloning the remote Git repository Using shallow clone with depth 3 Avoid fetching tags Cloning repository git@github.com:elastic/beats.git > git init /var/lib/jenkins/workspace/Beats_apm-beats-update_master/src/github.com/elastic/beats-local # timeout=10 Fetching upstream changes from git@github.com:elastic/beats.git > git --version # timeout=10 > git --version # 'git version 2.7.4' using GIT_SSH to set credentials GitHub user @elasticmachine SSH key > git fetch --no-tags --progress --depth=3 git@github.com:elastic/beats.git +refs/heads/*:refs/remotes/origin/* # timeout=15 Cleaning workspace Using shallow fetch with depth 3 Pruning obsolete local branches > git config remote.origin.url git@github.com:elastic/beats.git # timeout=10 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 > git config remote.origin.url git@github.com:elastic/beats.git # timeout=10 > git rev-parse --verify HEAD # timeout=10 No valid HEAD. Skipping the resetting > git clean -fdx # timeout=10 Fetching upstream changes from git@github.com:elastic/beats.git using GIT_SSH to set credentials GitHub user @elasticmachine SSH key > git fetch --no-tags --progress --prune --depth=3 git@github.com:elastic/beats.git +refs/heads/master:refs/remotes/origin/master # timeout=15 Checking out Revision 1e0595754c8b2056937ec9d723401a6346d15e83 (master) Commit message: ""docs: update cipher suites (#20697)"" Cleaning workspace > git config core.sparsecheckout # timeout=10 > git checkout -f 1e0595754c8b2056937ec9d723401a6346d15e83 # timeout=15 [Pipeline] } [Pipeline] // retry [Pipeline] isUnix [Pipeline] withCredentials Masking supported pattern matches of $GIT_USERNAME or $GIT_PASSWORD [Pipeline] { [Pipeline] sh (Git fetch) > git rev-parse --verify HEAD # timeout=10 Resetting working tree > git reset --hard # timeout=10 > git clean -fdx # timeout=10 process apparently never started in /var/lib/jenkins/workspace/Beats_apm-beats-update_master/src/github.com/elastic/beats-local@tmp/durable-a95f73fd (running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer) [Pipeline] echo [WARN] gitCmd failed, further details in the archived file 'fetch.log' [Pipeline] archiveArtifacts Archiving artifacts ‘fetch.log’ doesn’t match anything ```",1, apm beats update job is broken it is failing on checkout with a durable task error gitcheckout checkout scm master with default customisation from the item echo override default checkout retry sleep sleeping for sec git rev parse verify head timeout resetting working tree git reset hard timeout git clean fdx timeout checkout using credential wiping out workspace first cloning the remote git repository using shallow clone with depth avoid fetching tags cloning repository git github com elastic beats git git init var lib jenkins workspace beats apm beats update master src github com elastic beats local timeout fetching upstream changes from git github com elastic beats git git version timeout git version git version using git ssh to set credentials github user elasticmachine ssh key git fetch no tags progress depth git github com elastic beats git refs heads refs remotes origin timeout cleaning workspace using shallow fetch with depth pruning obsolete local branches git config remote origin url git github com elastic beats git timeout git config add remote origin fetch refs heads refs remotes origin timeout git config remote origin url git github com elastic beats git timeout git rev parse 
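For reference, the retry/fetch sequence the Jenkins pipeline runs in the log above can be approximated outside Jenkins. A hedged Python sketch: the refspec, the `--depth=3` shallow fetch, and the 10-second sleep come from the log; the retry count is an assumption, and `timeout=15` in the log is taken here to be the git plugin's per-command timeout in minutes:

```python
import subprocess
import time

def git_fetch_with_retry(repo_dir: str, remote: str,
                         attempts: int = 3,
                         timeout_s: int = 15 * 60) -> None:
    """Shallow fetch with retries, roughly what the pipeline's
    retry { checkout } block attempts before the durable task dies."""
    cmd = ["git", "fetch", "--no-tags", "--progress", "--prune",
           "--depth=3", remote,
           "+refs/heads/master:refs/remotes/origin/master"]
    for attempt in range(1, attempts + 1):
        try:
            subprocess.run(cmd, cwd=repo_dir, check=True, timeout=timeout_s)
            return
        except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
            if attempt == attempts:
                raise
            time.sleep(10)  # matches the pipeline's "Sleeping for 10 sec"
```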
verify head timeout no valid head skipping the resetting git clean fdx timeout fetching upstream changes from git github com elastic beats git using git ssh to set credentials github user elasticmachine ssh key git fetch no tags progress prune depth git github com elastic beats git refs heads master refs remotes origin master timeout checking out revision master commit message docs update cipher suites cleaning workspace git config core sparsecheckout timeout git checkout f timeout retry isunix withcredentials masking supported pattern matches of git username or git password sh git fetch git rev parse verify head timeout resetting working tree git reset hard timeout git clean fdx timeout process apparently never started in var lib jenkins workspace beats apm beats update master src github com elastic beats local tmp durable running jenkins temporarily with dorg jenkinsci plugins durabletask bourneshellscript launch diagnostics true might make the problem clearer echo gitcmd failed further details in the archived file fetch log archiveartifacts archiving artifacts ‘fetch log’ doesn’t match anything ,1 346995,10422877064.0,IssuesEvent,2019-09-16 10:01:32,wso2/product-apim,https://api.github.com/repos/wso2/product-apim,closed,"Observed an error when restarting the apim server after some time from a load test done, while connected to apim-analytics",2.2.0 Priority/Normal Severity/Major Type/Bug,"**Description:** Observed an error when restarting the apim server after 5-10 mins from a simple load test done. **Suggested Labels:** APIM-2.2.0 Type/bug Priority/high Severity/Major **Suggested Assignees:** **Affected Product Version:** apim-2.1.0-update4 apim-analytics-2.1.0-update3 **OS, DB, other environment details and versions:** apim-analytics configured **Steps to reproduce:** - Did a load test to access an api (with 3000 threads) - Finish the load completely - Kept some time to verify stats etc - Now I restarted the APIM server only => observed the below exception in the carbon log `TID: [-1234] [] [2018-01-05 14:16:07,238] INFO {org.wso2.andes.server.handler.ChannelOpenHandler} - Connecting to: carbon {org.wso2.andes.server.handler.ChannelOpenHandler} TID: [-1234] [] [2018-01-05 14:16:07,509] INFO {org.wso2.andes.kernel.AndesChannel} - Channel created (ID: 10.100.7.104:40436) {org.wso2.andes.kernel.AndesChannel} TID: [-1] [] [2018-01-05 14:16:07,518] WARN {org.wso2.carbon.apimgt.jms.listener.utils.JMSUtils} - Cannot locate destination : throttleData {org.wso2.carbon.apimgt.jms.listener.utils.JMSUtils} TID: [-1] [] [2018-01-05 14:16:07,654] ERROR {org.wso2.andes.server.protocol.AMQProtocolEngine} - Unexpected exception while processing frame. Closing connection. 
{org.wso2.andes.server.protocol.AMQProtocolEngine} java.util.ConcurrentModificationException at java.util.LinkedHashMap$LinkedHashIterator.nextNode(LinkedHashMap.java:719) at java.util.LinkedHashMap$LinkedKeyIterator.next(LinkedHashMap.java:742) at org.wso2.carbon.registry.core.jdbc.handlers.HandlerManager.putChild(HandlerManager.java:2676) at org.wso2.carbon.registry.core.jdbc.handlers.HandlerLifecycleManager.putChild(HandlerLifecycleManager.java:476) at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.put(EmbeddedRegistry.java:694) at org.wso2.carbon.registry.core.caching.CacheBackedRegistry.put(CacheBackedRegistry.java:591) at org.wso2.carbon.registry.core.session.UserRegistry.putInternal(UserRegistry.java:828) at org.wso2.carbon.registry.core.session.UserRegistry.access$1000(UserRegistry.java:61) at org.wso2.carbon.registry.core.session.UserRegistry$11.run(UserRegistry.java:804) at org.wso2.carbon.registry.core.session.UserRegistry$11.run(UserRegistry.java:801) at java.security.AccessController.doPrivileged(Native Method) at org.wso2.carbon.registry.core.session.UserRegistry.put(UserRegistry.java:801) at org.wso2.carbon.andes.commons.registry.RegistryClient.createQueue(RegistryClient.java:104) at org.wso2.carbon.andes.authorization.andes.AndesAuthorizationHandler.registerAndAuthorizeQueue(AndesAuthorizationHandler.java:727) at org.wso2.carbon.andes.authorization.andes.AndesAuthorizationHandler.handleCreateQueue(AndesAuthorizationHandler.java:164) at org.wso2.carbon.andes.authorization.service.andes.AndesAuthorizationPlugin.authorise(AndesAuthorizationPlugin.java:148) at org.wso2.andes.server.security.SecurityManager$9.allowed(SecurityManager.java:361) at org.wso2.andes.server.security.SecurityManager.checkAllPlugins(SecurityManager.java:238) at org.wso2.andes.server.security.SecurityManager.authoriseCreateQueue(SecurityManager.java:357) at org.wso2.andes.server.queue.AMQQueueFactory.createAMQQueueImpl(AMQQueueFactory.java:177) at org.wso2.andes.server.queue.AMQQueueFactory.createAMQQueueImpl(AMQQueueFactory.java:138) at org.wso2.andes.server.handler.QueueDeclareHandler.createQueue(QueueDeclareHandler.java:209) at org.wso2.andes.server.handler.QueueDeclareHandler.methodReceived(QueueDeclareHandler.java:96) at org.wso2.andes.server.handler.ServerMethodDispatcherImpl.dispatchQueueDeclare(ServerMethodDispatcherImpl.java:600) at org.wso2.andes.framing.amqp_0_91.QueueDeclareBodyImpl.execute(QueueDeclareBodyImpl.java:187) at org.wso2.andes.server.state.AMQStateManager.methodReceived(AMQStateManager.java:169) at org.wso2.andes.server.protocol.AMQProtocolEngine.methodFrameReceived(AMQProtocolEngine.java:388) at org.wso2.andes.framing.AMQMethodBodyImpl.handle(AMQMethodBodyImpl.java:96) at org.wso2.andes.server.protocol.AMQProtocolEngine.frameReceived(AMQProtocolEngine.java:333) at org.wso2.andes.server.protocol.AMQProtocolEngine.dataBlockReceived(AMQProtocolEngine.java:282) at org.wso2.andes.server.protocol.AMQProtocolEngine$1.run(AMQProtocolEngine.java:251) at org.wso2.andes.pool.Job.processAll(Job.java:111) at org.wso2.andes.pool.Job.run(Job.java:158) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) TID: [-1] [] [2018-01-05 14:16:07,670] ERROR {org.wso2.andes.client.state.AMQStateManager} - No Waiters for error saving as last error:Exception thrown against AMQConnection: Host: 172.18.0.1 Port: 5672 Virtual Host: carbon Client ID: 
clientid Active session count: 1: org.wso2.andes.AMQDisconnectedException: Server closed connection and reconnection not permitted. {org.wso2.andes.client.state.AMQStateManager} TID: [-1] [] [2018-01-05 14:16:07,671] ERROR {org.wso2.carbon.apimgt.jms.listener.utils.JMSTaskManager} - Error creating JMS consumer for Siddhi-JMS-Consumer {org.wso2.carbon.apimgt.jms.listener.utils.JMSTaskManager} javax.jms.JMSException: Error registering consumer: org.wso2.andes.AMQException: Woken up due to class javax.jms.JMSException at org.wso2.andes.client.AMQSession$6.execute(AMQSession.java:2143) at org.wso2.andes.client.AMQSession$6.execute(AMQSession.java:2086) at org.wso2.andes.client.AMQConnectionDelegate_8_0.executeRetrySupport(AMQConnectionDelegate_8_0.java:333) at org.wso2.andes.client.AMQConnection$3.run(AMQConnection.java:655) at java.security.AccessController.doPrivileged(Native Method) at org.wso2.andes.client.AMQConnection.executeRetrySupport(AMQConnection.java:652) at org.wso2.andes.client.failover.FailoverRetrySupport.execute(FailoverRetrySupport.java:102) at org.wso2.andes.client.AMQSession.createConsumerImpl(AMQSession.java:2084) at org.wso2.andes.client.AMQSession.createConsumer(AMQSession.java:1072) at org.wso2.carbon.apimgt.jms.listener.utils.JMSUtils.createConsumer(JMSUtils.java:478) at org.wso2.carbon.apimgt.jms.listener.utils.JMSTaskManager$MessageListenerTask.createConsumer(JMSTaskManager.java:998) at org.wso2.carbon.apimgt.jms.listener.utils.JMSTaskManager$MessageListenerTask.getMessageConsumer(JMSTaskManager.java:853) at org.wso2.carbon.apimgt.jms.listener.utils.JMSTaskManager$MessageListenerTask.receiveMessage(JMSTaskManager.java:600) at org.wso2.carbon.apimgt.jms.listener.utils.JMSTaskManager$MessageListenerTask.run(JMSTaskManager.java:521) at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.wso2.andes.AMQException: Woken up due to class javax.jms.JMSException at org.wso2.andes.client.util.BlockingWaiter.block(BlockingWaiter.java:207) at org.wso2.andes.client.protocol.BlockingMethodFrameListener.blockForFrame(BlockingMethodFrameListener.java:123) at org.wso2.andes.client.protocol.AMQProtocolHandler.writeCommandFrameAndWaitForReply(AMQProtocolHandler.java:655) at org.wso2.andes.client.protocol.AMQProtocolHandler.syncWrite(AMQProtocolHandler.java:676) at org.wso2.andes.client.protocol.AMQProtocolHandler.syncWrite(AMQProtocolHandler.java:670) at org.wso2.andes.client.AMQSession_0_8.sendQueueDeclare(AMQSession_0_8.java:374) at org.wso2.andes.client.AMQSession$12.execute(AMQSession.java:2810) at org.wso2.andes.client.AMQSession$12.execute(AMQSession.java:2801) at org.wso2.andes.client.failover.FailoverNoopSupport.execute(FailoverNoopSupport.java:67) at org.wso2.andes.client.AMQSession.declareQueue(AMQSession.java:2799) at org.wso2.andes.client.AMQSession.registerConsumer(AMQSession.java:2944) at org.wso2.andes.client.AMQSession.access$800(AMQSession.java:112) at org.wso2.andes.client.AMQSession$6.execute(AMQSession.java:2120) ... 17 more Caused by: javax.jms.JMSException: Exception thrown against AMQConnection: Host: 172.18.0.1 Port: 5672 Virtual Host: carbon Client ID: clientid Active session count: 1: org.wso2.andes.AMQDisconnectedException: Server closed connection and reconnection not permitted. 
at org.wso2.andes.client.AMQConnection.exceptionReceived(AMQConnection.java:1315) at org.wso2.andes.client.protocol.AMQProtocolHandler.closed(AMQProtocolHandler.java:260) at org.wso2.andes.transport.network.mina.MinaNetworkHandler.sessionClosed(MinaNetworkHandler.java:138) at org.apache.mina.common.support.AbstractIoFilterChain$TailFilter.sessionClosed(AbstractIoFilterChain.java:550) at org.apache.mina.common.support.AbstractIoFilterChain.callNextSessionClosed(AbstractIoFilterChain.java:269) at org.apache.mina.common.support.AbstractIoFilterChain.access$800(AbstractIoFilterChain.java:53) at org.apache.mina.common.support.AbstractIoFilterChain$EntryImpl$1.sessionClosed(AbstractIoFilterChain.java:633) at org.apache.mina.common.IoFilterAdapter.sessionClosed(IoFilterAdapter.java:65) at org.apache.mina.common.support.AbstractIoFilterChain.callNextSessionClosed(AbstractIoFilterChain.java:269) at org.apache.mina.common.support.AbstractIoFilterChain.access$800(AbstractIoFilterChain.java:53) at org.apache.mina.common.support.AbstractIoFilterChain$EntryImpl$1.sessionClosed(AbstractIoFilterChain.java:633) at org.apache.mina.filter.executor.ExecutorFilter.processEvent(ExecutorFilter.java:230) at org.apache.mina.filter.executor.ExecutorFilter$ProcessEventsRunnable.run(ExecutorFilter.java:264) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:51) ... 1 more Caused by: org.wso2.andes.AMQDisconnectedException: Server closed connection and reconnection not permitted. ... 16 more TID: [-1] [] [2018-01-05 14:16:07,674] INFO {org.wso2.andes.server.AMQChannel} - No consumers to unsubscribe on channel [/10.100.7.104:40436(admin):1] {org.wso2.andes.server.AMQChannel} TID: [-1] [] [2018-01-05 14:16:07,671] ERROR {org.wso2.carbon.apimgt.jms.listener.utils.JMSTaskManager} - JMS Connection failed : Exception thrown against AMQConnection: Host: 172.18.0.1 Port: 5672 Virtual Host: carbon Client ID: clientid `",1.0,"Observed an error when restarting the apim server after some time from a load test done, while connected to apim-analytics - **Description:** Observed an error when restarting the apim server after 5-10 mins from a simple load test done. **Suggested Labels:** APIM-2.2.0 Type/bug Priority/high Severity/Major **Suggested Assignees:** **Affected Product Version:** apim-2.1.0-update4 apim-analytics-2.1.0-update3 **OS, DB, other environment details and versions:** apim-analytics configured **Steps to reproduce:** - Did a load test to access an api (with 3000 threads) - Finish the load completely - Kept some time to verify stats etc - Now I restarted the APIM server only => observed the below exception in the carbon log `TID: [-1234] [] [2018-01-05 14:16:07,238] INFO {org.wso2.andes.server.handler.ChannelOpenHandler} - Connecting to: carbon {org.wso2.andes.server.handler.ChannelOpenHandler} TID: [-1234] [] [2018-01-05 14:16:07,509] INFO {org.wso2.andes.kernel.AndesChannel} - Channel created (ID: 10.100.7.104:40436) {org.wso2.andes.kernel.AndesChannel} TID: [-1] [] [2018-01-05 14:16:07,518] WARN {org.wso2.carbon.apimgt.jms.listener.utils.JMSUtils} - Cannot locate destination : throttleData {org.wso2.carbon.apimgt.jms.listener.utils.JMSUtils} TID: [-1] [] [2018-01-05 14:16:07,654] ERROR {org.wso2.andes.server.protocol.AMQProtocolEngine} - Unexpected exception while processing frame. Closing connection. 
{org.wso2.andes.server.protocol.AMQProtocolEngine} java.util.ConcurrentModificationException at java.util.LinkedHashMap$LinkedHashIterator.nextNode(LinkedHashMap.java:719) at java.util.LinkedHashMap$LinkedKeyIterator.next(LinkedHashMap.java:742) at org.wso2.carbon.registry.core.jdbc.handlers.HandlerManager.putChild(HandlerManager.java:2676) at org.wso2.carbon.registry.core.jdbc.handlers.HandlerLifecycleManager.putChild(HandlerLifecycleManager.java:476) at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.put(EmbeddedRegistry.java:694) at org.wso2.carbon.registry.core.caching.CacheBackedRegistry.put(CacheBackedRegistry.java:591) at org.wso2.carbon.registry.core.session.UserRegistry.putInternal(UserRegistry.java:828) at org.wso2.carbon.registry.core.session.UserRegistry.access$1000(UserRegistry.java:61) at org.wso2.carbon.registry.core.session.UserRegistry$11.run(UserRegistry.java:804) at org.wso2.carbon.registry.core.session.UserRegistry$11.run(UserRegistry.java:801) at java.security.AccessController.doPrivileged(Native Method) at org.wso2.carbon.registry.core.session.UserRegistry.put(UserRegistry.java:801) at org.wso2.carbon.andes.commons.registry.RegistryClient.createQueue(RegistryClient.java:104) at org.wso2.carbon.andes.authorization.andes.AndesAuthorizationHandler.registerAndAuthorizeQueue(AndesAuthorizationHandler.java:727) at org.wso2.carbon.andes.authorization.andes.AndesAuthorizationHandler.handleCreateQueue(AndesAuthorizationHandler.java:164) at org.wso2.carbon.andes.authorization.service.andes.AndesAuthorizationPlugin.authorise(AndesAuthorizationPlugin.java:148) at org.wso2.andes.server.security.SecurityManager$9.allowed(SecurityManager.java:361) at org.wso2.andes.server.security.SecurityManager.checkAllPlugins(SecurityManager.java:238) at org.wso2.andes.server.security.SecurityManager.authoriseCreateQueue(SecurityManager.java:357) at org.wso2.andes.server.queue.AMQQueueFactory.createAMQQueueImpl(AMQQueueFactory.java:177) at org.wso2.andes.server.queue.AMQQueueFactory.createAMQQueueImpl(AMQQueueFactory.java:138) at org.wso2.andes.server.handler.QueueDeclareHandler.createQueue(QueueDeclareHandler.java:209) at org.wso2.andes.server.handler.QueueDeclareHandler.methodReceived(QueueDeclareHandler.java:96) at org.wso2.andes.server.handler.ServerMethodDispatcherImpl.dispatchQueueDeclare(ServerMethodDispatcherImpl.java:600) at org.wso2.andes.framing.amqp_0_91.QueueDeclareBodyImpl.execute(QueueDeclareBodyImpl.java:187) at org.wso2.andes.server.state.AMQStateManager.methodReceived(AMQStateManager.java:169) at org.wso2.andes.server.protocol.AMQProtocolEngine.methodFrameReceived(AMQProtocolEngine.java:388) at org.wso2.andes.framing.AMQMethodBodyImpl.handle(AMQMethodBodyImpl.java:96) at org.wso2.andes.server.protocol.AMQProtocolEngine.frameReceived(AMQProtocolEngine.java:333) at org.wso2.andes.server.protocol.AMQProtocolEngine.dataBlockReceived(AMQProtocolEngine.java:282) at org.wso2.andes.server.protocol.AMQProtocolEngine$1.run(AMQProtocolEngine.java:251) at org.wso2.andes.pool.Job.processAll(Job.java:111) at org.wso2.andes.pool.Job.run(Job.java:158) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) TID: [-1] [] [2018-01-05 14:16:07,670] ERROR {org.wso2.andes.client.state.AMQStateManager} - No Waiters for error saving as last error:Exception thrown against AMQConnection: Host: 172.18.0.1 Port: 5672 Virtual Host: carbon Client ID: 
clientid Active session count: 1: org.wso2.andes.AMQDisconnectedException: Server closed connection and reconnection not permitted. {org.wso2.andes.client.state.AMQStateManager} TID: [-1] [] [2018-01-05 14:16:07,671] ERROR {org.wso2.carbon.apimgt.jms.listener.utils.JMSTaskManager} - Error creating JMS consumer for Siddhi-JMS-Consumer {org.wso2.carbon.apimgt.jms.listener.utils.JMSTaskManager} javax.jms.JMSException: Error registering consumer: org.wso2.andes.AMQException: Woken up due to class javax.jms.JMSException at org.wso2.andes.client.AMQSession$6.execute(AMQSession.java:2143) at org.wso2.andes.client.AMQSession$6.execute(AMQSession.java:2086) at org.wso2.andes.client.AMQConnectionDelegate_8_0.executeRetrySupport(AMQConnectionDelegate_8_0.java:333) at org.wso2.andes.client.AMQConnection$3.run(AMQConnection.java:655) at java.security.AccessController.doPrivileged(Native Method) at org.wso2.andes.client.AMQConnection.executeRetrySupport(AMQConnection.java:652) at org.wso2.andes.client.failover.FailoverRetrySupport.execute(FailoverRetrySupport.java:102) at org.wso2.andes.client.AMQSession.createConsumerImpl(AMQSession.java:2084) at org.wso2.andes.client.AMQSession.createConsumer(AMQSession.java:1072) at org.wso2.carbon.apimgt.jms.listener.utils.JMSUtils.createConsumer(JMSUtils.java:478) at org.wso2.carbon.apimgt.jms.listener.utils.JMSTaskManager$MessageListenerTask.createConsumer(JMSTaskManager.java:998) at org.wso2.carbon.apimgt.jms.listener.utils.JMSTaskManager$MessageListenerTask.getMessageConsumer(JMSTaskManager.java:853) at org.wso2.carbon.apimgt.jms.listener.utils.JMSTaskManager$MessageListenerTask.receiveMessage(JMSTaskManager.java:600) at org.wso2.carbon.apimgt.jms.listener.utils.JMSTaskManager$MessageListenerTask.run(JMSTaskManager.java:521) at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.wso2.andes.AMQException: Woken up due to class javax.jms.JMSException at org.wso2.andes.client.util.BlockingWaiter.block(BlockingWaiter.java:207) at org.wso2.andes.client.protocol.BlockingMethodFrameListener.blockForFrame(BlockingMethodFrameListener.java:123) at org.wso2.andes.client.protocol.AMQProtocolHandler.writeCommandFrameAndWaitForReply(AMQProtocolHandler.java:655) at org.wso2.andes.client.protocol.AMQProtocolHandler.syncWrite(AMQProtocolHandler.java:676) at org.wso2.andes.client.protocol.AMQProtocolHandler.syncWrite(AMQProtocolHandler.java:670) at org.wso2.andes.client.AMQSession_0_8.sendQueueDeclare(AMQSession_0_8.java:374) at org.wso2.andes.client.AMQSession$12.execute(AMQSession.java:2810) at org.wso2.andes.client.AMQSession$12.execute(AMQSession.java:2801) at org.wso2.andes.client.failover.FailoverNoopSupport.execute(FailoverNoopSupport.java:67) at org.wso2.andes.client.AMQSession.declareQueue(AMQSession.java:2799) at org.wso2.andes.client.AMQSession.registerConsumer(AMQSession.java:2944) at org.wso2.andes.client.AMQSession.access$800(AMQSession.java:112) at org.wso2.andes.client.AMQSession$6.execute(AMQSession.java:2120) ... 17 more Caused by: javax.jms.JMSException: Exception thrown against AMQConnection: Host: 172.18.0.1 Port: 5672 Virtual Host: carbon Client ID: clientid Active session count: 1: org.wso2.andes.AMQDisconnectedException: Server closed connection and reconnection not permitted. 
at org.wso2.andes.client.AMQConnection.exceptionReceived(AMQConnection.java:1315) at org.wso2.andes.client.protocol.AMQProtocolHandler.closed(AMQProtocolHandler.java:260) at org.wso2.andes.transport.network.mina.MinaNetworkHandler.sessionClosed(MinaNetworkHandler.java:138) at org.apache.mina.common.support.AbstractIoFilterChain$TailFilter.sessionClosed(AbstractIoFilterChain.java:550) at org.apache.mina.common.support.AbstractIoFilterChain.callNextSessionClosed(AbstractIoFilterChain.java:269) at org.apache.mina.common.support.AbstractIoFilterChain.access$800(AbstractIoFilterChain.java:53) at org.apache.mina.common.support.AbstractIoFilterChain$EntryImpl$1.sessionClosed(AbstractIoFilterChain.java:633) at org.apache.mina.common.IoFilterAdapter.sessionClosed(IoFilterAdapter.java:65) at org.apache.mina.common.support.AbstractIoFilterChain.callNextSessionClosed(AbstractIoFilterChain.java:269) at org.apache.mina.common.support.AbstractIoFilterChain.access$800(AbstractIoFilterChain.java:53) at org.apache.mina.common.support.AbstractIoFilterChain$EntryImpl$1.sessionClosed(AbstractIoFilterChain.java:633) at org.apache.mina.filter.executor.ExecutorFilter.processEvent(ExecutorFilter.java:230) at org.apache.mina.filter.executor.ExecutorFilter$ProcessEventsRunnable.run(ExecutorFilter.java:264) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:51) ... 1 more Caused by: org.wso2.andes.AMQDisconnectedException: Server closed connection and reconnection not permitted. ... 16 more TID: [-1] [] [2018-01-05 14:16:07,674] INFO {org.wso2.andes.server.AMQChannel} - No consumers to unsubscribe on channel [/10.100.7.104:40436(admin):1] {org.wso2.andes.server.AMQChannel} TID: [-1] [] [2018-01-05 14:16:07,671] ERROR {org.wso2.carbon.apimgt.jms.listener.utils.JMSTaskManager} - JMS Connection failed : Exception thrown against AMQConnection: Host: 172.18.0.1 Port: 5672 Virtual Host: carbon Client ID: clientid `",0,observed an error when restarting the apim server after sometime from a load test done while connected to apim analytics description observed an error when restarting the apim server after from a simple load test done suggested labels apim type bug priority high severity major suggested assignees affected product version apim apim analytics os db other environment details and versions apim analytics configured steps to reproduce did a load test to access an api with threads finish the load completly kept some time to verify stats etc now i restarted the apim server only observed below exeption in carbon log tid info org andes server handler channelopenhandler connecting to carbon org andes server handler channelopenhandler tid info org andes kernel andeschannel channel created id org andes kernel andeschannel tid warn org carbon apimgt jms listener utils jmsutils cannot locate destination throttledata org carbon apimgt jms listener utils jmsutils tid error org andes server protocol amqprotocolengine unexpected exception while processing frame closing connection org andes server protocol amqprotocolengine java util concurrentmodificationexception at java util linkedhashmap linkedhashiterator nextnode linkedhashmap java at java util linkedhashmap linkedkeyiterator next linkedhashmap java at org carbon registry core jdbc handlers handlermanager putchild handlermanager java at org carbon registry core jdbc handlers 
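The interesting frame in the WSO2 trace above is the java.util.ConcurrentModificationException inside HandlerManager.putChild: a LinkedHashMap of registry handlers is being iterated while concurrent queue registrations, racing during the restart, mutate it. Python dicts fail the same way, which makes for a compact analogue (illustrative only, not the WSO2 code; the usual Java remedy is to iterate over a snapshot or use a concurrent collection):

```python
# Python analogue of the ConcurrentModificationException in the trace:
# the map is mutated while an iteration over it is still in progress.
handlers = {"h1": object(), "h2": object()}
try:
    for name in handlers:          # iteration in progress...
        handlers["h3"] = object()  # ...concurrent insertion into the same map
except RuntimeError as err:
    print(err)  # "dictionary changed size during iteration"
```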
handlerlifecyclemanager putchild handlerlifecyclemanager java at org carbon registry core jdbc embeddedregistry put embeddedregistry java at org carbon registry core caching cachebackedregistry put cachebackedregistry java at org carbon registry core session userregistry putinternal userregistry java at org carbon registry core session userregistry access userregistry java at org carbon registry core session userregistry run userregistry java at org carbon registry core session userregistry run userregistry java at java security accesscontroller doprivileged native method at org carbon registry core session userregistry put userregistry java at org carbon andes commons registry registryclient createqueue registryclient java at org carbon andes authorization andes andesauthorizationhandler registerandauthorizequeue andesauthorizationhandler java at org carbon andes authorization andes andesauthorizationhandler handlecreatequeue andesauthorizationhandler java at org carbon andes authorization service andes andesauthorizationplugin authorise andesauthorizationplugin java at org andes server security securitymanager allowed securitymanager java at org andes server security securitymanager checkallplugins securitymanager java at org andes server security securitymanager authorisecreatequeue securitymanager java at org andes server queue amqqueuefactory createamqqueueimpl amqqueuefactory java at org andes server queue amqqueuefactory createamqqueueimpl amqqueuefactory java at org andes server handler queuedeclarehandler createqueue queuedeclarehandler java at org andes server handler queuedeclarehandler methodreceived queuedeclarehandler java at org andes server handler servermethoddispatcherimpl dispatchqueuedeclare servermethoddispatcherimpl java at org andes framing amqp queuedeclarebodyimpl execute queuedeclarebodyimpl java at org andes server state amqstatemanager methodreceived amqstatemanager java at org andes server protocol amqprotocolengine methodframereceived amqprotocolengine java at org andes framing amqmethodbodyimpl handle amqmethodbodyimpl java at org andes server protocol amqprotocolengine framereceived amqprotocolengine java at org andes server protocol amqprotocolengine datablockreceived amqprotocolengine java at org andes server protocol amqprotocolengine run amqprotocolengine java at org andes pool job processall job java at org andes pool job run job java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java tid error org andes client state amqstatemanager no waiters for error saving as last error exception thrown against amqconnection host port virtual host carbon client id clientid active session count org andes amqdisconnectedexception server closed connection and reconnection not permitted org andes client state amqstatemanager tid error org carbon apimgt jms listener utils jmstaskmanager error creating jms consumer for siddhi jms consumer org carbon apimgt jms listener utils jmstaskmanager javax jms jmsexception error registering consumer org andes amqexception woken up due to class javax jms jmsexception at org andes client amqsession execute amqsession java at org andes client amqsession execute amqsession java at org andes client amqconnectiondelegate executeretrysupport amqconnectiondelegate java at org andes client amqconnection run amqconnection java at java security accesscontroller doprivileged native method at org andes 
client amqconnection executeretrysupport amqconnection java at org andes client failover failoverretrysupport execute failoverretrysupport java at org andes client amqsession createconsumerimpl amqsession java at org andes client amqsession createconsumer amqsession java at org carbon apimgt jms listener utils jmsutils createconsumer jmsutils java at org carbon apimgt jms listener utils jmstaskmanager messagelistenertask createconsumer jmstaskmanager java at org carbon apimgt jms listener utils jmstaskmanager messagelistenertask getmessageconsumer jmstaskmanager java at org carbon apimgt jms listener utils jmstaskmanager messagelistenertask receivemessage jmstaskmanager java at org carbon apimgt jms listener utils jmstaskmanager messagelistenertask run jmstaskmanager java at org apache transport base threads nativeworkerpool run nativeworkerpool java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by org andes amqexception woken up due to class javax jms jmsexception at org andes client util blockingwaiter block blockingwaiter java at org andes client protocol blockingmethodframelistener blockforframe blockingmethodframelistener java at org andes client protocol amqprotocolhandler writecommandframeandwaitforreply amqprotocolhandler java at org andes client protocol amqprotocolhandler syncwrite amqprotocolhandler java at org andes client protocol amqprotocolhandler syncwrite amqprotocolhandler java at org andes client amqsession sendqueuedeclare amqsession java at org andes client amqsession execute amqsession java at org andes client amqsession execute amqsession java at org andes client failover failovernoopsupport execute failovernoopsupport java at org andes client amqsession declarequeue amqsession java at org andes client amqsession registerconsumer amqsession java at org andes client amqsession access amqsession java at org andes client amqsession execute amqsession java more caused by javax jms jmsexception exception thrown against amqconnection host port virtual host carbon client id clientid active session count org andes amqdisconnectedexception server closed connection and reconnection not permitted at org andes client amqconnection exceptionreceived amqconnection java at org andes client protocol amqprotocolhandler closed amqprotocolhandler java at org andes transport network mina minanetworkhandler sessionclosed minanetworkhandler java at org apache mina common support abstractiofilterchain tailfilter sessionclosed abstractiofilterchain java at org apache mina common support abstractiofilterchain callnextsessionclosed abstractiofilterchain java at org apache mina common support abstractiofilterchain access abstractiofilterchain java at org apache mina common support abstractiofilterchain entryimpl sessionclosed abstractiofilterchain java at org apache mina common iofilteradapter sessionclosed iofilteradapter java at org apache mina common support abstractiofilterchain callnextsessionclosed abstractiofilterchain java at org apache mina common support abstractiofilterchain access abstractiofilterchain java at org apache mina common support abstractiofilterchain entryimpl sessionclosed abstractiofilterchain java at org apache mina filter executor executorfilter processevent executorfilter java at org apache mina filter executor executorfilter processeventsrunnable run executorfilter java at java util concurrent threadpoolexecutor 
runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at org apache mina util namepreservingrunnable run namepreservingrunnable java more caused by org andes amqdisconnectedexception server closed connection and reconnection not permitted more tid info org andes server amqchannel no consumers to unsubscribe on channel org andes server amqchannel tid error org carbon apimgt jms listener utils jmstaskmanager jms connection failed exception thrown against amqconnection host port virtual host carbon client id clientid ,0 96166,3965879265.0,IssuesEvent,2016-05-03 10:21:23,gfrewqpoiu/MusicBot,https://api.github.com/repos/gfrewqpoiu/MusicBot,closed,Woot or Hype Command,enhancement help wanted Medium Priority,A Woot or Hype Command which would tell people after the song is finished playing how much Hype it has achieved.,1.0,Woot or Hype Command - A Woot or Hype Command which would tell people after the song is finished playing how much Hype it has achieved.,0,woot or hype command a woot or hype command which would tell people after the song is finished playing how much hype it has achieved ,0 106868,9195744476.0,IssuesEvent,2019-03-07 03:48:51,asriz7777/FXSCRIPTS-TEST-AUTOMATION,https://api.github.com/repos/asriz7777/FXSCRIPTS-TEST-AUTOMATION,closed,Vulnerability [Unsecured] : PUT:/api/v1/primary-transaction,sanity test,"Project : sanity test Template : ApiV1PrimaryTransactionPutAnonymousInvalid Run Id : 8a8081fd69563def01695643843401a5 Job : uat Env : Default Category : Unsecured Tags : [ OWASP - OTG-AUTHN-002, FX Top 10 - API Vulnerability, Non-Intrusive] Severity : Major Region : CENTRAL_INDIA Result : fail Status Code : 200 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Thu, 07 Mar 2019 03:47:57 GMT]} Endpoint : http://54.215.136.217/api/v1/primary-transaction Request : { ""amount"" : ""3540"", ""availableBalance"" : ""783227875"", ""createdBy"" : """", ""createdDate"" : """", ""description"" : ""4HrTVAwk"", ""id"" : """", ""inactive"" : false, ""modifiedBy"" : """", ""modifiedDate"" : """", ""status"" : ""4HrTVAwk"", ""type"" : ""4HrTVAwk"", ""user"" : { ""createdBy"" : """", ""createdDate"" : """", ""id"" : """", ""inactive"" : false, ""modifiedBy"" : """", ""modifiedDate"" : """", ""name"" : ""4HrTVAwk"", ""version"" : """" }, ""version"" : """" } Response : { ""requestId"" : ""None"", ""requestTime"" : ""2019-03-07T03:47:58.501+0000"", ""errors"" : true, ""messages"" : [ { ""type"" : ""ERROR"", ""key"" : """", ""value"" : null } ], ""data"" : null, ""totalPages"" : 0, ""totalElements"" : 0 } Logs : 2019-03-07 03:47:58 DEBUG [ApiV1PrimaryTransactionPutAnonymousInvalid] : URL [http://54.215.136.217/api/v1/primary-transaction] 2019-03-07 03:47:58 DEBUG [ApiV1PrimaryTransactionPutAnonymousInvalid] : Method [PUT] 2019-03-07 03:47:58 DEBUG [ApiV1PrimaryTransactionPutAnonymousInvalid] : Request [{ ""amount"" : ""3540"", ""availableBalance"" : ""783227875"", ""createdBy"" : """", ""createdDate"" : """", ""description"" : ""4HrTVAwk"", ""id"" : """", ""inactive"" : false, ""modifiedBy"" : """", ""modifiedDate"" : """", ""status"" : ""4HrTVAwk"", ""type"" : ""4HrTVAwk"", ""user"" : { ""createdBy"" : """", ""createdDate"" : """", ""id"" : """", ""inactive"" : false, ""modifiedBy"" : """", ""modifiedDate"" : """", 
""name"" : ""4HrTVAwk"", ""version"" : """" }, ""version"" : """" }] 2019-03-07 03:47:58 DEBUG [ApiV1PrimaryTransactionPutAnonymousInvalid] : Request-Headers [{Content-Type=[application/json], Accept=[application/json]}] 2019-03-07 03:47:58 DEBUG [ApiV1PrimaryTransactionPutAnonymousInvalid] : Response [{ ""requestId"" : ""None"", ""requestTime"" : ""2019-03-07T03:47:58.501+0000"", ""errors"" : true, ""messages"" : [ { ""type"" : ""ERROR"", ""key"" : """", ""value"" : null } ], ""data"" : null, ""totalPages"" : 0, ""totalElements"" : 0 }] 2019-03-07 03:47:58 DEBUG [ApiV1PrimaryTransactionPutAnonymousInvalid] : Response-Headers [{X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Thu, 07 Mar 2019 03:47:57 GMT]}] 2019-03-07 03:47:58 DEBUG [ApiV1PrimaryTransactionPutAnonymousInvalid] : StatusCode [200] 2019-03-07 03:47:58 DEBUG [ApiV1PrimaryTransactionPutAnonymousInvalid] : Time [502] 2019-03-07 03:47:58 DEBUG [ApiV1PrimaryTransactionPutAnonymousInvalid] : Size [176] 2019-03-07 03:47:58 ERROR [ApiV1PrimaryTransactionPutAnonymousInvalid] : Assertion [@StatusCode == 401 OR @StatusCode == 403] resolved-to [200 == 401 OR 200 == 403] result [Failed] --- FX Bot ---",1.0,"Vulnerability [Unsecured] : PUT:/api/v1/primary-transaction - Project : sanity test Template : ApiV1PrimaryTransactionPutAnonymousInvalid Run Id : 8a8081fd69563def01695643843401a5 Job : uat Env : Default Category : Unsecured Tags : [ OWASP - OTG-AUTHN-002, FX Top 10 - API Vulnerability, Non-Intrusive] Severity : Major Region : CENTRAL_INDIA Result : fail Status Code : 200 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Thu, 07 Mar 2019 03:47:57 GMT]} Endpoint : http://54.215.136.217/api/v1/primary-transaction Request : { ""amount"" : ""3540"", ""availableBalance"" : ""783227875"", ""createdBy"" : """", ""createdDate"" : """", ""description"" : ""4HrTVAwk"", ""id"" : """", ""inactive"" : false, ""modifiedBy"" : """", ""modifiedDate"" : """", ""status"" : ""4HrTVAwk"", ""type"" : ""4HrTVAwk"", ""user"" : { ""createdBy"" : """", ""createdDate"" : """", ""id"" : """", ""inactive"" : false, ""modifiedBy"" : """", ""modifiedDate"" : """", ""name"" : ""4HrTVAwk"", ""version"" : """" }, ""version"" : """" } Response : { ""requestId"" : ""None"", ""requestTime"" : ""2019-03-07T03:47:58.501+0000"", ""errors"" : true, ""messages"" : [ { ""type"" : ""ERROR"", ""key"" : """", ""value"" : null } ], ""data"" : null, ""totalPages"" : 0, ""totalElements"" : 0 } Logs : 2019-03-07 03:47:58 DEBUG [ApiV1PrimaryTransactionPutAnonymousInvalid] : URL [http://54.215.136.217/api/v1/primary-transaction] 2019-03-07 03:47:58 DEBUG [ApiV1PrimaryTransactionPutAnonymousInvalid] : Method [PUT] 2019-03-07 03:47:58 DEBUG [ApiV1PrimaryTransactionPutAnonymousInvalid] : Request [{ ""amount"" : ""3540"", ""availableBalance"" : ""783227875"", ""createdBy"" : """", ""createdDate"" : """", ""description"" : ""4HrTVAwk"", ""id"" : """", ""inactive"" : false, ""modifiedBy"" : """", ""modifiedDate"" : """", ""status"" : ""4HrTVAwk"", ""type"" : ""4HrTVAwk"", ""user"" : { ""createdBy"" : """", ""createdDate"" : """", ""id"" : """", 
""inactive"" : false, ""modifiedBy"" : """", ""modifiedDate"" : """", ""name"" : ""4HrTVAwk"", ""version"" : """" }, ""version"" : """" }] 2019-03-07 03:47:58 DEBUG [ApiV1PrimaryTransactionPutAnonymousInvalid] : Request-Headers [{Content-Type=[application/json], Accept=[application/json]}] 2019-03-07 03:47:58 DEBUG [ApiV1PrimaryTransactionPutAnonymousInvalid] : Response [{ ""requestId"" : ""None"", ""requestTime"" : ""2019-03-07T03:47:58.501+0000"", ""errors"" : true, ""messages"" : [ { ""type"" : ""ERROR"", ""key"" : """", ""value"" : null } ], ""data"" : null, ""totalPages"" : 0, ""totalElements"" : 0 }] 2019-03-07 03:47:58 DEBUG [ApiV1PrimaryTransactionPutAnonymousInvalid] : Response-Headers [{X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Thu, 07 Mar 2019 03:47:57 GMT]}] 2019-03-07 03:47:58 DEBUG [ApiV1PrimaryTransactionPutAnonymousInvalid] : StatusCode [200] 2019-03-07 03:47:58 DEBUG [ApiV1PrimaryTransactionPutAnonymousInvalid] : Time [502] 2019-03-07 03:47:58 DEBUG [ApiV1PrimaryTransactionPutAnonymousInvalid] : Size [176] 2019-03-07 03:47:58 ERROR [ApiV1PrimaryTransactionPutAnonymousInvalid] : Assertion [@StatusCode == 401 OR @StatusCode == 403] resolved-to [200 == 401 OR 200 == 403] result [Failed] --- FX Bot ---",0,vulnerability put api primary transaction project sanity test template run id job uat env default category unsecured tags severity major region central india result fail status code headers x content type options x xss protection cache control pragma expires x frame options content type transfer encoding date endpoint request amount availablebalance createdby createddate description id inactive false modifiedby modifieddate status type user createdby createddate id inactive false modifiedby modifieddate name version version response requestid none requesttime errors true messages type error key value null data null totalpages totalelements logs debug url debug method debug request amount availablebalance createdby createddate description id inactive false modifiedby modifieddate status type user createdby createddate id inactive false modifiedby modifieddate name version version debug request headers accept debug response requestid none requesttime errors true messages type error key value null data null totalpages totalelements debug response headers x xss protection cache control pragma expires x frame options content type transfer encoding date debug statuscode debug time debug size error assertion resolved to result fx bot ,0 36954,15105964046.0,IssuesEvent,2021-02-08 13:44:14,tuna/issues,https://api.github.com/repos/tuna/issues,opened,[tuna]404 at /anaconda/cloud/paddle,Service Issue," #### 发生了什么(What happened) ```bash $ conda install paddlepaddle==2.0.0 -c paddle ``` 报错: ``` Collecting package metadata (current_repodata.json): failed UnavailableInvalidChannel: The channel is not accessible or is invalid. channel name: paddle channel url: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/paddle error code: 404 You will need to adjust your conda configuration to proceed. Use `conda config --show channels` to view your configuration's current state, and use `conda config --show-sources` to view config file locations. 
``` #### What you expected to happen Expect PaddlePaddle 2.0 to install successfully from the Tsinghua (TUNA) mirror #### How to reproduce it Set the contents of ~/.condarc as follows: ``` channels: - https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/Paddle/ - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/ - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/ - defaults show_channel_urls: true default_channels: - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2 custom_channels: conda-forge: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud msys2: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud bioconda: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud menpo: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud pytorch: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud simpleitk: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud paddle: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud ``` #### Anything else we need to know - Has this problem been raised in a previous issue: not searched. #### Environment - OS Version: windows subsystem linux(WSL) ubuntu 20.04 - Browser version, if applicable: Maxthon 5 browser - Others: ",1.0,"[tuna]404 at /anaconda/cloud/paddle - #### What happened ```bash $ conda install paddlepaddle==2.0.0 -c paddle ``` Error: ``` Collecting package metadata (current_repodata.json): failed UnavailableInvalidChannel: The channel is not accessible or is invalid. channel name: paddle channel url: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/paddle error code: 404 You will need to adjust your conda configuration to proceed. Use `conda config --show channels` to view your configuration's current state, and use `conda config --show-sources` to view config file locations. 
``` #### What you expected to happen Expect PaddlePaddle 2.0 to install successfully from the Tsinghua (TUNA) mirror #### How to reproduce it Set the contents of ~/.condarc as follows: ``` channels: - https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/Paddle/ - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/ - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/ - defaults show_channel_urls: true default_channels: - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2 custom_channels: conda-forge: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud msys2: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud bioconda: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud menpo: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud pytorch: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud simpleitk: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud paddle: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud ``` #### Anything else we need to know - Has this problem been raised in a previous issue: not searched. #### Environment - OS Version: windows subsystem linux(WSL) ubuntu 20.04 - Browser version, if applicable: Maxthon 5 browser - Others: ",0, at anaconda cloud paddle please use this template while reporting a bug and provide as much info as possible what happened bash conda install paddlepaddle c paddle error collecting package metadata current repodata json failed unavailableinvalidchannel the channel is not accessible or is invalid channel name paddle channel url error code you will need to adjust your conda configuration to proceed use conda config show channels to view your configuration s current state and use conda config show sources to view config file locations what you expected to happen how to reproduce it set condarc file contents as follows channels defaults show channel urls true default channels custom channels conda forge bioconda menpo pytorch simpleitk paddle anything else we need to know has this problem been raised in a previous issue not searched environment os version windows subsystem linux wsl ubuntu browser version if applicable others ,0 30148,6033371252.0,IssuesEvent,2017-06-09 08:07:03,moosetechnology/Moose,https://api.github.com/repos/moosetechnology/Moose,closed,Not all shapes support borderColor/borderWidth,Priority-Medium Type-Defect,"Originally reported on Google Code with ID 1097 ``` Some shapes are using the default stroke and/or the default stroke width, even if the user sets a borderColor/borderWidth |v ver circle box poly es | v := RTView new. ver := (1 to:5)collect:[:i | Point r:100 degrees:(360/5*i)]. circle := RTEllipse new size: 200; color: Color red; borderWidth:5;borderColor: Color green. box := RTBox new size: 200; color: Color red; borderWidth:5;borderColor: Color green. poly := RTPolygon new size: 200; vertices:ver; color: Color red; borderWidth:5;borderColor: Color green. es := circle elementOn:'hello'. v add: es. es := box elementOn:'hello'. v add: es. es := poly elementOn:'hello'. v add: es. v @ RTDraggableView . RTGridLayout on: v elements. v ""all shapes should use the provided borderWidth (5) and borderColor (Green)"" moose build 3147 * Type-Defect * Component-Roassal2 ``` Reported by `nicolaihess` on 2014-11-13 11:40:38
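Back on the [tuna] mirror report above: conda treats a channel as valid only if `<mirror>/<channel>/<subdir>/repodata.json` can be fetched, which is what the 404 is telling us. A small illustrative probe; the candidate names `paddle` and `Paddle` both appear in the reporter's .condarc, and whether either is actually mirrored is exactly the open question in the report:

```python
import urllib.error
import urllib.request

MIRROR = "https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud"

def channel_exists(name: str, subdir: str = "noarch") -> bool:
    # conda resolves <channel>/<subdir>/repodata.json; a 404 here is what
    # surfaces as the UnavailableInvalidChannel error in the report.
    url = f"{MIRROR}/{name}/{subdir}/repodata.json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.URLError:
        return False

for candidate in ("paddle", "Paddle"):
    print(candidate, channel_exists(candidate))
```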
- _Attachment: shapes.png
![shapes.png](https://storage.googleapis.com/google-code-attachments/moose-technology/issue-1097/comment-0/shapes.png)_ ",1.0,"Not all shapes support borderColor/borderWidth - Originally reported on Google Code with ID 1097 ``` Some shapes are using the default stroke and/or the default stroke width, even if the user sets a borderColor/borderWidth |v ver circle box poly es | v := RTView new. ver := (1 to:5)collect:[:i | Point r:100 degrees:(360/5*i)]. circle := RTEllipse new size: 200; color: Color red; borderWidth:5;borderColor: Color green. box := RTBox new size: 200; color: Color red; borderWidth:5;borderColor: Color green. poly := RTPolygon new size: 200; vertices:ver; color: Color red; borderWidth:5;borderColor: Color green. es := circle elementOn:'hello'. v add: es. es := box elementOn:'hello'. v add: es. es := poly elementOn:'hello'. v add: es. v @ RTDraggableView . RTGridLayout on: v elements. v ""all shapes should use the provided borderWidth (5) and borderColor (Green)"" moose build 3147 * Type-Defect * Component-Roassal2 ``` Reported by `nicolaihess` on 2014-11-13 11:40:38
- _Attachment: shapes.png
![shapes.png](https://storage.googleapis.com/google-code-attachments/moose-technology/issue-1097/comment-0/shapes.png)_ ",0,not all shapes support bordercolor borderwidth originally reported on google code with id some shapes are using the default stroke and or the default stroke width even if the user sets a bordercolor borderwidth v ver circle box poly es v rtview new ver to collect circle rtellipse new size color color red borderwidth bordercolor color green box rtbox new size color color red borderwidth bordercolor color green poly rtpolygon new size vertices ver color color red borderwidth bordercolor color green es circle elementon hello v add es es box elementon hello v add es es poly elementon hello v add es v rtdraggableview rtgridlayout on v elements v all shapes should use the provided borderwidth and bordercolor green moose build type defect component reported by nicolaihess on attachment shapes png ,0 206,4696770117.0,IssuesEvent,2016-10-12 06:36:57,rancher/rancher,https://api.github.com/repos/rancher/rancher,opened,host.stats() returns array of dictionary.,area/agent kind/bug setup/automation,"Rancher Version: Build from master Following automation run fails since it expects host.stats() tp return dictionary instead of an array. ``` Error Message assert ('{') + where = '[{""resourceType"":""host"",""memLimit"":1778900992,""timestamp"":""2016-10-11T15:18:13.380555434Z"",""cpu"":{""usage"":{""total"":86...ors"":0,""rx_dropped"":0,""tx_bytes"":592736681,""tx_packets"":0,""tx_errors"":0,""tx_dropped"":0},""memory"":{""usage"":509063168}}]'.startswith Stacktrace client = def test_host_api_token(client): hosts = client.list_host(kind='docker', removed_null=True) assert len(hosts) > 0 # valid token and a url to the websocket stats = hosts[0].stats() conn = ws.create_connection(stats.url+'?token='+stats.token) result = conn.recv() assert result is not None > assert result.startswith('{') E assert ('{') E + where = '[{""resourceType"":""host"",""memLimit"":1778900992,""timestamp"":""2016-10-11T15:18:13.380555434Z"",""cpu"":{""usage"":{""total"":86...ors"":0,""rx_dropped"":0,""tx_bytes"":592736681,""tx_packets"":0,""tx_errors"":0,""tx_dropped"":0},""memory"":{""usage"":509063168}}]'.startswith tests/v2_validation/cattlevalidationtest/core/test_host_api.py:15: AssertionError ```",1.0,"host.stats() returns array of dictionary. - Rancher Version: Build from master Following automation run fails since it expects host.stats() tp return dictionary instead of an array. 
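A minimal illustrative sketch of a shape-tolerant check, assuming the same Python test client as above; `parse_host_stats` is a hypothetical helper name, and the payload shape is taken from the error output below:
```python
import json

def parse_host_stats(raw):
    # Newer builds send a JSON array of per-resource dictionaries over the
    # stats websocket; older builds sent a single JSON object. Accept both.
    data = json.loads(raw)
    if isinstance(data, list):
        assert len(data) > 0
        data = data[0]
    assert isinstance(data, dict)
    return data

# Usage in the failing test (names follow the test shown here):
# result = conn.recv()
# stats = parse_host_stats(result)
# assert stats.get('resourceType') == 'host'
```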
``` Error Message assert ('{') + where = '[{""resourceType"":""host"",""memLimit"":1778900992,""timestamp"":""2016-10-11T15:18:13.380555434Z"",""cpu"":{""usage"":{""total"":86...ors"":0,""rx_dropped"":0,""tx_bytes"":592736681,""tx_packets"":0,""tx_errors"":0,""tx_dropped"":0},""memory"":{""usage"":509063168}}]'.startswith Stacktrace client = def test_host_api_token(client): hosts = client.list_host(kind='docker', removed_null=True) assert len(hosts) > 0 # valid token and a url to the websocket stats = hosts[0].stats() conn = ws.create_connection(stats.url+'?token='+stats.token) result = conn.recv() assert result is not None > assert result.startswith('{') E assert ('{') E + where = '[{""resourceType"":""host"",""memLimit"":1778900992,""timestamp"":""2016-10-11T15:18:13.380555434Z"",""cpu"":{""usage"":{""total"":86...ors"":0,""rx_dropped"":0,""tx_bytes"":592736681,""tx_packets"":0,""tx_errors"":0,""tx_dropped"":0},""memory"":{""usage"":509063168}}]'.startswith tests/v2_validation/cattlevalidationtest/core/test_host_api.py:15: AssertionError ```",1,host stats returns array of dictionary rancher version build from master following automation run fails since it expects host stats tp return dictionary instead of an array error message assert where startswith stacktrace client def test host api token client hosts client list host kind docker removed null true assert len hosts valid token and a url to the websocket stats hosts stats conn ws create connection stats url token stats token result conn recv assert result is not none assert result startswith e assert e where startswith tests validation cattlevalidationtest core test host api py assertionerror ,1 469,6560953344.0,IssuesEvent,2017-09-07 11:22:16,eclipse/smarthome,https://api.github.com/repos/eclipse/smarthome,closed,Automation: missing handler after restart,Automation,"I seem to have run into an issue whereby after creating a rule and restarting, it fails to restore the rule properly. The rule looks as follows after the restart: ``` { ""enabled"": true, ""status"": { ""status"": ""NOT_INITIALIZED"", ""statusDetail"": ""HANDLER_INITIALIZING_ERROR"", ""description"": ""Missing handler 'ItemStateChangeTrigger' for module 'trigger_1'\nMissing handler 'ItemStateCondition' for module 'condition_2'\nMissing handler 'ItemPostCommandAction' for module 'action_3'\n"" }, ""triggers"": [ { ""id"": ""trigger_1"", ""label"": ""Item State Trigger"", ""description"": ""This triggers a rule if an items state changed"", ""configuration"": { ""itemName"": ""morning_alarm"" }, ""type"": ""ItemStateChangeTrigger"" } ], ""conditions"": [ { ""id"": ""condition_2"", ""label"": ""Item state condition"", ""description"": ""compares the items current state with the given"", ""configuration"": { ""itemName"": ""morning_alarm"", ""state"": ""ON"", ""operator"": ""="" }, ""type"": ""ItemStateCondition"" } ], ""actions"": [ { ""id"": ""action_3"", ""label"": ""Post command to an item"", ""description"": ""posts commands on items"", ""configuration"": { ""itemName"": ""XBMC_OpenMedia"", ""command"": ""\""http://a.files.bbci.co.uk/media/live/manifesto/audio/simulcast/hls/uk/sbr_high/ak/bbc_1xtra.m3u8\"""" }, ""type"": ""ItemPostCommandAction"" } ], ""configuration"": {}, ""configDescriptions"": [], ""uid"": ""rule_2"", ""name"": ""Morning Alarm"", ""tags"": [], ""description"": ""What to do when my alarm goes off"" } ``` To resolve this I have to go into edit mode on the rule and then OK the rule to update/fix. 
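As a diagnostic aid, a minimal stdlib-only Python sketch that lists the module types a rule definition references, so they can be checked against the 'Missing handler' lines in the status description; the file name is a placeholder:
```python
import json

def module_types(rule):
    # Collect (id, type) for every module the rule references, e.g.
    # ItemStateChangeTrigger, ItemStateCondition, ItemPostCommandAction.
    types = []
    for section in ('triggers', 'conditions', 'actions'):
        for module in rule.get(section, []):
            types.append((module.get('id'), module.get('type')))
    return types

def missing_handlers(rule):
    # The status description holds one line per missing handler.
    description = rule.get('status', {}).get('description', '')
    return [line for line in description.splitlines() if 'Missing handler' in line]

# with open('rule_2.json') as f:  # the rule JSON exported above
#     rule = json.load(f)
#     print(module_types(rule))
#     print(missing_handlers(rule))
```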
",1.0,"Automation: missing handler after restart - I seem to have run into an issue whereby after creating a rule and restarting, it fails to restore the rule properly. The rule looks as follows after the restart: ``` { ""enabled"": true, ""status"": { ""status"": ""NOT_INITIALIZED"", ""statusDetail"": ""HANDLER_INITIALIZING_ERROR"", ""description"": ""Missing handler 'ItemStateChangeTrigger' for module 'trigger_1'\nMissing handler 'ItemStateCondition' for module 'condition_2'\nMissing handler 'ItemPostCommandAction' for module 'action_3'\n"" }, ""triggers"": [ { ""id"": ""trigger_1"", ""label"": ""Item State Trigger"", ""description"": ""This triggers a rule if an items state changed"", ""configuration"": { ""itemName"": ""morning_alarm"" }, ""type"": ""ItemStateChangeTrigger"" } ], ""conditions"": [ { ""id"": ""condition_2"", ""label"": ""Item state condition"", ""description"": ""compares the items current state with the given"", ""configuration"": { ""itemName"": ""morning_alarm"", ""state"": ""ON"", ""operator"": ""="" }, ""type"": ""ItemStateCondition"" } ], ""actions"": [ { ""id"": ""action_3"", ""label"": ""Post command to an item"", ""description"": ""posts commands on items"", ""configuration"": { ""itemName"": ""XBMC_OpenMedia"", ""command"": ""\""http://a.files.bbci.co.uk/media/live/manifesto/audio/simulcast/hls/uk/sbr_high/ak/bbc_1xtra.m3u8\"""" }, ""type"": ""ItemPostCommandAction"" } ], ""configuration"": {}, ""configDescriptions"": [], ""uid"": ""rule_2"", ""name"": ""Morning Alarm"", ""tags"": [], ""description"": ""What to do when my alarm goes off"" } ``` To resolve this I have to go into edit mode on the rule and then OK the rule to update/fix. ",1,automation missing handler after restart i seem to have run into an issue whereby after creating a rule and restarting it fails to restore the rule properly the rule looks as follows after the restart enabled true status status not initialized statusdetail handler initializing error description missing handler itemstatechangetrigger for module trigger nmissing handler itemstatecondition for module condition nmissing handler itempostcommandaction for module action n triggers id trigger label item state trigger description this triggers a rule if an items state changed configuration itemname morning alarm type itemstatechangetrigger conditions id condition label item state condition description compares the items current state with the given configuration itemname morning alarm state on operator type itemstatecondition actions id action label post command to an item description posts commands on items configuration itemname xbmc openmedia command type itempostcommandaction configuration configdescriptions uid rule name morning alarm tags description what to do when my alarm goes off to resolve this i have to go into edit mode on the rule and then ok the rule to update fix ,1 64521,7809870341.0,IssuesEvent,2018-06-12 03:17:32,MSO4SC/resources,https://api.github.com/repos/MSO4SC/resources,closed,Add job_partition as input for the Feel++ application,domain:design domain:documentation madf:feelpp,"`job_partition` is currently fixed in the blueprint, it needs to be provided as `inputs` to facilitate the end-users life to use CEGA/ft2 partitions. This will need to be documented in feelpp/toolbox mso4sc page",1.0,"Add job_partition as input for the Feel++ application - `job_partition` is currently fixed in the blueprint, it needs to be provided as `inputs` to facilitate the end-users life to use CEGA/ft2 partitions. 
This will need to be documented in feelpp/toolbox mso4sc page",0,add job partition as input for the feel application job partition is currently fixed in the blueprint it needs to be provided as inputs to facilitate the end users life to use cega partitions this will need to be documented in feelpp toolbox page,0 7352,24697907539.0,IssuesEvent,2022-10-19 13:25:55,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,opened,[YSQL][LST] ERROR: Internal error: Cannot find status tablet for write_batch transaction,area/ysql status/awaiting-triage qa_automation,"### Description ``` $ cd ~/code/yugabyte-db $ git checkout e90228e57a4ae34f0010840646805abe6152a3ab $ ./yb_build.sh $ cd ~/code/yb-long-system-test $ git checkout 8e455618 && ./long_system_test.py --nodes=127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433 --threads=10 --runtime=60 --max-columns=20 --complexity=full --seed=010159 ``` ``` 2022-10-19 12:58:52,224 MainThread INFO Reproduce with: git checkout 8e455618 && ./long_system_test.py --nodes=127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433 --threads=10 --runtime=60 --max-columns=20 --complexity=full --seed=010159 2022-10-19 12:58:52,837 MainThread INFO Database version: PostgreSQL 11.2-YB-2.17.1.0-b0 on x86_64-pc-linux-gnu, compiled by clang version 14.0.6 (https://github.com/yugabyte/llvm-project.git 32585159229a671b457dad40608b9a5246f52b6b), 64-bit 2022-10-19 12:58:52,839 MainThread INFO Creating tables for database db_lst_010159 2022-10-19 12:59:17,591 MainThread INFO Starting worker_0: RandomSelectAction, SetConfigAction 2022-10-19 12:59:17,592 MainThread INFO Starting worker_1: SingleInsertAction, SingleUpdateAction, SingleDeleteAction, BulkInsertAction, BulkUpdateAction, SetConfigAction 2022-10-19 12:59:17,608 MainThread INFO Starting worker_2: SingleInsertAction, SingleUpdateAction, SingleDeleteAction, BulkInsertAction, BulkUpdateAction, SetConfigAction 2022-10-19 12:59:17,609 MainThread INFO Starting worker_3: RandomSelectAction, SetConfigAction 2022-10-19 12:59:17,611 MainThread INFO Starting worker_4: RandomSelectAction, SetConfigAction 2022-10-19 12:59:17,626 MainThread INFO Starting worker_5: SingleInsertAction, SingleUpdateAction, SingleDeleteAction, BulkInsertAction, BulkUpdateAction, SetConfigAction 2022-10-19 12:59:17,651 MainThread INFO Starting worker_6: RandomSelectAction, SetConfigAction 2022-10-19 12:59:17,653 MainThread INFO Starting worker_7: RandomSelectAction, SetConfigAction 2022-10-19 12:59:17,661 MainThread INFO Starting worker_8: SingleInsertAction, SingleUpdateAction, SingleDeleteAction, BulkInsertAction, BulkUpdateAction, SetConfigAction 2022-10-19 12:59:17,677 MainThread INFO Starting worker_9: RandomSelectAction, SetConfigAction 2022-10-19 12:59:27,742 MainThread INFO Worker queries/s: [000.8][004.9][002.1][000.8][000.7][004.1][001.0][000.6][001.8][001.8] [...] 
2022-10-19 13:03:38,002 MainThread INFO Worker queries/s: [015.8][001.4][006.4][006.5][000.0][000.0][012.7][000.0][018.5][003.2] 2022-10-19 13:03:40,145 worker_9 ERROR Unexpected query failure: InternalError_ Query: SELECT '78' FROM v0 WHERE (-59.254076209800054) != (8.552138891990907) ORDER BY 1 ASC FOR NO KEY UPDATE OF v0 NOWAIT; values: None runtime: 2022-10-19 13:03:38.408 - 2022-10-19 13:03:40.145 supports explain: True supports rollback: True affected rows: None Action: RandomSelectAction Error class: InternalError_ Error code: XX000 Error message: ERROR: Internal error: Cannot find status tablet for write_batch transaction ae8115d8-9400-4b96-9ced-69c9f9490981 Transaction isolation level: read uncommitted DB Node: host: 127.0.0.1, port: 5433 DB Backend PID: 3364913 ``` LST logs: [lst.zip](https://github.com/yugabyte/yugabyte-db/files/9820866/lst.zip)",1.0,"[YSQL][LST] ERROR: Internal error: Cannot find status tablet for write_batch transaction - ### Description ``` $ cd ~/code/yugabyte-db $ git checkout e90228e57a4ae34f0010840646805abe6152a3ab $ ./yb_build.sh $ cd ~/code/yb-long-system-test $ git checkout 8e455618 && ./long_system_test.py --nodes=127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433 --threads=10 --runtime=60 --max-columns=20 --complexity=full --seed=010159 ``` ``` 2022-10-19 12:58:52,224 MainThread INFO Reproduce with: git checkout 8e455618 && ./long_system_test.py --nodes=127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433 --threads=10 --runtime=60 --max-columns=20 --complexity=full --seed=010159 2022-10-19 12:58:52,837 MainThread INFO Database version: PostgreSQL 11.2-YB-2.17.1.0-b0 on x86_64-pc-linux-gnu, compiled by clang version 14.0.6 (https://github.com/yugabyte/llvm-project.git 32585159229a671b457dad40608b9a5246f52b6b), 64-bit 2022-10-19 12:58:52,839 MainThread INFO Creating tables for database db_lst_010159 2022-10-19 12:59:17,591 MainThread INFO Starting worker_0: RandomSelectAction, SetConfigAction 2022-10-19 12:59:17,592 MainThread INFO Starting worker_1: SingleInsertAction, SingleUpdateAction, SingleDeleteAction, BulkInsertAction, BulkUpdateAction, SetConfigAction 2022-10-19 12:59:17,608 MainThread INFO Starting worker_2: SingleInsertAction, SingleUpdateAction, SingleDeleteAction, BulkInsertAction, BulkUpdateAction, SetConfigAction 2022-10-19 12:59:17,609 MainThread INFO Starting worker_3: RandomSelectAction, SetConfigAction 2022-10-19 12:59:17,611 MainThread INFO Starting worker_4: RandomSelectAction, SetConfigAction 2022-10-19 12:59:17,626 MainThread INFO Starting worker_5: SingleInsertAction, SingleUpdateAction, SingleDeleteAction, BulkInsertAction, BulkUpdateAction, SetConfigAction 2022-10-19 12:59:17,651 MainThread INFO Starting worker_6: RandomSelectAction, SetConfigAction 2022-10-19 12:59:17,653 MainThread INFO Starting worker_7: RandomSelectAction, SetConfigAction 2022-10-19 12:59:17,661 MainThread INFO Starting worker_8: SingleInsertAction, SingleUpdateAction, SingleDeleteAction, BulkInsertAction, BulkUpdateAction, SetConfigAction 2022-10-19 12:59:17,677 MainThread INFO Starting worker_9: RandomSelectAction, SetConfigAction 2022-10-19 12:59:27,742 MainThread INFO Worker queries/s: [000.8][004.9][002.1][000.8][000.7][004.1][001.0][000.6][001.8][001.8] [...] 
2022-10-19 13:03:38,002 MainThread INFO Worker queries/s: [015.8][001.4][006.4][006.5][000.0][000.0][012.7][000.0][018.5][003.2] 2022-10-19 13:03:40,145 worker_9 ERROR Unexpected query failure: InternalError_ Query: SELECT '78' FROM v0 WHERE (-59.254076209800054) != (8.552138891990907) ORDER BY 1 ASC FOR NO KEY UPDATE OF v0 NOWAIT; values: None runtime: 2022-10-19 13:03:38.408 - 2022-10-19 13:03:40.145 supports explain: True supports rollback: True affected rows: None Action: RandomSelectAction Error class: InternalError_ Error code: XX000 Error message: ERROR: Internal error: Cannot find status tablet for write_batch transaction ae8115d8-9400-4b96-9ced-69c9f9490981 Transaction isolation level: read uncommitted DB Node: host: 127.0.0.1, port: 5433 DB Backend PID: 3364913 ``` LST logs: [lst.zip](https://github.com/yugabyte/yugabyte-db/files/9820866/lst.zip)",1, error internal error cannot find status tablet for write batch transaction description cd code yugabyte db git checkout yb build sh cd code yb long system test git checkout long system test py nodes threads runtime max columns complexity full seed mainthread info reproduce with git checkout long system test py nodes threads runtime max columns complexity full seed mainthread info database version postgresql yb on pc linux gnu compiled by clang version bit mainthread info creating tables for database db lst mainthread info starting worker randomselectaction setconfigaction mainthread info starting worker singleinsertaction singleupdateaction singledeleteaction bulkinsertaction bulkupdateaction setconfigaction mainthread info starting worker singleinsertaction singleupdateaction singledeleteaction bulkinsertaction bulkupdateaction setconfigaction mainthread info starting worker randomselectaction setconfigaction mainthread info starting worker randomselectaction setconfigaction mainthread info starting worker singleinsertaction singleupdateaction singledeleteaction bulkinsertaction bulkupdateaction setconfigaction mainthread info starting worker randomselectaction setconfigaction mainthread info starting worker randomselectaction setconfigaction mainthread info starting worker singleinsertaction singleupdateaction singledeleteaction bulkinsertaction bulkupdateaction setconfigaction mainthread info starting worker randomselectaction setconfigaction mainthread info worker queries s mainthread info worker queries s worker error unexpected query failure internalerror query select from where order by asc for no key update of nowait values none runtime supports explain true supports rollback true affected rows none action randomselectaction error class internalerror error code error message error internal error cannot find status tablet for write batch transaction transaction isolation level read uncommitted db node host port db backend pid lst logs ,1 7721,25466026118.0,IssuesEvent,2022-11-25 04:29:05,keycloak/keycloak-benchmark,https://api.github.com/repos/keycloak/keycloak-benchmark,closed,Update the provision-minikube.yaml to perform dataset operations on the different storage configurations,enhancement provision automation github_actions,"### Description Update the provision-minikube.yaml to perform dataset operations on the different storage configurations from within GH Actions workflow. 
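A minimal sketch of scripting such a dataset operation so a workflow step can invoke it; the endpoint path is an assumption about the dataset provider's URL scheme rather than a verified API, and the base URL in the example is a placeholder:
```python
import urllib.request

def create_realms(base_url, count=1, timeout=600):
    # Hypothetical call against the benchmark dataset provider; the
    # /realms/master/dataset/create-realms path is an assumption and
    # may differ per deployment.
    url = f'{base_url}/realms/master/dataset/create-realms?count={count}'
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode()

# Example for a GH Actions step (placeholder minikube ingress URL):
# print(create_realms('https://keycloak.192.168.49.2.nip.io'))
```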
### Discussion _No response_ ### Motivation _No response_ ### Details _No response_",1.0,"Update the provision-minikube.yaml to perform dataset operations on the different storage configurations - ### Description Update the provision-minikube.yaml to perform dataset operations on the different storage configurations from within GH Actions workflow. ### Discussion _No response_ ### Motivation _No response_ ### Details _No response_",1,update the provision minikube yaml to perform dataset operations on the different storage configurations description update the provision minikube yaml to perform dataset operations on the different storage configurations from within gh actions workflow discussion no response motivation no response details no response ,1 14221,3386515731.0,IssuesEvent,2015-11-27 18:16:24,cockroachdb/cockroach,https://api.github.com/repos/cockroachdb/cockroach,opened,Test failure in CI build 9632,test-failure,"The following test appears to have failed: [#9632](https://circleci.com/gh/cockroachdb/cockroach/9632): ``` W1127 18:13:00.305853 498 rpc/server.go:435 rpc: write response failed: write tcp 127.0.0.1:45310->127.0.0.1:46144: use of closed network connection W1127 18:13:00.305965 498 rpc/server.go:435 rpc: write response failed: write tcp 127.0.0.1:47009->127.0.0.1:58477: use of closed network connection W1127 18:13:00.306374 498 rpc/server.go:435 rpc: write response failed: write tcp 127.0.0.1:47009->127.0.0.1:58477: use of closed network connection --- PASS: TestConvergence (0.68s) PASS Too many goroutines running after tests. 1 instances of: github.com/cockroachdb/cockroach/gossip.(*fakeGossipServer).Gossip-fm(0x7f1bbc452b98, 0xc82000bb00, 0x0, 0x0, 0x0, 0x0) /go/src/github.com/cockroachdb/cockroach/gossip/client_test.go:82 +0x16a github.com/cockroachdb/cockroach/rpc.syncAdapter.exec.func1(0xc8201d6600, 0xc8201d8900, 0x7f1bbc452b98, 0xc82000bb00) /go/src/github.com/cockroachdb/cockroach/rpc/server.go:53 +0x3e created by github.com/cockroachdb/cockroach/rpc.syncAdapter.exec /go/src/github.com/cockroachdb/cockroach/rpc/server.go:54 +0x61 1 instances of: github.com/cockroachdb/cockroach/rpc.(*Server).sendResponses(0xc8200fce00, 0x7f1bbc4547b0, 0xc820148480, 0xc820102a20) /go/src/github.com/cockroachdb/cockroach/rpc/server.go:426 +0xb5 -- /tmp/workdir/go/src/net/http/server.go:1862 +0x207 net/http.(*conn).serve(0xc8200dd760) /tmp/workdir/go/src/net/http/server.go:1361 +0x117d created by net/http.(*Server).Serve /tmp/workdir/go/src/net/http/server.go:1910 +0x465 FAIL github.com/cockroachdb/cockroach/gossip 1.006s === RUN TestParseResolverSpec --- PASS: TestParseResolverSpec (0.00s) === RUN TestGetAddress --- PASS: TestGetAddress (0.00s) PASS ok github.com/cockroachdb/cockroach/gossip/resolver 1.023s ? 
github.com/cockroachdb/cockroach/gossip/simulation [no test files] === RUN TestKeySorting --- PASS: TestKeySorting (0.00s) === RUN TestMakeKey ``` Please assign, take a look and update the issue accordingly.",1.0,"Test failure in CI build 9632 - The following test appears to have failed: [#9632](https://circleci.com/gh/cockroachdb/cockroach/9632): ``` W1127 18:13:00.305853 498 rpc/server.go:435 rpc: write response failed: write tcp 127.0.0.1:45310->127.0.0.1:46144: use of closed network connection W1127 18:13:00.305965 498 rpc/server.go:435 rpc: write response failed: write tcp 127.0.0.1:47009->127.0.0.1:58477: use of closed network connection W1127 18:13:00.306374 498 rpc/server.go:435 rpc: write response failed: write tcp 127.0.0.1:47009->127.0.0.1:58477: use of closed network connection --- PASS: TestConvergence (0.68s) PASS Too many goroutines running after tests. 1 instances of: github.com/cockroachdb/cockroach/gossip.(*fakeGossipServer).Gossip-fm(0x7f1bbc452b98, 0xc82000bb00, 0x0, 0x0, 0x0, 0x0) /go/src/github.com/cockroachdb/cockroach/gossip/client_test.go:82 +0x16a github.com/cockroachdb/cockroach/rpc.syncAdapter.exec.func1(0xc8201d6600, 0xc8201d8900, 0x7f1bbc452b98, 0xc82000bb00) /go/src/github.com/cockroachdb/cockroach/rpc/server.go:53 +0x3e created by github.com/cockroachdb/cockroach/rpc.syncAdapter.exec /go/src/github.com/cockroachdb/cockroach/rpc/server.go:54 +0x61 1 instances of: github.com/cockroachdb/cockroach/rpc.(*Server).sendResponses(0xc8200fce00, 0x7f1bbc4547b0, 0xc820148480, 0xc820102a20) /go/src/github.com/cockroachdb/cockroach/rpc/server.go:426 +0xb5 -- /tmp/workdir/go/src/net/http/server.go:1862 +0x207 net/http.(*conn).serve(0xc8200dd760) /tmp/workdir/go/src/net/http/server.go:1361 +0x117d created by net/http.(*Server).Serve /tmp/workdir/go/src/net/http/server.go:1910 +0x465 FAIL github.com/cockroachdb/cockroach/gossip 1.006s === RUN TestParseResolverSpec --- PASS: TestParseResolverSpec (0.00s) === RUN TestGetAddress --- PASS: TestGetAddress (0.00s) PASS ok github.com/cockroachdb/cockroach/gossip/resolver 1.023s ? 
github.com/cockroachdb/cockroach/gossip/simulation [no test files] === RUN TestKeySorting --- PASS: TestKeySorting (0.00s) === RUN TestMakeKey ``` Please assign, take a look and update the issue accordingly.",0,test failure in ci build the following test appears to have failed rpc server go rpc write response failed write tcp use of closed network connection rpc server go rpc write response failed write tcp use of closed network connection rpc server go rpc write response failed write tcp use of closed network connection pass testconvergence pass too many goroutines running after tests instances of github com cockroachdb cockroach gossip fakegossipserver gossip fm go src github com cockroachdb cockroach gossip client test go github com cockroachdb cockroach rpc syncadapter exec go src github com cockroachdb cockroach rpc server go created by github com cockroachdb cockroach rpc syncadapter exec go src github com cockroachdb cockroach rpc server go instances of github com cockroachdb cockroach rpc server sendresponses go src github com cockroachdb cockroach rpc server go tmp workdir go src net http server go net http conn serve tmp workdir go src net http server go created by net http server serve tmp workdir go src net http server go fail github com cockroachdb cockroach gossip run testparseresolverspec pass testparseresolverspec run testgetaddress pass testgetaddress pass ok github com cockroachdb cockroach gossip resolver github com cockroachdb cockroach gossip simulation run testkeysorting pass testkeysorting run testmakekey please assign take a look and update the issue accordingly ,0 2940,12837337146.0,IssuesEvent,2020-07-07 15:38:10,ManageIQ/manageiq-ui-classic,https://api.github.com/repos/ManageIQ/manageiq-ui-classic,closed,Broken editing of custom buttons,automation/automate bug pinned,"Custom button editing is impemented 2x (for reasons that I really do not understand). 1. under Automate in Ruby 1. under GO in Angular The Ruby implementation in Automate has bug. When trying to add a Custom button group for GO I get: ``` [----] D, [2019-08-16T16:27:02.487953 #17654:481af34] DEBUG -- : (0.2ms) SELECT ""custom_buttons"".""id"" FROM ""custom_buttons"" WHERE ""custom_buttons"".""id"" IN (13, 18) [----] D, [2019-08-16T16:27:02.488728 #17654:481af34] DEBUG -- : (0.1ms) ROLLBACK [----] F, [2019-08-16T16:27:02.489673 #17654:481af34] FATAL -- : Error caught: [ActiveRecord::RecordNotSaved] Failed to save the record /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/persistence.rb:162:in `save!' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/validations.rb:50:in `save!' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/attribute_methods/dirty.rb:43:in `save!' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/transactions.rb:313:in `block in save!' 
/home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/transactions.rb:384:in `block in with_transaction_returning_status' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/connection_adapters/abstract/database_statements.rb:235:in `block in transaction' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/connection_adapters/abstract/transaction.rb:194:in `block in within_new_transaction' /home/martin/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/monitor.rb:226:in `mon_synchronize' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/connection_adapters/abstract/transaction.rb:191:in `within_new_transaction' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/connection_adapters/abstract/database_statements.rb:235:in `transaction' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/transactions.rb:210:in `transaction' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/transactions.rb:381:in `with_transaction_returning_status' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/transactions.rb:313:in `save!' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/suppressor.rb:46:in `save!' /home/martin/Projects/manageiq-ui-classic/app/controllers/application_controller/buttons.rb:421:in `block in group_button_add_save' /home/martin/Projects/manageiq-ui-classic/app/controllers/application_controller/buttons.rb:419:in `each' /home/martin/Projects/manageiq-ui-classic/app/controllers/application_controller/buttons.rb:419:in `each_with_index' /home/martin/Projects/manageiq-ui-classic/app/controllers/application_controller/buttons.rb:419:in `group_button_add_save' /home/martin/Projects/manageiq-ui-classic/app/controllers/miq_ae_customization_controller.rb:427:in `group_button_add_save' /home/martin/Projects/manageiq-ui-classic/app/controllers/application_controller/buttons.rb:469:in `group_create_update' /home/martin/Projects/manageiq-ui-classic/app/controllers/application_controller/buttons.rb:86:in `group_create' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_controller/metal/basic_implicit_render.rb:4:in `send_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/abstract_controller/base.rb:186:in `process_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_controller/metal/rendering.rb:30:in `process_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/abstract_controller/callbacks.rb:20:in `block in process_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/activesupport-5.1.7/lib/active_support/callbacks.rb:131:in `run_callbacks' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/abstract_controller/callbacks.rb:19:in `process_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_controller/metal/rescue.rb:20:in `process_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_controller/metal/instrumentation.rb:32:in `block in process_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/activesupport-5.1.7/lib/active_support/notifications.rb:166:in `block in instrument' /home/martin/.rvm/gems/ruby-2.5.5/gems/activesupport-5.1.7/lib/active_support/notifications/instrumenter.rb:21:in `instrument' /home/martin/.rvm/gems/ruby-2.5.5/gems/activesupport-5.1.7/lib/active_support/notifications.rb:166:in `instrument' 
/home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_controller/metal/instrumentation.rb:30:in `process_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_controller/metal/params_wrapper.rb:252:in `process_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/railties/controller_runtime.rb:22:in `process_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/abstract_controller/base.rb:124:in `process' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionview-5.1.7/lib/action_view/rendering.rb:30:in `process' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_controller/metal.rb:189:in `dispatch' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_controller/metal.rb:253:in `dispatch' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_dispatch/routing/route_set.rb:49:in `dispatch' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_dispatch/routing/route_set.rb:31:in `serve' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_dispatch/journey/router.rb:50:in `block in serve' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_dispatch/journey/router.rb:33:in `each' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_dispatch/journey/router.rb:33:in `serve' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_dispatch/routing/route_set.rb:844:in `call' /home/martin/.rvm/gems/ruby-2.5.5/bundler/gems/manageiq-graphql-b5602ca62a42/lib/manageiq/graphql/rest_api_proxy.rb:18:in `call' /home/martin/.rvm/gems/ruby-2.5.5/gems/meta_request-0.7.2/lib/meta_request/middlewares/app_request_handler.rb:13:in `call' /home/martin/.rvm/gems/ruby-2.5.5/gems/meta_request-0.7.2/lib/meta_request/middlewares/meta_request_handler.rb:13:in `call' /home/martin/.rvm/gems/ruby-2.5.5/gems/secure_headers-3.0.3/lib/secure_headers/middleware.rb:10:in `call' /home/martin/Projects/manageiq/lib/request_started_on_middleware.rb:12:in `call' ``` We need this fixed. And we need just one implementation of the editing. One that uses the API but is written in React. fyi: @ZitaNemeckova ",1.0,"Broken editing of custom buttons - Custom button editing is impemented 2x (for reasons that I really do not understand). 1. under Automate in Ruby 1. under GO in Angular The Ruby implementation in Automate has bug. When trying to add a Custom button group for GO I get: ``` [----] D, [2019-08-16T16:27:02.487953 #17654:481af34] DEBUG -- : (0.2ms) SELECT ""custom_buttons"".""id"" FROM ""custom_buttons"" WHERE ""custom_buttons"".""id"" IN (13, 18) [----] D, [2019-08-16T16:27:02.488728 #17654:481af34] DEBUG -- : (0.1ms) ROLLBACK [----] F, [2019-08-16T16:27:02.489673 #17654:481af34] FATAL -- : Error caught: [ActiveRecord::RecordNotSaved] Failed to save the record /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/persistence.rb:162:in `save!' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/validations.rb:50:in `save!' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/attribute_methods/dirty.rb:43:in `save!' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/transactions.rb:313:in `block in save!' 
/home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/transactions.rb:384:in `block in with_transaction_returning_status' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/connection_adapters/abstract/database_statements.rb:235:in `block in transaction' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/connection_adapters/abstract/transaction.rb:194:in `block in within_new_transaction' /home/martin/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/monitor.rb:226:in `mon_synchronize' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/connection_adapters/abstract/transaction.rb:191:in `within_new_transaction' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/connection_adapters/abstract/database_statements.rb:235:in `transaction' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/transactions.rb:210:in `transaction' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/transactions.rb:381:in `with_transaction_returning_status' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/transactions.rb:313:in `save!' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/suppressor.rb:46:in `save!' /home/martin/Projects/manageiq-ui-classic/app/controllers/application_controller/buttons.rb:421:in `block in group_button_add_save' /home/martin/Projects/manageiq-ui-classic/app/controllers/application_controller/buttons.rb:419:in `each' /home/martin/Projects/manageiq-ui-classic/app/controllers/application_controller/buttons.rb:419:in `each_with_index' /home/martin/Projects/manageiq-ui-classic/app/controllers/application_controller/buttons.rb:419:in `group_button_add_save' /home/martin/Projects/manageiq-ui-classic/app/controllers/miq_ae_customization_controller.rb:427:in `group_button_add_save' /home/martin/Projects/manageiq-ui-classic/app/controllers/application_controller/buttons.rb:469:in `group_create_update' /home/martin/Projects/manageiq-ui-classic/app/controllers/application_controller/buttons.rb:86:in `group_create' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_controller/metal/basic_implicit_render.rb:4:in `send_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/abstract_controller/base.rb:186:in `process_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_controller/metal/rendering.rb:30:in `process_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/abstract_controller/callbacks.rb:20:in `block in process_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/activesupport-5.1.7/lib/active_support/callbacks.rb:131:in `run_callbacks' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/abstract_controller/callbacks.rb:19:in `process_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_controller/metal/rescue.rb:20:in `process_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_controller/metal/instrumentation.rb:32:in `block in process_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/activesupport-5.1.7/lib/active_support/notifications.rb:166:in `block in instrument' /home/martin/.rvm/gems/ruby-2.5.5/gems/activesupport-5.1.7/lib/active_support/notifications/instrumenter.rb:21:in `instrument' /home/martin/.rvm/gems/ruby-2.5.5/gems/activesupport-5.1.7/lib/active_support/notifications.rb:166:in `instrument' 
/home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_controller/metal/instrumentation.rb:30:in `process_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_controller/metal/params_wrapper.rb:252:in `process_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/activerecord-5.1.7/lib/active_record/railties/controller_runtime.rb:22:in `process_action' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/abstract_controller/base.rb:124:in `process' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionview-5.1.7/lib/action_view/rendering.rb:30:in `process' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_controller/metal.rb:189:in `dispatch' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_controller/metal.rb:253:in `dispatch' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_dispatch/routing/route_set.rb:49:in `dispatch' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_dispatch/routing/route_set.rb:31:in `serve' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_dispatch/journey/router.rb:50:in `block in serve' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_dispatch/journey/router.rb:33:in `each' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_dispatch/journey/router.rb:33:in `serve' /home/martin/.rvm/gems/ruby-2.5.5/gems/actionpack-5.1.7/lib/action_dispatch/routing/route_set.rb:844:in `call' /home/martin/.rvm/gems/ruby-2.5.5/bundler/gems/manageiq-graphql-b5602ca62a42/lib/manageiq/graphql/rest_api_proxy.rb:18:in `call' /home/martin/.rvm/gems/ruby-2.5.5/gems/meta_request-0.7.2/lib/meta_request/middlewares/app_request_handler.rb:13:in `call' /home/martin/.rvm/gems/ruby-2.5.5/gems/meta_request-0.7.2/lib/meta_request/middlewares/meta_request_handler.rb:13:in `call' /home/martin/.rvm/gems/ruby-2.5.5/gems/secure_headers-3.0.3/lib/secure_headers/middleware.rb:10:in `call' /home/martin/Projects/manageiq/lib/request_started_on_middleware.rb:12:in `call' ``` We need this fixed. And we need just one implementation of the editing. One that uses the API but is written in React. 
fyi: @ZitaNemeckova ",1,broken editing of custom buttons custom button editing is impemented for reasons that i really do not understand under automate in ruby under go in angular the ruby implementation in automate has bug when trying to add a custom button group for go i get d debug select custom buttons id from custom buttons where custom buttons id in d debug rollback f fatal error caught failed to save the record home martin rvm gems ruby gems activerecord lib active record persistence rb in save home martin rvm gems ruby gems activerecord lib active record validations rb in save home martin rvm gems ruby gems activerecord lib active record attribute methods dirty rb in save home martin rvm gems ruby gems activerecord lib active record transactions rb in block in save home martin rvm gems ruby gems activerecord lib active record transactions rb in block in with transaction returning status home martin rvm gems ruby gems activerecord lib active record connection adapters abstract database statements rb in block in transaction home martin rvm gems ruby gems activerecord lib active record connection adapters abstract transaction rb in block in within new transaction home martin rvm rubies ruby lib ruby monitor rb in mon synchronize home martin rvm gems ruby gems activerecord lib active record connection adapters abstract transaction rb in within new transaction home martin rvm gems ruby gems activerecord lib active record connection adapters abstract database statements rb in transaction home martin rvm gems ruby gems activerecord lib active record transactions rb in transaction home martin rvm gems ruby gems activerecord lib active record transactions rb in with transaction returning status home martin rvm gems ruby gems activerecord lib active record transactions rb in save home martin rvm gems ruby gems activerecord lib active record suppressor rb in save home martin projects manageiq ui classic app controllers application controller buttons rb in block in group button add save home martin projects manageiq ui classic app controllers application controller buttons rb in each home martin projects manageiq ui classic app controllers application controller buttons rb in each with index home martin projects manageiq ui classic app controllers application controller buttons rb in group button add save home martin projects manageiq ui classic app controllers miq ae customization controller rb in group button add save home martin projects manageiq ui classic app controllers application controller buttons rb in group create update home martin projects manageiq ui classic app controllers application controller buttons rb in group create home martin rvm gems ruby gems actionpack lib action controller metal basic implicit render rb in send action home martin rvm gems ruby gems actionpack lib abstract controller base rb in process action home martin rvm gems ruby gems actionpack lib action controller metal rendering rb in process action home martin rvm gems ruby gems actionpack lib abstract controller callbacks rb in block in process action home martin rvm gems ruby gems activesupport lib active support callbacks rb in run callbacks home martin rvm gems ruby gems actionpack lib abstract controller callbacks rb in process action home martin rvm gems ruby gems actionpack lib action controller metal rescue rb in process action home martin rvm gems ruby gems actionpack lib action controller metal instrumentation rb in block in process action home martin rvm gems ruby gems activesupport lib active 
support notifications rb in block in instrument home martin rvm gems ruby gems activesupport lib active support notifications instrumenter rb in instrument home martin rvm gems ruby gems activesupport lib active support notifications rb in instrument home martin rvm gems ruby gems actionpack lib action controller metal instrumentation rb in process action home martin rvm gems ruby gems actionpack lib action controller metal params wrapper rb in process action home martin rvm gems ruby gems activerecord lib active record railties controller runtime rb in process action home martin rvm gems ruby gems actionpack lib abstract controller base rb in process home martin rvm gems ruby gems actionview lib action view rendering rb in process home martin rvm gems ruby gems actionpack lib action controller metal rb in dispatch home martin rvm gems ruby gems actionpack lib action controller metal rb in dispatch home martin rvm gems ruby gems actionpack lib action dispatch routing route set rb in dispatch home martin rvm gems ruby gems actionpack lib action dispatch routing route set rb in serve home martin rvm gems ruby gems actionpack lib action dispatch journey router rb in block in serve home martin rvm gems ruby gems actionpack lib action dispatch journey router rb in each home martin rvm gems ruby gems actionpack lib action dispatch journey router rb in serve home martin rvm gems ruby gems actionpack lib action dispatch routing route set rb in call home martin rvm gems ruby bundler gems manageiq graphql lib manageiq graphql rest api proxy rb in call home martin rvm gems ruby gems meta request lib meta request middlewares app request handler rb in call home martin rvm gems ruby gems meta request lib meta request middlewares meta request handler rb in call home martin rvm gems ruby gems secure headers lib secure headers middleware rb in call home martin projects manageiq lib request started on middleware rb in call we need this fixed and we need just one implementation of the editing one that uses the api but is written in react fyi zitanemeckova ,1 340,5578235005.0,IssuesEvent,2017-03-28 11:51:59,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,opened,Value and selectedIndex are not valid in input event handler for dropdown select element,AREA: client SYSTEM: automations TYPE: bug,"### Are you requesting a feature or reporting a bug? Bug ### What is the current behavior? The `value` and `selectedIndex` properties of a dropdown select are not set in `input` event handler. ### What is the expected behavior? The `value` and `selectedIndex` properties should be changed according to the selected child when executing `input` event handler. ### How would you reproduce the current behavior (if this is a bug)? Run the test. #### Provide the test code and the tested page URL (if applicable) Tested page URL: Test code ```js ``` ### Specify your * operating system: Windows 10 * testcafe version: 0.14.0-alpha6 * node.js version: 7.4.0",1.0,"Value and selectedIndex are not valid in input event handler for dropdown select element - ### Are you requesting a feature or reporting a bug? Bug ### What is the current behavior? The `value` and `selectedIndex` properties of a dropdown select are not set in `input` event handler. ### What is the expected behavior? The `value` and `selectedIndex` properties should be changed according to the selected child when executing `input` event handler. ### How would you reproduce the current behavior (if this is a bug)? Run the test. 
#### Provide the test code and the tested page URL (if applicable) Tested page URL: Test code ```js ``` ### Specify your * operating system: Windows 10 * testcafe version: 0.14.0-alpha6 * node.js version: 7.4.0",1,value and selectedindex are not valid in input event handler for dropdown select element are you requesting a feature or reporting a bug bug what is the current behavior the value and selectedindex properties of a dropdown select are not set in input event handler what is the expected behavior the value and selectedindex properties should be changed according to the selected child when executing input event handler how would you reproduce the current behavior if this is a bug run the test provide the test code and the tested page url if applicable tested page url test code js specify your operating system windows testcafe version node js version ,1 2340,11777472869.0,IssuesEvent,2020-03-16 14:52:29,coolOrangeLabs/powerGateTemplate,https://api.github.com/repos/coolOrangeLabs/powerGateTemplate,closed,Fragebogen: Stückliste - Materialanlage,DE PGClient_Automation,"## Frage Kann ein Material ohne Benutzereingaben angelegt werden? | Github Issue | Reihenfolge | Abschätzung | | - | - | - | | # | 1. | ?h |",1.0,"Fragebogen: Stückliste - Materialanlage - ## Frage Kann ein Material ohne Benutzereingaben angelegt werden? | Github Issue | Reihenfolge | Abschätzung | | - | - | - | | # | 1. | ?h |",1,fragebogen stückliste materialanlage frage kann ein material ohne benutzereingaben angelegt werden github issue reihenfolge abschätzung h ,1 2996,12961359793.0,IssuesEvent,2020-07-20 15:33:59,jcallaghan/home-assistant-config,https://api.github.com/repos/jcallaghan/home-assistant-config,reopened,Car left unlocked reminder 🔓🔑🚗,integration: automation integration: volvo on call task: routine,"# Objective I often forget to lock my car. While I have a very connected car I deliberately chose not to have keyless entry due to the high level of car theft in and around my home. The [Volvo on call integration](https://www.home-assistant.io/integrations/volvooncall/) for Home Assistant is very extensive and provides me with a lot of information that I have already used to drive automations such as my ""you need to leave"" reminder. The Volvo on-call app provides a push notification to say the car is unlocked but I have noticed this is typically when the car has been unlocked and no door has been opened rather than left open (I should test when this happens). Using the integration I plan to create an automation to notify me and avoid me leaving my car open. * Car must be is home. Leverage Volvo on-call device track or Tile tracker for this. * Use alert integration rather than the notification integration. * Leverage actionable alerts and TTS. * Use the doors and windows open entity from the Volvo on call integration. # Testing * Test if this it is possible to close windows and sunroof.",1.0,"Car left unlocked reminder 🔓🔑🚗 - # Objective I often forget to lock my car. While I have a very connected car I deliberately chose not to have keyless entry due to the high level of car theft in and around my home. The [Volvo on call integration](https://www.home-assistant.io/integrations/volvooncall/) for Home Assistant is very extensive and provides me with a lot of information that I have already used to drive automations such as my ""you need to leave"" reminder. 
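As a rough illustration of the state check this reminder would need, a minimal Python sketch against Home Assistant's standard /api/states REST endpoint; the entity id and token below are placeholders, not confirmed Volvo On Call entity names:
```python
import json
import os
import urllib.request

def get_state(base_url, entity_id, token):
    # GET /api/states/<entity_id> returns the entity's current state as JSON.
    req = urllib.request.Request(
        f'{base_url}/api/states/{entity_id}',
        headers={'Authorization': f'Bearer {token}'},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode()).get('state')

# if get_state('http://homeassistant.local:8123', 'lock.volvo',
#              os.environ['HASS_TOKEN']) == 'unlocked':
#     pass  # fire the reminder / actionable alert here
```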
The Volvo on-call app provides a push notification to say the car is unlocked but I have noticed this is typically when the car has been unlocked and no door has been opened rather than left open (I should test when this happens). Using the integration I plan to create an automation to notify me and avoid me leaving my car open. * Car must be at home. Leverage Volvo on-call device track or Tile tracker for this. * Use alert integration rather than the notification integration. * Leverage actionable alerts and TTS. * Use the doors and windows open entity from the Volvo on call integration. # Testing * Test if it is possible to close windows and sunroof.",1.0,"Car left unlocked reminder 🔓🔑🚗 - # Objective I often forget to lock my car. While I have a very connected car I deliberately chose not to have keyless entry due to the high level of car theft in and around my home. The [Volvo on call integration](https://www.home-assistant.io/integrations/volvooncall/) for Home Assistant is very extensive and provides me with a lot of information that I have already used to drive automations such as my ""you need to leave"" reminder. 
The Volvo on-call app provides a push notification to say the car is unlocked but I have noticed this is typically when the car has been unlocked and no door has been opened rather than left open (I should test when this happens). Using the integration I plan to create an automation to notify me and avoid me leaving my car open. * Car must be at home. Leverage Volvo on-call device track or Tile tracker for this. * Use alert integration rather than the notification integration. * Leverage actionable alerts and TTS. * Use the doors and windows open entity from the Volvo on call integration. # Testing * Test if it is possible to close windows and sunroof.",1,car left unlocked reminder 🔓🔑🚗 objective i often forget to lock my car while i have a very connected car i deliberately chose not to have keyless entry due to the high level of car theft in and around my home the for home assistant is very extensive and provides me with a lot of information that i have already used to drive automations such as my you need to leave reminder the volvo on call app provides a push notification to say the car is unlocked but i have noticed this is typically when the car has been unlocked and no door has been opened rather than left open i should test when this happens using the integration i plan to create an automation to notify me and avoid me leaving my car open car must be at home leverage volvo on call device track or tile tracker for this use alert integration rather than the notification integration leverage actionable alerts and tts use the doors and windows open entity from the volvo on call integration testing test if it is possible to close windows and sunroof ,1 209196,16178132429.0,IssuesEvent,2021-05-03 10:17:15,kubewarden/docs,https://api.github.com/repos/kubewarden/docs,closed,Update the architecture docs: talk about OCI registries,documentation,The charts on the page have been updated to also show the involvement of the OCI registry. 
The text should be updated to reflect that.,1.0,Update the architecture docs: talk about OCI registries - The charts on the page have been updated to also show the involvement of the OCI registry. 
The text should be updated to reflect that.,0,update the architecture docs talk about oci registries the charts on the page have been updated to also show the involvement of the oci registry the text should be updated to reflect that ,0 1533,10298886683.0,IssuesEvent,2019-08-28 12:20:47,plan-player-analytics/Plan,https://api.github.com/repos/plan-player-analytics/Plan,closed,Configure MySQL for Jenkins,Automation,"### Is your feature request related to a problem? Please describe. Moving CI to another service from Travis is in progress (#926) - It is being moved to a Jenkins server. 
MySQL stuff needs some new environment variables. ### I would like to be able to.. Run MySQL Tests on Jenkins ### Todo - [x] Install MySQL on Jenkins server - [x] Remove `IS_CI` env variable usage, replace with MySQL info environment variables. - [x] Add environment variables to Jenkins pipeline",1.0,"Configure MySQL for Jenkins - ### Is your feature request related to a problem? Please describe. Moving CI to another service from Travis is in progress (#926) - It is being moved to a Jenkins server. 
MySQL stuff needs some new environment variables. ### I would like to be able to.. Run MySQL Tests on Jenkins ### Todo - [x] Install MySQL on Jenkins server - [x] Remove `IS_CI` env variable usage, replace with MySQL info environment variables. - [x] Add environment variables to Jenkins pipeline",1,configure mysql for jenkins is your feature request related to a problem please describe moving ci to another service from travis is in progress it is being moved to a jenkins server mysql stuff needs some new environment variables i would like to be able to run mysql tests on jenkins todo install mysql on jenkins server remove is ci env variable usage replace with mysql info environment variables add environment variables to jenkins pipeline,1 177821,29146991392.0,IssuesEvent,2023-05-18 04:33:34,zynaddsubfx/zyn-fusion-issues,https://api.github.com/repos/zynaddsubfx/zyn-fusion-issues,closed,Toggle button labels,enhancement design-choice,"This is a very minor but old peeve of mine: A few of the toggle buttons in Zyn have labels which don't actually indicate what the button does when it is depressed/illuminated. For instance, the ADSR envelopes have a LIN/LOG button, but that just says that it toggles between those two, not which is the enabled or disabled value. So IMHO, the label should be LIN ENV or something, so that when it is enabled, it indicates that the envelope is linear. Of course this doesn't explain what the alternative value is, but I think it's more important given the choice to clearly indicate what the enabled value is - the alternative value is either fairly obvious or readable in the documentation. Similarly, the portamento TR.TYPE button should be labelled >= THRESH or something similar to indicate what it actually does when enabled. 
The corresponding tooltips should also be updated accordingly.",1.0,"Toggle button labels - This is a very minor but old peeve of mine: A few of the toggle buttons in Zyn have labels which don't actually indicate what the button does when it is depressed/illuminated. For instance, the ADSR envelopes have a LIN/LOG button, but that just says that it toggles between those two, not which is the enabled or disabled value. So IMHO, the label should be LIN ENV or something, so that when it is enabled, it indicates that the envelope is linear. Of course this doesn't explain what the alternative value is, but I think it's more important given the choice to clearly indicate what the enabled value is - the alternative value is either fairly obvious or readable in the documentation. Similarily, the portamento TR.TYPE button should be labelled >= THRESH or something similar to indicate what is actually does when enabled. The corresponding tooltips should also be updated accordingly.",0,toggle button labels this is a very minor but old peeve of mine a few of the toggle buttons in zyn have labels which don t actually indicate what the button does when it is depressed illuminated for instance the adsr envelopes have a lin log button but that just says that it toggles between those two not which is the enabled or disabled value so imho the label should be lin env or something so that when it is enabled it indicates that the envelope is linear of course this doesn t explain what the alternative value is but i think it s more important given the choice to clearly indicate what the enabled value is the alternative value is either fairly obvious or readable in the documentation similarily the portamento tr type button should be labelled thresh or something similar to indicate what is actually does when enabled the corresponding tooltips should also be updated accordingly ,0 128297,27233867273.0,IssuesEvent,2023-02-21 15:02:30,guardicore/monkey,https://api.github.com/repos/guardicore/monkey,opened,Enable --experimental-string-processing on black,Feature Beginner friendly Impact: Low Complexity: Low Code Quality," **Is your feature request related to a problem? Please describe.** We have to manually fixup [this issue](https://github.com/psf/black/issues/2737). Use `--experimental-string-processing` flag on black to fix it. **Describe alternatives you've considered** Waiting until [this](https://github.com/psf/black/issues/2188) gets merged and updating ",1.0,"Enable --experimental-string-processing on black - **Is your feature request related to a problem? Please describe.** We have to manually fixup [this issue](https://github.com/psf/black/issues/2737). Use `--experimental-string-processing` flag on black to fix it. **Describe alternatives you've considered** Waiting until [this](https://github.com/psf/black/issues/2188) gets merged and updating ",0,enable experimental string processing on black thank you for suggesting an idea to make infection monkey better please fill in as much of the template below as you re able is your feature request related to a problem please describe we have to manually fixup use experimental string processing flag on black to fix it describe alternatives you ve considered waiting until gets merged and updating ,0 690270,23652938984.0,IssuesEvent,2022-08-26 08:30:40,kubernetes/ingress-nginx,https://api.github.com/repos/kubernetes/ingress-nginx,closed,"ingress-nginx-admission do not work. 
log show ""got secret, but it did not contain a 'ca' key""",kind/bug needs-triage needs-priority," ![image](https://user-images.githubusercontent.com/39783760/186818462-adb91566-959c-4862-aec7-2b9cda8ceb0a.png) ```bash [root@node136 tmp.9n5adjQhNn]# kubectl logs -f -n ingress-nginx ingress-nginx-admission-create-855fz W0826 04:01:30.263082 1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. {""level"":""fatal"",""msg"":""got secret, but it did not contain a 'ca' key"",""source"":""k8s/k8s.go:237"",""time"":""2022-08-26T04:01:30Z""} ```",1.0,"ingress-nginx-admission do not work. log show ""got secret, but it did not contain a 'ca' key"" - ![image](https://user-images.githubusercontent.com/39783760/186818462-adb91566-959c-4862-aec7-2b9cda8ceb0a.png) ```bash [root@node136 tmp.9n5adjQhNn]# kubectl logs -f -n ingress-nginx ingress-nginx-admission-create-855fz W0826 04:01:30.263082 1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. {""level"":""fatal"",""msg"":""got secret, but it did not contain a 'ca' key"",""source"":""k8s/k8s.go:237"",""time"":""2022-08-26T04:01:30Z""} ```",0,ingress nginx admission do not work log show got secret but it did not contain a ca key bash kubectl logs f n ingress nginx ingress nginx admission create client config go neither kubeconfig nor master was specified using the inclusterconfig this might not work level fatal msg got secret but it did not contain a ca key source go time ,0 6664,23675009366.0,IssuesEvent,2022-08-28 00:17:05,home-assistant/home-assistant.io,https://api.github.com/repos/home-assistant/home-assistant.io,closed,"""Settings"" is called ""Configuration"" in my home assistant.",Stale automation,"### Feedback Name change? Older version? Confusing for a beginner. ### URL https://www.home-assistant.io/docs/automation/editor/ ### Version 2022.6.6 ### Additional information _No response_",1.0,"""Settings"" is called ""Configuration"" in my home assistant. - ### Feedback Name change? Older version? Confusing for a beginner. ### URL https://www.home-assistant.io/docs/automation/editor/ ### Version 2022.6.6 ### Additional information _No response_",1, settings is called configuration in my home assistant feedback name change older version confusing for a beginner url version additional information no response ,1 142001,13003859624.0,IssuesEvent,2020-07-24 07:45:40,abpframework/abp,https://api.github.com/repos/abpframework/abp,opened,Complete the Book to Author Relation Angular UI for the web application tutorial,documentation effort-2 ui-angular,"I currently didn't write it. We are changing ""service-proxy"" creation to angular schematics. I will complete it once we finish this.",1.0,"Complete the Book to Author Relation Angular UI for the web application tutorial - I currently didn't write it. We are changing ""service-proxy"" creation to angular schematics. I will complete it once we finish this.",0,complete the book to author relation angular ui for the web application tutorial i currently didn t write it we are changing service proxy creation to angular schematics i will complete it once we finish this ,0 52319,10821869627.0,IssuesEvent,2019-11-08 19:45:13,Regalis11/Barotrauma,https://api.github.com/repos/Regalis11/Barotrauma,closed,Placement of a water sensor (or other components) during gameplay,Code Feature request,"When placing a water sensor, there is no way to place it back on the floor of the sub. 
When your character is crouched, the water sensor can be placed slightly above the ground, but not on it. This is an issue because water will still remain if the pump is triggered by a water sensor.",1.0,"Placement of a water sensor (or other components) during gameplay - When placing a water sensor, there is no way to place it back on the floor of the sub. When your character is crouched, the water sensor can be placed slightly above the ground, but not on it. This is an issue because water will still remain if the pump is triggered by a water sensor.",0,placement of a water sensor or other components during gameplay when placing a water sensor there is no way to place it back on the floor of the sub when your character is crouched the water sensor can be placed slightly above the ground but not on it this is an issue because water will still remain if the pump is triggered by a water sensor ,0 202195,15822434469.0,IssuesEvent,2021-04-05 22:17:28,marksmoore/robot-gladiators,https://api.github.com/repos/marksmoore/robot-gladiators,closed,Initial game functionality - MVP,documentation,"Title: Initial game functionality - MVP **Description** _Must Have_ - Build a game where a player's robot can fight another robot until one of them loses. - If the enemy-robot loses first, the player's robot will move on to fight another enemy-robot. _Features_ - The player's robot's name can be dynamically created by the player through the browser. - The player is given the option to skip the fight by paying a penalty fee, or continue with the fight.",1.0,"Initial game functionality - MVP - Title: Initial game functionality - MVP **Description** _Must Have_ - Build a game where a player's robot can fight another robot until one of them loses. - If the enemy-robot loses first, the player's robot will move on to fight another enemy-robot. _Features_ - The player's robot's name can be dynamically created by the player through the browser. - The player is given the option to skip the fight by paying a penalty fee, or continue with the fight.",0,initial game functionality mvp title initial game functionality mvp description must have build a game where a player s robot can fight another robot until one of them loses if the enemy robot loses first the player s robot will move on to fight another enemy robot features the player s robot s name can be dynamically created by the player through the browser the player is given the option to skip the fight by paying a penalty fee or continue with the fight ,0 307487,23201923605.0,IssuesEvent,2022-08-01 22:38:11,fga-eps-mds/2022-1-PokeRanking,https://api.github.com/repos/fga-eps-mds/2022-1-PokeRanking,closed,Criar o Roadmap,documentation,"## Descrição Realizar a criação do Roadmap da Release 1 do projeto. ## Tarefas - [x] Escolher um dos templates decididos em #12 - [x] Realizar a criação do Roadmap para a Release 1. ## Critérios de Aceitação - [x] Subir o Roadmap finalizado para o repositório. ## Informação adicional O Figma pode ser utilizado como ferramenta de criação.",1.0,"Criar o Roadmap - ## Descrição Realizar a criação do Roadmap da Release 1 do projeto. ## Tarefas - [x] Escolher um dos templates decididos em #12 - [x] Realizar a criação do Roadmap para a Release 1. ## Critérios de Aceitação - [x] Subir o Roadmap finalizado para o repositório. 
## Informação adicional O Figma pode ser utilizado como ferramenta de criação.",0,criar o roadmap descrição realizar a criação do roadmap da release do projeto tarefas escolher um dos templates decididos em realizar a criação do roadmap para a release critérios de aceitação subir o roadmap finalizado para o repositório informação adicional o figma pode ser utilizado como ferramenta de criação ,0 296206,22292411891.0,IssuesEvent,2022-06-12 15:03:02,Praneeth-rdy/static-build-export-action,https://api.github.com/repos/Praneeth-rdy/static-build-export-action,opened,"Document the usage, contributing guidelines, etc",documentation,"## TODO - Update the README with the project's description, usage and an entry point to the contributing guidelines. - Add the relevant files with contributing guidelines and practices.",1.0,"Document the usage, contributing guidelines, etc - ## TODO - Update the README with the project's description, usage and an entry point to the contributing guidelines. - Add the relevant files with contributing guidelines and practices.",0,document the usage contributing guidelines etc todo update the readme with the project s description usage and an entry point to the contributing guidelines add the relevant files with contributing guidelines and practices ,0 394092,11631801569.0,IssuesEvent,2020-02-28 02:40:29,kubernetes/website,https://api.github.com/repos/kubernetes/website,closed,Translate docs/tasks/access-application-cluster/port-forward-access-application-cluster.md in Korean,language/ko priority/backlog,"**This is a Feature Request** **What would you like to be added** Translate docs/tasks/access-application-cluster/port-forward-access-application-cluster.md in Korean **Why is this needed** No translation exsist in Korean on Translate docs/tasks/access-application-cluster/port-forward-access-application-cluster.md **Comments** Page to update: https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/ /language ko /assign /priority backlog ",1.0,"Translate docs/tasks/access-application-cluster/port-forward-access-application-cluster.md in Korean - **This is a Feature Request** **What would you like to be added** Translate docs/tasks/access-application-cluster/port-forward-access-application-cluster.md in Korean **Why is this needed** No translation exsist in Korean on Translate docs/tasks/access-application-cluster/port-forward-access-application-cluster.md **Comments** Page to update: https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/ /language ko /assign /priority backlog ",0,translate docs tasks access application cluster port forward access application cluster md in korean this is a feature request what would you like to be added translate docs tasks access application cluster port forward access application cluster md in korean why is this needed no translation exsist in korean on translate docs tasks access application cluster port forward access application cluster md comments page to update language ko assign priority backlog ,0 627379,19903267391.0,IssuesEvent,2022-01-25 10:09:41,zephyrproject-rtos/zephyr,https://api.github.com/repos/zephyrproject-rtos/zephyr,opened,efr32mg_sltb004a: Build issue on `tests/drivers/spi/spi_loopback/drivers.spi.loopback`,bug priority: medium,"**Describe the bug** CI is failing on this board due to the following error: ``` /__w/zephyr/zephyr/drivers/spi/spi_gecko.c /__w/zephyr/zephyr/drivers/spi/spi_gecko.c: In function 'spi_gecko_xfer': 
/__w/zephyr/zephyr/drivers/spi/spi_gecko.c:176:25: error: redefinition of 'data' 176 | struct spi_gecko_data *data = dev->data; | ^~~~ /__w/zephyr/zephyr/drivers/spi/spi_gecko.c:173:25: note: previous definition of 'data' was here 173 | struct spi_gecko_data *data = dev->data; | ^~~~ ``` **Impact** Blocks CI, cf https://github.com/zephyrproject-rtos/zephyr/runs/4934340081?check_suite_focus=true ",1.0,"efr32mg_sltb004a: Build issue on `tests/drivers/spi/spi_loopback/drivers.spi.loopback` - **Describe the bug** CI is failing on this board due to the following error: ``` /__w/zephyr/zephyr/drivers/spi/spi_gecko.c /__w/zephyr/zephyr/drivers/spi/spi_gecko.c: In function 'spi_gecko_xfer': /__w/zephyr/zephyr/drivers/spi/spi_gecko.c:176:25: error: redefinition of 'data' 176 | struct spi_gecko_data *data = dev->data; | ^~~~ /__w/zephyr/zephyr/drivers/spi/spi_gecko.c:173:25: note: previous definition of 'data' was here 173 | struct spi_gecko_data *data = dev->data; | ^~~~ ``` **Impact** Blocks CI, cf https://github.com/zephyrproject-rtos/zephyr/runs/4934340081?check_suite_focus=true ",0, build issue on tests drivers spi spi loopback drivers spi loopback describe the bug ci is failing on this board due to the following error w zephyr zephyr drivers spi spi gecko c w zephyr zephyr drivers spi spi gecko c in function spi gecko xfer w zephyr zephyr drivers spi spi gecko c error redefinition of data struct spi gecko data data dev data w zephyr zephyr drivers spi spi gecko c note previous definition of data was here struct spi gecko data data dev data impact blocks ci cf ,0 1880,11027303449.0,IssuesEvent,2019-12-06 09:09:37,polycube-network/polycube,https://api.github.com/repos/polycube-network/polycube,closed,Automate all the manual test procedure,automation,"Also, we might want to think of automating this procedure, rather than manual. _Originally posted by @acloudiator in https://github.com/polycube-network/polycube/pull/160#issuecomment-506518016_",1.0,"Automate all the manual test procedure - Also, we might want to think of automating this procedure, rather than manual. _Originally posted by @acloudiator in https://github.com/polycube-network/polycube/pull/160#issuecomment-506518016_",1,automate all the manual test procedure also we might want to think of automating this procedure rather than manual originally posted by acloudiator in ,1 131477,12485350610.0,IssuesEvent,2020-05-30 19:15:48,JuliaMolSim/DFTK.jl,https://api.github.com/repos/JuliaMolSim/DFTK.jl,closed,Improve theory and background documentation,documentation,"Improve on the theory tex files and related documentation. Maybe it is possible to automatically build the pdfs or integrate this into documenter easily.",1.0,"Improve theory and background documentation - Improve on the theory tex files and related documentation. Maybe it is possible to automatically build the pdfs or integrate this into documenter easily.",0,improve theory and background documentation improve on the theory tex files and related documentation maybe it is possible to automatically build the pdfs or integrate this into documenter easily ,0 603610,18669834163.0,IssuesEvent,2021-10-30 13:50:52,AY2122S1-CS2103T-W17-4/tp,https://api.github.com/repos/AY2122S1-CS2103T-W17-4/tp,closed,[PE-D] Find Validation Output,priority.Medium bug.FeatureFlaw,"Sample `find d/2021-10-10` given in UG under section 4.5 does not work. Furthermore, the use of tag is not documented here. 
![image.png](https://raw.githubusercontent.com/rohit0718/ped/main/files/0a5454c3-8b0d-4b11-b3b9-0a9148db4c68.png) ------------- Labels: `type.FeatureFlaw` `severity.Medium` original: rohit0718/ped#10",1.0,"[PE-D] Find Validation Output - Sample `find d/2021-10-10` given in UG under section 4.5 does not work. Furthermore, the use of tag is not documented here. ![image.png](https://raw.githubusercontent.com/rohit0718/ped/main/files/0a5454c3-8b0d-4b11-b3b9-0a9148db4c68.png) ------------- Labels: `type.FeatureFlaw` `severity.Medium` original: rohit0718/ped#10",0, find validation output sample find d given in ug under section does not work furthermore the use of tag is not documented here labels type featureflaw severity medium original ped ,0 117638,11951903150.0,IssuesEvent,2020-04-03 17:45:16,saros-project/saros,https://api.github.com/repos/saros-project/saros,opened,Page redesign issues and design notes,Area: Documentation,"This all related to the prototype for the new page layout, available [here](https://m273d15.github.io/saros/) (but probably only temporarily). #### Issues I have found found so far: - [ ] The navbar on the left no longer scrolls with the view but rather remains at the top of the page. This makes it harder to navigate the site as we have quite a few long pages. - [ ] The titlebar no longer scrolls with the view but rather remains at the top of the page. This makes it harder to navigate the site as we have quite a few long pages. - [ ] Then entry 'Continuous Integration' is not highlighted in the nav-bar when selected (see [here](https://m273d15.github.io/saros/contribute/processes/continuous-integration.html)). - [ ] Clicking the Saros logo in the header leads to a 404 (see [here](https://m273d15.github.io/)). - [ ] On the ""Releases"" landing page, the first entry of the navbar (named ""Get Saros"") is highlighted (see [here](https://m273d15.github.io/saros/releases/)). This seems to be the only category where this happens. - [ ] In the ""Releases"" category, the entry ""Get Saros"" is highlighted in the navbar, even when we are actually browsing one of the release notes pages (see [here](https://m273d15.github.io/saros/releases/saros-i_0.2.1.html)). - [ ] Scrolling the section out of view removes the box from the selected IDE tab (see [here](https://m273d15.github.io/saros/documentation/installation.html?tab=intellij#from-disk)). - [ ] Scrolling the section out of view removes the highlighting from the current navbar entry (see [here](https://m273d15.github.io/saros/documentation/installation.html?tab=eclipse#as-dropin)). - [ ] The size of the TOC changes depending on the position on the page. It is quite small when at the top of the page but then stretches to the border of the page when starting to scroll (see [here](https://m273d15.github.io/saros/documentation/faq.html)). - [ ] The ""Support"" category displays the navbar of the category ""Contribute"" (see [here](https://m273d15.github.io/saros/support/)). - [ ] The ""Research"" category displays the navbar of the category ""Docs"" (see [here](https://m273d15.github.io/saros/research/)). - [ ] The page no longer sets a page icon displayed in the browser tab. (The current page uses the Saros icon.) #### Notes on the design: - The position of the Saros logo in the header bar seem a bit weird to me. Previously, it lined up with the navbar. Now, it looks so ""disconnected"". - With the navbar being so close to the content, it is harder to differentiate them. 
Especially with navbar entries that can be unfolded this looks weird as the arrow is so close to the content (see [here](https://m273d15.github.io/saros/releases/)). In general, there is a lot of unused space on the left and right edge. - The contact info is only visible at the bottom of the page, making it harder to find on longer pages that can't entirely be displayed at once (see [here](https://m273d15.github.io/saros/contribute/best-practices.html)). - The highlighting of the sections in the TOC on the right is nice but has some weird side-effects (see below). So if it is to hard to fix/make consistent, I would suggest just dropping it. - At the top of the page, no entry is highlighted. - At the bottom of some pages (probably tabbed ones), the first displayed entry is highlighted (see [here](https://m273d15.github.io/saros/documentation/installation.html?tab=eclipse#good-to-know)), even if another section is specifically selected by clicking the TOC. - On the other pages (see [troubleshooting page](https://m273d15.github.io/saros/documentation/troubleshooting.html)), the TOC highlighting jumps directly from ""Editing > Network Issues"" to ""Known Issues > About Eclipse Plugins"". It seems like the last entry is always highlighted in the TOC when at the bottom of the page. - Renaming the category ""Docs"" into something more descriptive (like ""User Documentation"", as previously displayed on the landing page) might be nice, especially since it is now one of the main ways to navigate the site from the landing page. - Having the ""Host"" field on the ""Docs"" landing page and ""Getting Started"" page seems sensible, but with the current color scheme, it looks like a serious warning instead of a ""good to know"" fact. Maybe use a different highlighting color instead of yellow? - I am not a fan of the listing of the awareness information on the [landing page](https://m273d15.github.io/saros/). The layout for selection and contribution annotations would be ok, but for the viewport annotations, it just looks super weird to have such a long text over a tiny, vertically stretched picture. But I don't know hot to improve it. - The Saros UI contains more elements than the annotations mentioned on the [landing page](https://m273d15.github.io/saros/). These other elements (the Saros view and its components as well as the project view highlights) are now never mentioned on the page. Previously, they were described in the screenshot section. So i would suggest extending the information on the landing page or re-adding the screenshots (or, even better, adding current screenshots) and their description. - The category entries in the titlebar are underlined when moused over. This is not a big deal but it does not match the rest of the page styling. I would suggest to remove it. - Just to mention it, I consciously moved the order of the contact entries around to put twitter at the end as it is very rarely used (basically only for announcements). Furthermore, I wanted to minimize the amount of user asking for help for technical issues over twitter. But it isn't that important. - As an idea: To improve usability of the ""Getting started"" section, we could add gifs for all explained stuff (e.g. how to add a contact, how to start a session, etc.). But this would take a lot of work, so I can understand if you don't want to do it as part of this update. 
",1.0,"Page redesign issues and design notes - This all related to the prototype for the new page layout, available [here](https://m273d15.github.io/saros/) (but probably only temporarily). #### Issues I have found found so far: - [ ] The navbar on the left no longer scrolls with the view but rather remains at the top of the page. This makes it harder to navigate the site as we have quite a few long pages. - [ ] The titlebar no longer scrolls with the view but rather remains at the top of the page. This makes it harder to navigate the site as we have quite a few long pages. - [ ] Then entry 'Continuous Integration' is not highlighted in the nav-bar when selected (see [here](https://m273d15.github.io/saros/contribute/processes/continuous-integration.html)). - [ ] Clicking the Saros logo in the header leads to a 404 (see [here](https://m273d15.github.io/)). - [ ] On the ""Releases"" landing page, the first entry of the navbar (named ""Get Saros"") is highlighted (see [here](https://m273d15.github.io/saros/releases/)). This seems to be the only category where this happens. - [ ] In the ""Releases"" category, the entry ""Get Saros"" is highlighted in the navbar, even when we are actually browsing one of the release notes pages (see [here](https://m273d15.github.io/saros/releases/saros-i_0.2.1.html)). - [ ] Scrolling the section out of view removes the box from the selected IDE tab (see [here](https://m273d15.github.io/saros/documentation/installation.html?tab=intellij#from-disk)). - [ ] Scrolling the section out of view removes the highlighting from the current navbar entry (see [here](https://m273d15.github.io/saros/documentation/installation.html?tab=eclipse#as-dropin)). - [ ] The size of the TOC changes depending on the position on the page. It is quite small when at the top of the page but then stretches to the border of the page when starting to scroll (see [here](https://m273d15.github.io/saros/documentation/faq.html)). - [ ] The ""Support"" category displays the navbar of the category ""Contribute"" (see [here](https://m273d15.github.io/saros/support/)). - [ ] The ""Research"" category displays the navbar of the category ""Docs"" (see [here](https://m273d15.github.io/saros/research/)). - [ ] The page no longer sets a page icon displayed in the browser tab. (The current page uses the Saros icon.) #### Notes on the design: - The position of the Saros logo in the header bar seem a bit weird to me. Previously, it lined up with the navbar. Now, it looks so ""disconnected"". - With the navbar being so close to the content, it is harder to differentiate them. Especially with navbar entries that can be unfolded this looks weird as the arrow is so close to the content (see [here](https://m273d15.github.io/saros/releases/)). In general, there is a lot of unused space on the left and right edge. - The contact info is only visible at the bottom of the page, making it harder to find on longer pages that can't entirely be displayed at once (see [here](https://m273d15.github.io/saros/contribute/best-practices.html)). - The highlighting of the sections in the TOC on the right is nice but has some weird side-effects (see below). So if it is to hard to fix/make consistent, I would suggest just dropping it. - At the top of the page, no entry is highlighted. 
- At the bottom of some pages (probably tabbed ones), the first displayed entry is highlighted (see [here](https://m273d15.github.io/saros/documentation/installation.html?tab=eclipse#good-to-know)), even if another section is specifically selected by clicking the TOC. - On the other pages (see [troubleshooting page](https://m273d15.github.io/saros/documentation/troubleshooting.html)), the TOC highlighting jumps directly from ""Editing > Network Issues"" to ""Known Issues > About Eclipse Plugins"". It seems like the last entry is always highlighted in the TOC when at the bottom of the page. - Renaming the category ""Docs"" into something more descriptive (like ""User Documentation"", as previously displayed on the landing page) might be nice, especially since it is now one of the main ways to navigate the site from the landing page. - Having the ""Host"" field on the ""Docs"" landing page and ""Getting Started"" page seems sensible, but with the current color scheme, it looks like a serious warning instead of a ""good to know"" fact. Maybe use a different highlighting color instead of yellow? - I am not a fan of the listing of the awareness information on the [landing page](https://m273d15.github.io/saros/). The layout for selection and contribution annotations would be ok, but for the viewport annotations, it just looks super weird to have such a long text over a tiny, vertically stretched picture. But I don't know hot to improve it. - The Saros UI contains more elements than the annotations mentioned on the [landing page](https://m273d15.github.io/saros/). These other elements (the Saros view and its components as well as the project view highlights) are now never mentioned on the page. Previously, they were described in the screenshot section. So i would suggest extending the information on the landing page or re-adding the screenshots (or, even better, adding current screenshots) and their description. - The category entries in the titlebar are underlined when moused over. This is not a big deal but it does not match the rest of the page styling. I would suggest to remove it. - Just to mention it, I consciously moved the order of the contact entries around to put twitter at the end as it is very rarely used (basically only for announcements). Furthermore, I wanted to minimize the amount of user asking for help for technical issues over twitter. But it isn't that important. - As an idea: To improve usability of the ""Getting started"" section, we could add gifs for all explained stuff (e.g. how to add a contact, how to start a session, etc.). But this would take a lot of work, so I can understand if you don't want to do it as part of this update. 
",0,page redesign issues and design notes this all related to the prototype for the new page layout available but probably only temporarily issues i have found found so far the navbar on the left no longer scrolls with the view but rather remains at the top of the page this makes it harder to navigate the site as we have quite a few long pages the titlebar no longer scrolls with the view but rather remains at the top of the page this makes it harder to navigate the site as we have quite a few long pages then entry continuous integration is not highlighted in the nav bar when selected see clicking the saros logo in the header leads to a see on the releases landing page the first entry of the navbar named get saros is highlighted see this seems to be the only category where this happens in the releases category the entry get saros is highlighted in the navbar even when we are actually browsing one of the release notes pages see scrolling the section out of view removes the box from the selected ide tab see scrolling the section out of view removes the highlighting from the current navbar entry see the size of the toc changes depending on the position on the page it is quite small when at the top of the page but then stretches to the border of the page when starting to scroll see the support category displays the navbar of the category contribute see the research category displays the navbar of the category docs see the page no longer sets a page icon displayed in the browser tab the current page uses the saros icon notes on the design the position of the saros logo in the header bar seem a bit weird to me previously it lined up with the navbar now it looks so disconnected with the navbar being so close to the content it is harder to differentiate them especially with navbar entries that can be unfolded this looks weird as the arrow is so close to the content see in general there is a lot of unused space on the left and right edge the contact info is only visible at the bottom of the page making it harder to find on longer pages that can t entirely be displayed at once see the highlighting of the sections in the toc on the right is nice but has some weird side effects see below so if it is to hard to fix make consistent i would suggest just dropping it at the top of the page no entry is highlighted at the bottom of some pages probably tabbed ones the first displayed entry is highlighted see even if another section is specifically selected by clicking the toc on the other pages see the toc highlighting jumps directly from editing network issues to known issues about eclipse plugins it seems like the last entry is always highlighted in the toc when at the bottom of the page renaming the category docs into something more descriptive like user documentation as previously displayed on the landing page might be nice especially since it is now one of the main ways to navigate the site from the landing page having the host field on the docs landing page and getting started page seems sensible but with the current color scheme it looks like a serious warning instead of a good to know fact maybe use a different highlighting color instead of yellow i am not a fan of the listing of the awareness information on the the layout for selection and contribution annotations would be ok but for the viewport annotations it just looks super weird to have such a long text over a tiny vertically stretched picture but i don t know hot to improve it the saros ui contains more elements than the annotations mentioned on 
the these other elements the saros view and its components as well as the project view highlights are now never mentioned on the page previously they were described in the screenshot section so i would suggest extending the information on the landing page or re adding the screenshots or even better adding current screenshots and their description the category entries in the titlebar are underlined when moused over this is not a big deal but it does not match the rest of the page styling i would suggest to remove it just to mention it i consciously moved the order of the contact entries around to put twitter at the end as it is very rarely used basically only for announcements furthermore i wanted to minimize the amount of user asking for help for technical issues over twitter but it isn t that important as an idea to improve usability of the getting started section we could add gifs for all explained stuff e g how to add a contact how to start a session etc but this would take a lot of work so i can understand if you don t want to do it as part of this update ,0 3826,14663080317.0,IssuesEvent,2020-12-29 08:53:27,SAP/fundamental-ngx,https://api.github.com/repos/SAP/fundamental-ngx,closed,Bub. Switch component is broken. ,E2E automation High bug platform,"#### Is this a bug, enhancement, or feature request? Bug #### Briefly describe your proposal. #### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.) v 0.26.0-rc.10 #### If this is a bug, please provide steps for reproducing it. Open Platform Switch component Look at any Switch example AR:
has zero size #### Please provide relevant source code if applicable. #### Is there anything else we should know? Some new code was merged since 21.12 and broke CI regression. Need to be fixed asap to unblock merging other PR's. ![switch](https://user-images.githubusercontent.com/5969492/102902927-49cf0380-4478-11eb-94d9-b48cfd232c65.png) ",1.0,"Bub. Switch component is broken. - #### Is this a bug, enhancement, or feature request? Bug #### Briefly describe your proposal. #### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.) v 0.26.0-rc.10 #### If this is a bug, please provide steps for reproducing it. Open Platform Switch component Look at any Switch example AR:
has zero size #### Please provide relevant source code if applicable. #### Is there anything else we should know? Some new code was merged since 21.12 and broke CI regression. Need to be fixed asap to unblock merging other PR's. ![switch](https://user-images.githubusercontent.com/5969492/102902927-49cf0380-4478-11eb-94d9-b48cfd232c65.png) ",1,bub switch component is broken is this a bug enhancement or feature request bug briefly describe your proposal which versions of angular and fundamental library for angular are affected if this is a feature request use current version v rc if this is a bug please provide steps for reproducing it open platform switch component look at any switch example ar has zero size please provide relevant source code if applicable is there anything else we should know some new code was merged since and broke ci regression need to be fixed asap to unblock merging other pr s ,1 1847,10933543504.0,IssuesEvent,2019-11-24 03:05:30,IBM/FHIR,https://api.github.com/repos/IBM/FHIR,closed,occasional pipeline failure caused by R4ExamplesDriver when inserting resource into derby embedded ,automation bug,"When the ""all"" index file is processed by R4ExamplesDriver in pipeline, from time to time, the pipeline fails with tons of exceptions like: SEVERE: SQLException encountered while inserting Resource. java.sql.SQLException: Too much contention on sequence FHIR_SEQUENCE. This is probably caused by an uncommitted scan of the SYS.SYSSEQUENCES catalog. Do not query this catalog directly. Instead, use the SYSCS_UTIL.SYSCS_PEEK_AT_SEQUENCE function to view the current value of a sequence generator. at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source) at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source) at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source) at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source) at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown Source) at org.apache.derby.impl.jdbc.EmbedResultSet.closeOnTransactionError(Unknown Source) at org.apache.derby.impl.jdbc.EmbedResultSet.movePosition(Unknown Source) at org.apache.derby.impl.jdbc.EmbedResultSet.next(Unknown Source) at com.ibm.fhir.persistence.jdbc.derby.DerbyResourceDAO.storeResource(DerbyResourceDAO.java:143) at com.ibm.fhir.persistence.jdbc.dao.impl.ResourceDAOImpl.insertToDerby(ResourceDAOImpl.java:678) at com.ibm.fhir.persistence.jdbc.dao.impl.ResourceDAOImpl.insert(ResourceDAOImpl.java:520) at com.ibm.fhir.persistence.jdbc.impl.FHIRPersistenceJDBCImpl.create(FHIRPersistenceJDBCImpl.java:238) at com.ibm.fhir.persistence.jdbc.test.spec.CreateOperation.process(CreateOperation.java:48) at com.ibm.fhir.persistence.jdbc.test.spec.R4JDBCExamplesProcessor.process(R4JDBCExamplesProcessor.java:165) at com.ibm.fhir.model.spec.test.R4ExamplesDriver.processExample(R4ExamplesDriver.java:463) at com.ibm.fhir.model.spec.test.R4ExamplesDriver.processExample(R4ExamplesDriver.java:362) at com.ibm.fhir.model.spec.test.R4ExamplesDriver.submitExample(R4ExamplesDriver.java:318) ... Caused by: ERROR X0Y84: Too much contention on sequence FHIR_SEQUENCE. This is probably caused by an uncommitted scan of the SYS.SYSSEQUENCES catalog. Do not query this catalog directly. Instead, use the SYSCS_UTIL.SYSCS_PEEK_AT_SEQUENCE function to view the current value of a sequence generator. 
at org.apache.derby.iapi.error.StandardException.newException(Unknown Source) at org.apache.derby.iapi.error.StandardException.newException(Unknown Source) at org.apache.derby.impl.sql.catalog.SequenceUpdater.tooMuchContentionException(Unknown Source) at org.apache.derby.impl.sql.catalog.SequenceUpdater.getCurrentValueAndAdvance(Unknown Source) at org.apache.derby.impl.sql.catalog.DataDictionaryImpl.getCurrentValueAndAdvance(Unknown Source) at org.apache.derby.impl.sql.execute.BaseActivation.getCurrentValueAndAdvance(Unknown Source) at org.apache.derby.exe.acbd0ea02fx016ex6b5ex6326x0000060270585.e0(Unknown Source) at org.apache.derby.impl.services.reflect.DirectCall.invoke(Unknown Source) at org.apache.derby.impl.sql.execute.RowResultSet.getNextRowCore(Unknown Source) at org.apache.derby.impl.sql.execute.BasicNoPutResultSetImpl.getNextRow(Unknown Source) ... 43 more Nov 14, 2019 7:24:35 PM com.ibm.fhir.persistence.jdbc.impl.FHIRPersistenceJDBCImpl create SEVERE: FK violation com.ibm.fhir.persistence.jdbc.exception.FHIRPersistenceFKVException: SQLException encountered while inserting Resource. [probeId=a-1-0-4-64dc1c5b-c9d2-4aa2-aded-9902b5b4e610] at com.ibm.fhir.persistence.jdbc.dao.impl.ResourceDAOImpl.insertToDerby(ResourceDAOImpl.java:705) at com.ibm.fhir.persistence.jdbc.dao.impl.ResourceDAOImpl.insert(ResourceDAOImpl.java:520) at com.ibm.fhir.persistence.jdbc.impl.FHIRPersistenceJDBCImpl.create(FHIRPersistenceJDBCImpl.java:238) at com.ibm.fhir.persistence.jdbc.test.spec.CreateOperation.process(CreateOperation.java:48) at com.ibm.fhir.persistence.jdbc.test.spec.R4JDBCExamplesProcessor.process(R4JDBCExamplesProcessor.java:165) at com.ibm.fhir.model.spec.test.R4ExamplesDriver.processExample(R4ExamplesDriver.java:463) at com.ibm.fhir.model.spec.test.R4ExamplesDriver.processExample(R4ExamplesDriver.java:362) at com.ibm.fhir.model.spec.test.R4ExamplesDriver.submitExample(R4ExamplesDriver.java:318)",1.0,"occasional pipeline failure caused by R4ExamplesDriver when inserting resource into derby embedded - When the ""all"" index file is processed by R4ExamplesDriver in pipeline, from time to time, the pipeline fails with tons of exceptions like: SEVERE: SQLException encountered while inserting Resource. java.sql.SQLException: Too much contention on sequence FHIR_SEQUENCE. This is probably caused by an uncommitted scan of the SYS.SYSSEQUENCES catalog. Do not query this catalog directly. Instead, use the SYSCS_UTIL.SYSCS_PEEK_AT_SEQUENCE function to view the current value of a sequence generator. 
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source) at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source) at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source) at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source) at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown Source) at org.apache.derby.impl.jdbc.EmbedResultSet.closeOnTransactionError(Unknown Source) at org.apache.derby.impl.jdbc.EmbedResultSet.movePosition(Unknown Source) at org.apache.derby.impl.jdbc.EmbedResultSet.next(Unknown Source) at com.ibm.fhir.persistence.jdbc.derby.DerbyResourceDAO.storeResource(DerbyResourceDAO.java:143) at com.ibm.fhir.persistence.jdbc.dao.impl.ResourceDAOImpl.insertToDerby(ResourceDAOImpl.java:678) at com.ibm.fhir.persistence.jdbc.dao.impl.ResourceDAOImpl.insert(ResourceDAOImpl.java:520) at com.ibm.fhir.persistence.jdbc.impl.FHIRPersistenceJDBCImpl.create(FHIRPersistenceJDBCImpl.java:238) at com.ibm.fhir.persistence.jdbc.test.spec.CreateOperation.process(CreateOperation.java:48) at com.ibm.fhir.persistence.jdbc.test.spec.R4JDBCExamplesProcessor.process(R4JDBCExamplesProcessor.java:165) at com.ibm.fhir.model.spec.test.R4ExamplesDriver.processExample(R4ExamplesDriver.java:463) at com.ibm.fhir.model.spec.test.R4ExamplesDriver.processExample(R4ExamplesDriver.java:362) at com.ibm.fhir.model.spec.test.R4ExamplesDriver.submitExample(R4ExamplesDriver.java:318) ... Caused by: ERROR X0Y84: Too much contention on sequence FHIR_SEQUENCE. This is probably caused by an uncommitted scan of the SYS.SYSSEQUENCES catalog. Do not query this catalog directly. Instead, use the SYSCS_UTIL.SYSCS_PEEK_AT_SEQUENCE function to view the current value of a sequence generator. at org.apache.derby.iapi.error.StandardException.newException(Unknown Source) at org.apache.derby.iapi.error.StandardException.newException(Unknown Source) at org.apache.derby.impl.sql.catalog.SequenceUpdater.tooMuchContentionException(Unknown Source) at org.apache.derby.impl.sql.catalog.SequenceUpdater.getCurrentValueAndAdvance(Unknown Source) at org.apache.derby.impl.sql.catalog.DataDictionaryImpl.getCurrentValueAndAdvance(Unknown Source) at org.apache.derby.impl.sql.execute.BaseActivation.getCurrentValueAndAdvance(Unknown Source) at org.apache.derby.exe.acbd0ea02fx016ex6b5ex6326x0000060270585.e0(Unknown Source) at org.apache.derby.impl.services.reflect.DirectCall.invoke(Unknown Source) at org.apache.derby.impl.sql.execute.RowResultSet.getNextRowCore(Unknown Source) at org.apache.derby.impl.sql.execute.BasicNoPutResultSetImpl.getNextRow(Unknown Source) ... 43 more Nov 14, 2019 7:24:35 PM com.ibm.fhir.persistence.jdbc.impl.FHIRPersistenceJDBCImpl create SEVERE: FK violation com.ibm.fhir.persistence.jdbc.exception.FHIRPersistenceFKVException: SQLException encountered while inserting Resource. 
[probeId=a-1-0-4-64dc1c5b-c9d2-4aa2-aded-9902b5b4e610] at com.ibm.fhir.persistence.jdbc.dao.impl.ResourceDAOImpl.insertToDerby(ResourceDAOImpl.java:705) at com.ibm.fhir.persistence.jdbc.dao.impl.ResourceDAOImpl.insert(ResourceDAOImpl.java:520) at com.ibm.fhir.persistence.jdbc.impl.FHIRPersistenceJDBCImpl.create(FHIRPersistenceJDBCImpl.java:238) at com.ibm.fhir.persistence.jdbc.test.spec.CreateOperation.process(CreateOperation.java:48) at com.ibm.fhir.persistence.jdbc.test.spec.R4JDBCExamplesProcessor.process(R4JDBCExamplesProcessor.java:165) at com.ibm.fhir.model.spec.test.R4ExamplesDriver.processExample(R4ExamplesDriver.java:463) at com.ibm.fhir.model.spec.test.R4ExamplesDriver.processExample(R4ExamplesDriver.java:362) at com.ibm.fhir.model.spec.test.R4ExamplesDriver.submitExample(R4ExamplesDriver.java:318)",1,occasional pipeline failure caused by when inserting resource into derby embedded when the all index file is processed by in pipeline from time to time the pipeline fails with tons of exceptions like severe sqlexception encountered while inserting resource java sql sqlexception too much contention on sequence fhir sequence this is probably caused by an uncommitted scan of the sys syssequences catalog do not query this catalog directly instead use the syscs util syscs peek at sequence function to view the current value of a sequence generator at org apache derby impl jdbc sqlexceptionfactory getsqlexception unknown source at org apache derby impl jdbc util generatecssqlexception unknown source at org apache derby impl jdbc transactionresourceimpl wrapinsqlexception unknown source at org apache derby impl jdbc transactionresourceimpl handleexception unknown source at org apache derby impl jdbc embedconnection handleexception unknown source at org apache derby impl jdbc connectionchild handleexception unknown source at org apache derby impl jdbc embedresultset closeontransactionerror unknown source at org apache derby impl jdbc embedresultset moveposition unknown source at org apache derby impl jdbc embedresultset next unknown source at com ibm fhir persistence jdbc derby derbyresourcedao storeresource derbyresourcedao java at com ibm fhir persistence jdbc dao impl resourcedaoimpl inserttoderby resourcedaoimpl java at com ibm fhir persistence jdbc dao impl resourcedaoimpl insert resourcedaoimpl java at com ibm fhir persistence jdbc impl fhirpersistencejdbcimpl create fhirpersistencejdbcimpl java at com ibm fhir persistence jdbc test spec createoperation process createoperation java at com ibm fhir persistence jdbc test spec process java at com ibm fhir model spec test processexample java at com ibm fhir model spec test processexample java at com ibm fhir model spec test submitexample java caused by error too much contention on sequence fhir sequence this is probably caused by an uncommitted scan of the sys syssequences catalog do not query this catalog directly instead use the syscs util syscs peek at sequence function to view the current value of a sequence generator at org apache derby iapi error standardexception newexception unknown source at org apache derby iapi error standardexception newexception unknown source at org apache derby impl sql catalog sequenceupdater toomuchcontentionexception unknown source at org apache derby impl sql catalog sequenceupdater getcurrentvalueandadvance unknown source at org apache derby impl sql catalog datadictionaryimpl getcurrentvalueandadvance unknown source at org apache derby impl sql execute baseactivation getcurrentvalueandadvance unknown 
source at org apache derby exe unknown source at org apache derby impl services reflect directcall invoke unknown source at org apache derby impl sql execute rowresultset getnextrowcore unknown source at org apache derby impl sql execute basicnoputresultsetimpl getnextrow unknown source more nov pm com ibm fhir persistence jdbc impl fhirpersistencejdbcimpl create severe fk violation com ibm fhir persistence jdbc exception fhirpersistencefkvexception sqlexception encountered while inserting resource at com ibm fhir persistence jdbc dao impl resourcedaoimpl inserttoderby resourcedaoimpl java at com ibm fhir persistence jdbc dao impl resourcedaoimpl insert resourcedaoimpl java at com ibm fhir persistence jdbc impl fhirpersistencejdbcimpl create fhirpersistencejdbcimpl java at com ibm fhir persistence jdbc test spec createoperation process createoperation java at com ibm fhir persistence jdbc test spec process java at com ibm fhir model spec test processexample java at com ibm fhir model spec test processexample java at com ibm fhir model spec test submitexample java ,1 3728,14435150614.0,IssuesEvent,2020-12-07 08:18:55,syslog-ng/syslog-ng,https://api.github.com/repos/syslog-ng/syslog-ng,opened,Automated release does not create annotated tag,release-automation,"GitHub only creates soft tags, when publishing releases through the website. We need to have a solution, to create an annotated tag instead, or in addition. An automatic Actions job can do this, if we add ""release"" as the trigger, but there could be other solutions as well.",1.0,"Automated release does not create annotated tag - GitHub only creates soft tags, when publishing releases through the website. We need to have a solution, to create an annotated tag instead, or in addition. An automatic Actions job can do this, if we add ""release"" as the trigger, but there could be other solutions as well.",1,automated release does not create annotated tag github only creates soft tags when publishing releases through the website we need to have a solution to create an annotated tag instead or in addition an automatic actions job can do this if we add release as the trigger but there could be other solutions as well ,1 9421,28302120916.0,IssuesEvent,2023-04-10 07:17:02,pingcap/tiflow,https://api.github.com/repos/pingcap/tiflow,closed,CDC storage sink data inconsistency when cdc scales and owner switch during workload running,type/bug severity/major found/automation area/ticdc affects-6.5,"### What did you do? 1. Create storage sink changefeed with below configurations, and run storage consumer to consumer the data to downstream mysql ``` [sink] protocol='csv' [sink.csv] include-commit-ts=true ``` 2. Run sysbench workload prepare ``` sysbench --db-driver=mysql --mysql-host=`nslookup upstream-tidb.cdc-testbed-tps-1683010-1-178 | awk -F: '{print $2}' | awk 'NR==5' | sed s/[[:space:]]//g` --mysql-port=4000 --mysql-user=root --mysql-db=workload --tables=32 --table-size=100000 --create_secondary=off --debug=true --threads=32 --mysql-ignore-errors=2013,1213,1105,1205,8022,8027,8028,9004,9007,1062 oltp_write_only prepare ``` 4. Run sysbench workload and at the same time do cdc scales and owner switch ( 3 > 6 -> owner switch - > 1 -> 5 -> owner swtich -> 2) 5. create a table in TiDB, and wait the table sync to downstream mysql (by consumer) 6. do data consistency check ### What did you expect to see? Data consistency check should pass ### What did you see instead? Data consistency check failed. e.g. 
![image](https://user-images.githubusercontent.com/7403864/227423447-e85bab2f-dc52-4b89-b4c2-ee5e24197001.png) ### Versions of the cluster Upstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client): ```console [root@upstream-tidb-0 /]# /tidb-server -V Release Version: v7.0.0 Edition: Community Git Commit Hash: 624aa887d74a2e6a28ec610b94918bbe25e48517 Git Branch: heads/refs/tags/v7.0.0 UTC Build Time: 2023-03-23 14:01:44 GoVersion: go1.20.2 Race Enabled: false TiKV Min Version: 6.2.0-alpha Check Table Before Drop: false Store: unistore ``` Upstream TiKV version (execute `tikv-server --version`): ```console │[root@upstream-tikv-0 /]# /tikv-server -V │TiKV │Release Version: 7.0.0 │Edition: Community │Git Commit Hash: 412614bf024808d6a45fec67b443df4e365a94eb │Git Commit Branch: heads/refs/tags/v7.0.0 │UTC Build Time: 2023-03-23 12:15:00 │Rust Version: rustc 1.67.0-nightly (96ddd32c4 2022-11-14) │Enable Features: pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raf │t-engine cloud-aws cloud-gcp cloud-azure │Profile: dist_release ``` TiCDC version (execute `cdc version`): ```console [root@upstream-ticdc-0 /]# /cdc version Release Version: v7.0.0 Git Commit Hash: fd2efea950937113af39856d00670577745da576 Git Branch: heads/refs/tags/v7.0.0 UTC Build Time: 2023-03-23 12:02:40 Go Version: go version go1.20.2 linux/amd64 Failpoint Build: false ```",1.0,"CDC storage sink data inconsistency when cdc scales and owner switch during workload running - ### What did you do? 1. Create storage sink changefeed with below configurations, and run storage consumer to consumer the data to downstream mysql ``` [sink] protocol='csv' [sink.csv] include-commit-ts=true ``` 2. Run sysbench workload prepare ``` sysbench --db-driver=mysql --mysql-host=`nslookup upstream-tidb.cdc-testbed-tps-1683010-1-178 | awk -F: '{print $2}' | awk 'NR==5' | sed s/[[:space:]]//g` --mysql-port=4000 --mysql-user=root --mysql-db=workload --tables=32 --table-size=100000 --create_secondary=off --debug=true --threads=32 --mysql-ignore-errors=2013,1213,1105,1205,8022,8027,8028,9004,9007,1062 oltp_write_only prepare ``` 4. Run sysbench workload and at the same time do cdc scales and owner switch ( 3 > 6 -> owner switch - > 1 -> 5 -> owner swtich -> 2) 5. create a table in TiDB, and wait the table sync to downstream mysql (by consumer) 6. do data consistency check ### What did you expect to see? Data consistency check should pass ### What did you see instead? Data consistency check failed. e.g. 
![image](https://user-images.githubusercontent.com/7403864/227423447-e85bab2f-dc52-4b89-b4c2-ee5e24197001.png) ### Versions of the cluster Upstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client): ```console [root@upstream-tidb-0 /]# /tidb-server -V Release Version: v7.0.0 Edition: Community Git Commit Hash: 624aa887d74a2e6a28ec610b94918bbe25e48517 Git Branch: heads/refs/tags/v7.0.0 UTC Build Time: 2023-03-23 14:01:44 GoVersion: go1.20.2 Race Enabled: false TiKV Min Version: 6.2.0-alpha Check Table Before Drop: false Store: unistore ``` Upstream TiKV version (execute `tikv-server --version`): ```console │[root@upstream-tikv-0 /]# /tikv-server -V │TiKV │Release Version: 7.0.0 │Edition: Community │Git Commit Hash: 412614bf024808d6a45fec67b443df4e365a94eb │Git Commit Branch: heads/refs/tags/v7.0.0 │UTC Build Time: 2023-03-23 12:15:00 │Rust Version: rustc 1.67.0-nightly (96ddd32c4 2022-11-14) │Enable Features: pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raf │t-engine cloud-aws cloud-gcp cloud-azure │Profile: dist_release ``` TiCDC version (execute `cdc version`): ```console [root@upstream-ticdc-0 /]# /cdc version Release Version: v7.0.0 Git Commit Hash: fd2efea950937113af39856d00670577745da576 Git Branch: heads/refs/tags/v7.0.0 UTC Build Time: 2023-03-23 12:02:40 Go Version: go version go1.20.2 linux/amd64 Failpoint Build: false ```",1,cdc storage sink data inconsistency when cdc scales and owner switch during workload running what did you do create storage sink changefeed with below configurations and run storage consumer to consumer the data to downstream mysql protocol csv include commit ts true run sysbench workload prepare sysbench db driver mysql mysql host nslookup upstream tidb cdc testbed tps awk f print awk nr sed s g mysql port mysql user root mysql db workload tables table size create secondary off debug true threads mysql ignore errors oltp write only prepare run sysbench workload and at the same time do cdc scales and owner switch owner switch owner swtich create a table in tidb and wait the table sync to downstream mysql by consumer do data consistency check what did you expect to see data consistency check should pass what did you see instead data consistency check failed e g versions of the cluster upstream tidb cluster version execute select tidb version in a mysql client console tidb server v release version edition community git commit hash git branch heads refs tags utc build time goversion race enabled false tikv min version alpha check table before drop false store unistore upstream tikv version execute tikv server version console │ tikv server v │tikv │release version │edition community │git commit hash │git commit branch heads refs tags │utc build time │rust version rustc nightly │enable features pprof fp jemalloc mem profiling portable sse test engine kv rocksdb test engine raft raf │t engine cloud aws cloud gcp cloud azure │profile dist release ticdc version execute cdc version console cdc version release version git commit hash git branch heads refs tags utc build time go version go version linux failpoint build false ,1 4315,16048726909.0,IssuesEvent,2021-04-22 16:25:19,pc2ccs/pc2v9,https://api.github.com/repos/pc2ccs/pc2v9,opened,Automate Division/Region scoreboard,automation enhancement,"**Is your feature request related to a problem?** Yes, lacks automation. 
**Feature Description**: When there are divisions/groups defined, there is a manual process to add/update XSLT files to provide a scoreboard for each division and an index/main page with links to all those division web pages. Provide a way to automate the creation of the Division scoreboard web pages. **Have you considered other ways to accomplish the same thing?** Yes. **Do you have any specific suggestions for how your feature would be ***implemented*** in PC^2?** **Additional context**: ",1.0,"Automate Division/Region scoreboard - **Is your feature request related to a problem?** Yes, lacks automation. **Feature Description**: When there are divisions/groups defined, there is a manual process to add/update XSLT files to provide a scoreboard for each division and an index/main page with links to all those division web pages. Provide a way to automate the creation of the Division scoreboard web pages. **Have you considered other ways to accomplish the same thing?** Yes. **Do you have any specific suggestions for how your feature would be ***implemented*** in PC^2?** **Additional context**: ",1,automate division region scoreboard is your feature request related to a problem yes lacks automation feature description when there are divisions groups defined there is a manual process to add update xslt files to provide a scoreboard for each division and a index main page with links to all those division web pages provide a way to automate the creation of the division scoreboard web pages have you considered other ways to accomplish the same thing yes do you have any specific suggestions for how your feature would be implemented in pc additional context ,1 18697,3080680728.0,IssuesEvent,2015-08-22 00:32:24,jameslh/pagedown,https://api.github.com/repos/jameslh/pagedown,closed,Please pull handleUndo changes,auto-migrated Priority-Medium Type-Defect,"``` Hi, I implemented a shared settings object which allows you to tweak the editor at creation time without editing the code. I also added a handleUndo switch which allows you to turn off the undo functionality which I needed for my project with shareJS. 
The change is at http://code.google.com/r/woutmertens-noundo/source/detail?r=52c43623e71b1c912d684bd4b513c9fe1cc23c09 ``` Original issue reported on code.google.com by `wout.mer...@gmail.com` on 5 Jan 2012 at 3:56",1.0,"Please pull handleUndo changes - ``` Hi, I implemented a shared settings object which allows you to tweak the editor at creation time without editing the code. I also added a handleUndo switch which allows you to turn off the undo functionality which I needed for my project with shareJS. The change is at http://code.google.com/r/woutmertens-noundo/source/detail?r=52c43623e71b1c912d684bd4b513c9fe1cc23c09 ``` Original issue reported on code.google.com by `wout.mer...@gmail.com` on 5 Jan 2012 at 3:56",0,please pull handleundo changes hi i implemented a shared settings object which allows you to tweak the editor at creation time without editing the code i also added a handleundo switch which allows you to turn off the undo functionality which i needed for my project with sharejs the change is at original issue reported on code google com by wout mer gmail com on jan at ,0 176382,21411021079.0,IssuesEvent,2022-04-22 05:57:53,pazhanivel07/frameworks_base_Aosp10_r33,https://api.github.com/repos/pazhanivel07/frameworks_base_Aosp10_r33,opened,CVE-2021-0432 (High) detected in platform_frameworks_baseplatform-tools-29.0.6,security vulnerability,"## CVE-2021-0432 - High Severity Vulnerability
Vulnerable Library - platform_frameworks_baseplatform-tools-29.0.6

Library home page: https://github.com/aosp-mirror/platform_frameworks_base.git

Found in HEAD commit: d0a412c03562493a433dc7e698ff88ab06a3468a

Found in base branch: main

Vulnerable Source Files (1)

/cmds/statsd/src/external/StatsPullerManager.cpp

Vulnerability Details

In ClearPullerCacheIfNecessary and ForceClearPullerCache of StatsPullerManager.cpp, there is a possible use-after-free due to a race condition. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-11. Android ID: A-173552790.
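The patched code lives in the Android C++ tree and is not shown here. Purely as an illustration of the serialization such a fix implies (cache teardown must never interleave with an in-flight pull), here is a minimal JavaScript sketch; every name in it (`PullerCache`, `pull`, `forceClear`, `fetcher`) is hypothetical, not taken from the Android sources.

```js
// Hypothetical sketch: a cache whose clear operations are queued behind any
// in-flight pull, so a pull can never observe freed/cleared state mid-read.
class PullerCache {
  constructor() {
    this.entries = new Map();
    this.lock = Promise.resolve(); // promise chain that serializes operations
  }

  _serialized(fn) {
    const run = this.lock.then(fn);
    this.lock = run.catch(() => {}); // keep the queue alive if fn rejects
    return run;
  }

  pull(key, fetcher) {
    return this._serialized(async () => {
      if (!this.entries.has(key)) {
        this.entries.set(key, await fetcher(key));
      }
      return this.entries.get(key); // safe: no clear can run mid-pull
    });
  }

  forceClear() {
    return this._serialized(() => this.entries.clear());
  }
}

// Even if forceClear() is requested while a pull is awaiting its fetcher,
// it only runs after that pull has completed.
const cache = new PullerCache();
cache.pull('stats', async () => ({ t: Date.now() })).then(console.log);
cache.forceClear();
```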

Publish Date: 2021-04-13

URL: CVE-2021-0432

CVSS 3 Score Details (7.0)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: High
  - Privileges Required: Low
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High
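As a cross-check, the 7.0 above follows from these metrics under the published CVSS v3.1 base-score formula. A small JavaScript sketch (weight values from the public CVSS v3.1 specification tables; the round-up helper is simplified):

```js
// CVSS v3.1 base score for the vector above: AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H
const AV = 0.55, AC = 0.44, PR = 0.62, UI = 0.85; // Local / High / Low (scope unchanged) / None
const C = 0.56, I = 0.56, A = 0.56;               // High / High / High impacts

const iss = 1 - (1 - C) * (1 - I) * (1 - A);      // impact sub-score
const impact = 6.42 * iss;                        // scope: unchanged
const exploitability = 8.22 * AV * AC * PR * UI;
const roundup = (x) => Math.ceil(x * 10) / 10;    // spec rounds up to one decimal
console.log(roundup(Math.min(impact + exploitability, 10))); // prints 7, i.e. 7.0
```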

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://source.android.com/security/bulletin/2021-04-01

Release Date: 2020-11-07

Fix Resolution: android-11.0.0_r34

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2021-0432 (High) detected in platform_frameworks_baseplatform-tools-29.0.6 - ## CVE-2021-0432 - High Severity Vulnerability
Vulnerable Library - platform_frameworks_baseplatform-tools-29.0.6

Library home page: https://github.com/aosp-mirror/platform_frameworks_base.git

Found in HEAD commit: d0a412c03562493a433dc7e698ff88ab06a3468a

Found in base branch: main

Vulnerable Source Files (1)

/cmds/statsd/src/external/StatsPullerManager.cpp

Vulnerability Details

In ClearPullerCacheIfNecessary and ForceClearPullerCache of StatsPullerManager.cpp, there is a possible use-after-free due to a race condition. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-11. Android ID: A-173552790.

Publish Date: 2021-04-13

URL: CVE-2021-0432

CVSS 3 Score Details (7.0)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: High
  - Privileges Required: Low
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://source.android.com/security/bulletin/2021-04-01

Release Date: 2020-11-07

Fix Resolution: android-11.0.0_r34

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in platform frameworks baseplatform tools cve high severity vulnerability vulnerable library platform frameworks baseplatform tools library home page a href found in head commit a href found in base branch main vulnerable source files cmds statsd src external statspullermanager cpp vulnerability details in clearpullercacheifnecessary and forceclearpullercache of statspullermanager cpp there is a possible use after free due to a race condition this could lead to local escalation of privilege with no additional execution privileges needed user interaction is not needed for exploitation product androidversions android id a publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution android step up your open source security game with whitesource ,0 347356,31160298164.0,IssuesEvent,2023-08-16 15:32:42,elastic/kibana,https://api.github.com/repos/elastic/kibana,closed,"Failing test: Security Solution Cypress.x-pack/plugins/security_solution/public/management/cypress/e2e/mocked_data/policy_details·cy·ts - Policy Details Malware Protection card ""after all"" hook for ""user should be able to see related rules"" ""after all"" hook for ""user should be able to see related rules""",failed-test Team:Defend Workflows,"A test failed on a tracked branch ``` TypeError: Cannot read properties of undefined (reading 'cleanup') Because this error occurred during a `after all` hook we are skipping the remaining tests in the current suite: `Policy Details` Although you have test retries enabled, we do not retry tests when `before all` or `after all` hooks fail at Context.eval (webpack:///./cypress/e2e/mocked_data/policy_details.cy.ts:42:21) ``` First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge-unsupported-ftrs/builds/5512#0189f556-73de-4855-b10e-d9b19c6e8fbf) ",1.0,"Failing test: Security Solution Cypress.x-pack/plugins/security_solution/public/management/cypress/e2e/mocked_data/policy_details·cy·ts - Policy Details Malware Protection card ""after all"" hook for ""user should be able to see related rules"" ""after all"" hook for ""user should be able to see related rules"" - A test failed on a tracked branch ``` TypeError: Cannot read properties of undefined (reading 'cleanup') Because this error occurred during a `after all` hook we are skipping the remaining tests in the current suite: `Policy Details` Although you have test retries enabled, we do not retry tests when `before all` or `after all` hooks fail at Context.eval (webpack:///./cypress/e2e/mocked_data/policy_details.cy.ts:42:21) ``` First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge-unsupported-ftrs/builds/5512#0189f556-73de-4855-b10e-d9b19c6e8fbf) ",0,failing test security solution cypress x pack plugins security solution public management cypress mocked data policy details·cy·ts policy details malware protection card after all hook for user should be able to see related rules after all hook for user should be able to see related rules a test failed on a tracked branch typeerror cannot read properties of 
undefined reading cleanup because this error occurred during a after all hook we are skipping the remaining tests in the current suite policy details although you have test retries enabled we do not retry tests when before all or after all hooks fail at context eval webpack cypress mocked data policy details cy ts first failure ,0 785639,27620456717.0,IssuesEvent,2023-03-09 23:27:24,lowRISC/opentitan,https://api.github.com/repos/lowRISC/opentitan,opened,[fpga] connect external clk to pin,Component:FPGA Priority:P1 Type:Enhancement Component:RTL Manufacturing,"### Description _**Related to #17491.**_ # Background For [manufacturing tests](https://github.com/lowRISC/opentitan/blob/master/sw/device/silicon_creator/manuf/data/manuf_testplan.hjson), we need to be able to drive LC transitions via the LC JTAG TAP, which requires driving the external clk. # Problem On the CW310 FPGA, the external clk is [tied off to 0](https://github.com/lowRISC/opentitan/blob/060354c410761db905f385909c2b163b0076258d/hw/top_earlgrey/rtl/autogen/chip_earlgrey_cw310.sv#L792), so we currently cannot develop said [manufacturing tests](https://github.com/lowRISC/opentitan/blob/master/sw/device/silicon_creator/manuf/data/manuf_testplan.hjson) on the FPGA. # Solution / Requirements - [ ] Investigate if the FPGA pin can be driven at either 48 or 96 MHz, as this is what is [required](https://opentitan.org/book/hw/ip/clkmgr/index.html#software-requested-external-clocks). - [ ] Investigate what pin the external clk can be routed to such that it can reach the HyperDebug PWM pin through the swizzle board (if #17491 is feasible) - [ ] Update RTL to route the external clock to a pin, preferably `IOC6`, as this is the [pin used for the external clk on the ASIC](https://github.com/lowRISC/opentitan/blob/060354c410761db905f385909c2b163b0076258d/hw/top_earlgrey/rtl/autogen/chip_earlgrey_asic.sv#L842). 
",0, connect external clk to pin description related to background for we need to be able to drive lc transitions via the lc jtag tap which requires driving the external clk problem on fpga the external clk is so we currently cannot develop said on the fpga solution requirements investigate if the fpga pin can be driven at either or mhz as this is what is investigate what pin the external clk can be routed too such that it can reach the hyperdebug pwm pin through the swizzle board if is feasible update rlt to route external clock to a pin preferably as this is the ,0 34694,16655920534.0,IssuesEvent,2021-06-05 14:20:57,ericcornelissen/webmangler,https://api.github.com/repos/ericcornelissen/webmangler,opened,The HTML Language Plugin's embed finders are not efficient,package:core performance,"# Bug Report ## Description The [embed finders of the HTML Language Plugin](https://github.com/ericcornelissen/webmangler/tree/2349fcbb930504369afc1f0564cbadc7024e4a67/packages/core/src/languages/html/embeds) are not very efficient. Even though they're benchmarked, in a real-world setting where 5.1MB of HTML need to be mangled these embed finders increase the overall mangling time from ~1 seconds to ~50 seconds. ## Steps to reproduce 1. Get the source code of [the simple-icons-website repository](https://github.com/simple-icons/simple-icons-website). 2. Run `$ npm install webmangler@v0.1.21 webmangler-cli@v0.1.6`. 3. Add a simple WebMangler config, e.g.: ```js // .webmanglerrc.js const { BuiltInLanguagesSupport } = require(""webmangler/languages""); const { RecommendedManglers } = require(""webmangler/manglers""); module.exports = { plugins: [ // Mangle CSS classes, CSS variables, and data attributes new RecommendedManglers(), ], languages: [ // Mangle in CSS, HTML, and JavaScript new BuiltInLanguagesSupport(), ], }; ``` 4. Run `$ npm run build`. 5. Run webmangler. E.g. add `""mangle"": ""webmangler --write --stats ./_site"",` and run `$ npm run mangle`. Note the time. 6. Go into `node_modules/webmangler` and disable the embed finders of the HTML language plugin. 7. Rebuild and mangle again. Note the time.",True,"The HTML Language Plugin's embed finders are not efficient - # Bug Report ## Description The [embed finders of the HTML Language Plugin](https://github.com/ericcornelissen/webmangler/tree/2349fcbb930504369afc1f0564cbadc7024e4a67/packages/core/src/languages/html/embeds) are not very efficient. Even though they're benchmarked, in a real-world setting where 5.1MB of HTML need to be mangled these embed finders increase the overall mangling time from ~1 seconds to ~50 seconds. ## Steps to reproduce 1. Get the source code of [the simple-icons-website repository](https://github.com/simple-icons/simple-icons-website). 2. Run `$ npm install webmangler@v0.1.21 webmangler-cli@v0.1.6`. 3. Add a simple WebMangler config, e.g.: ```js // .webmanglerrc.js const { BuiltInLanguagesSupport } = require(""webmangler/languages""); const { RecommendedManglers } = require(""webmangler/manglers""); module.exports = { plugins: [ // Mangle CSS classes, CSS variables, and data attributes new RecommendedManglers(), ], languages: [ // Mangle in CSS, HTML, and JavaScript new BuiltInLanguagesSupport(), ], }; ``` 4. Run `$ npm run build`. 5. Run webmangler. E.g. add `""mangle"": ""webmangler --write --stats ./_site"",` and run `$ npm run mangle`. Note the time. 6. Go into `node_modules/webmangler` and disable the embed finders of the HTML language plugin. 7. Rebuild and mangle again. 
Note the time.",0,the html language plugin s embed finders are not efficient bug report description the are not very efficient even though they re benchmarked in a real world setting where of html need to be mangled these embed finders increase the overall mangling time from seconds to seconds steps to reproduce get the source code of run npm install webmangler webmangler cli add a simple webmangler config e g js webmanglerrc js const builtinlanguagessupport require webmangler languages const recommendedmanglers require webmangler manglers module exports plugins mangle css classes css variables and data attributes new recommendedmanglers languages mangle in css html and javascript new builtinlanguagessupport run npm run build run webmangler e g add mangle webmangler write stats site and run npm run mangle note the time go into node modules webmangler and disable the embed finders of the html language plugin rebuild and mangle again note the time ,0 4838,17694797430.0,IssuesEvent,2021-08-24 14:14:00,iGEM-Engineering/iGEM-distribution,https://api.github.com/repos/iGEM-Engineering/iGEM-distribution,opened,Retrieve missing parts needs to pull,bug automation,"If the package structure was updated by the ""check package structure"" job, the new state won't necessarily be visible for the ""retrieve missing parts"" job, which needs to not just check out but pull to make sure it gets the most updated state.",1.0,"Retrieve missing parts needs to pull - If the package structure was updated by the ""check package structure"" job, the new state won't necessarily be visible for the ""retrieve missing parts"" job, which needs to not just check out but pull to make sure it gets the most updated state.",1,retrieve missing parts needs to pull if the package structure was updated by the check package structure job the new state won t necessarily be visible for the retrieve missing parts job which needs to not just check out but pull to make sure it gets the most updated state ,1 3037,13019446137.0,IssuesEvent,2020-07-26 22:40:23,akail/homeassistant,https://api.github.com/repos/akail/homeassistant,opened,Morning Routine,Automation Ideas,"Identify ways to automate my morning routine. - Playing morning music - Some othe rnotifications - Let me know why son gets to daycare.",1.0,"Morning Routine - Identify ways to automate my morning routine. - Playing morning music - Some othe rnotifications - Let me know why son gets to daycare.",1,morning routine identify ways to automate my morning routine playing morning music some othe rnotifications let me know why son gets to daycare ,1 4742,17370213379.0,IssuesEvent,2021-07-30 13:03:35,appsmithorg/appsmith,https://api.github.com/repos/appsmithorg/appsmith,closed,[Bug] ExecutionParams_spec.js fails due to incorrect check,Automation Bug,"ExecutionParams_spec.js fails due to incorrect check [What happened] ### Steps to reproduce the behaviour: Need to fix below line as data is missing . https://github.com/appsmithorg/appsmith/blob/release/app/client/cypress/integration/Smoke_TestSuite/ClientSideTests/ActionExecution/ExecutionParams_spec.js#L89 ### Important Details - Version: [Cloud / Self-Hosted vx.x] - OS: [e.g.MacOSX] - Browser [e.g. chrome, safari] - Environment [production, release, deploy preview] ",1.0,"[Bug] ExecutionParams_spec.js fails due to incorrect check - ExecutionParams_spec.js fails due to incorrect check [What happened] ### Steps to reproduce the behaviour: Need to fix below line as data is missing . 
https://github.com/appsmithorg/appsmith/blob/release/app/client/cypress/integration/Smoke_TestSuite/ClientSideTests/ActionExecution/ExecutionParams_spec.js#L89 ### Important Details - Version: [Cloud / Self-Hosted vx.x] - OS: [e.g.MacOSX] - Browser [e.g. chrome, safari] - Environment [production, release, deploy preview] ",1, executionparams spec js fails due to incorrect check executionparams spec js fails due to incorrect check steps to reproduce the behaviour need to fix below line as data is missing important details version os browser environment ,1 9647,30110787753.0,IssuesEvent,2023-06-30 07:34:21,kiwicom/orbit-swiftui,https://api.github.com/repos/kiwicom/orbit-swiftui,opened,Snapshot test older iOS versions,automation,"We could add snapshot tests for older iOS version (iOS13,14,15) to guard against regressions in layout changes ",1.0,"Snapshot test older iOS versions - We could add snapshot tests for older iOS version (iOS13,14,15) to guard against regressions in layout changes ",1,snapshot test older ios versions we could add snapshot tests for older ios version to guard against regressions in layout changes ,1 25368,12237904291.0,IssuesEvent,2020-05-04 18:50:54,Azure/azure-sdk-for-net,https://api.github.com/repos/Azure/azure-sdk-for-net,closed,Update TextAnalyticsServiceVersion when service GA,Client Cognitive Services TextAnalytics blocking-release,"Currently, service is still under preview and using version V3_0_preview_1(""v3.0-preview.1""). When service GA, this required to be updated.",1.0,"Update TextAnalyticsServiceVersion when service GA - Currently, service is still under preview and using version V3_0_preview_1(""v3.0-preview.1""). When service GA, this required to be updated.",0,update textanalyticsserviceversion when service ga currently service is still under preview and using version preview preview when service ga this required to be updated ,0 4217,15821679312.0,IssuesEvent,2021-04-05 20:55:12,alan-if/alan-docs,https://api.github.com/repos/alan-if/alan-docs,opened,Storing generated PDF creates rebase conflicts,:microscope: research :warning: important automation,"I know there are some advantages of storing a generated version of the PDF:s in the repo, e.g. there is always a version that can be pointed to from the website. But just having been through the hazzle of rebasing the dev-man branch onto a recent chunk of text changes, and been forced to handle a conflict for each and every commit (and probably also forcing a lot of that onto whoever wants to pull now), I think we need to re-think this. (I had no other problems...) I propose that we figure out a way to upload the generated files to repo releases or somewhere else (website), so that we can get rid of the generated binary files from the repo to streamline the editing of the two branches. I quickly looked for a git option to always ignore conflicts for some files but didn't find much. Possibly we could use a (custom?) merge-driver. Never heard about it before so that's why I added the ""research"" tag...",1.0,"Storing generated PDF creates rebase conflicts - I know there are some advantages of storing a generated version of the PDF:s in the repo, e.g. there is always a version that can be pointed to from the website. But just having been through the hazzle of rebasing the dev-man branch onto a recent chunk of text changes, and been forced to handle a conflict for each and every commit (and probably also forcing a lot of that onto whoever wants to pull now), I think we need to re-think this. 
(I had no other problems...) I propose that we figure out a way to upload the generated files to repo releases or somewhere else (website), so that we can get rid of the generated binary files from the repo to streamline the editing of the two branches. I quickly looked for a git option to always ignore conflicts for some files but didn't find much. Possibly we could use a (custom?) merge-driver. Never heard about it before so that's why I added the ""research"" tag...",1,storing generated pdf creates rebase conflicts i know there are some advantages of storing a generated version of the pdf s in the repo e g there is always a version that can be pointed to from the website but just having been through the hazzle of rebasing the dev man branch onto a recent chunk of text changes and been forced to handle a conflict for each and every commit and probably also forcing a lot of that onto whoever wants to pull now i think we need to re think this i had no other problems i propose that we figure out a way to upload the generated files to repo releases or somewhere else website so that we can get rid of the generated binary files from the repo to streamline the editing of the two branches i quickly looked for a git option to always ignore conflicts for some files but didn t find much possibly we could use a custom merge driver never heard about it before so that s why i added the research tag ,1 4282,15953569901.0,IssuesEvent,2021-04-15 12:36:28,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,opened,[automation/dotnet] Program output contains ASP.NET ,area/automation-api impact/usability kind/enhancement language/dotnet,"Working with [Automation API Inline Program in C#](https://github.com/pulumi/automation-api-examples/tree/main/dotnet/InlineProgram), I get a bunch of ASP.NET debug messages in my program's output. ``` $ dotnet run successfully initialized stack installing plugins... plugins installed setting up config... config set refreshing stack... Refreshing (dev) View Live: https://app.pulumi.com/mikhailshilkov/auto-cs-inline-azure/dev/updates/1 Resources: Duration: 1s refresh complete updating stack... info: Microsoft.Hosting.Lifetime[0] Now listening on: http://0.0.0.0:58218 info: Microsoft.Hosting.Lifetime[0] Application started. Press Ctrl+C to shut down. 
info: Microsoft.Hosting.Lifetime[0] Hosting environment: Production info: Microsoft.Hosting.Lifetime[0] Content root path: /Users/mikhailshilkov/Work/play/auto-api-azure-cs Updating (dev) View Live: https://app.pulumi.com/mikhailshilkov/auto-cs-inline-azure/dev/updates/2 info: Microsoft.AspNetCore.Hosting.Diagnostics[1] Request starting HTTP/2 POST http://127.0.0.1:58218/pulumirpc.LanguageRuntime/GetRequiredPlugins application/grpc - info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0] Executing endpoint 'gRPC - /pulumirpc.LanguageRuntime/GetRequiredPlugins' info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1] Executed endpoint 'gRPC - /pulumirpc.LanguageRuntime/GetRequiredPlugins' info: Microsoft.AspNetCore.Hosting.Diagnostics[2] Request finished HTTP/2 POST http://127.0.0.1:58218/pulumirpc.LanguageRuntime/GetRequiredPlugins application/grpc - - 200 - application/grpc 96.7268ms info: Microsoft.AspNetCore.Hosting.Diagnostics[1] Request starting HTTP/2 POST http://127.0.0.1:58218/pulumirpc.LanguageRuntime/Run application/grpc - info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0] Executing endpoint 'gRPC - /pulumirpc.LanguageRuntime/Run' + pulumi:pulumi:Stack auto-cs-inline-azure-dev creating info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1] Executed endpoint 'gRPC - /pulumirpc.LanguageRuntime/Run' info: Microsoft.AspNetCore.Hosting.Diagnostics[2] Request finished HTTP/2 POST http://127.0.0.1:58218/pulumirpc.LanguageRuntime/Run application/grpc - - 200 - application/grpc 1005.6498ms + pulumi:pulumi:Stack auto-cs-inline-azure-dev created Outputs: website_url: ""test"" Resources: + 1 created Duration: 2s info: Microsoft.Hosting.Lifetime[0] Application is shutting down... update summary: Create: 1 website url: test ``` Ideally, I wouldn't see it at all.",1.0,"[automation/dotnet] Program output contains ASP.NET - Working with [Automation API Inline Program in C#](https://github.com/pulumi/automation-api-examples/tree/main/dotnet/InlineProgram), I get a bunch of ASP.NET debug messages in my program's output. ``` $ dotnet run successfully initialized stack installing plugins... plugins installed setting up config... config set refreshing stack... Refreshing (dev) View Live: https://app.pulumi.com/mikhailshilkov/auto-cs-inline-azure/dev/updates/1 Resources: Duration: 1s refresh complete updating stack... info: Microsoft.Hosting.Lifetime[0] Now listening on: http://0.0.0.0:58218 info: Microsoft.Hosting.Lifetime[0] Application started. Press Ctrl+C to shut down. 
info: Microsoft.Hosting.Lifetime[0] Hosting environment: Production info: Microsoft.Hosting.Lifetime[0] Content root path: /Users/mikhailshilkov/Work/play/auto-api-azure-cs Updating (dev) View Live: https://app.pulumi.com/mikhailshilkov/auto-cs-inline-azure/dev/updates/2 info: Microsoft.AspNetCore.Hosting.Diagnostics[1] Request starting HTTP/2 POST http://127.0.0.1:58218/pulumirpc.LanguageRuntime/GetRequiredPlugins application/grpc - info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0] Executing endpoint 'gRPC - /pulumirpc.LanguageRuntime/GetRequiredPlugins' info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1] Executed endpoint 'gRPC - /pulumirpc.LanguageRuntime/GetRequiredPlugins' info: Microsoft.AspNetCore.Hosting.Diagnostics[2] Request finished HTTP/2 POST http://127.0.0.1:58218/pulumirpc.LanguageRuntime/GetRequiredPlugins application/grpc - - 200 - application/grpc 96.7268ms info: Microsoft.AspNetCore.Hosting.Diagnostics[1] Request starting HTTP/2 POST http://127.0.0.1:58218/pulumirpc.LanguageRuntime/Run application/grpc - info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0] Executing endpoint 'gRPC - /pulumirpc.LanguageRuntime/Run' + pulumi:pulumi:Stack auto-cs-inline-azure-dev creating info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1] Executed endpoint 'gRPC - /pulumirpc.LanguageRuntime/Run' info: Microsoft.AspNetCore.Hosting.Diagnostics[2] Request finished HTTP/2 POST http://127.0.0.1:58218/pulumirpc.LanguageRuntime/Run application/grpc - - 200 - application/grpc 1005.6498ms + pulumi:pulumi:Stack auto-cs-inline-azure-dev created Outputs: website_url: ""test"" Resources: + 1 created Duration: 2s info: Microsoft.Hosting.Lifetime[0] Application is shutting down... update summary: Create: 1 website url: test ``` Ideally, I wouldn't see it at all.",1, program output contains asp net working with i get a bunch of asp net debug messages in my program s output dotnet run successfully initialized stack installing plugins plugins installed setting up config config set refreshing stack refreshing dev view live resources duration refresh complete updating stack info microsoft hosting lifetime now listening on info microsoft hosting lifetime application started press ctrl c to shut down info microsoft hosting lifetime hosting environment production info microsoft hosting lifetime content root path users mikhailshilkov work play auto api azure cs updating dev view live info microsoft aspnetcore hosting diagnostics request starting http post application grpc info microsoft aspnetcore routing endpointmiddleware executing endpoint grpc pulumirpc languageruntime getrequiredplugins info microsoft aspnetcore routing endpointmiddleware executed endpoint grpc pulumirpc languageruntime getrequiredplugins info microsoft aspnetcore hosting diagnostics request finished http post application grpc application grpc info microsoft aspnetcore hosting diagnostics request starting http post application grpc info microsoft aspnetcore routing endpointmiddleware executing endpoint grpc pulumirpc languageruntime run pulumi pulumi stack auto cs inline azure dev creating info microsoft aspnetcore routing endpointmiddleware executed endpoint grpc pulumirpc languageruntime run info microsoft aspnetcore hosting diagnostics request finished http post application grpc application grpc pulumi pulumi stack auto cs inline azure dev created outputs website url test resources created duration info microsoft hosting lifetime application is shutting down update summary create website url test ideally i wouldn t 
see it at all ,1 855,8410118061.0,IssuesEvent,2018-10-12 09:34:17,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,Wrong 'enter' key press simulation in Firefox,AREA: client BROWSER: Firefox COMPLEXITY: easy SYSTEM: automations TYPE: bug,"Test should hide the first grid column, but instead of this test sorts first column ```js import { Selector } from 'testcafe'; fixture `fix` .page `http://dolzhikov-w10/181/RegressionTestsSite/ASPxGridView/Bugs/T634867.aspx`; test('t', async t => { await t .setTestSpeed(0.2) .rightClick(Selector('a').withText('C1')) .pressKey('down down down down enter') .debug(); }); ``` Please fix status panel UI too ![image](https://user-images.githubusercontent.com/12034551/40230736-5d5827b0-5aa1-11e8-902d-4604e7bfdfef.png) ",1.0,"Wrong 'enter' key press simulation in Firefox - Test should hide the first grid column, but instead of this test sorts first column ```js import { Selector } from 'testcafe'; fixture `fix` .page `http://dolzhikov-w10/181/RegressionTestsSite/ASPxGridView/Bugs/T634867.aspx`; test('t', async t => { await t .setTestSpeed(0.2) .rightClick(Selector('a').withText('C1')) .pressKey('down down down down enter') .debug(); }); ``` Please fix status panel UI too ![image](https://user-images.githubusercontent.com/12034551/40230736-5d5827b0-5aa1-11e8-902d-4604e7bfdfef.png) ",1,wrong enter key press simulation in firefox test should hide the first grid column but instead of this test sorts first column js import selector from testcafe fixture fix page test t async t await t settestspeed rightclick selector a withtext presskey down down down down enter debug please fix status panel ui too ,1 93049,15872989940.0,IssuesEvent,2021-04-09 01:16:38,rishimehta365/YEET.ME.API,https://api.github.com/repos/rishimehta365/YEET.ME.API,opened,CVE-2020-7754 (High) detected in npm-user-validate-1.0.0.tgz,security vulnerability,"## CVE-2020-7754 - High Severity Vulnerability
Vulnerable Library - npm-user-validate-1.0.0.tgz

User validations for npm

Library home page: https://registry.npmjs.org/npm-user-validate/-/npm-user-validate-1.0.0.tgz

Path to dependency file: YEET.ME.API/package.json

Path to vulnerable library: YEET.ME.API/node_modules/npm/node_modules/npm-user-validate/package.json

Dependency Hierarchy:
- npm-6.14.6.tgz (Root Library)
  - :x: **npm-user-validate-1.0.0.tgz** (Vulnerable Library)

Found in base branch: master

Vulnerability Details

This affects the package npm-user-validate before 1.0.1. The regex that validates user emails took exponentially longer to process long input strings beginning with @ characters.
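A rough way to observe the described slowdown is to time the validator on growing runs of '@'. The sketch below assumes the package's documented `email()` export (which returns an `Error` or `null`) and a deliberately installed pre-1.0.1 copy; on 1.0.1 and later the timings should stay roughly flat.

```js
// ReDoS probe: feed the email validator ever-longer runs of '@' and time it.
// On a vulnerable (< 1.0.1) copy the timings grow far faster than linearly.
const { email } = require('npm-user-validate'); // assumed documented export

for (const n of [10000, 20000, 40000, 80000]) {
  const hostile = '@'.repeat(n);                 // input class named in the CVE
  const start = process.hrtime.bigint();
  email(hostile);
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`len=${n}  ${ms.toFixed(1)} ms`);
}
```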

Publish Date: 2020-10-27

URL: CVE-2020-7754

CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7754

Release Date: 2020-07-21

Fix Resolution: 1.0.1
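Since the vulnerable copy arrives as a transitive dependency of the root library, it is worth confirming after the upgrade that the resolved copy really is patched. A quick hedged check, assuming the package's `package.json` is requireable from the project root:

```js
// Verify the resolved npm-user-validate is at least the fixed 1.0.1 release.
const { version } = require('npm-user-validate/package.json');
const [maj, min, pat] = version.split('.').map(Number);
const patched = maj > 1 || (maj === 1 && (min > 0 || pat >= 1));
console.log(`npm-user-validate ${version}: ${patched ? 'patched' : 'still vulnerable'}`);
```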

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2020-7754 (High) detected in npm-user-validate-1.0.0.tgz - ## CVE-2020-7754 - High Severity Vulnerability
Vulnerable Library - npm-user-validate-1.0.0.tgz

User validations for npm

Library home page: https://registry.npmjs.org/npm-user-validate/-/npm-user-validate-1.0.0.tgz

Path to dependency file: YEET.ME.API/package.json

Path to vulnerable library: YEET.ME.API/node_modules/npm/node_modules/npm-user-validate/package.json

Dependency Hierarchy:
- npm-6.14.6.tgz (Root Library)
  - :x: **npm-user-validate-1.0.0.tgz** (Vulnerable Library)

Found in base branch: master

Vulnerability Details

This affects the package npm-user-validate before 1.0.1. The regex that validates user emails took exponentially longer to process long input strings beginning with @ characters.

Publish Date: 2020-10-27

URL: CVE-2020-7754

CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7754

Release Date: 2020-07-21

Fix Resolution: 1.0.1

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in npm user validate tgz cve high severity vulnerability vulnerable library npm user validate tgz user validations for npm library home page a href path to dependency file yeet me api package json path to vulnerable library yeet me api node modules npm node modules npm user validate package json dependency hierarchy npm tgz root library x npm user validate tgz vulnerable library found in base branch master vulnerability details this affects the package npm user validate before the regex that validates user emails took exponentially longer to process long input strings beginning with characters publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0 5666,20680191277.0,IssuesEvent,2022-03-10 13:12:52,gchq/gaffer-docker,https://api.github.com/repos/gchq/gaffer-docker,opened,Add dependency version management,enhancement automation dependencies,"The version numbers of the helm charts are managed automatically with a [script](https://github.com/gchq/gaffer-docker/blob/41f9f516300666ace81be02bb94ea93ed045d3ee/cd/update_app_version.sh): https://github.com/gchq/gaffer-docker/blob/41f9f516300666ace81be02bb94ea93ed045d3ee/kubernetes/gaffer-road-traffic/Chart.yaml#L19 However, the versions of all the dependencies across the Dockerfiles, docker-compose .env files, and helm charts, are not managed automatically and must be updated manually before a gaffer-docker release: https://github.com/gchq/gaffer-docker/blob/377655a59d9ad603cdbd86d1d6b4e89d8fa67a3e/docker/gaffer/Dockerfile#L18-L24 https://github.com/gchq/gaffer-docker/blob/377655a59d9ad603cdbd86d1d6b4e89d8fa67a3e/docker/gaffer/.env#L1-L4 https://github.com/gchq/gaffer-docker/blob/377655a59d9ad603cdbd86d1d6b4e89d8fa67a3e/kubernetes/gaffer/values.yaml#L56-L58 Not only is this tedious for releases, requiring manual [update commits](https://github.com/gchq/gaffer-docker/commit/7ff2992172fc3dfa14ca2ce55d71c82e1d80d18b), but it is also error-prone as this manual process could be done wrong, or different versions could be inconsistent with each other. Therefore, a way of managing these versions should be added so it can be done automatically at time of release, and safely. 
",1.0,"Add dependency version management - The version numbers of the helm charts are managed automatically with a [script](https://github.com/gchq/gaffer-docker/blob/41f9f516300666ace81be02bb94ea93ed045d3ee/cd/update_app_version.sh): https://github.com/gchq/gaffer-docker/blob/41f9f516300666ace81be02bb94ea93ed045d3ee/kubernetes/gaffer-road-traffic/Chart.yaml#L19 However, the versions of all the dependencies across the Dockerfiles, docker-compose .env files, and helm charts, are not managed automatically and must be updated manually before a gaffer-docker release: https://github.com/gchq/gaffer-docker/blob/377655a59d9ad603cdbd86d1d6b4e89d8fa67a3e/docker/gaffer/Dockerfile#L18-L24 https://github.com/gchq/gaffer-docker/blob/377655a59d9ad603cdbd86d1d6b4e89d8fa67a3e/docker/gaffer/.env#L1-L4 https://github.com/gchq/gaffer-docker/blob/377655a59d9ad603cdbd86d1d6b4e89d8fa67a3e/kubernetes/gaffer/values.yaml#L56-L58 Not only is this tedious for releases, requiring manual [update commits](https://github.com/gchq/gaffer-docker/commit/7ff2992172fc3dfa14ca2ce55d71c82e1d80d18b), but it is also error-prone as this manual process could be done wrong, or different versions could be inconsistent with each other. Therefore, a way of managing these versions should be added so it can be done automatically at time of release, and safely. ",1,add dependency version management the version numbers of the helm charts are managed automatically with a however the versions of all the dependencies across the dockerfiles docker compose env files and helm charts are not managed automatically and must be updated manually before a gaffer docker release not only is this tedious for releases requiring manual but it is also error prone as this manual process could be done wrong or different versions could be inconsistent with each other therefore a way of managing these versions should be added so it can be done automatically at time of release and safely ,1 178076,14659476900.0,IssuesEvent,2020-12-28 20:36:04,darigovresearch/Universal-Foreign-Language-Flashcards,https://api.github.com/repos/darigovresearch/Universal-Foreign-Language-Flashcards,opened,[es-it]: Missing Translation Chapter 10,bug documentation enhancement good first issue help wanted,"The below lines have not yet been translated in the Chapter - [ ] fashion - [ ] fashion show - [ ] brand - [ ] hat - [ ] cap - [ ] clothes - [ ] tracksuit - [ ] underwear - [ ] bra - [ ] briefs - [ ] panties - [ ] boxers - [ ] man's shirt - [ ] blouse - [ ] polo shirt - [ ] T-shirt - [ ] sweater - [ ] (a pair of) pants - [ ] jeans - [ ] (shorter) shorts - [ ] (longer) shorts - [ ] belt - [ ] socks - [ ] shoes - [ ] basketball shoes - [ ] boots - [ ] sandals - [ ] tennis shoes - [ ] flip-flops - [ ] high-heels - [ ] dress - [ ] skirt - [ ] woman's suit - [ ] man's suit - [ ] tie - [ ] sunglasses - [ ] swimsuit - [ ] short jacket, leather jacket - [ ] jacket - [ ] coat - [ ] parka - [ ] raincoat - [ ] scarf - [ ] gloves - [ ] pastimes - [ ] running / to go running - [ ] jogging / to go jogging - [ ] roller blading / to go roller blading - [ ] cycling / to go cycling - [ ] rock-climbing / to go rock-climbing - [ ] weight training / to train with weights - [ ] swimming / to go swimming - [ ] canoeing / to go canoeing - [ ] kayaking / to go kayaking - [ ] painting / to paint (art) - [ ] chess / to play chess - [ ] yoga - [ ] surfing - [ ] horseback riding - [ ] fitness and health - [ ] allergy - [ ] the flu - [ ] migraine headache - [ ] a cold - [ ] medicine - [ ] vitamins - [ ] sleeping 
pill - [ ] the body - [ ] throat - [ ] arm - [ ] elbow - [ ] wrist - [ ] hand - [ ] finger - [ ] chest - [ ] stomach - [ ] back - [ ] rear, behind - [ ] buttocks - [ ] leg - [ ] knee / knees - [ ] ankle - [ ] foot - [ ] toes - [ ] adjectives - [ ] in fashion - [ ] out of style - [ ] casual - [ ] dressy - [ ] disciplined - [ ] athletic - [ ] stressed - [ ] sick - [ ] to get dressed - [ ] to put (on) - [ ] to wear - [ ] to fit well/poorly - [ ] to gain weight - [ ] to be on a diet - [ ] to exercise - [ ] to lose weight - [ ] to be in shape - [ ] to get sick - [ ] to cough - [ ] to hurt (body part) - [ ] (to have a headache, a backache, sore feet, etc) - [ ] to rest - [ ] to relax - [ ] to be in good health - [ ] interrogative expressions - [ ] What is going on? - [ ] What interests you? - [ ] What bothers you? - [ ] What scares you? - [ ] What pleases you? - [ ] What amuses you? - [ ] impersonal questions - [ ] It is necessary (to) - [ ] It is advisable (to), It is better (to) - [ ] It is important (to) - [ ] It is necessary (to) - [ ] It is essential (to) - [ ] It is essential (to) - [ ] It is fun (to) ... - [ ] It is difficult (to) ... - [ ] It is easy (to) ... - [ ] It is tiring/annoying (to) ...",1.0,"[es-it]: Missing Translation Chapter 10 - The below lines have not yet been translated in the Chapter - [ ] fashion - [ ] fashion show - [ ] brand - [ ] hat - [ ] cap - [ ] clothes - [ ] tracksuit - [ ] underwear - [ ] bra - [ ] briefs - [ ] panties - [ ] boxers - [ ] man's shirt - [ ] blouse - [ ] polo shirt - [ ] T-shirt - [ ] sweater - [ ] (a pair of) pants - [ ] jeans - [ ] (shorter) shorts - [ ] (longer) shorts - [ ] belt - [ ] socks - [ ] shoes - [ ] basketball shoes - [ ] boots - [ ] sandals - [ ] tennis shoes - [ ] flip-flops - [ ] high-heels - [ ] dress - [ ] skirt - [ ] woman's suit - [ ] man's suit - [ ] tie - [ ] sunglasses - [ ] swimsuit - [ ] short jacket, leather jacket - [ ] jacket - [ ] coat - [ ] parka - [ ] raincoat - [ ] scarf - [ ] gloves - [ ] pastimes - [ ] running / to go running - [ ] jogging / to go jogging - [ ] roller blading / to go roller blading - [ ] cycling / to go cycling - [ ] rock-climbing / to go rock-climbing - [ ] weight training / to train with weights - [ ] swimming / to go swimming - [ ] canoeing / to go canoeing - [ ] kayaking / to go kayaking - [ ] painting / to paint (art) - [ ] chess / to play chess - [ ] yoga - [ ] surfing - [ ] horseback riding - [ ] fitness and health - [ ] allergy - [ ] the flu - [ ] migraine headache - [ ] a cold - [ ] medicine - [ ] vitamins - [ ] sleeping pill - [ ] the body - [ ] throat - [ ] arm - [ ] elbow - [ ] wrist - [ ] hand - [ ] finger - [ ] chest - [ ] stomach - [ ] back - [ ] rear, behind - [ ] buttocks - [ ] leg - [ ] knee / knees - [ ] ankle - [ ] foot - [ ] toes - [ ] adjectives - [ ] in fashion - [ ] out of style - [ ] casual - [ ] dressy - [ ] disciplined - [ ] athletic - [ ] stressed - [ ] sick - [ ] to get dressed - [ ] to put (on) - [ ] to wear - [ ] to fit well/poorly - [ ] to gain weight - [ ] to be on a diet - [ ] to exercise - [ ] to lose weight - [ ] to be in shape - [ ] to get sick - [ ] to cough - [ ] to hurt (body part) - [ ] (to have a headache, a backache, sore feet, etc) - [ ] to rest - [ ] to relax - [ ] to be in good health - [ ] interrogative expressions - [ ] What is going on? - [ ] What interests you? - [ ] What bothers you? - [ ] What scares you? - [ ] What pleases you? - [ ] What amuses you? 
- [ ] impersonal questions - [ ] It is necessary (to) - [ ] It is advisable (to), It is better (to) - [ ] It is important (to) - [ ] It is necessary (to) - [ ] It is essential (to) - [ ] It is essential (to) - [ ] It is fun (to) ... - [ ] It is difficult (to) ... - [ ] It is easy (to) ... - [ ] It is tiring/annoying (to) ...",0, missing translation chapter the below lines have not yet been translated in the chapter fashion fashion show brand hat cap clothes tracksuit underwear bra briefs panties boxers man s shirt blouse polo shirt t shirt sweater a pair of pants jeans shorter shorts longer shorts belt socks shoes basketball shoes boots sandals tennis shoes flip flops high heels dress skirt woman s suit man s suit tie sunglasses swimsuit short jacket leather jacket jacket coat parka raincoat scarf gloves pastimes running to go running jogging to go jogging roller blading to go roller blading cycling to go cycling rock climbing to go rock climbing weight training to train with weights swimming to go swimming canoeing to go canoeing kayaking to go kayaking painting to paint art chess to play chess yoga surfing horseback riding fitness and health allergy the flu migraine headache a cold medicine vitamins sleeping pill the body throat arm elbow wrist hand finger chest stomach back rear behind buttocks leg knee knees ankle foot toes adjectives in fashion out of style casual dressy disciplined athletic stressed sick to get dressed to put on to wear to fit well poorly to gain weight to be on a diet to exercise to lose weight to be in shape to get sick to cough to hurt body part to have a headache a backache sore feet etc to rest to relax to be in good health interrogative expressions what is going on what interests you what bothers you what scares you what pleases you what amuses you impersonal questions it is necessary to it is advisable to it is better to it is important to it is necessary to it is essential to it is essential to it is fun to it is difficult to it is easy to it is tiring annoying to ,0 2585,12311094912.0,IssuesEvent,2020-05-12 11:48:47,wazuh/wazuh-qa,https://api.github.com/repos/wazuh/wazuh-qa,closed,Syscheck automated tests: Test file_limit option,automation core/fim,"| Related issue | |----------------------| | wazuh/wazuh#4687 | ## Description Add tests for new option: `file_limit`. It should work on both manager and agent. The test must check: * Only positive values are allowed ``` (1235): Invalid value for element 'file_limit': -10. ``` * Disable `file_limit` ``` (6343): No limit set to maximum number of files to be monitored ``` * `file_limit` set to default: 100000 ``` (6342): Maximum number of files to be monitored: '100000' ``` * `file_limit` set to: * 1 * 10 * 100 * 1000 ``` (6342): Maximum number of files to be monitored: '1000' ``` * Alert for 80% of capacity (_scheduled_) ``` (6039): Sending DB 80% full alert. ``` * Alert for 90% of capacity (_scheduled_) ``` (6039): Sending DB 90% full alert. ``` * Alert for full database ``` (6041): Sending DB 100% full alert. ``` * Alert for returning to <80% of capacity (_scheduled_) ``` (6038): Sending DB back to normal alert. ``` ## Platforms * Linux * macOS * Windows",1.0,"Syscheck automated tests: Test file_limit option - | Related issue | |----------------------| | wazuh/wazuh#4687 | ## Description Add tests for new option: `file_limit`. It should work on both manager and agent. The test must check: * Only positive values are allowed ``` (1235): Invalid value for element 'file_limit': -10. 
``` * Disable `file_limit` ``` (6343): No limit set to maximum number of files to be monitored ``` * `file_limit` set to default: 100000 ``` (6342): Maximum number of files to be monitored: '100000' ``` * `file_limit` set to: * 1 * 10 * 100 * 1000 ``` (6342): Maximum number of files to be monitored: '1000' ``` * Alert for 80% of capacity (_scheduled_) ``` (6039): Sending DB 80% full alert. ``` * Alert for 90% of capacity (_scheduled_) ``` (6039): Sending DB 90% full alert. ``` * Alert for full database ``` (6041): Sending DB 100% full alert. ``` * Alert for returning to <80% of capacity (_scheduled_) ``` (6038): Sending DB back to normal alert. ``` ## Platforms * Linux * macOS * Windows",1,syscheck automated tests test file limit option related issue wazuh wazuh description add tests for new option file limit it should work on both manager and agent the test must check only positive values are allowed invalid value for element file limit disable file limit no limit set to maximum number of files to be monitored file limit set to default maximum number of files to be monitored file limit set to maximum number of files to be monitored alert for of capacity scheduled sending db full alert alert for of capacity scheduled sending db full alert alert for full database sending db full alert alert for returning to of capacity scheduled sending db back to normal alert platforms linux macos windows,1 3950,15021538178.0,IssuesEvent,2021-02-01 15:55:28,coolOrangeLabs/powerGateTemplate,https://api.github.com/repos/coolOrangeLabs/powerGateTemplate,closed,Initial Checks: Server Test Environment,Automation,"The server is required for the coolOrange powerGateServer and coolOrange powerJobs therefore all of the below steps must be completed: + **The reseller or IT** of the customer prepares this environment by executing the steps below. 
+ coolOrange will verify this environment as soon as all points are checked by the Reseller/Customer ### Server Checklist - [ ] Operating System: see [powerGate Server Installation Requirements](https://www.coolorange.com/wiki/doku.php?id=powergateserver:installation#requirements) - [ ] Hardware: Identical to the Vault Server Hardware recommendations - [ ] dedicated Autodesk Vault Test Database (dedicated SQL Server __not__ needed) - [ ] .Net Framework 4.7 or higher - [ ] Autodesk Vault Workgroup/Professional Client (in order to guarantee a working environment, the client can be uninstalled at the end of the project) - [ ] Create Windows User with local Administrator rights and document it in the Github wiki ""Test environment"" - if Administrator user is not possible minimum requirements are - [ ] Read/Write Permission for `C:\ProgramData\coolOrange\` - [ ] Read/Write Permission for `C:\ProgramData\Autodesk\` - [ ] Install Setups - [ ] Permission to install/execute Windows Services (Background: [powerGate Documentation](https://www.coolorange.com/wiki/doku.php?id=powergateserver:installation#windows_permissions)) - [ ] TeamViewer with [VPN feature](https://community.teamviewer.com/t5/Knowledge-Base/About-TeamViewer-VPN/ta-p/6354#toc-hId-707135802) - [ ] If TeamViewer is not possible the remote connection **must** support easy copy/paste from local to remote machine and vice versa - [ ] Configure following Firewall rules - [ ] Internal network TCP for 8080, required for powerGateServer - [ ] External network TCP for 4024, required for remote debugging tools for VS 2019: [read here more details](https://docs.microsoft.com/en-us/visualstudio/debugger/remote-debugger-port-assignments?view=vs-2019) - [ ] Install coolOrange powerGateServer: [Download](http://download.coolorange.com/) and [Installation guide](https://www.coolorange.com/wiki/doku.php?id=powergateserver:installation) - [ ] Activate the test license, you will find it the Github Wiki under ""Licenses"" - [ ] Install coolOrange powerJobs: [Download](http://download.coolorange.com/) and [Installation guide](https://www.coolorange.com/wiki/doku.php?id=powerjobs:installation) - [ ] Activate the test license, you will find it the Github Wiki under ""Licenses"" - [ ] Install [Pester](https://github.com/pester/Pester/wiki) v4 or higher by executing command `Install-Module Pester -Force -SkipPublisherCheck` in a PowerShell session started by Administrator - [ ] Install Telerik Fiddler, [Download here](https://www.telerik.com/download/fiddler) - [ ] Install Visual Studio Code with the below settings: [Download here](https://code.visualstudio.com/Download) - [ ] Select in the Installer `Add ""Open with Code"" action Windows Explorer file context menu` - [ ] Select in the Installer `Add ""Open with Code"" action Windows Explorer directory context menu` - [ ] Install Visual Studio Community with the below settings: [Download here](https://visualstudio.microsoft.com/de/vs/community/) - [ ] In the Installer select the package '.Net Desktop development' - [ ] Install Visual Studio 2019 remote debugging: [Download here](https://visualstudio.microsoft.com/de/downloads/) - [ ] Install git commant tools: [Download here](https://git-scm.com/downloads)",1.0,"Initial Checks: Server Test Environment - The server is required for the coolOrange powerGateServer and coolOrange powerJobs therefore all of the below steps must be completed: + **The reseller or IT** of the customer prepares this environment by executing the steps below. 
+ coolOrange will verify this environment as soon as all points are checked by the Reseller/Customer ### Server Checklist - [ ] Operating System: see [powerGate Server Installation Requirements](https://www.coolorange.com/wiki/doku.php?id=powergateserver:installation#requirements) - [ ] Hardware: Identical to the Vault Server Hardware recommendations - [ ] dedicated Autodesk Vault Test Database (dedicated SQL Server __not__ needed) - [ ] .Net Framework 4.7 or higher - [ ] Autodesk Vault Workgroup/Professional Client (in order to guarantee a working environment, the client can be uninstalled at the end of the project) - [ ] Create Windows User with local Administrator rights and document it in the Github wiki ""Test environment"" - if Administrator user is not possible minimum requirements are - [ ] Read/Write Permission for `C:\ProgramData\coolOrange\` - [ ] Read/Write Permission for `C:\ProgramData\Autodesk\` - [ ] Install Setups - [ ] Permission to install/execute Windows Services (Background: [powerGate Documentation](https://www.coolorange.com/wiki/doku.php?id=powergateserver:installation#windows_permissions)) - [ ] TeamViewer with [VPN feature](https://community.teamviewer.com/t5/Knowledge-Base/About-TeamViewer-VPN/ta-p/6354#toc-hId-707135802) - [ ] If TeamViewer is not possible the remote connection **must** support easy copy/paste from local to remote machine and vice versa - [ ] Configure following Firewall rules - [ ] Internal network TCP for 8080, required for powerGateServer - [ ] External network TCP for 4024, required for remote debugging tools for VS 2019: [read here more details](https://docs.microsoft.com/en-us/visualstudio/debugger/remote-debugger-port-assignments?view=vs-2019) - [ ] Install coolOrange powerGateServer: [Download](http://download.coolorange.com/) and [Installation guide](https://www.coolorange.com/wiki/doku.php?id=powergateserver:installation) - [ ] Activate the test license, you will find it the Github Wiki under ""Licenses"" - [ ] Install coolOrange powerJobs: [Download](http://download.coolorange.com/) and [Installation guide](https://www.coolorange.com/wiki/doku.php?id=powerjobs:installation) - [ ] Activate the test license, you will find it the Github Wiki under ""Licenses"" - [ ] Install [Pester](https://github.com/pester/Pester/wiki) v4 or higher by executing command `Install-Module Pester -Force -SkipPublisherCheck` in a PowerShell session started by Administrator - [ ] Install Telerik Fiddler, [Download here](https://www.telerik.com/download/fiddler) - [ ] Install Visual Studio Code with the below settings: [Download here](https://code.visualstudio.com/Download) - [ ] Select in the Installer `Add ""Open with Code"" action Windows Explorer file context menu` - [ ] Select in the Installer `Add ""Open with Code"" action Windows Explorer directory context menu` - [ ] Install Visual Studio Community with the below settings: [Download here](https://visualstudio.microsoft.com/de/vs/community/) - [ ] In the Installer select the package '.Net Desktop development' - [ ] Install Visual Studio 2019 remote debugging: [Download here](https://visualstudio.microsoft.com/de/downloads/) - [ ] Install git commant tools: [Download here](https://git-scm.com/downloads)",1,initial checks server test environment the server is required for the coolorange powergateserver and coolorange powerjobs therefore all of the below steps must be completed the reseller or it of the customer prepares this environment by executing the steps below coolorange will verify this environment 
as soon as all points are checked by the reseller customer server checklist operating system see hardware identical to the vault server hardware recommendations dedicated autodesk vault test database dedicated sql server not needed net framework or higher autodesk vault workgroup professional client in order to guarantee a working environment the client can be uninstalled at the end of the project create windows user with local administrator rights and document it in the github wiki test environment if administrator user is not possible minimum requirements are read write permission for c programdata coolorange read write permission for c programdata autodesk install setups permission to install execute windows services background teamviewer with if teamviewer is not possible the remote connection must support easy copy paste from local to remote machine and vice versa configure following firewall rules internal network tcp for required for powergateserver external network tcp for required for remote debugging tools for vs install coolorange powergateserver and activate the test license you will find it the github wiki under licenses install coolorange powerjobs and activate the test license you will find it the github wiki under licenses install or higher by executing command install module pester force skippublishercheck in a powershell session started by administrator install telerik fiddler install visual studio code with the below settings select in the installer add open with code action windows explorer file context menu select in the installer add open with code action windows explorer directory context menu install visual studio community with the below settings in the installer select the package net desktop development install visual studio remote debugging install git command tools ,1 1919,11097189215.0,IssuesEvent,2019-12-16 12:51:02,wazuh/wazuh-qa,https://api.github.com/repos/wazuh/wazuh-qa,opened,FIM v2.0: Analysisd Integration tests: Error messages,automation component/fim,"## Description This issue covers the integration test for badly formatted message handling by analysisd. We will treat analysisd as a black box that receives integrity events through its input Unix socket, checking that the correct output is forwarded to the desired socket (simulating Wazuh DB). Twelve use cases have been defined to check that the FIM event messages are handled properly. These cases should be implemented in the same test. - [ ] No `timestamp` in a FIM scan message. - [ ] No `type` in a FIM message. - [ ] Empty `type` in an event message. - [ ] Incorrect `type` in an event message. - [ ] The JSON in a DB sync message cannot be parsed. - [ ] The item `component` cannot be parsed as a string in a DB sync message. - [ ] The item `type` cannot be parsed as a string in a DB sync message. - [ ] The item `type` is unknown in a DB sync message. - [ ] No `data` field in a DB sync message. 
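To make the black-box setup concrete, here is a minimal sketch of one such case in Python. The test name, the one-second wait, and the datagram socket type are assumptions for illustration; the real suite's socket framing and log-watching helpers live in the wazuh-qa framework.

```python
import socket
import time

ANALYSISD_SOCKET = "/var/ossec/queue/ossec/queue"  # analysisd input socket
OSSEC_LOG = "/var/ossec/logs/ossec.log"            # expected output location

def send_raw_event(raw_event: bytes) -> None:
    # Write one event to the analysisd queue socket (assumed SOCK_DGRAM).
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        sock.connect(ANALYSISD_SOCKET)
        sock.send(raw_event)
    finally:
        sock.close()

def test_scan_message_without_timestamp():
    # Case 1: FIM scan message with no "timestamp" member.
    event = b'8:[001] (vm-ubuntu-agent) 192.168.57.2->syscheck:{"type":"scan_end","data":{}}'
    send_raw_event(event)
    time.sleep(1)  # crude wait; a polling log watcher would be more robust
    with open(OSSEC_LOG) as log:
        assert 'No such member "timestamp" in FIM scan info event.' in log.read()
```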
**Input location** The input location for all checks is the analysisd socket: `/var/ossec/queue/ossec/queue` **Output location** The output location for all checks is `ossec.log` file: `/var/ossec/logs/ossec.log` ## No `timestamp` in a FIM scan message **Input message**: `8:[001] (vm-ubuntu-agent) 192.168.57.2->syscheck:{""type"":""scan_end"",""data"":{}}` **Output message**: `No such member \""timestamp\"" in FIM scan info event.` ## No `type` in a FIM message **Input message**: `8:[001] (vm-ubuntu-agent) 192.168.57.2->syscheck:{""data"":{""timestamp"":1575442712}}` **Output message**: `Invalid FIM event` ## Empty `type` in an event message **Input message**: `8:[001] (vm-ubuntu-agent) 192.168.57.2->syscheck:{""type"":""event"",""data"":{""path"":""/home/test/file"",""mode"":""real-time"",""type"":""NULL"",""timestamp"":1575421671,""attributes"":{""type"":""file"",""size"":5,""perm"":""rw-r--r--"",""uid"":""0"",""gid"":""0"",""user_name"":""root"",""group_name"":""root"",""inode"":125,""mtime"":1575421671,""hash_md5"":""7be8ec9774fc128d067782134fbc37eb"",""hash_sha1"":""fb2eae5ad4a1116a536c16147e2cd7ae2c2cceb7"",""hash_sha256"":""ab7d3920a57dca347cc8a62ad2c6c61ff8d0aa6d8e974e6a4803686532e980b7"",""checksum"":""00eaef78d06924374cb291957a1f63e224d76320""},""changed_attributes"":[""size"",""mtime"",""md5"",""sha1"",""sha256""],""old_attributes"":{""type"":""file"",""size"":18,""perm"":""rw-r--r--"",""uid"":""0"",""gid"":""0"",""user_name"":""root"",""group_name"":""root"",""inode"":125,""mtime"":1575416596,""hash_md5"":""a3ee12884966cb2512805d2500361913"",""hash_sha1"":""e6e8a61093715af1e4f2a3c0618ce014f0d94fde"",""hash_sha256"":""79abb1429c39589bb7a923abe0fe076268f38d3bffb40909490b530f109de85a"",""checksum"":""a02381378af3739e81bea813c1ff6e3d0027498d""}}} ` **Output message**: `No member 'type' in Syscheck JSON payload` ## Incorrect event `type` in an event message **Input message**: `8:[001] (vm-ubuntu-agent) 192.168.57.2->syscheck:{""type"":""event"",""data"":{""path"":""/home/test/file"",""mode"":""real-time"",""type"":""other"",""timestamp"":1575421671,""attributes"":{""type"":""file"",""size"":5,""perm"":""rw-r--r--"",""uid"":""0"",""gid"":""0"",""user_name"":""root"",""group_name"":""root"",""inode"":125,""mtime"":1575421671,""hash_md5"":""7be8ec9774fc128d067782134fbc37eb"",""hash_sha1"":""fb2eae5ad4a1116a536c16147e2cd7ae2c2cceb7"",""hash_sha256"":""ab7d3920a57dca347cc8a62ad2c6c61ff8d0aa6d8e974e6a4803686532e980b7"",""checksum"":""00eaef78d06924374cb291957a1f63e224d76320""},""changed_attributes"":[""size"",""mtime"",""md5"",""sha1"",""sha256""],""old_attributes"":{""type"":""file"",""size"":18,""perm"":""rw-r--r--"",""uid"":""0"",""gid"":""0"",""user_name"":""root"",""group_name"":""root"",""inode"":125,""mtime"":1575416596,""hash_md5"":""a3ee12884966cb2512805d2500361913"",""hash_sha1"":""e6e8a61093715af1e4f2a3c0618ce014f0d94fde"",""hash_sha256"":""79abb1429c39589bb7a923abe0fe076268f38d3bffb40909490b530f109de85a"",""checksum"":""a02381378af3739e81bea813c1ff6e3d0027498d""}}} ` **Output message**: `Invalid 'type' value 'incorrect_value' in JSON payload.` ## The JSON in a DB sync message cannot be parsed **Input message**: `5:[001] (vm-test-agent) 192.168.57.2->syscheck:{{""component"":""syscheck"",""type"":""integrity_check_global"",""data"":{""id"": 1575421330,""begin"":""/home/test/file"",""end"":""/home/test/file2"",""checksum"":""6bdaf5656029544cf0d08e7c4f4feceb0c45853c""}} ` **Output message**: `dbsync: Cannot parse JSON: %s"", lf->log` ## The item `component` cannot be parsed as a 
string in a DB sync message **Input message**: `5:[001] (vm-test-agent) 192.168.57.2->syscheck:{""type"":""integrity_check_global"",""data"":{""id"": 1575421330,""begin"":""/home/test/file"",""end"":""/home/test/file2"",""checksum"":""6bdaf5656029544cf0d08e7c4f4feceb0c45853c""}}` **Output message**: `dbsync: Corrupt message: cannot get component member.` ## The item `type` cannot be parsed as a string in a DB sync message **Input message**: `5:[001] (vm-test-agent) 192.168.57.2->syscheck:{""component"":""syscheck"",""data"":{""id"": 1575421330,""begin"":""/home/test/file"",""end"":""/home/test/file2"",""checksum"":""6bdaf5656029544cf0d08e7c4f4feceb0c45853c""}} ` **Output message**: `dbsync: Corrupt message: cannot get type member.` ## No `data` field in a DB sync message **Input message**: `5:[001] (vm-test-agent) 192.168.57.2->syscheck:{""component"":""syscheck"",""type"":""integrity_check_global"","""":{""id"": 1575421330,""begin"":""/home/test/file"",""end"":""/home/test/file2"",""checksum"":""6bdaf5656029544cf0d08e7c4f4feceb0c45853c""}} ` **Output message**: `dbsync: Corrupt message: cannot get data member.`",1.0,"FIM v2.0: Analysisd Integration tests: Error messages",1,fim analysisd integration tests error messages ,1 104666,11418274188.0,IssuesEvent,2020-02-03 03:51:20,consento-org/consento-website,https://api.github.com/repos/consento-org/consento-website,closed,Add H2020 funding note,documentation,"Add the following sentence to the project website: > This project has received funding from the European Union's Horizon 2020 > research and innovation programme within the framework of the LEDGER > Project funded under grant agreement No825268. Together with the EU emblem and LEDGER logo: ![Screenshot from 2020-01-31 12-25-41](https://user-images.githubusercontent.com/227762/73510231-d4ec7880-4424-11ea-836d-85d5c9167f98.png) (this is just a screenshot, use original logos with appropriate quality)",1.0,"Add H2020 funding note",0,add funding note ,0 10155,31813653857.0,IssuesEvent,2023-09-13 18:43:16,inbucket/inbucket,https://api.github.com/repos/inbucket/inbucket,closed,goreleaser: archives.rlcp should not be used anymore,automation,"Printed by goreleaser 1.20 > DEPRECATED: archives.rlcp should not be used anymore, check https://goreleaser.com/deprecations#archivesrlcp for more info",1.0,"goreleaser: archives.rlcp should not be used anymore",1,goreleaser archives rlcp should not be used anymore,1 15873,20049294003.0,IssuesEvent,2022-02-03 03:00:23,joelmiller/InfectiousMath,https://api.github.com/repos/joelmiller/InfectiousMath,opened,Branching processes and epidemic probability,Dynamical Systems Branching Process,"If we start with an offspring distribution we can calculate the probability of an epidemic. This is a ""Galton-Watson"" process. 
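For illustration, a small sketch (not from the repository) of the underlying computation: iterating the offspring distribution's probability generating function gives the extinction probability by generation, q_g = G(q_{g-1}), which is exactly the fixed-point iteration a cobweb diagram visualises. The offspring distribution below is a made-up example.

```python
# Hypothetical offspring distribution: P(0 children)=0.2, P(1)=0.3, P(2)=0.5.
p = [0.2, 0.3, 0.5]

def G(x: float) -> float:
    """Probability generating function G(x) = sum_k p_k * x**k."""
    return sum(pk * x**k for k, pk in enumerate(p))

def extinction_by_generation(g: int) -> float:
    """q_g = P(lineage extinct by generation g): q_0 = 0, q_g = G(q_{g-1})."""
    q = 0.0
    for _ in range(g):
        q = G(q)
    return q

for g in (1, 2, 5, 20):
    print(g, round(extinction_by_generation(g), 6))
# q_g climbs toward the smallest fixed point of G (here q = 0.4), so the
# epidemic probability for this distribution is 1 - q = 0.6.
```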
The probability of extinction by generation `g` can be expressed through a cobweb diagram.",1.0,"Branching processes and epidemic probability",0,branching processes and epidemic probability ,0 340853,24674630744.0,IssuesEvent,2022-10-18 16:00:46,esemi/prague_base,https://api.github.com/repos/esemi/prague_base,closed,describe vision,documentation,"Problem --- Searching for and tracking cheap tickets just to ""fly somewhere cheap for the weekend to grab a pizza"" takes a lot of time, and you have to remember to keep doing it. - there are many low-cost carriers - and even more destinations - the big aggregators do not handle low-cost carriers well (Google Flights, Momondo, Aviasales) - small aggregators are rare, literally just one (kek), and it does not cover all airlines - and they lack the filters for searches like ""depart after the working day or overnight into Saturday, and be back before the Monday stand-up"" - cheap-ticket channels cover all of Europe, while we only want departures from Prague - FlixBus and trains are not the same comfort, planes only - layovers and connections eat too much energy, better without them - dates are not essential, we want spontaneity like ""oh, next weekend we could hop over to Dublin for a beer, let's go"" Solver --- A Telegram channel that receives, at the start of each week, a list of low-cost flights from Prague for the coming weekend. All tickets are pre-filtered by departure time from Prague (after 16:00 on Friday and before 09:00 on Saturday) and by return time to Prague (from 22:00 on Sunday until 12:00 on Monday). All tickets are grouped by destination city. All cities are sorted by round-trip price Future --- Offer paid subscriptions to individual searches Group destinations by tags Negotiate cooperation with the low-cost carriers Scrape the carriers' own sites instead of the aggregators ",1.0,"describe vision",0,describe vision ,0 355372,25175915448.0,IssuesEvent,2022-11-11 09:14:52,loyhongshenggg/pe,https://api.github.com/repos/loyhongshenggg/pe,opened,Unable to scroll despite the existence of scrollbar in UG,severity.Low type.DocumentationBug,"![image.png](https://raw.githubusercontent.com/loyhongshenggg/pe/main/files/1dbe6f99-f2f4-429f-99e6-d7a478e6be08.png) Refer to this section of the DG, specifically example input. Try scrolling the scroll bar to get the example input. 
",0,unable to scroll despite the existance of scrollbar in ug refer to this section of the dg specifically example input try scrolling the scroll bar to get the example input ,0 9500,29086301843.0,IssuesEvent,2023-05-16 00:22:19,AzureAD/microsoft-authentication-library-for-objc,https://api.github.com/repos/AzureAD/microsoft-authentication-library-for-objc,closed,Automation tests failure,duplicate automation failure,"@AzureAD/appleidentity Automation failed for [AzureAD/microsoft-authentication-library-for-objc](https://github.com/AzureAD/microsoft-authentication-library-for-objc) ran against commit : Merge pull request #1712 from AzureAD/fidelianawar/ciam_authority [0a6d75cb490b0b275fbec7f8ca0df993674bac8d] Pipeline URL : [https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1097238&view=logs](https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1097238&view=logs)",1.0,"Automation tests failure - @AzureAD/appleidentity Automation failed for [AzureAD/microsoft-authentication-library-for-objc](https://github.com/AzureAD/microsoft-authentication-library-for-objc) ran against commit : Merge pull request #1712 from AzureAD/fidelianawar/ciam_authority [0a6d75cb490b0b275fbec7f8ca0df993674bac8d] Pipeline URL : [https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1097238&view=logs](https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1097238&view=logs)",1,automation tests failure azuread appleidentity automation failed for ran against commit merge pull request from azuread fidelianawar ciam authority pipeline url ,1 26332,4216424277.0,IssuesEvent,2016-06-30 09:13:36,CloudOpting/cloudopting-manager,https://api.github.com/repos/CloudOpting/cloudopting-manager,closed,Create a drawing for ilustrate the interficies between cloudopting modules,help wanted in testing,Create a drawing for ilustrate the interficies between cloudopting modules,1.0,Create a drawing for ilustrate the interficies between cloudopting modules - Create a drawing for ilustrate the interficies between cloudopting modules,0,create a drawing for ilustrate the interficies between cloudopting modules create a drawing for ilustrate the interficies between cloudopting modules,0 4915,18023523838.0,IssuesEvent,2021-09-16 23:18:12,rancher/qa-tasks,https://api.github.com/repos/rancher/qa-tasks,closed,v1 (Dashboard) Cluster Provisioning,[zube]: Feature Automation,Automate test cases around cluster provisioning from the v1 API (Dashboard) ,1.0,v1 (Dashboard) Cluster Provisioning - Automate test cases around cluster provisioning from the v1 API (Dashboard) ,1, dashboard cluster provisioning automate test cases around cluster provisioning from the api dashboard ,1 356710,25176246568.0,IssuesEvent,2022-11-11 09:30:57,Edfernape/pe,https://api.github.com/repos/Edfernape/pe,opened,Incomplete section in the Developer Guide,type.DocumentationBug severity.VeryLow,"The section ""[Proposed] Data archiving"" in the Developer Guide is incomplete. It might better to indicate ""To be updated"" or ""Coming soon"" instead so the reader would know its status. ![Screenshot (45).png](https://raw.githubusercontent.com/Edfernape/pe/main/files/f582fea3-4ef7-4099-8ade-41b839f81832.png) ",1.0,"Incomplete section in the Developer Guide - The section ""[Proposed] Data archiving"" in the Developer Guide is incomplete. It might better to indicate ""To be updated"" or ""Coming soon"" instead so the reader would know its status. 
![Screenshot (45).png](https://raw.githubusercontent.com/Edfernape/pe/main/files/f582fea3-4ef7-4099-8ade-41b839f81832.png) ",0,incomplete section in the developer guide the section data archiving in the developer guide is incomplete it might better to indicate to be updated or coming soon instead so the reader would know its status ,0 252,5031880570.0,IssuesEvent,2016-12-16 09:12:44,xcat2/xcat-core,https://api.github.com/repos/xcat2/xcat-core,closed,[FVT] Failed to get node message in /var/lib/dhcpd/dhcpd.leases after run makedhcp -a,component:automation component:dhcp status:pending,"Using latest daily build to run diskfull installation by automation ``` [root@c910f04x12v02 result]# lsxcatd -v Version 2.13 (git commit c44f3c60d6a6f57cda9e1c58a285c294dfe4ca26, built Thu Dec 1 09:00:50 EST 2016) ``` After run ``makedhcp -a`` and waiting for 10 seconds, failed to get CN node message in ``/var/lib/dhcpd/dhcpd.leases`` ``` 1. RUN:makedhcp -a 2. RUN:sleep 10 3. RUN: cat /var/lib/dhcpd/dhcpd.leases|grep c910f04x12v04 RETURN: rc = 1 CHECK:output =~ c910f04x12v04 [Failed] ``` But after more than 10 seconds the node message is shown.",1.0,"[FVT] Failed to get node message in /var/lib/dhcpd/dhcpd.leases after run makedhcp -a - Using latest daily build to run diskfull installation by automation ``` [root@c910f04x12v02 result]# lsxcatd -v Version 2.13 (git commit c44f3c60d6a6f57cda9e1c58a285c294dfe4ca26, built Thu Dec 1 09:00:50 EST 2016) ``` After run ``makedhcp -a`` and waiting for 10 seconds, failed to get CN node message in ``/var/lib/dhcpd/dhcpd.leases`` ``` 1. RUN:makedhcp -a 2. RUN:sleep 10 3. RUN: cat /var/lib/dhcpd/dhcpd.leases|grep c910f04x12v04 RETURN: rc = 1 CHECK:output =~ c910f04x12v04 [Failed] ``` But after more than 10 seconds the node message is shown.",1, failed to get node message in var lib dhcpd dhcpd leases after run makedhcp a using latest daily build to run diskfull installation by automation lsxcatd v version git commit built thu dec est after run makedhcp a and waiting for seconds failed to get cn node message in var lib dhcpd dhcpd leases run makedhcp a run sleep run cat var lib dhcpd dhcpd leases grep return rc check output but after more than seconds the node message is shown ,1 424942,12325582560.0,IssuesEvent,2020-05-13 15:15:10,eclipse/codewind,https://api.github.com/repos/eclipse/codewind,closed,SVT : Eclipse : Mac : Appsody Spring Boot Kafka project does not start,area/appsody kind/bug priority/stopship," **Codewind version:**0.12.0 **OS:**Mac **Che version:** **IDE extension version:**Eclipse jee 2019-09 **IDE version:** **Kubernetes cluster:** **Description:** Not sure if this is as expected but Appsody Spring Boot Quarkus project does not start [pfe.log](https://github.com/eclipse/codewind/files/4593916/pfe.log) [springkafka.log](https://github.com/eclipse/codewind/files/4593917/springkafka.log) **Steps to reproduce:** **Workaround:** ",1.0,"SVT : Eclipse : Mac : Appsody Spring Boot Kafka project does not start - **Codewind version:**0.12.0 **OS:**Mac **Che version:** **IDE extension version:**Eclipse jee 2019-09 **IDE version:** **Kubernetes cluster:** **Description:** Not sure if this is as expected but Appsody Spring Boot Quarkus project does not start [pfe.log](https://github.com/eclipse/codewind/files/4593916/pfe.log) [springkafka.log](https://github.com/eclipse/codewind/files/4593917/springkafka.log) **Steps to reproduce:** **Workaround:** ",0,svt eclipse mac appsody spring boot kafka project does not start codewind version os mac che version ide extension 
version eclipse jee ide version kubernetes cluster description not sure if this is as expected but appsody spring boot quarkus project does not start steps to reproduce workaround ,0 269781,28960280753.0,IssuesEvent,2023-05-10 01:29:21,joshbnewton31080/python-poetry-tutorial,https://api.github.com/repos/joshbnewton31080/python-poetry-tutorial,opened,py-1.9.0-py2.py3-none-any.whl: 2 vulnerabilities (highest severity is: 7.5),Mend: dependency security vulnerability,"
Vulnerable Library - py-1.9.0-py2.py3-none-any.whl

library with cross-python path, ini-parsing, io, code, log facilities

Library home page: https://files.pythonhosted.org/packages/68/0f/41a43535b52a81e4f29e420a151032d26f08b62206840c48d14b70e53376/py-1.9.0-py2.py3-none-any.whl

Path to dependency file: /requirements.txt

Path to vulnerable library: /requirements.txt,/requirements.txt

## Vulnerabilities

| CVE | Severity | CVSS | Dependency | Type | Fixed in (py version) | Remediation Available |
| --- | --- | --- | --- | --- | --- | --- |
| [CVE-2020-29651](https://www.mend.io/vulnerability-database/CVE-2020-29651) | High | 7.5 | py-1.9.0-py2.py3-none-any.whl | Direct | 1.11.0 | ✅ |
| [CVE-2022-42969](https://www.mend.io/vulnerability-database/CVE-2022-42969) | High | 7.5 | py-1.9.0-py2.py3-none-any.whl | Direct | N/A | ❌ |

## Details
CVE-2020-29651

### Vulnerable Library - py-1.9.0-py2.py3-none-any.whl

library with cross-python path, ini-parsing, io, code, log facilities

Library home page: https://files.pythonhosted.org/packages/68/0f/41a43535b52a81e4f29e420a151032d26f08b62206840c48d14b70e53376/py-1.9.0-py2.py3-none-any.whl

Path to dependency file: /requirements.txt

Path to vulnerable library: /requirements.txt,/requirements.txt

Dependency Hierarchy: - :x: **py-1.9.0-py2.py3-none-any.whl** (Vulnerable Library)

Found in base branch: master

### Vulnerability Details

A denial of service via regular expression in the py.path.svnwc component of py (aka python-py) through 1.9.0 could be used by attackers to cause a compute-time denial of service attack by supplying malicious input to the blame functionality.

Publish Date: 2020-12-09

URL: CVE-2020-29651

### CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High
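As a cross-check (mine, not Mend's output), the 7.5 base score can be recomputed from the CVSS v3.1 formula using the standard weights for the metrics above:

```python
import math

# CVSS v3.1 weights for AV:N / AC:L / PR:N / UI:N / S:U / C:N / I:N / A:H
av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85
conf_w, integ_w, avail_w = 0.0, 0.0, 0.56

iss = 1 - (1 - conf_w) * (1 - integ_w) * (1 - avail_w)
impact = 6.42 * iss                        # scope unchanged
exploitability = 8.22 * av * ac * pr * ui

def roundup(x: float) -> float:
    # CVSS "round up to one decimal" rule (simplified).
    return math.ceil(x * 10) / 10

base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 7.5
```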

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Origin: https://github.com/advisories/GHSA-hj5v-574p-mj7c

Release Date: 2020-12-09

Fix Resolution: 1.11.0

:rescue_worker_helmet: Automatic Remediation is available for this issue
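For readers triaging locally, a quick check of whether the installed `py` predates the fixed release is straightforward; this snippet is illustrative and is not part of the Mend report:

```python
from importlib.metadata import PackageNotFoundError, version

FIXED = (1, 11, 0)  # first release containing the CVE-2020-29651 fix

try:
    installed = tuple(int(part) for part in version("py").split(".")[:3])
except PackageNotFoundError:
    installed = None

if installed is None:
    print("py is not installed in this environment")
elif installed < FIXED:
    print("vulnerable: py %s < 1.11.0, upgrade required" % ".".join(map(str, installed)))
else:
    # CVE-2022-42969 (below) has no fixed version, so 1.11.0+ still needs review.
    print("py is at or above the 1.11.0 fix for CVE-2020-29651")
```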
CVE-2022-42969

### Vulnerable Library - py-1.9.0-py2.py3-none-any.whl

library with cross-python path, ini-parsing, io, code, log facilities

Library home page: https://files.pythonhosted.org/packages/68/0f/41a43535b52a81e4f29e420a151032d26f08b62206840c48d14b70e53376/py-1.9.0-py2.py3-none-any.whl

Path to dependency file: /requirements.txt

Path to vulnerable library: /requirements.txt,/requirements.txt

Dependency Hierarchy: - :x: **py-1.9.0-py2.py3-none-any.whl** (Vulnerable Library)

Found in base branch: master

### Vulnerability Details

The py library through 1.11.0 for Python allows remote attackers to conduct a ReDoS (Regular expression Denial of Service) attack via a Subversion repository with crafted info data, because the InfoSvnCommand argument is mishandled.

Publish Date: 2022-10-16

URL: CVE-2022-42969

### CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

***

:rescue_worker_helmet: Automatic Remediation is available for this issue.

",True,"py-1.9.0-py2.py3-none-any.whl: 2 vulnerabilities (highest severity is: 7.5) -
Vulnerable Library - py-1.9.0-py2.py3-none-any.whl

library with cross-python path, ini-parsing, io, code, log facilities

Library home page: https://files.pythonhosted.org/packages/68/0f/41a43535b52a81e4f29e420a151032d26f08b62206840c48d14b70e53376/py-1.9.0-py2.py3-none-any.whl

Path to dependency file: /requirements.txt

Path to vulnerable library: /requirements.txt,/requirements.txt

## Vulnerabilities | CVE | Severity | CVSS | Dependency | Type | Fixed in (py version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2020-29651](https://www.mend.io/vulnerability-database/CVE-2020-29651) | High | 7.5 | py-1.9.0-py2.py3-none-any.whl | Direct | 1.11.0 | ✅ | | [CVE-2022-42969](https://www.mend.io/vulnerability-database/CVE-2022-42969) | High | 7.5 | py-1.9.0-py2.py3-none-any.whl | Direct | N/A | ❌ | ## Details
CVE-2020-29651 ### Vulnerable Library - py-1.9.0-py2.py3-none-any.whl

library with cross-python path, ini-parsing, io, code, log facilities

Library home page: https://files.pythonhosted.org/packages/68/0f/41a43535b52a81e4f29e420a151032d26f08b62206840c48d14b70e53376/py-1.9.0-py2.py3-none-any.whl

Path to dependency file: /requirements.txt

Path to vulnerable library: /requirements.txt,/requirements.txt

Dependency Hierarchy: - :x: **py-1.9.0-py2.py3-none-any.whl** (Vulnerable Library)

Found in base branch: master

### Vulnerability Details

A denial of service via regular expression in the py.path.svnwc component of py (aka python-py) through 1.9.0 could be used by attackers to cause a compute-time denial of service attack by supplying malicious input to the blame functionality.

Publish Date: 2020-12-09

URL: CVE-2020-29651

### CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Origin: https://github.com/advisories/GHSA-hj5v-574p-mj7c

Release Date: 2020-12-09

Fix Resolution: 1.11.0

:rescue_worker_helmet: Automatic Remediation is available for this issue
CVE-2022-42969 ### Vulnerable Library - py-1.9.0-py2.py3-none-any.whl

library with cross-python path, ini-parsing, io, code, log facilities

Library home page: https://files.pythonhosted.org/packages/68/0f/41a43535b52a81e4f29e420a151032d26f08b62206840c48d14b70e53376/py-1.9.0-py2.py3-none-any.whl

Path to dependency file: /requirements.txt

Path to vulnerable library: /requirements.txt,/requirements.txt

Dependency Hierarchy: - :x: **py-1.9.0-py2.py3-none-any.whl** (Vulnerable Library)

Found in base branch: master

### Vulnerability Details

The py library through 1.11.0 for Python allows remote attackers to conduct a ReDoS (Regular expression Denial of Service) attack via a Subversion repository with crafted info data, because the InfoSvnCommand argument is mishandled.

Publish Date: 2022-10-16

URL: CVE-2022-42969

### CVSS 3 Score Details (7.5)

Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High

For more information on CVSS3 Scores, click here.

***

:rescue_worker_helmet: Automatic Remediation is available for this issue.

",0,py none any whl vulnerabilities highest severity is vulnerable library py none any whl library with cross python path ini parsing io code log facilities library home page a href path to dependency file requirements txt path to vulnerable library requirements txt requirements txt vulnerabilities cve severity cvss dependency type fixed in py version remediation available high py none any whl direct high py none any whl direct n a details cve vulnerable library py none any whl library with cross python path ini parsing io code log facilities library home page a href path to dependency file requirements txt path to vulnerable library requirements txt requirements txt dependency hierarchy x py none any whl vulnerable library found in base branch master vulnerability details a denial of service via regular expression in the py path svnwc component of py aka python py through could be used by attackers to cause a compute time denial of service attack by supplying malicious input to the blame functionality publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue cve vulnerable library py none any whl library with cross python path ini parsing io code log facilities library home page a href path to dependency file requirements txt path to vulnerable library requirements txt requirements txt dependency hierarchy x py none any whl vulnerable library found in base branch master vulnerability details the py library through for python allows remote attackers to conduct a redos regular expression denial of service attack via a subversion repository with crafted info data because the infosvncommand argument is mishandled publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href rescue worker helmet automatic remediation is available for this issue ,0 5719,20834721434.0,IssuesEvent,2022-03-20 01:54:07,theglus/Home-Assistant-Config,https://api.github.com/repos/theglus/Home-Assistant-Config,closed,Notify when batteries are low,automation notifications,"# Requirements - [x] Create automation to notify when batteries are low. # Resources * [Low battery level detection & notification for all battery sensors](https://community.home-assistant.io/t/low-battery-level-detection-notification-for-all-battery-sensors/258664)",1.0,"Notify when batteries are low - # Requirements - [x] Create automation to notify when batteries are low. 
# Resources * [Low battery level detection & notification for all battery sensors](https://community.home-assistant.io/t/low-battery-level-detection-notification-for-all-battery-sensors/258664)",1,notify when batteries are low requirements create automation to notify when batteries are low resources ,1 5041,18366164683.0,IssuesEvent,2021-10-10 04:27:33,theglus/Home-Assistant-Config,https://api.github.com/repos/theglus/Home-Assistant-Config,opened,Alert if sensor.sink = true,automation,"# Requirements - [ ] Create automation to alert when water sensor triggered. - [ ] Create automation/script to [flash lights](https://community.home-assistant.io/t/simple-flashing-lights-via-on-off-and-delay-restore-previous-light-states/258099). **Trigger** - If washer state = `on`. **Action** - Notify smartphones. - Flash lights red # Resources - [Looking for Blueprint to make lights flash](https://community.home-assistant.io/t/looking-for-blueprint-to-make-lights-flash/256865). - [Simple … flashing light’s via on off and delay. Restore previous light states](https://community.home-assistant.io/t/simple-flashing-lights-via-on-off-and-delay-restore-previous-light-states/258099).",1.0,"Alert if sensor.sink = true - # Requirements - [ ] Create automation to alert when water sensor triggered. - [ ] Create automation/script to [flash lights](https://community.home-assistant.io/t/simple-flashing-lights-via-on-off-and-delay-restore-previous-light-states/258099). **Trigger** - If washer state = `on`. **Action** - Notify smartphones. - Flash lights red # Resources - [Looking for Blueprint to make lights flash](https://community.home-assistant.io/t/looking-for-blueprint-to-make-lights-flash/256865). - [Simple … flashing light’s via on off and delay. Restore previous light states](https://community.home-assistant.io/t/simple-flashing-lights-via-on-off-and-delay-restore-previous-light-states/258099).",1,alert if sensor sink true requirements create automation to alert when water sensor triggered create automation script to trigger if washer state on action notify smartphones flash lights red resources ,1 7326,24648543871.0,IssuesEvent,2022-10-17 16:37:39,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,Test - Content.Publish Permission,automation,"1. Verify Content Publish Permission 1.1 authenticates Janis (api owner) 1.2 Get the authorization token for the service account created with out ""Content.Publish"" permission 1.3 Prepare the Request Specification for the API 1.4 Verify that the document is not published without ""Content.Publish"" permission",1.0,"Test - Content.Publish Permission - 1. Verify Content Publish Permission 1.1 authenticates Janis (api owner) 1.2 Get the authorization token for the service account created with out ""Content.Publish"" permission 1.3 Prepare the Request Specification for the API 1.4 Verify that the document is not published without ""Content.Publish"" permission",1,test content publish permission verify content publish permission authenticates janis api owner get the authorization token for the service account created with out content publish permission prepare the request specification for the api verify that the document is not published without content publish permission,1 100451,11195859290.0,IssuesEvent,2020-01-03 08:14:31,jekyll/jekyll,https://api.github.com/repos/jekyll/jekyll,opened,docs: Installation procedure for Debian is wrong,documentation,"## Motivation Following your instruction on Debian 10, doesn't work. 
Here the detailed explanation: https://stackoverflow.com/questions/59569136/how-to-install-jekyll-on-ubuntu ## Suggestion Update your instruction in order to allow Debian users to install Jekyll. In detail, remove the installation of `Ruby` from the repository and instead: ``` curl -sSL https://get.rvm.io | bash -s stable restart the shell rvm install 2.6.0 ```",1.0,"docs: Installation procedure for Debian is wrong - ## Motivation Following your instruction on Debian 10, doesn't work. Here the detailed explanation: https://stackoverflow.com/questions/59569136/how-to-install-jekyll-on-ubuntu ## Suggestion Update your instruction in order to allow Debian users to install Jekyll. In detail, remove the installation of `Ruby` from the repository and instead: ``` curl -sSL https://get.rvm.io | bash -s stable restart the shell rvm install 2.6.0 ```",0,docs installation procedure for debian is wrong motivation following your instruction on debian doesn t work here the detailed explanation suggestion update your instruction in order to allow debian users to install jekyll in detail remove the installation of ruby from the repository and instead curl ssl bash s stable restart the shell rvm install ,0 175,4402558397.0,IssuesEvent,2016-08-11 01:52:13,dolson334/bantor,https://api.github.com/repos/dolson334/bantor,closed,Create files for every table ,Scripting Automation task,"Create a dir in bantor/database called sqlTables. In this dir create a new sql file for every database: 1. We need the table create scripts in each file. 2. To get the create script right click on the table --> Script table as --> Create to --> file --> select one of your new sql files So when your done, for each sql table in bantor there should be a file in bantor/database containing that tables create scripts. ",1.0,"Create files for every table - Create a dir in bantor/database called sqlTables. In this dir create a new sql file for every database: 1. We need the table create scripts in each file. 2. To get the create script right click on the table --> Script table as --> Create to --> file --> select one of your new sql files So when your done, for each sql table in bantor there should be a file in bantor/database containing that tables create scripts. 
",1,create files for every table create a dir in bantor database called sqltables in this dir create a new sql file for every database we need the table create scripts in each file to get the create script right click on the table script table as create to file select one of your new sql files so when your done for each sql table in bantor there should be a file in bantor database containing that tables create scripts ,1 1342,9940232109.0,IssuesEvent,2019-07-03 08:43:30,mozilla-mobile/android-components,https://api.github.com/repos/mozilla-mobile/android-components,opened,Codecov doesn't post coverage reports to PR anymore,🤖 automation,"It still generates reports for every PR: https://codecov.io/gh/mozilla-mobile/android-components But it doesn't seem to post them to PRs anymore..",1.0,"Codecov doesn't post coverage reports to PR anymore - It still generates reports for every PR: https://codecov.io/gh/mozilla-mobile/android-components But it doesn't seem to post them to PRs anymore..",1,codecov doesn t post coverage reports to pr anymore it still generates reports for every pr but it doesn t seem to post them to prs anymore ,1 15594,3476427873.0,IssuesEvent,2015-12-26 21:55:23,nnnick/Chart.js,https://api.github.com/repos/nnnick/Chart.js,closed,Hover removes grid lines in linechart,Needs test case,"Hi All, On hover of my linechart, gridllines and filltext is getting disappeared. Kindly help.",1.0,"Hover removes grid lines in linechart - Hi All, On hover of my linechart, gridllines and filltext is getting disappeared. Kindly help.",0,hover removes grid lines in linechart hi all on hover of my linechart gridllines and filltext is getting disappeared kindly help ,0 131925,12494167561.0,IssuesEvent,2020-06-01 10:39:28,kinvolk/lokomotive,https://api.github.com/repos/kinvolk/lokomotive,opened,Document variable `service_monitor` for cert-manager,area/components kind/documentation,There is no documentation about the variable in cert-manager component docs https://github.com/kinvolk/lokomotive/blob/master/docs/configuration-reference/components/cert-manager.md.,1.0,Document variable `service_monitor` for cert-manager - There is no documentation about the variable in cert-manager component docs https://github.com/kinvolk/lokomotive/blob/master/docs/configuration-reference/components/cert-manager.md.,0,document variable service monitor for cert manager there is no documentation about the variable in cert manager component docs ,0 6852,23974847122.0,IssuesEvent,2022-09-13 10:42:26,mozilla-mobile/firefox-ios,https://api.github.com/repos/mozilla-mobile/firefox-ios,closed,Bitrise - Update NewXcodeVersions to use available simulator,eng:automation,"The `NewXcodeVersions` [workflow](https://github.com/mozilla-mobile/firefox-ios/blob/f53dc6211b76d3b88722bc8865391db39e1b3703/bitrise.yml#L588) is using iPhone 8Plus simulator. This device is no longer available with Xcode 14 GM. We need to change it so that the workflow keeps working. cc @clarmso FYi ┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FXIOS-4928) ",1.0,"Bitrise - Update NewXcodeVersions to use available simulator - The `NewXcodeVersions` [workflow](https://github.com/mozilla-mobile/firefox-ios/blob/f53dc6211b76d3b88722bc8865391db39e1b3703/bitrise.yml#L588) is using iPhone 8Plus simulator. This device is no longer available with Xcode 14 GM. We need to change it so that the workflow keeps working. 
cc @clarmso FYi ┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FXIOS-4928) ",1,bitrise update newxcodeversions to use available simulator the newxcodeversions is using iphone simulator this device is no longer available with xcode gm we need to change it so that the workflow keeps working cc clarmso fyi ┆issue is synchronized with this ,1 3292,13384505954.0,IssuesEvent,2020-09-02 12:07:38,elastic/apm-agent-python,https://api.github.com/repos/elastic/apm-agent-python,closed,[CI] unstash coverage reports can failed on some axis,automation bug ci team:automation,"There are some cases that the coverage report is not stashed and causes a fail on the post stage ``` [2020-09-01T16:49:55.461Z] Error when executing always post condition: [2020-09-01T16:49:55.462Z] hudson.AbortException: No such saved stash ‘coverage-python-2.7-twisted-18’ [2020-09-01T16:49:55.462Z] at org.jenkinsci.plugins.workflow.flow.StashManager.unstash(StashManager.java:159) [2020-09-01T16:49:55.462Z] at org.jenkinsci.plugins.workflow.support.steps.stash.UnstashStep$Execution.run(UnstashStep.java:76) [2020-09-01T16:49:55.462Z] at org.jenkinsci.plugins.workflow.support.steps.stash.UnstashStep$Execution.run(UnstashStep.java:63) [2020-09-01T16:49:55.462Z] at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47) [2020-09-01T16:49:55.462Z] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [2020-09-01T16:49:55.462Z] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [2020-09-01T16:49:55.462Z] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [2020-09-01T16:49:55.462Z] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [2020-09-01T16:49:55.462Z] at java.lang.Thread.run(Thread.java:748) [2020-09-01T16:49:55.462Z] ```",2.0,"[CI] unstash coverage reports can failed on some axis - There are some cases that the coverage report is not stashed and causes a fail on the post stage ``` [2020-09-01T16:49:55.461Z] Error when executing always post condition: [2020-09-01T16:49:55.462Z] hudson.AbortException: No such saved stash ‘coverage-python-2.7-twisted-18’ [2020-09-01T16:49:55.462Z] at org.jenkinsci.plugins.workflow.flow.StashManager.unstash(StashManager.java:159) [2020-09-01T16:49:55.462Z] at org.jenkinsci.plugins.workflow.support.steps.stash.UnstashStep$Execution.run(UnstashStep.java:76) [2020-09-01T16:49:55.462Z] at org.jenkinsci.plugins.workflow.support.steps.stash.UnstashStep$Execution.run(UnstashStep.java:63) [2020-09-01T16:49:55.462Z] at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47) [2020-09-01T16:49:55.462Z] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [2020-09-01T16:49:55.462Z] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [2020-09-01T16:49:55.462Z] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [2020-09-01T16:49:55.462Z] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [2020-09-01T16:49:55.462Z] at java.lang.Thread.run(Thread.java:748) [2020-09-01T16:49:55.462Z] ```",1, unstash coverage reports can failed on some axis there are some cases that the coverage report is not stashed and causes a fail on the post stage error when executing always post condition hudson abortexception no such saved stash ‘coverage python twisted ’ 
at org jenkinsci plugins workflow flow stashmanager unstash stashmanager java at org jenkinsci plugins workflow support steps stash unstashstep execution run unstashstep java at org jenkinsci plugins workflow support steps stash unstashstep execution run unstashstep java at org jenkinsci plugins workflow steps synchronousnonblockingstepexecution lambda start synchronousnonblockingstepexecution java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java ,1 8109,26200803187.0,IssuesEvent,2023-01-03 17:16:33,kotools/types,https://api.github.com/repos/kotools/types,opened,Deployment warnings,enhancement automation,"## Description Resolve warnings reported in [this run](https://github.com/kotools/types/actions/runs/3831378752). ## Checklist - [ ] Resolve warnings. - [ ] Test. - [ ] Refactor. ",1.0,"Deployment warnings - ## Description Resolve warnings reported in [this run](https://github.com/kotools/types/actions/runs/3831378752). ## Checklist - [ ] Resolve warnings. - [ ] Test. - [ ] Refactor. ",1,deployment warnings description resolve warnings reported in checklist resolve warnings test refactor ,1 198410,22640052209.0,IssuesEvent,2022-07-01 00:21:03,NCIOCPL/cgov-digital-platform,https://api.github.com/repos/NCIOCPL/cgov-digital-platform,closed,Prevent window opener redirection in new window links,Security,"The footer links on cancer.gov go to sites on other domains using `target=_blank`. Technically the other site can redirect the parent window using `window.opener.location`. So if you link to a compromised site it can redirect the parent to a phishing site. Given that _these_ links go to government websites there is no actual risk here. But there could theoretically be risk more broadly for `_blank` links created via the CMS. ```
  • HHS Vulnerability Disclosure
  • ...
  • U.S. Department of Health and Human Services
  • National Institutes of Health
  • National Cancer Institute
  • USA.gov
  • ``` **Remedy** When using `target=""_blank""` we need to also include `rel=""noopener noreferrer""` Since these are content based, not code based, we need to do one of a few things: 1. Edit the footer content to include the appropriate code for these specific links 1. Add some sort of global transform on page render that will always add the additional code if `_blank` is present on a link 1. Add some sort of pre-save filter that strips `_blank` out of most content unless the user has certain privileges **More Information** - [OWASP](https://owasp.org/www-community/attacks/Reverse_Tabnabbing) - [Helpful article](https://medium.com/whatever-io/on-the-security-implications-of-window-opener-location-replace-a4a93c110768)",True,"Prevent window opener redirection in new window links - The footer links on cancer.gov go to sites on other domains using `target=_blank`. Technically the other site can redirect the parent window using `window.opener.location`. So if you link to a compromised site it can redirect the parent to a phishing site. Given that _these_ links go to government websites there is no actual risk here. But there could theoretically be risk more broadly for `_blank` links created via the CMS. ```
  • HHS Vulnerability Disclosure
  • ...
  • U.S. Department of Health and Human Services
  • National Institutes of Health
  • National Cancer Institute
  • USA.gov
  • ``` **Remedy** When using `target=""_blank""` we need to also include `rel=""noopener noreferrer""` Since these are content based, not code based, we need to do one of a few things: 1. Edit the footer content to include the appropriate code for these specific links 1. Add some sort of global transform on page render that will always add the additional code if `_blank` is present on a link 1. Add some sort of pre-save filter that strips `_blank` out of most content unless the user has certain privileges **More Information** - [OWASP](https://owasp.org/www-community/attacks/Reverse_Tabnabbing) - [Helpful article](https://medium.com/whatever-io/on-the-security-implications-of-window-opener-location-replace-a4a93c110768)",0,prevent window opener redirection in new window links the footer links on cancer gov go to sites on other domains using target blank technically the other site can redirect the parent window using window opener location so if you link to a compromised site it can redirect the parent to a phishing site given that these links go to government websites there is no actual risk here but there could theoretically be risk more broadly for blank links created via the cms hhs vulnerability disclosure u s department of health and human services national institutes of health national cancer institute usa gov remedy when using target blank we need to also include rel noopener noreferrer since these are content based not code based we need to do one of a few things edit the footer content to include the appropriate code for these specific links add some sort of global transform on page render that will always add the additional code if blank is present on a link add some sort of pre save filter that strips blank out of most content unless the user has certain privileges more information ,0 2025,11274380108.0,IssuesEvent,2020-01-14 18:26:37,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,closed,Temp disable flaky verifyContextSaveImage UI test,eng:automation 🐞 bug," It passes on Pixel2 but intermittent failures on Nexus6 ",1.0,"Temp disable flaky verifyContextSaveImage UI test - It passes on Pixel2 but intermittent failures on Nexus6 ",1,temp disable flaky verifycontextsaveimage ui test it passes on but intermittent failures on ,1 9498,29086282993.0,IssuesEvent,2023-05-16 00:20:40,AzureAD/microsoft-authentication-library-for-objc,https://api.github.com/repos/AzureAD/microsoft-authentication-library-for-objc,closed,Automation tests failure,duplicate automation failure,"@AzureAD/appleidentity Automation failed for [AzureAD/microsoft-authentication-library-for-objc](https://github.com/AzureAD/microsoft-authentication-library-for-objc) ran against commit : Update version & submodule [981afa330b38de1daf37b9a92cbbef445667a4f2] Pipeline URL : [https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1097658&view=logs](https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1097658&view=logs)",1.0,"Automation tests failure - @AzureAD/appleidentity Automation failed for [AzureAD/microsoft-authentication-library-for-objc](https://github.com/AzureAD/microsoft-authentication-library-for-objc) ran against commit : Update version & submodule [981afa330b38de1daf37b9a92cbbef445667a4f2] Pipeline URL : [https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1097658&view=logs](https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1097658&view=logs)",1,automation tests failure azuread appleidentity automation failed for ran 
against commit update version submodule pipeline url ,1 5230,18892369707.0,IssuesEvent,2021-11-15 14:32:13,betagouv/preuve-covoiturage,https://api.github.com/repos/betagouv/preuve-covoiturage,closed,"405 response, Method not allowed on calls to data.gouv",BUG Open Data Automation,"`{""level"":50,""time"":1636980594600,""pid"":54,""hostname"":""pdc-prod-one-off-2179"",""msg"":""Error while calling data gouv API; 405 : {\""message\"":\""The method is not allowed for the requested URL.\""}""} ` Issue on data.gouv. The same call on demo.data.gouv is OK",1.0,"405 response, Method not allowed on calls to data.gouv - `{""level"":50,""time"":1636980594600,""pid"":54,""hostname"":""pdc-prod-one-off-2179"",""msg"":""Error while calling data gouv API; 405 : {\""message\"":\""The method is not allowed for the requested URL.\""}""} ` Issue on data.gouv. The same call on demo.data.gouv is OK",1,response method not allowed on calls to data gouv level time pid hostname pdc prod one off msg error while calling data gouv api message the method is not allowed for the requested url issue on data gouv the same call on demo data gouv is ok,1 315295,27063080383.0,IssuesEvent,2023-02-13 21:27:20,ethyca/fides,https://api.github.com/repos/ethyca/fides,closed,Add a new Cypress E2E test runner that executes against a fully configured environment,enhancement Test Automation Improvements,"### Description In some previous releases, we worked on building out test setup scripts (#1291) and a local manual test environment (#1292), which gives us the ability to do true E2E manual tests to confirm all features are working as expected when running together. Our existing Cypress suites for the `admin-ui` and `privacy-center` are great, but they are isolated to their specific applications and tend to stub out their backends for performance. Therefore, there's still an open need for an E2E test runner to use for regression tests on an ongoing basis. Once the runner is configured, let's put it to work by defining a core smoke test for releases. We should ensure this can run against our local test env first, but I want to immediately start using this against some hosted staging & demo environments in the short-term, so let's ensure that any URLs are easy to configure and override (e.g. http://localhost:3000 will quickly become something like https://fides.fides-staging.ethyca.com/) In addition, let's signup for Cypress Cloud and publish results there, it's a great product! ### Acceptance Criteria * MUST implement a new Cypress runner to run E2E tests, targeting the local ""manual test"" environment (see #1292) * MUST support targeting different hosts via environment variables, so we can run against staging / demo / etc. * MUST have a `nox` session (e.g. `nox -s e2e_test`) that runs the Cypress suite in headless mode * MUST configure the test reporter to publish results to our Cypress Cloud account * MUST write a single Cypress E2E test that runs a simple smoke test covering: 1. Confirm can login to the Admin UI 2. Confirm that the Postgres & Mongo connectors are configured 3. Confirm that the Privacy Center can be accessed 4. Submit an access request via the Privacy Center 5. Approve the access request via the Admin UI 6.
Confirm the access request succeeds",1.0,"Add a new Cypress E2E test runner that executes against a fully configured environment - ### Description In some previous releases, we worked on building out test setup scripts (#1291) and a local manual test environment (#1292), which gives us the ability to do true E2E manual tests to confirm all features are working as expected when running together. Our existing Cypress suites for the `admin-ui` and `privacy-center` are great, but they are isolated to their specific applications and tend to stub out their backends for performance. Therefore, there's still an open need for an E2E test runner to use for regression tests on an ongoing basis. Once the runner is configured, let's put it to work by defining a core smoke test for releases. We should ensure this can run against our local test env first, but I want to immediately start using this against some hosted staging & demo environments in the short-term, so let's ensure that any URLs are easy to configure and override (e.g. http://localhost:3000 will quickly become something like https://fides.fides-staging.ethyca.com/) In addition, let's signup for Cypress Cloud and publish results there, it's a great product! ### Acceptance Criteria * MUST implement a new Cypress runner to run E2E tests, targeting the local ""manual test"" environment (see #1292) * MUST support targeting different hosts via environment variables, so we can run against staging / demo / etc. * MUST have a `nox` session (e.g. `nox -s e2e_test`) that runs the Cypress suite in headless mode * MUST configure the test reporter to publish results to our Cypress Cloud account * MUST write a single Cypress E2E test that runs a simple smoke test covering: 1. Confirm can login to the Admin UI 2. Confirm that the Postgres & Mongo connectors are configured 3. Confirm that the Privacy Center can be accessed 4. Submit an access request via the Privacy Center 5. Approve the access request via the Admin UI 6. 
Confirm the access request succeeds",0,add a new cypress test runner that executes against a fully configured environment description in some previous releases we worked on building out test setup scripts and a local manual test environment which gives us the ability to do true manual tests to confirm all features are working as expected when running together our existing cypress suites for the admin ui and privacy center are great but they are isolated to their specific applications and tend to stub out their backends for performance therefore there s still an open need for an test runner to use for regression tests on an ongoing basis once the runner is configured let s put it to work by defining a core smoke test for releases we should ensure this can run against our local test env first but i want to immediately start using this against some hosted staging demo environments in the short term so let s ensure that any urls are easy to configure and override e g will quickly become something like in addition let s signup for cypress cloud and publish results there it s a great product acceptance criteria must implement a new cypress runner to run tests targeting the local manual test environment see must support targeting different hosts via environment variables so we can run against staging demo etc must have a nox session e g nox s test that runs the cypress suite in headless mode must configure the test reporter to publish results to our cypress cloud account must write a single cypress test that runs a simple smoke test covering confirm can login to the admin ui confirm that the postgres mongo connectors are configured confirm that the privacy center can be accessed submit an access request via the privacy center approve the access request via the admin ui confirm the access request succeeds,0 1741,10677294594.0,IssuesEvent,2019-10-21 15:11:53,plan-player-analytics/Plan,https://api.github.com/repos/plan-player-analytics/Plan,closed,Automatic html-branch pushing from Jenkins,Automation,"### Is your feature request related to a problem? Please describe. Currently `html`-branch is updated once per version, if even then. ### Describe the solution you'd like It might be possible to push to `html` branch from CI automatically since it has no CI configuration. Needed: - Figure what is needed for git pushing. - Check that build is on master branch - Check that the files have actually changed (Done by git I suppose)",1.0,"Automatic html-branch pushing from Jenkins - ### Is your feature request related to a problem? Please describe. Currently `html`-branch is updated once per version, if even then. ### Describe the solution you'd like It might be possible to push to `html` branch from CI automatically since it has no CI configuration. Needed: - Figure what is needed for git pushing. 
- Check that build is on master branch - Check that the files have actually changed (Done by git I suppose)",1,automatic html branch pushing from jenkins is your feature request related to a problem please describe currently html branch is updated once per version if even then describe the solution you d like it might be possible to push to html branch from ci automatically since it has no ci configuration needed figure what is needed for git pushing check that build is on master branch check that the files have actually changed done by git i suppose ,1 42545,5474690556.0,IssuesEvent,2017-03-11 02:34:35,phetsims/make-a-ten,https://api.github.com/repos/phetsims/make-a-ten,opened,Create sim primer,design:teaching-resources,This is the final item left on the master checklist #1 - and would probably be good to complete in the near'ish future (certainly before next school year),1.0,Create sim primer - This is the final item left on the master checklist #1 - and would probably be good to complete in the near'ish future (certainly before next school year),0,create sim primer this is the final item left on the master checklist and would probably be good to complete in the near ish future certainly before next school year ,0 2196,11568127349.0,IssuesEvent,2020-02-20 15:24:21,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,closed,set apm-server mode to experimental by default,automation enhancement,+ Since we already know apm-server mode `experimental` feature works out of the box and helps immensely while developing new features and also for the space time projects. it makes total sense to set the mode by default for all supported versions. https://github.com/elastic/apm-server/pull/1961,1.0,set apm-server mode to experimental by default - + Since we already know apm-server mode `experimental` feature works out of the box and helps immensely while developing new features and also for the space time projects. it makes total sense to set the mode by default for all supported versions. https://github.com/elastic/apm-server/pull/1961,1,set apm server mode to experimental by default since we already know apm server mode experimental feature works out of the box and helps immensely while developing new features and also for the space time projects it makes total sense to set the mode by default for all supported versions ,1 4214,15817730763.0,IssuesEvent,2021-04-05 15:00:58,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,closed,"integration tests are failing with ""Wildcard expressions or all indices are not allowed"" from `elasticsearch.clean()`",automation bug team:automation," https://apm-ci.elastic.co/blue/organizations/jenkins/apm-integration-tests-selector-mbp/activity/ shows that integration tests started failing for everyone (at least for node, rum, and java that have had test runs of it so far) about 9 hours ago. 
The failure is the same for all: ``` [2021-04-01T17:39:25.468Z] tests/utils.py:13: in check_agent_transaction [2021-04-01T17:39:25.468Z] elasticsearch.clean() [2021-04-01T17:39:25.468Z] tests/fixtures/es.py:22: in clean [2021-04-01T17:39:25.468Z] self.es.indices.delete(self.index) [2021-04-01T17:39:25.468Z] venv/lib/python3.7/site-packages/elasticsearch/client/utils.py:76: in _wrapped [2021-04-01T17:39:25.468Z] return func(*args, params=params, **kwargs) [2021-04-01T17:39:25.468Z] venv/lib/python3.7/site-packages/elasticsearch/client/indices.py:185: in delete [2021-04-01T17:39:25.468Z] params=params) [2021-04-01T17:39:25.468Z] venv/lib/python3.7/site-packages/elasticsearch/transport.py:318: in perform_request [2021-04-01T17:39:25.468Z] status, headers_response, data = connection.perform_request(method, url, params, body, headers=headers, ignore=ignore, timeout=timeout) [2021-04-01T17:39:25.468Z] venv/lib/python3.7/site-packages/elasticsearch/connection/http_requests.py:90: in perform_request [2021-04-01T17:39:25.468Z] self._raise_error(response.status_code, raw_data) [2021-04-01T17:39:25.468Z] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [2021-04-01T17:39:25.468Z] [2021-04-01T17:39:25.468Z] self = , status_code = 400 [2021-04-01T17:39:25.468Z] raw_data = '{""error"":{""root_cause"":[{""type"":""illegal_argument_exception"",""reason"":""Wildcard expressions or all indices are not al...d""}],""type"":""illegal_argument_exception"",""reason"":""Wildcard expressions or all indices are not allowed""},""status"":400}' ``` These integration tests are running using whatever the latest ""docker.elastic.co/elasticsearch/elasticsearch:8.0.0-SNAPSHOT"" image is. The current one is this commit: ``` ""org.label-schema.vcs-ref"": ""014cc2b759f64113b66247606d05b34145168c43"", ""org.label-schema.vcs-url"": ""https://github.com/elastic/elasticsearch"", ""org.label-schema.vendor"": ""Elastic"", ""org.label-schema.version"": ""8.0.0-SNAPSHOT"", ``` from about 17 hours ago. About *30* hours ago there was this change to the elasticsearch file that contains that error message: https://github.com/elastic/elasticsearch/issues/61074 $20 says that change means this code (https://github.com/elastic/apm-integration-testing/blob/0f72d15064ea44db1d5715daf06faa19b3a22b94/tests/utils.py#L13) needs to change to specify index name(s), or we need to config our ES in integration testing to use `action.destructive_requires_name = false`. ",2.0,"integration tests are failing with ""Wildcard expressions or all indices are not allowed"" from `elasticsearch.clean()` - https://apm-ci.elastic.co/blue/organizations/jenkins/apm-integration-tests-selector-mbp/activity/ shows that integration tests started failing for everyone (at least for node, rum, and java that have had test runs of it so far) about 9 hours ago. 
The failure is the same for all: ``` [2021-04-01T17:39:25.468Z] tests/utils.py:13: in check_agent_transaction [2021-04-01T17:39:25.468Z] elasticsearch.clean() [2021-04-01T17:39:25.468Z] tests/fixtures/es.py:22: in clean [2021-04-01T17:39:25.468Z] self.es.indices.delete(self.index) [2021-04-01T17:39:25.468Z] venv/lib/python3.7/site-packages/elasticsearch/client/utils.py:76: in _wrapped [2021-04-01T17:39:25.468Z] return func(*args, params=params, **kwargs) [2021-04-01T17:39:25.468Z] venv/lib/python3.7/site-packages/elasticsearch/client/indices.py:185: in delete [2021-04-01T17:39:25.468Z] params=params) [2021-04-01T17:39:25.468Z] venv/lib/python3.7/site-packages/elasticsearch/transport.py:318: in perform_request [2021-04-01T17:39:25.468Z] status, headers_response, data = connection.perform_request(method, url, params, body, headers=headers, ignore=ignore, timeout=timeout) [2021-04-01T17:39:25.468Z] venv/lib/python3.7/site-packages/elasticsearch/connection/http_requests.py:90: in perform_request [2021-04-01T17:39:25.468Z] self._raise_error(response.status_code, raw_data) [2021-04-01T17:39:25.468Z] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [2021-04-01T17:39:25.468Z] [2021-04-01T17:39:25.468Z] self = , status_code = 400 [2021-04-01T17:39:25.468Z] raw_data = '{""error"":{""root_cause"":[{""type"":""illegal_argument_exception"",""reason"":""Wildcard expressions or all indices are not al...d""}],""type"":""illegal_argument_exception"",""reason"":""Wildcard expressions or all indices are not allowed""},""status"":400}' ``` These integration tests are running using whatever the latest ""docker.elastic.co/elasticsearch/elasticsearch:8.0.0-SNAPSHOT"" image is. The current one is this commit: ``` ""org.label-schema.vcs-ref"": ""014cc2b759f64113b66247606d05b34145168c43"", ""org.label-schema.vcs-url"": ""https://github.com/elastic/elasticsearch"", ""org.label-schema.vendor"": ""Elastic"", ""org.label-schema.version"": ""8.0.0-SNAPSHOT"", ``` from about 17 hours ago. About *30* hours ago there was this change to the elasticsearch file that contains that error message: https://github.com/elastic/elasticsearch/issues/61074 $20 says that change means this code (https://github.com/elastic/apm-integration-testing/blob/0f72d15064ea44db1d5715daf06faa19b3a22b94/tests/utils.py#L13) needs to change to specify index name(s), or we need to config our ES in integration testing to use `action.destructive_requires_name = false`. 
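A minimal sketch of the index-name fix, assuming the elasticsearch-py client and an `apm-*` index pattern (both the pattern and the helper name are illustrative, not taken from the repo):

```python
# Sketch only: resolve the wildcard to concrete index names, then delete by name.
# Deleting explicit names stays legal even when Elasticsearch 8 enables
# action.destructive_requires_name and rejects wildcard deletes.
from elasticsearch import Elasticsearch

def clean_indices(es: Elasticsearch, pattern: str = "apm-*") -> None:
    # GET on a wildcard returns a dict keyed by the concrete index names.
    names = list(es.indices.get(index=pattern))
    if names:
        # Explicit, comma-separated names do not trip the wildcard guard.
        es.indices.delete(index=",".join(names))
```

The other route mentioned above is configuration only: passing `action.destructive_requires_name=false` to the Elasticsearch container restores the old wildcard-delete behaviour for the test cluster.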
",1,integration tests are failing with wildcard expressions or all indices are not allowed from elasticsearch clean shows that integration tests started failing for everyone at least for node rum and java that have had test runs of it so far about hours ago the failure is the same for all tests utils py in check agent transaction elasticsearch clean tests fixtures es py in clean self es indices delete self index venv lib site packages elasticsearch client utils py in wrapped return func args params params kwargs venv lib site packages elasticsearch client indices py in delete params params venv lib site packages elasticsearch transport py in perform request status headers response data connection perform request method url params body headers headers ignore ignore timeout timeout venv lib site packages elasticsearch connection http requests py in perform request self raise error response status code raw data self requestshttpconnection status code raw data error root cause type illegal argument exception reason wildcard expressions or all indices are not allowed status these integration tests are running using whatever the latest docker elastic co elasticsearch elasticsearch snapshot image is the current one is this commit org label schema vcs ref org label schema vcs url org label schema vendor elastic org label schema version snapshot from about hours ago about hours ago there was this change to the elasticsearch file that contains that error message says that change means this code needs to change to specify index name s or we need to config our es in integration testing to use action destructive requires name false ,1 2431,11947230876.0,IssuesEvent,2020-04-03 09:33:59,bandprotocol/bandchain,https://api.github.com/repos/bandprotocol/bandchain,opened,Change moniker of the test 4 validator nodes,automation chore,"To make it look more realistic. Let's use the following names: - 🙎‍♀️Alice & Co. - Bobby.fish 🐡 - Carol - Eve 🦹🏿‍♂️the evil with a really long moniker name",1.0,"Change moniker of the test 4 validator nodes - To make it look more realistic. Let's use the following names: - 🙎‍♀️Alice & Co. - Bobby.fish 🐡 - Carol - Eve 🦹🏿‍♂️the evil with a really long moniker name",1,change moniker of the test validator nodes to make it look more realistic let s use the following names 🙎‍♀️alice co bobby fish 🐡 carol eve 🦹🏿‍♂️the evil with a really long moniker name,1 7550,25108819809.0,IssuesEvent,2022-11-08 18:43:59,iesahin/xvc,https://api.github.com/repos/iesahin/xvc,closed,Add storage tests to Github Actions,automation,"Remote tests use feature flags to prevent to run every time the test suite runs. These test could be run manually, either by per remote or in total. - [ ] Enter all relevant credentials as secrets to Github Actions - [ ] Create a new CI file that has separate testing for each of the remotes, via matrix ",1.0,"Add storage tests to Github Actions - Remote tests use feature flags to prevent to run every time the test suite runs. These test could be run manually, either by per remote or in total. 
- [ ] Enter all relevant credentials as secrets to Github Actions - [ ] Create a new CI file that has separate testing for each of the remotes, via matrix ",1,add storage tests to github actions remote tests use feature flags to prevent to run every time the test suite runs these test could be run manually either by per remote or in total enter all relevant credentials as secrets to github actions create a new ci file that has separate testing for each of the remotes via matrix ,1 28157,5201625265.0,IssuesEvent,2017-01-24 05:54:50,idaholab/moose,https://api.github.com/repos/idaholab/moose,closed,Problem with mesh adaptivity and stateful variables (in PorousFlow),C: Modules P: normal T: defect,"### Description of the enhancement or error report Tagging @cpgr, and asking for help from @permcody . I cannot use mesh adaptivity in PorousFlow. This could be related to #7116. I don't think I have the skill or the time to track this down. I get errors such as: ``` Process 27787 stopped * thread #1: tid = 0x14c83f, 0x000000010139b707 libmoose-dbg.0.dylib`shallowCopyData(stateful_prop_ids=0x000000010666e610, data=0x00000001063ac688, data_from=0x0000000107bcdf10) + 167 at MaterialPropertyStorage.C:37, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x0) frame #0: 0x000000010139b707 libmoose-dbg.0.dylib`shallowCopyData(stateful_prop_ids=0x000000010666e610, data=0x00000001063ac688, data_from=0x0000000107bcdf10) + 167 at MaterialPropertyStorage.C:37 34 for (unsigned int i=0; i 37 PropertyValue * prop_from = data_from[i]; // do the look-up just once (OPT) 38 if (prop != NULL && prop_from != NULL) 39 prop->swap(prop_from); 40 } ``` ### Rationale for the enhancement or information for reproducing the error You may reproduce the error by running the test file modules/porous_flow/tests/convection/convect_1d.i in debug mode and using Adaptivity (remove the active = '' line in that file) ### Identified impact (i.e. Internal object changes, limited interface changes, public API change, or a list of specific applications impacted) ",1.0,"Problem with mesh adaptivity and stateful variables (in PorousFlow) - ### Description of the enhancement or error report Tagging @cpgr, and asking for help from @permcody . I cannot use mesh adaptivity in PorousFlow. This could be related to #7116. I don't think I have the skill or the time to track this down. I get errors such as: ``` Process 27787 stopped * thread #1: tid = 0x14c83f, 0x000000010139b707 libmoose-dbg.0.dylib`shallowCopyData(stateful_prop_ids=0x000000010666e610, data=0x00000001063ac688, data_from=0x0000000107bcdf10) + 167 at MaterialPropertyStorage.C:37, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x0) frame #0: 0x000000010139b707 libmoose-dbg.0.dylib`shallowCopyData(stateful_prop_ids=0x000000010666e610, data=0x00000001063ac688, data_from=0x0000000107bcdf10) + 167 at MaterialPropertyStorage.C:37 34 for (unsigned int i=0; i 37 PropertyValue * prop_from = data_from[i]; // do the look-up just once (OPT) 38 if (prop != NULL && prop_from != NULL) 39 prop->swap(prop_from); 40 } ``` ### Rationale for the enhancement or information for reproducing the error You may reproduce the error by running the test file modules/porous_flow/tests/convection/convect_1d.i in debug mode and using Adaptivity (remove the active = '' line in that file) ### Identified impact (i.e. 
Internal object changes, limited interface changes, public API change, or a list of specific applications impacted) ",0,problem with mesh adaptivity and stateful variables in porousflow description of the enhancement or error report tagging cpgr and asking for help from permcody i cannot use mesh adaptivity in porousflow this could be related to i don t think i have the skill or the time to track this down i get errors such as process stopped thread tid libmoose dbg dylib shallowcopydata stateful prop ids data data from at materialpropertystorage c queue com apple main thread stop reason exc bad access code address frame libmoose dbg dylib shallowcopydata stateful prop ids data data from at materialpropertystorage c for unsigned int i i stateful prop ids size i propertyvalue prop data do the look up just once opt propertyvalue prop from data from do the look up just once opt if prop null prop from null prop swap prop from rationale for the enhancement or information for reproducing the error you may reproduce the error by running the test file modules porous flow tests convection convect i in debug mode and using adaptivity remove the active line in that file identified impact i e internal object changes limited interface changes public api change or a list of specific applications impacted ,0 9848,25384553893.0,IssuesEvent,2022-11-21 20:38:07,backend-br/vagas,https://api.github.com/repos/backend-br/vagas,closed,[REMOTO] Back-end developer - DB1 Group,CLT PJ Sênior PHP TDD Remoto MySQL DDD SOLID Hexagonal architecture Clean architecture,"## Nossa empresa A DB1 Global Software é uma empresa do DB1 Group e é uma Software House especializada no desenvolvimento de software para grandes players do mercado. Na DGS, como chamamos aqui, trabalhamos com a transformação digital alinhada a uma entrega de valor aos nossos clientes. Temos o propósito de enfrentar desafios e resolver problemas, buscando nos aperfeiçoar cada vez mais. Aqui, entendemos que o profissional e pessoal andam juntos, por isso, cultura importa tanto quanto conhecimento técnico. Como fazemos isso? Formando squads exclusivas para nossos projetos, com equipes multidisciplinares, que imergem no segmento do nosso cliente e focam em realizar entregas pautadas em resultados e com padrão global de qualidade. Além disso, somos sempre movidos por uma cultura que desenvolve e valoriza pessoas! ## Descrição da vaga Buscamos uma pessoa apaixonada por tecnologia, com experiência em desenvolvimento backend, que se interesse em desenvolver com PHP e busque entregar soluções de qualidade aplicando as melhores práticas de programação, tais como SOLID, design patterns, clean code. Fará parte de um time de pessoas que contribui para a evolução de uma plataforma de pagamentos. ## Local Remoto ## Requisitos **Para fazer parte dessa missão é importante:** - Atuar em desenvolvimento backend PHP, liderando o time de desenvolvedores provendo solução técnica, acessando as àreas de apoio se necessário (arquitetura de solução, devops, segurança, etc), garantindo que as entregas sejam feitas com qualidade, eficiência e gerando valor para a área de negócio; - Apoiar na evolução das habilidades técnicas do time, também participará do processo de evolução de conhecimento dos desenvolvedores que lidera, garantindo evolução e aprendizado. - Fazer code reviews e ajudar o time em decisões de arquitetura. 
**É imprescindível que você conheça:** - PHP 7+ - Composer - MySql - Orientação a Objetos - RestFull API - PHPUnit (unitário/integração) - TDD e DDD - Conceitos de boas práticas de desenvolvimento (SOLID, GRASP, Clean Architecture, Hexagonal Architecture) - Inglês técnico para comunicação escrita (Lingua padrão utilizada pelo cliente) - Inglês avançado para conversação (participará de reuniões em inglês com cliente) - Ser uma pessoa flexível e por dentro das novidades tecnológicas; - Grande capacidade de aprendizado para novas práticas, tecnologias, linguagens de programação e culturas de engenharia. ## Benefícios - Cartão de benefícios flexíveis DUCZ (Alimentação, Mobilidade, Qualidade de Vida, entre outros); - Plano de Saúde Unimed - Com mensalidade 100% paga pela DB1; - Auxílio home office; - Horários flexíveis!; - Gympass (opcional); - Plano odontológico (opcional); - Seguro de Vida (opcional); - Evolução de carreira: Plano de Desenvolvimento Individual (PDI), feedbacks constantes, Programa de Mentoria, Parceria com Cambly e Fluency para estudo de idiomas, eventos internos, subsídio para treinamentos, universidade corporativa e muito mais; - Programas de recompensa: Next1 para indicação de novos colaboradores e bônus anual atrelado a metas; - Comitês para você participar e contribuir com a comunidade e com as melhorias dos processos do DB1 Group. ## Contratação Preferencialmente, CLT ## Como se candidatar [Candidate-se aqui](https://db1group.pinpointhq.com/pt-BR/jobs/65741?utm_medium=organic_search&utm_source=GitHub) ## Labels ### Nível - Sênior ### Regime - CLT - PJ ### Alocação - Remoto ",2.0,"[REMOTO] Back-end developer - DB1 Group - ## Nossa empresa A DB1 Global Software é uma empresa do DB1 Group e é uma Software House especializada no desenvolvimento de software para grandes players do mercado. Na DGS, como chamamos aqui, trabalhamos com a transformação digital alinhada a uma entrega de valor aos nossos clientes. Temos o propósito de enfrentar desafios e resolver problemas, buscando nos aperfeiçoar cada vez mais. Aqui, entendemos que o profissional e pessoal andam juntos, por isso, cultura importa tanto quanto conhecimento técnico. Como fazemos isso? Formando squads exclusivas para nossos projetos, com equipes multidisciplinares, que imergem no segmento do nosso cliente e focam em realizar entregas pautadas em resultados e com padrão global de qualidade. Além disso, somos sempre movidos por uma cultura que desenvolve e valoriza pessoas! ## Descrição da vaga Buscamos uma pessoa apaixonada por tecnologia, com experiência em desenvolvimento backend, que se interesse em desenvolver com PHP e busque entregar soluções de qualidade aplicando as melhores práticas de programação, tais como SOLID, design patterns, clean code. Fará parte de um time de pessoas que contribui para a evolução de uma plataforma de pagamentos. ## Local Remoto ## Requisitos **Para fazer parte dessa missão é importante:** - Atuar em desenvolvimento backend PHP, liderando o time de desenvolvedores provendo solução técnica, acessando as àreas de apoio se necessário (arquitetura de solução, devops, segurança, etc), garantindo que as entregas sejam feitas com qualidade, eficiência e gerando valor para a área de negócio; - Apoiar na evolução das habilidades técnicas do time, também participará do processo de evolução de conhecimento dos desenvolvedores que lidera, garantindo evolução e aprendizado. - Fazer code reviews e ajudar o time em decisões de arquitetura. 
**É imprescindível que você conheça:** - PHP 7+ - Composer - MySql - Orientação a Objetos - RestFull API - PHPUnit (unitário/integração) - TDD e DDD - Conceitos de boas práticas de desenvolvimento (SOLID, GRASP, Clean Architecture, Hexagonal Architecture) - Inglês técnico para comunicação escrita (Lingua padrão utilizada pelo cliente) - Inglês avançado para conversação (participará de reuniões em inglês com cliente) - Ser uma pessoa flexível e por dentro das novidades tecnológicas; - Grande capacidade de aprendizado para novas práticas, tecnologias, linguagens de programação e culturas de engenharia. ## Benefícios - Cartão de benefícios flexíveis DUCZ (Alimentação, Mobilidade, Qualidade de Vida, entre outros); - Plano de Saúde Unimed - Com mensalidade 100% paga pela DB1; - Auxílio home office; - Horários flexíveis!; - Gympass (opcional); - Plano odontológico (opcional); - Seguro de Vida (opcional); - Evolução de carreira: Plano de Desenvolvimento Individual (PDI), feedbacks constantes, Programa de Mentoria, Parceria com Cambly e Fluency para estudo de idiomas, eventos internos, subsídio para treinamentos, universidade corporativa e muito mais; - Programas de recompensa: Next1 para indicação de novos colaboradores e bônus anual atrelado a metas; - Comitês para você participar e contribuir com a comunidade e com as melhorias dos processos do DB1 Group. ## Contratação Preferencialmente, CLT ## Como se candidatar [Candidate-se aqui](https://db1group.pinpointhq.com/pt-BR/jobs/65741?utm_medium=organic_search&utm_source=GitHub) ## Labels ### Nível - Sênior ### Regime - CLT - PJ ### Alocação - Remoto ",0, back end developer group nossa empresa a global software é uma empresa do group e é uma software house especializada no desenvolvimento de software para grandes players do mercado na dgs como chamamos aqui trabalhamos com a transformação digital alinhada a uma entrega de valor aos nossos clientes temos o propósito de enfrentar desafios e resolver problemas buscando nos aperfeiçoar cada vez mais aqui entendemos que o profissional e pessoal andam juntos por isso cultura importa tanto quanto conhecimento técnico como fazemos isso formando squads exclusivas para nossos projetos com equipes multidisciplinares que imergem no segmento do nosso cliente e focam em realizar entregas pautadas em resultados e com padrão global de qualidade além disso somos sempre movidos por uma cultura que desenvolve e valoriza pessoas descrição da vaga buscamos uma pessoa apaixonada por tecnologia com experiência em desenvolvimento backend que se interesse em desenvolver com php e busque entregar soluções de qualidade aplicando as melhores práticas de programação tais como solid design patterns clean code fará parte de um time de pessoas que contribui para a evolução de uma plataforma de pagamentos local remoto requisitos para fazer parte dessa missão é importante atuar em desenvolvimento backend php liderando o time de desenvolvedores provendo solução técnica acessando as àreas de apoio se necessário arquitetura de solução devops segurança etc garantindo que as entregas sejam feitas com qualidade eficiência e gerando valor para a área de negócio apoiar na evolução das habilidades técnicas do time também participará do processo de evolução de conhecimento dos desenvolvedores que lidera garantindo evolução e aprendizado fazer code reviews e ajudar o time em decisões de arquitetura é imprescindível que você conheça php composer mysql orientação a objetos restfull api phpunit unitário integração tdd e ddd conceitos de boas 
práticas de desenvolvimento solid grasp clean architecture hexagonal architecture inglês técnico para comunicação escrita lingua padrão utilizada pelo cliente inglês avançado para conversação participará de reuniões em inglês com cliente ser uma pessoa flexível e por dentro das novidades tecnológicas grande capacidade de aprendizado para novas práticas tecnologias linguagens de programação e culturas de engenharia benefícios cartão de benefícios flexíveis ducz alimentação mobilidade qualidade de vida entre outros plano de saúde unimed com mensalidade paga pela auxílio home office horários flexíveis gympass opcional plano odontológico opcional seguro de vida opcional evolução de carreira plano de desenvolvimento individual pdi feedbacks constantes programa de mentoria parceria com cambly e fluency para estudo de idiomas eventos internos subsídio para treinamentos universidade corporativa e muito mais programas de recompensa para indicação de novos colaboradores e bônus anual atrelado a metas comitês para você participar e contribuir com a comunidade e com as melhorias dos processos do group contratação preferencialmente clt como se candidatar labels nível sênior regime clt pj alocação remoto ,0 716,7874427465.0,IssuesEvent,2018-06-25 17:01:08,Shopify/quilt,https://api.github.com/repos/Shopify/quilt,closed,Add greenkeeper,automation,"Greenkeeper v3 just came out with much better support for monorepos. We should consider adding it to our repo in order to keep our dependencies up to date. https://blog.greenkeeper.io/announcing-greenkeeper-3-1504f5113998 https://greenkeeper.io/",1.0,"Add greenkeeper - Greenkeeper v3 just came out with much better support for monorepos. We should consider adding it to our repo in order to keep our dependencies up to date. https://blog.greenkeeper.io/announcing-greenkeeper-3-1504f5113998 https://greenkeeper.io/",1,add greenkeeper greenkeeper just came out with much better support for monorepos we should consider adding it to our repo in order to keep our dependencies up to date ,1 3273,13308429884.0,IssuesEvent,2020-08-26 00:58:31,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,opened,Add Workspace and Stack lifecycle method scoped env vars,area/automation-api,"Sometimes pulumi programs need environment variables for backend auth, pulumi SaaS auth, cloud auth etc. We should add the following methods to Workspace to allow for workspace scoped env vars: ```go func (w *Workspace) GetEnvVars(ctx context.Context) ([]string, error) {} func (w *Workspace) SetEvnVars(ctx context.Context) error{} ``` Then the various `runPulumiCmd` methods can read these variables and append to any CLI commands. In addition it would be good to add `optup.EnvVars`, `optpreview.EnvVars`, `optdestroy.EnvVars`, and `optfrefresh.EnvVars` to allow setting env vars that are scoped to a specific lifecycle operation. ",1.0,"Add Workspace and Stack lifecycle method scoped env vars - Sometimes pulumi programs need environment variables for backend auth, pulumi SaaS auth, cloud auth etc. We should add the following methods to Workspace to allow for workspace scoped env vars: ```go func (w *Workspace) GetEnvVars(ctx context.Context) ([]string, error) {} func (w *Workspace) SetEvnVars(ctx context.Context) error{} ``` Then the various `runPulumiCmd` methods can read these variables and append to any CLI commands. 
In addition it would be good to add `optup.EnvVars`, `optpreview.EnvVars`, `optdestroy.EnvVars`, and `optfrefresh.EnvVars` to allow setting env vars that are scoped to a specific lifecycle operation. ",1,add workspace and stack lifecycle method scoped env vars sometimes pulumi programs need environment variables for backend auth pulumi saas auth cloud auth etc we should add the following methods to workspace to allow for workspace scoped env vars go func w workspace getenvvars ctx context context string error func w workspace setevnvars ctx context context error then the various runpulumicmd methods can read these variables and append to any cli commands in addition it would be good to add optup envvars optpreview envvars optdestroy envvars and optfrefresh envvars to allow setting env vars that are scoped to a specific lifecycle operation ,1 5959,21765400137.0,IssuesEvent,2022-05-13 00:50:10,dotnet/arcade,https://api.github.com/repos/dotnet/arcade,closed,CG alerts for dotnet-arcade,First Responder Detected By - Automation External Dependency,"Per the newest scan, we have some additional CG alerts to address in the dotnet-arcade repo https://dev.azure.com/dnceng/internal/_componentGovernance/dotnet-arcade?_a=alerts&typeId=6283540&alerts-view-option=active Please note, there are issues in the main, release-3.x, release-5.x and release-6.x branches. ",1.0,"CG alerts for dotnet-arcade - Per the newest scan, we have some additional CG alerts to address in the dotnet-arcade repo https://dev.azure.com/dnceng/internal/_componentGovernance/dotnet-arcade?_a=alerts&typeId=6283540&alerts-view-option=active Please note, there are issues in the main, release-3.x, release-5.x and release-6.x branches. ",1,cg alerts for dotnet arcade per the newest scan we have some additional cg alerts to address in the dotnet arcade repo please note there are issues in the main release x release x and release x branches ,1 1666,10554849775.0,IssuesEvent,2019-10-03 20:26:01,unoplatform/uno,https://api.github.com/repos/unoplatform/uno,opened,Automatically deploy WebAssembly PR artifacts to a website so folks can click on a URL when reviewing the PR,area/automation kind/enhancement," ## What would you like to be added: 1. [x] Automatically build pull-requests 1. [ ] Create Azure Blob Storage 1. [ ] Create Azure DevOps deploy task 1. [ ] Take pull-request # from GitHub and create blobstorage/$pullRequestId folder and replacing existing WASM content 1. [ ] Take pull-request # from GitHub and create blobstorage/$pullRequestId folder and replacing existing APK content 1. [ ] [Set mime type for blobstorage/$pullRequestId for wasm to correct type](https://liftcodeplay.com/2017/11/28/how-to-fix-azure-storage-blob-content-types/). 1. [ ] [Automatically deploy pull-requests](https://docs.microsoft.com/en-us/azure/devops/pipelines/release/deploy-pull-request-builds?view=azure-devops) 1. [ ] [Comment on the pull-request with the URL](https://marketplace.visualstudio.com/items?itemName=SOUTHWORKS.github-pr-comment) 1. [ ] Create Azure Workbooks task to delete any folders older than 90 days in the blog storage ## Why is this needed: Make reviewing pull-requests easier, improve productivity and quality. ## For which Platform: - [ ] iOS - [x] Android - [x] WebAssembly - [ ] Windows ## Anything else we need to know? ",1.0,"Automatically deploy WebAssembly PR artifacts to a website so folks can click on a URL when reviewing the PR - ## What would you like to be added: 1. [x] Automatically build pull-requests 1. 
[ ] Create Azure Blob Storage 1. [ ] Create Azure DevOps deploy task 1. [ ] Take pull-request # from GitHub and create blobstorage/$pullRequestId folder and replacing existing WASM content 1. [ ] Take pull-request # from GitHub and create blobstorage/$pullRequestId folder and replacing existing APK content 1. [ ] [Set mime type for blobstorage/$pullRequestId for wasm to correct type](https://liftcodeplay.com/2017/11/28/how-to-fix-azure-storage-blob-content-types/). 1. [ ] [Automatically deploy pull-requests](https://docs.microsoft.com/en-us/azure/devops/pipelines/release/deploy-pull-request-builds?view=azure-devops) 1. [ ] [Comment on the pull-request with the URL](https://marketplace.visualstudio.com/items?itemName=SOUTHWORKS.github-pr-comment) 1. [ ] Create Azure Workbooks task to delete any folders older than 90 days in the blog storage ## Why is this needed: Make reviewing pull-requests easier, improve productivity and quality. ## For which Platform: - [ ] iOS - [x] Android - [x] WebAssembly - [ ] Windows ## Anything else we need to know? ",1,automatically deploy webassembly pr artifacts to a website so folks can click on a url when reviewing the pr what would you like to be added automatically build pull requests create azure blob storage create azure devops deploy task take pull request from github and create blobstorage pullrequestid folder and replacing existing wasm content take pull request from github and create blobstorage pullrequestid folder and replacing existing apk content create azure workbooks task to delete any folders older than days in the blog storage why is this needed make reviewing pull requests easier improve productivity and quality for which platform ios android webassembly windows anything else we need to know ,1 217155,16680940340.0,IssuesEvent,2021-06-07 23:36:38,KwanLab/Autometa,https://api.github.com/repos/KwanLab/Autometa,opened,Animations in methods documentation,documentation stretch,"Add manim animations in methods section of documentation. Some scenes have not yet been written. Scenes for particular methods are linked where they have already been generated. ## 🎨 Scenes / Animations 🎨 1. length filter 2. coverage calculation 3. ORF calling 4. marker annotation 5. taxon assignment 6. [_K_-mer counting](https://youtu.be/h-UbyaHzzNs?t=349) (5:49 - 6:42) 7. [_K_-mer embedding](https://youtu.be/h-UbyaHzzNs?t=422) (7:02) 8. [3 dimensions of clustering features](https://youtu.be/h-UbyaHzzNs?t=533) (8:53 - 9:02) 9. [Binning with recursive DBSCAN](https://youtu.be/h-UbyaHzzNs?t=438) (7:18 - 8:46) 10. Unclustered Recruitment ",1.0,"Animations in methods documentation - Add manim animations in methods section of documentation. Some scenes have not yet been written. Scenes for particular methods are linked where they have already been generated. ## 🎨 Scenes / Animations 🎨 1. length filter 2. coverage calculation 3. ORF calling 4. marker annotation 5. taxon assignment 6. [_K_-mer counting](https://youtu.be/h-UbyaHzzNs?t=349) (5:49 - 6:42) 7. [_K_-mer embedding](https://youtu.be/h-UbyaHzzNs?t=422) (7:02) 8. [3 dimensions of clustering features](https://youtu.be/h-UbyaHzzNs?t=533) (8:53 - 9:02) 9. [Binning with recursive DBSCAN](https://youtu.be/h-UbyaHzzNs?t=438) (7:18 - 8:46) 10. 
Unclustered Recruitment ",0,animations in methods documentation add manim animations in methods section of documentation some scenes have not yet been written scenes for particular methods are linked where they have already been generated 🎨 scenes animations 🎨 length filter coverage calculation orf calling marker annotation taxon assignment unclustered recruitment ,0 63041,7675493832.0,IssuesEvent,2018-05-15 08:52:01,pxlshpr/travel-escapes-website,https://api.github.com/repos/pxlshpr/travel-escapes-website,reopened,Drop-down boxes on slider form have no background colour on iOS devices,bug design,"The drop down boxes on the main slider of the homepage aren't displaying the white background that the other form fields have. This results in them being hard to see, in addition to breaking the design. ![screenshot-ipad](https://user-images.githubusercontent.com/2699772/39980875-9bc42068-5780-11e8-9ac4-f7867ea5e2c5.jpeg) ![screenshot-iphone](https://user-images.githubusercontent.com/2699772/39980984-08a937fe-5781-11e8-9b79-14bcf7db74bb.jpeg)",1.0,"Drop-down boxes on slider form have no background colour on iOS devices - The drop down boxes on the main slider of the homepage aren't displaying the white background that the other form fields have. This results in them being hard to see, in addition to breaking the design. ![screenshot-ipad](https://user-images.githubusercontent.com/2699772/39980875-9bc42068-5780-11e8-9ac4-f7867ea5e2c5.jpeg) ![screenshot-iphone](https://user-images.githubusercontent.com/2699772/39980984-08a937fe-5781-11e8-9b79-14bcf7db74bb.jpeg)",0,drop down boxes on slider form have no background colour on ios devices the drop down boxes on the main slider of the homepage aren t displaying the white background that the other form fields have this results in them being hard to see in addition to breaking the design ,0 217267,16848846855.0,IssuesEvent,2021-06-20 04:16:44,hakehuang/infoflow,https://api.github.com/repos/hakehuang/infoflow,opened," tests-ci :kernel.memory_protection.kobject_access_invalid_kobject : zephyr-v2.6.0-286-g46029914a7ac: lpcxpresso55s28: test Timeout ",area: Tests," **Describe the bug** kernel.memory_protection.kobject_access_invalid_kobject test is Timeout on zephyr-v2.6.0-286-g46029914a7ac on lpcxpresso55s28 see logs for details **To Reproduce** 1. ``` scripts/twister --device-testing --device-serial /dev/ttyACM0 -p lpcxpresso55s28 --testcase-root tests --sub-test kernel.memory_protection ``` 2. See error **Expected behavior** test pass **Impact** **Logs and console output** ``` - *** Booting Zephyr OS build zephyr-v2.6.0-286-g46029914a7ac *** Running test suite memory_protection_test_suite =================================================================== START - test_permission_inheritance ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993 ESF could not be retrieved successfully. Shall never occur. ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993 ESF could not be retrieved successfully. Shall never occur. ``` **Environment (please complete the following information):** - OS: (e.g. 
Linux ) - Toolchain (e.g Zephyr SDK) - Commit SHA or Version used: zephyr-v2.6.0-286-g46029914a7ac ",1.0," tests-ci :kernel.memory_protection.kobject_access_invalid_kobject : zephyr-v2.6.0-286-g46029914a7ac: lpcxpresso55s28: test Timeout - **Describe the bug** kernel.memory_protection.kobject_access_invalid_kobject test is Timeout on zephyr-v2.6.0-286-g46029914a7ac on lpcxpresso55s28 see logs for details **To Reproduce** 1. ``` scripts/twister --device-testing --device-serial /dev/ttyACM0 -p lpcxpresso55s28 --testcase-root tests --sub-test kernel.memory_protection ``` 2. See error **Expected behavior** test pass **Impact** **Logs and console output** ``` - *** Booting Zephyr OS build zephyr-v2.6.0-286-g46029914a7ac *** Running test suite memory_protection_test_suite =================================================================== START - test_permission_inheritance ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993 ESF could not be retrieved successfully. Shall never occur. ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993 ESF could not be retrieved successfully. Shall never occur. ``` **Environment (please complete the following information):** - OS: (e.g. Linux ) - Toolchain (e.g Zephyr SDK) - Commit SHA or Version used: zephyr-v2.6.0-286-g46029914a7ac ",0, tests ci kernel memory protection kobject access invalid kobject zephyr test timeout describe the bug kernel memory protection kobject access invalid kobject test is timeout on zephyr on see logs for details to reproduce scripts twister device testing device serial dev p testcase root tests sub test kernel memory protection see error expected behavior test pass impact logs and console output booting zephyr os build zephyr running test suite memory protection test suite start test permission inheritance assertion fail west topdir zephyr arch arm core cortex m fault c esf could not be retrieved successfully shall never occur assertion fail west topdir zephyr arch arm core cortex m fault c esf could not be retrieved successfully shall never occur environment please complete the following information os e g linux toolchain e g zephyr sdk commit sha or version used zephyr ,0 8891,27172373457.0,IssuesEvent,2023-02-17 20:43:41,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Maxed out OneDrive webhook subscriptions - Unable to delete existing subscriptions,Needs: Triage :mag: automation:Closed," How am I supposed to delete a subscription when I don't have an id? I accidentally created a loop that maxed out my subscriptions and now I have no way of getting rid of them because I don't have the ids recorded anywhere. --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 70301fdf-007d-834f-e5e3-cb2bfbbd2d84 * Version Independent ID: 5bada6e0-9402-e46a-9f0a-4a9c9e181795 * Content: [Update a webhook subscription - OneDrive API - OneDrive dev center](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/api/subscription_update?view=odsp-graph-online#feedback) * Content Source: [docs/rest-api/api/subscription_update.md](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/rest-api/api/subscription_update.md) * Product: **onedrive** * GitHub Login: @JeremyKelley * Microsoft Alias: **JeremyKe**",1.0,"Maxed out OneDrive webhook subscriptions - Unable to delete existing subscriptions - How am I supposed to delete a subscription when I don't have an id? I accidentally created a loop that maxed out my subscriptions and now I have no way of getting rid of them because I don't have the ids recorded anywhere. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 70301fdf-007d-834f-e5e3-cb2bfbbd2d84 * Version Independent ID: 5bada6e0-9402-e46a-9f0a-4a9c9e181795 * Content: [Update a webhook subscription - OneDrive API - OneDrive dev center](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/api/subscription_update?view=odsp-graph-online#feedback) * Content Source: [docs/rest-api/api/subscription_update.md](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/rest-api/api/subscription_update.md) * Product: **onedrive** * GitHub Login: @JeremyKelley * Microsoft Alias: **JeremyKe**",1,maxed out onedrive webhook subscriptions unable to delete existing subscriptions how am i supposed to delete a subscription when i don t have an id i accidentally created a loop that maxed out my subscriptions and now i have no way of getting rid of them because i don t have the ids recorded anywhere document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product onedrive github login jeremykelley microsoft alias jeremyke ,1 85,3518690844.0,IssuesEvent,2016-01-12 14:07:02,blackbaud/skyux,https://api.github.com/repos/blackbaud/skyux,closed,Visual test label screenshots don't show the whole label,automation,"The screenshots of labels taken during our visual regression tests chop off the top and bottom of each label. The screenshot should show the whole label. ![mac_chrome_labels_full labels 1280px baseline](https://cloud.githubusercontent.com/assets/11655766/12237734/2d98c82e-b84e-11e5-9c3f-f6f3eb07496c.png) ",1.0,"Visual test label screenshots don't show the whole label - The screenshots of labels taken during our visual regression tests chop off the top and bottom of each label. The screenshot should show the whole label. 
![mac_chrome_labels_full labels 1280px baseline](https://cloud.githubusercontent.com/assets/11655766/12237734/2d98c82e-b84e-11e5-9c3f-f6f3eb07496c.png) ",1,visual test label screenshots don t show the whole label the screenshots of labels taken during our visual regression tests chop off the top and bottom of each label the screenshot should show the whole label ,1 4786,17470855630.0,IssuesEvent,2021-08-07 05:17:34,ThinkingEngine-net/PickleTestSuite,https://api.github.com/repos/ThinkingEngine-net/PickleTestSuite,opened,Add selenium command to navigate browser windows,enhancement Browser Automation Selenium,Extend the gherkin support to include the switchto commands.,1.0,Add selenium command to navigate browser windows - Extend the gherkin support to include the switchto commands.,1,add selenium command to navigate browser windows extend the gherkin support to include the switchto commands ,1 7401,24783059926.0,IssuesEvent,2022-10-24 07:30:50,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,Wrong element is clicked in Chrome mobile emulation mode,TYPE: bug AREA: client SYSTEM: automations FREQUENCY: level 1,"### What is your Test Scenario? I'm using Chrome mobile device emulation mode with the `device` parameter. In my test code, I click a button selected by its text content. ### What is the Current behavior? The test clicks a wrong element and proceeds. ### What is the Expected behavior? The test clicks the element I've specified. ### What is your web application and your TestCafe test code? Your website URL (or attach your complete example): [change.org](https://www.change.org)
Your complete test code (or attach your test files):

```js
import { Selector } from 'testcafe';

fixture `My fixture`;

test('Demo test', async t => {
    await t
        .navigateTo('https://www.change.org')
        //.debug()
        .click(Selector('div').withText('Start a petition'))
        .expect(true).ok();
});
```

This test clicks an empty area if you run it without debug mode. In debug mode, if you wait for the petition list to load, the test scrolls down and clicks a petition title.
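A plausible cause for the wrong click, not confirmed by this report: `Selector('div').withText('Start a petition')` also matches ancestor `div`s, because an ancestor's text content contains its descendants' text, and TestCafe clicks the center of the first matching element — which on a wide wrapper can be empty space. A hedged sketch that scopes the selector to the actual clickable element (the `a,button` selector is an illustrative assumption about the page's markup):

```ts
// Sketch: prefer the innermost clickable element over a generic wrapper <div>.
import { Selector } from 'testcafe';

fixture `Selector scoping sketch`
    .page `https://www.change.org`;

test('Click the element that actually carries the text', async t => {
    // withText() filters by contained text, so narrow the tag first.
    const startPetition = Selector('a,button').withText('Start a petition');
    await t
        .expect(startPetition.exists).ok('clickable target not found')
        .click(startPetition);
});
```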
    Your complete configuration file (if any):
    Your complete test report:
    Screenshots:
    ### Steps to Reproduce: Use Chrome mobile device emulation mode to run a test that: 1. Navigates to [change.org](https://www.change.org) 2. Clicks `Start a petition`. ### Your Environment details: * testcafe version: 1.5.0 * node.js version: 8.16.0 * command-line arguments: `testcafe 'chrome:emulation:device=iphone X' test.js` * browser name and version: Chrome 77 * platform and version: Windows 10 or macOS 10.14.6 ",1.0,"Wrong element is clicked in Chrome mobile emulation mode - ### What is your Test Scenario? I'm using Chrome mobile device emulation mode with the `device` parameter. In my test code, I click a button selected by its text content. ### What is the Current behavior? The test clicks a wrong element and proceeds. ### What is the Expected behavior? The test clicks the element I've specified. ### What is your web application and your TestCafe test code? Your website URL (or attach your complete example): [change.org](https://www.change.org)
Your complete test code (or attach your test files):

```js
import { Selector } from 'testcafe';

fixture `My fixture`;

test('Demo test', async t => {
    await t
        .navigateTo('https://www.change.org')
        //.debug()
        .click(Selector('div').withText('Start a petition'))
        .expect(true).ok();
});
```

This test clicks an empty area if you run it without debug mode. In debug mode, if you wait for the petition list to load, the test scrolls down and clicks a petition title.
    Your complete configuration file (if any):
    Your complete test report:
    Screenshots:
    ### Steps to Reproduce: Use Chrome mobile device emulation mode to run a test that: 1. Navigates to [change.org](https://www.change.org) 2. Clicks `Start a petition`. ### Your Environment details: * testcafe version: 1.5.0 * node.js version: 8.16.0 * command-line arguments: `testcafe 'chrome:emulation:device=iphone X' test.js` * browser name and version: Chrome 77 * platform and version: Windows 10 or macOS 10.14.6 ",1,wrong element is clicked in chrome mobile emulation mode what is your test scenario i m using chrome mobile device emulation mode with the device parameter in my test code i click a button selected by its text content what is the current behavior the test clicks a wrong element and proceeds what is the expected behavior the test clicks the element i ve specified what is your web application and your testcafe test code your website url or attach your complete example your complete test code or attach your test files js import selector from testcafe fixture my fixture test demo test async t await t navigateto debug click selector div withtext start a petition expect true ok this test clicks an empty area if you run it without debug mode in debug mode if you wait for the petition list to load the test scrolls down and clicks a petition title your complete configuration file if any your complete test report screenshots steps to reproduce use chrome mobile device emulation mode to run a test that navigates to clicks start a petition your environment details testcafe version node js version command line arguments testcafe chrome emulation device iphone x test js browser name and version chrome platform and version windows or macos ,1 8548,27117232401.0,IssuesEvent,2023-02-15 19:35:39,o3de/o3de,https://api.github.com/repos/o3de/o3de,closed,Periodic Test Failure: MaterialEditor_Atom_PeriodicTests is consistently failing in Periodic test suites,feature/graphics kind/bug needs-triage sig/graphics-audio kind/automation,"**Describe the bug** MaterialEditor_Atom_PeriodicTests is consistently failing in Periodic test suites. **Failed Jenkins Job Information:** https://jenkins.build.o3de.org/blue/organizations/jenkins/O3DE_periodic-incremental-daily/detail/development/251/pipeline/1907 **Attachments** Windows - [log.txt](https://github.com/o3de/o3de/files/10711209/log.txt) Linux - [log(1).txt](https://github.com/o3de/o3de/files/10711210/log.1.txt) **Additional context** This is the only test executing in this suite. Updating CMakeLists.txt to move this test to execute in the Sandbox test suite. ",1.0,"Periodic Test Failure: MaterialEditor_Atom_PeriodicTests is consistently failing in Periodic test suites - **Describe the bug** MaterialEditor_Atom_PeriodicTests is consistently failing in Periodic test suites. **Failed Jenkins Job Information:** https://jenkins.build.o3de.org/blue/organizations/jenkins/O3DE_periodic-incremental-daily/detail/development/251/pipeline/1907 **Attachments** Windows - [log.txt](https://github.com/o3de/o3de/files/10711209/log.txt) Linux - [log(1).txt](https://github.com/o3de/o3de/files/10711210/log.1.txt) **Additional context** This is the only test executing in this suite. Updating CMakeLists.txt to move this test to execute in the Sandbox test suite. 
",1,periodic test failure materialeditor atom periodictests is consistently failing in periodic test suites describe the bug materialeditor atom periodictests is consistently failing in periodic test suites failed jenkins job information attachments windows linux additional context this is the only test executing in this suite updating cmakelists txt to move this test to execute in the sandbox test suite ,1 8518,27046149359.0,IssuesEvent,2023-02-13 09:55:45,scanapi/scanapi,https://api.github.com/repos/scanapi/scanapi,closed,Move Changelog Lint from CircleCI to GitHub Actions,Automation,"## Description We want to use only one CI solution for the project. Currently, we use both CircleCI and GitHub Actions. This issue is to move [Changelog Lint job](https://github.com/scanapi/scanapi/blob/main/.circleci/config.yml#L10) from CircleCI to [GitHub Actions](https://github.com/features/actions). This is the only job missing to finish the migration from CircleCI to GitHub Actions. Currently, there is no github action for the Changelog Lint job. So we have three options: 1. Implement the diff command on @rcmachado 's tool (or as he points out in Add --check option to fmt rcmachado/changelog#71: fmt --check) 2. Do not use his GitHub Action and simply download and execute the command yourself 3. Keep using his GitHub Action but try to play with shell arguments to execute fmt, get the results from it, and then call system's diff Probably: 1. is the best option if you know (or want to play with) Golang. 3. might work well if you know how to use xargs (I don't 😅) 2. could be achieved by some sort of the following commands: https://github.com/scanapi/scanapi/pull/246#discussion_r464083195 --- [CircleCI current config](https://github.com/scanapi/scanapi/blob/main/.circleci/config.yml#L63-L65) [Github Actions config](https://github.com/scanapi/scanapi/tree/main/.github/workflows) --- Child of #137 ",1.0,"Move Changelog Lint from CircleCI to GitHub Actions - ## Description We want to use only one CI solution for the project. Currently, we use both CircleCI and GitHub Actions. This issue is to move [Changelog Lint job](https://github.com/scanapi/scanapi/blob/main/.circleci/config.yml#L10) from CircleCI to [GitHub Actions](https://github.com/features/actions). This is the only job missing to finish the migration from CircleCI to GitHub Actions. Currently, there is no github action for the Changelog Lint job. So we have three options: 1. Implement the diff command on @rcmachado 's tool (or as he points out in Add --check option to fmt rcmachado/changelog#71: fmt --check) 2. Do not use his GitHub Action and simply download and execute the command yourself 3. Keep using his GitHub Action but try to play with shell arguments to execute fmt, get the results from it, and then call system's diff Probably: 1. is the best option if you know (or want to play with) Golang. 3. might work well if you know how to use xargs (I don't 😅) 2. 
could be achieved by some sort of the following commands: https://github.com/scanapi/scanapi/pull/246#discussion_r464083195 --- [CircleCI current config](https://github.com/scanapi/scanapi/blob/main/.circleci/config.yml#L63-L65) [Github Actions config](https://github.com/scanapi/scanapi/tree/main/.github/workflows) --- Child of #137 ",1,move changelog lint from circleci to github actions description we want to use only one ci solution for the project currently we use both circleci and github actions this issue is to move from circleci to this is the only job missing to finish the migration from circleci to github actions currently there is no github action for the changelog lint job so we have three options implement the diff command on rcmachado s tool or as he points out in add check option to fmt rcmachado changelog fmt check do not use his github action and simply download and execute the command yourself keep using his github action but try to play with shell arguments to execute fmt get the results from it and then call system s diff probably is the best option if you know or want to play with golang might work well if you know how to use xargs i don t 😅 could be achieved by some sort of the following commands child of ,1 3399,13668879190.0,IssuesEvent,2020-09-29 00:16:56,surge-synthesizer/surge,https://api.github.com/repos/surge-synthesizer/surge,closed,MIDI controls 1-8 weird track control behavior (Reaper),Host Automation,"1. Load Surge in Reaper 2. Click on any of the 8 macro sliders 3. Click on Param button in Reaper's plugin header, then Show in track controls 4. Try moving the track control up/down As you reach the min or max value, you will see it is slowing down asymptotically, never fully showing 0.00% or 100.00%. Other automatable controls don't seem to behave like this.
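Picking up option 2 from the scanapi changelog-lint discussion above ("download and execute the command yourself"): a minimal CI-step sketch is to run the formatter and fail when its output differs from the committed file. The `changelog` binary name, and the assumption that `changelog fmt` writes the canonical form to stdout, come from the rcmachado/changelog tool referenced in that issue and should be verified:

```ts
// Hedged sketch of a changelog-lint CI step: run `changelog fmt` and compare
// its output against the committed CHANGELOG.md.
import { execFileSync } from "node:child_process";
import { readFileSync } from "node:fs";

const formatted = execFileSync("changelog", ["fmt"], { encoding: "utf8" });
const committed = readFileSync("CHANGELOG.md", "utf8");

if (formatted !== committed) {
  console.error("CHANGELOG.md is not in canonical form; run `changelog fmt`.");
  process.exit(1); // non-zero exit fails the CI job
}
console.log("Changelog lint passed.");
```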
![asymptote](https://user-images.githubusercontent.com/2393720/70568674-a4146380-1b98-11ea-97eb-f4452ea143c0.gif) Note in the GIF that in some cases you can also see the param value display changing on its own, without interacting with the track control at all",1.0,"MIDI controls 1-8 weird track control behavior (Reaper) - 1. Load Surge in Reaper 2. Click on any of the 8 macro sliders 3. Click on Param button in Reaper's plugin header, then Show in track controls 4. Try moving the track control up/down As you reach the min or max value, you will see it is slowing down asymptotically, never fully showing 0.00% or 100.00%. Other automatable controls don't seem to behave like this. ![asymptote](https://user-images.githubusercontent.com/2393720/70568674-a4146380-1b98-11ea-97eb-f4452ea143c0.gif) Note in the GIF that in some cases you can also see the param value display changing on its own, without interacting with the track control at all",1,midi controls weird track control behavior reaper load surge in reaper click on any of the macro sliders click on param button in reaper s plugin header then show in track controls try moving the track control up down as you reach the min or max value you will see it is slowing down asymptotically never fully showing or other automatable controls don t seem to behave like this note in the gif that in some cases you can also see the param value display changing on its own without interacting with the track control at all,1 2066,11352919649.0,IssuesEvent,2020-01-24 14:36:57,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,opened,a8n: minor UI improvements,automation web,"- [ ] Add field labels to form inputs, so they are visible when there is input text: ![image](https://user-images.githubusercontent.com/1559622/73075196-3e421980-3e70-11ea-9792-f50b1c81ea47.png) - [ ] Confirm code hosts all use numeric changeset numbers, then make the changeset number input field a number field.
- [ ] Add helper text below the repo input field to make it clear what is expected as input: ""This is the name of the repository in Sourcegraph (everything following /use this"" - [ ] Add some padding between the form inputs and the errors: ![image](https://user-images.githubusercontent.com/1559622/73075641-3afb5d80-3e71-11ea-8297-78e604205d20.png) ",1, minor ui improvements add field labels to form inputs so they are visible when there is input text confirm code hosts all use numeric changeset numbers then make the changeset number input field a number field add helper text below the repo input field to make it clear what is expected as input this is the name of the repository in sourcegraph everything following use this add some padding between the form inputs and the errors ,1 2538,5299988021.0,IssuesEvent,2017-02-10 02:26:10,mitchellh/packer,https://api.github.com/repos/mitchellh/packer,closed,Packer push causes inconsistent behavior in atlas builds,docs post-processor/atlas,"I started with the [Atlas Packer Vagrant Tutorial](https://github.com/hashicorp/atlas-packer-vagrant-tutorial.git) template and successfully pushed to atlas and completed a build with no changes following the tutorial steps: ``` packer push -name udev/ceapi template.json ``` After the successful build I added a push section to the template to include a directory created in the project root: ``` json ""push"": { ""name"": ""udev/ceapi"", ""vcs"": true, ""include"": [ ""directory"" ] } ``` The build fails with the following log: ``` ---- Started new build at 2015-08-16 16:56:26.660917832 +0000 UTC ---- Packer v0.8.2 vmware-iso output will be in this color.
7 error(s) occurred: * Bad script 'scripts/base.sh': stat scripts/base.sh: no such file or directory * Bad script 'scripts/virtualbox.sh': stat scripts/virtualbox.sh: no such file or directory * Bad script 'scripts/vmware.sh': stat scripts/vmware.sh: no such file or directory * Bad script 'scripts/vagrant.sh': stat scripts/vagrant.sh: no such file or directory * Bad script 'scripts/dep.sh': stat scripts/dep.sh: no such file or directory * Bad script 'scripts/cleanup.sh': stat scripts/cleanup.sh: no such file or directory * Bad script 'scripts/zerodisk.sh': stat scripts/zerodisk.sh: no such file or directory ``` expected: adding a push section with an array of included files/directories does not cause files in vcs to not be found. This issue occurred on 0.8.5 and 0.8.2 (installed via brew) ",0,packer push causes inconsitent behavior in atlas builds i started with the template and successfully pushed to atlas and completed a build with no changes following the tutorial steps packer push name udev ceapi template json after the successful build i added a push section to the template to include a directory created in the project root json push name udev ceapi vcs true include directory the build fails with the following log started new build at utc packer vmware iso output will be in this color error s occurred bad script scripts base sh stat scripts base sh no such file or directory bad script scripts virtualbox sh stat scripts virtualbox sh no such file or directory bad script scripts vmware sh stat scripts vmware sh no such file or directory bad script scripts vagrant sh stat scripts vagrant sh no such file or directory bad script scripts dep sh stat scripts dep sh no such file or directory bad script scripts cleanup sh stat scripts cleanup sh no such file or directory bad script scripts zerodisk sh stat scripts zerodisk sh no such file or directory expected adding a push section with an array of included files directories does not cause files in vcs to not be found this issue occurred on and installed via brew ,0 9958,30834297917.0,IssuesEvent,2023-08-02 05:59:38,tikv/pd,https://api.github.com/repos/tikv/pd,closed,PD panic after inject PD failed,type/bug severity/critical found/automation may-affects-5.2 may-affects-5.3 may-affects-5.4 may-affects-6.1 may-affects-6.5 may-affects-7.1 affects-7.3,"## Bug Report ### What did you do? 1. TiDB cluster with 1 PD, 3 TiCDC, 3 TiKV, 1 TiDB 2. create changefeed 3. run workload 4. 18:15:17 - 10:15:37 inject PD failure for 20s ### What did you expect to see? PD should be rececovered after PD failure injection ### What did you see instead? 
PD repeatedly panic ``` |panic: runtime error: invalid memory address or nil pointer dereference │ │ [signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x21cf4ef] │ │ │ │ goroutine 366 [running]: │ │ github.com/tikv/pd/server.(*GrpcServer).WatchGlobalConfig(0xc0007e2a30, 0xc000876200, {0x3b60b30, 0xc00034d200}) │ │ /home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/pd/server/grpc_service.go:2366 +0x18f │ │ github.com/pingcap/kvproto/pkg/pdpb._PD_WatchGlobalConfig_Handler({0x2dbf240?, 0xc0007e2a30}, {0x3b5de68, 0xc001430a20}) │ │ /go/pkg/mod/github.com/pingcap/kvproto@v0.0.0-20230720094213-a3b4a77b4333/pkg/pdpb/pdpb.pb.go:10264 +0xd3 │ │ github.com/grpc-ecosystem/go-grpc-middleware.ChainStreamServer.func1.1({0x2dbf240?, 0xc0007e2a30?}, {0x3b5de68?, 0xc001430a20?}) │ │ /go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.0.1-0.20190118093823-f849b5445de4/chain.go:71 +0x89 │ │ github.com/grpc-ecosystem/go-grpc-prometheus.(*ServerMetrics).StreamServerInterceptor.func1({0x2dbf240, 0xc0007e2a30}, {0x3b5e9a8?, 0xc0019ecf00}, 0xc000770240?, 0xc0007ca0a0) │ │ /go/pkg/mod/github.com/grpc-ecosystem/go-grpc-prometheus@v1.2.0/server_metrics.go:121 +0x109 │ │ github.com/grpc-ecosystem/go-grpc-middleware.ChainStreamServer.func1.1({0x2dbf240?, 0xc0007e2a30?}, {0x3b5e9a8?, 0xc0019ecf00?}) │ │ /go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.0.1-0.20190118093823-f849b5445de4/chain.go:74 +0x6f │ │ go.etcd.io/etcd/etcdserver/api/v3rpc.newStreamInterceptor.func1({0x2dbf240, 0xc0007e2a30}, {0x3b5e9a8, 0xc0019ecf00}, 0xc0019f3000?, 0xc0007ca0a0) │ │ /go/pkg/mod/go.etcd.io/etcd@v0.5.0-alpha.5.0.20220915004622-85b640cee793/etcdserver/api/v3rpc/interceptor.go:237 +0x483 │ │ github.com/grpc-ecosystem/go-grpc-middleware.ChainStreamServer.func1({0x2dbf240, 0xc0007e2a30}, {0x3b5e9a8, 0xc0019ecf00}, 0xc0014309c0, 0x38bf198) │ │ /go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.0.1-0.20190118093823-f849b5445de4/chain.go:79 +0x1a3 │ │ google.golang.org/grpc.(*Server).processStreamingRPC(0xc001a12f00, {0x3b64f60, 0xc001a13380}, 0xc001a2ff00, 0xc0005a6e70, 0x4df1820, 0x0) │ │ /go/pkg/mod/google.golang.org/grpc@v1.26.0/server.go:1244 +0xdc6 │ │ google.golang.org/grpc.(*Server).handleStream(0xc001a12f00, {0x3b64f60, 0xc001a13380}, 0xc001a2ff00, 0x0) │ │ /go/pkg/mod/google.golang.org/grpc@v1.26.0/server.go:1317 +0x9de │ │ google.golang.org/grpc.(*Server).serveStreams.func1.1() │ │ /go/pkg/mod/google.golang.org/grpc@v1.26.0/server.go:722 +0x98 │ │ created by google.golang.org/grpc.(*Server).serveStreams.func1 │ │ /go/pkg/mod/google.golang.org/grpc@v1.26.0/server.go:720 +0xea ``` ![image](https://github.com/tikv/pd/assets/7403864/2599cfc2-2332-49e6-982e-d77d333fe1f2) ### What version of PD are you using (`pd-server -V`)? [root@upstream-pd-0 /]# /pd-server -V Release Version: v7.3.0-alpha Edition: Community Git Commit Hash: 4db1735974b95e2b9884715679ca509e183881eb Git Branch: heads/refs/tags/v7.3.0-alpha UTC Build Time: 2023-07-27 11:36:08 ",1.0,"PD panic after inject PD failed - ## Bug Report ### What did you do? 1. TiDB cluster with 1 PD, 3 TiCDC, 3 TiKV, 1 TiDB 2. create changefeed 3. run workload 4. 18:15:17 - 10:15:37 inject PD failure for 20s ### What did you expect to see? PD should be rececovered after PD failure injection ### What did you see instead? 
PD repeatedly panic ``` |panic: runtime error: invalid memory address or nil pointer dereference │ │ [signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x21cf4ef] │ │ │ │ goroutine 366 [running]: │ │ github.com/tikv/pd/server.(*GrpcServer).WatchGlobalConfig(0xc0007e2a30, 0xc000876200, {0x3b60b30, 0xc00034d200}) │ │ /home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/pd/server/grpc_service.go:2366 +0x18f │ │ github.com/pingcap/kvproto/pkg/pdpb._PD_WatchGlobalConfig_Handler({0x2dbf240?, 0xc0007e2a30}, {0x3b5de68, 0xc001430a20}) │ │ /go/pkg/mod/github.com/pingcap/kvproto@v0.0.0-20230720094213-a3b4a77b4333/pkg/pdpb/pdpb.pb.go:10264 +0xd3 │ │ github.com/grpc-ecosystem/go-grpc-middleware.ChainStreamServer.func1.1({0x2dbf240?, 0xc0007e2a30?}, {0x3b5de68?, 0xc001430a20?}) │ │ /go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.0.1-0.20190118093823-f849b5445de4/chain.go:71 +0x89 │ │ github.com/grpc-ecosystem/go-grpc-prometheus.(*ServerMetrics).StreamServerInterceptor.func1({0x2dbf240, 0xc0007e2a30}, {0x3b5e9a8?, 0xc0019ecf00}, 0xc000770240?, 0xc0007ca0a0) │ │ /go/pkg/mod/github.com/grpc-ecosystem/go-grpc-prometheus@v1.2.0/server_metrics.go:121 +0x109 │ │ github.com/grpc-ecosystem/go-grpc-middleware.ChainStreamServer.func1.1({0x2dbf240?, 0xc0007e2a30?}, {0x3b5e9a8?, 0xc0019ecf00?}) │ │ /go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.0.1-0.20190118093823-f849b5445de4/chain.go:74 +0x6f │ │ go.etcd.io/etcd/etcdserver/api/v3rpc.newStreamInterceptor.func1({0x2dbf240, 0xc0007e2a30}, {0x3b5e9a8, 0xc0019ecf00}, 0xc0019f3000?, 0xc0007ca0a0) │ │ /go/pkg/mod/go.etcd.io/etcd@v0.5.0-alpha.5.0.20220915004622-85b640cee793/etcdserver/api/v3rpc/interceptor.go:237 +0x483 │ │ github.com/grpc-ecosystem/go-grpc-middleware.ChainStreamServer.func1({0x2dbf240, 0xc0007e2a30}, {0x3b5e9a8, 0xc0019ecf00}, 0xc0014309c0, 0x38bf198) │ │ /go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.0.1-0.20190118093823-f849b5445de4/chain.go:79 +0x1a3 │ │ google.golang.org/grpc.(*Server).processStreamingRPC(0xc001a12f00, {0x3b64f60, 0xc001a13380}, 0xc001a2ff00, 0xc0005a6e70, 0x4df1820, 0x0) │ │ /go/pkg/mod/google.golang.org/grpc@v1.26.0/server.go:1244 +0xdc6 │ │ google.golang.org/grpc.(*Server).handleStream(0xc001a12f00, {0x3b64f60, 0xc001a13380}, 0xc001a2ff00, 0x0) │ │ /go/pkg/mod/google.golang.org/grpc@v1.26.0/server.go:1317 +0x9de │ │ google.golang.org/grpc.(*Server).serveStreams.func1.1() │ │ /go/pkg/mod/google.golang.org/grpc@v1.26.0/server.go:722 +0x98 │ │ created by google.golang.org/grpc.(*Server).serveStreams.func1 │ │ /go/pkg/mod/google.golang.org/grpc@v1.26.0/server.go:720 +0xea ``` ![image](https://github.com/tikv/pd/assets/7403864/2599cfc2-2332-49e6-982e-d77d333fe1f2) ### What version of PD are you using (`pd-server -V`)? 
[root@upstream-pd-0 /]# /pd-server -V Release Version: v7.3.0-alpha Edition: Community Git Commit Hash: 4db1735974b95e2b9884715679ca509e183881eb Git Branch: heads/refs/tags/v7.3.0-alpha UTC Build Time: 2023-07-27 11:36:08 ",1,pd panic after inject pd failed bug report what did you do tidb cluster with pd ticdc tikv tidb create changefeed run workload inject pd failure for what did you expect to see pd should be rececovered after pd failure injection what did you see instead pd repeatedly panic panic runtime error invalid memory address or nil pointer dereference │ │ │ │ │ │ goroutine │ │ github com tikv pd server grpcserver watchglobalconfig │ │ home jenkins agent workspace build common go src github com pingcap pd server grpc service go │ │ github com pingcap kvproto pkg pdpb pd watchglobalconfig handler │ │ go pkg mod github com pingcap kvproto pkg pdpb pdpb pb go │ │ github com grpc ecosystem go grpc middleware chainstreamserver │ │ go pkg mod github com grpc ecosystem go grpc middleware chain go │ │ github com grpc ecosystem go grpc prometheus servermetrics streamserverinterceptor │ │ go pkg mod github com grpc ecosystem go grpc prometheus server metrics go │ │ github com grpc ecosystem go grpc middleware chainstreamserver │ │ go pkg mod github com grpc ecosystem go grpc middleware chain go │ │ go etcd io etcd etcdserver api newstreaminterceptor │ │ go pkg mod go etcd io etcd alpha etcdserver api interceptor go │ │ github com grpc ecosystem go grpc middleware chainstreamserver │ │ go pkg mod github com grpc ecosystem go grpc middleware chain go │ │ google golang org grpc server processstreamingrpc │ │ go pkg mod google golang org grpc server go │ │ google golang org grpc server handlestream │ │ go pkg mod google golang org grpc server go │ │ google golang org grpc server servestreams │ │ go pkg mod google golang org grpc server go │ │ created by google golang org grpc server servestreams │ │ go pkg mod google golang org grpc server go what version of pd are you using pd server v pd server v release version alpha edition community git commit hash git branch heads refs tags alpha utc build time ,1 7456,24914625501.0,IssuesEvent,2022-10-30 08:47:47,plan-player-analytics/Plan,https://api.github.com/repos/plan-player-analytics/Plan,closed,Merge javadoc build action to CI pipeline,Automation,"### I would like to be able to.. Merge javadoc.yml to ci.yml ### Is your feature request related to a problem? Please describe. Javadoc build now also builds the yarn which means the task takes longer than it should. Merging the javadoc building and deploying to CI (when on master branch) should reduce resources going to waste",1.0,"Merge javadoc build action to CI pipeline - ### I would like to be able to.. Merge javadoc.yml to ci.yml ### Is your feature request related to a problem? Please describe. Javadoc build now also builds the yarn which means the task takes longer than it should. 
Merging the javadoc building and deploying to CI (when on master branch) should reduce resources going to waste",1,merge javadoc build action to ci pipeline i would like to be able to merge javadoc yml to ci yml is your feature request related to a problem please describe javadoc build now also builds the yarn which means the task takes longer than it should merging the javadoc building and deploying to ci when on master branch should reduce resources going to waste,1 3049,2652862155.0,IssuesEvent,2015-03-16 19:43:35,azavea/nyc-trees,https://api.github.com/repos/azavea/nyc-trees,opened,"Clicking ""no problems"" should hide sub-sections with root, trunk, and branch problems",Treecorder testing,This is meant to prevent users from accidentally changing their answer. ,1.0,"Clicking ""no problems"" should hide sub-sections with root, trunk, and branch problems - This is meant to prevent users from accidentally changing their answer. ",0,clicking no problems should hide sub sections with root trunk and branch problems this is meant to prevent users from accidentally changing their answer ,0 3953,15025116495.0,IssuesEvent,2021-02-01 20:36:26,BCDevOps/OpenShift4-RollOut,https://api.github.com/repos/BCDevOps/OpenShift4-RollOut,closed,Recent Python component upgrades break ansible k8s_info functionality,team/DXC tech/automation,"**Describe the issue** Recent OS patching revealed issues with ansible playbooks not able to run correctly and instead generate the following message. ``` An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: Host value http://localhost should start with https:// when talking to HTTPS endpoint fatal: [localhost]: FAILED! => changed=false ansible_facts: discovered_interpreter_python: /usr/bin/python module_stderr: |- Traceback (most recent call last): File ""/root/.ansible/tmp/ansible-tmp-1609394741.33-22201-173486061827444/AnsiballZ_k8s_info.py"", line 102, in _ansiballz_main() File ""/root/.ansible/tmp/ansible-tmp-1609394741.33-22201-173486061827444/AnsiballZ_k8s_info.py"", line 94, in _ansiballz_main invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS) File ""/root/.ansible/tmp/ansible-tmp-1609394741.33-22201-173486061827444/AnsiballZ_k8s_info.py"", line 40, in invoke_module runpy.run_module(mod_name='ansible.modules.clustering.k8s.k8s_info', init_globals=None, run_name='__main__', alter_sys=True) File ""/usr/lib64/python2.7/runpy.py"", line 176, in run_module fname, loader, pkg_name) File ""/usr/lib64/python2.7/runpy.py"", line 82, in _run_module_code mod_name, mod_fname, mod_loader, pkg_name) File ""/usr/lib64/python2.7/runpy.py"", line 72, in _run_code exec code in run_globals File ""/tmp/ansible_k8s_info_payload_C3cNgi/ansible_k8s_info_payload.zip/ansible/modules/clustering/k8s/k8s_info.py"", line 179, in File ""/tmp/ansible_k8s_info_payload_C3cNgi/ansible_k8s_info_payload.zip/ansible/modules/clustering/k8s/k8s_info.py"", line 175, in main File ""/tmp/ansible_k8s_info_payload_C3cNgi/ansible_k8s_info_payload.zip/ansible/modules/clustering/k8s/k8s_info.py"", line 148, in execute_module File ""/tmp/ansible_k8s_info_payload_C3cNgi/ansible_k8s_info_payload.zip/ansible/module_utils/k8s/common.py"", line 200, in get_api_client File ""/usr/lib/python2.7/site-packages/openshift/dynamic/client.py"", line 71, in __init__ self.__discoverer = discoverer(self, cache_file) File ""/usr/lib/python2.7/site-packages/openshift/dynamic/discovery.py"", line 259, in __init__ Discoverer.__init__(self, client, cache_file) File 
""/usr/lib/python2.7/site-packages/openshift/dynamic/discovery.py"", line 31, in __init__ self.__init_cache() File ""/usr/lib/python2.7/site-packages/openshift/dynamic/discovery.py"", line 78, in __init_cache self._load_server_info() File ""/usr/lib/python2.7/site-packages/openshift/dynamic/discovery.py"", line 165, in _load_server_info self.client.configuration.host) ValueError: Host value http://localhost should start with https:// when talking to HTTPS endpoint module_stdout: '' msg: |- MODULE FAILURE See stdout/stderr for the exact error rc: 1 ``` A sizable portion of the current ansible playbooks needed to complete configuration of a new cluster rely upon the k8s_info module working correctly, thus this issue needs to be addressed with a permanent solution if possible, and a work-around if viable to get the playbooks working again. **Which Sprint Goal is this issue related to?** **Additional context** **Definition of done Checklist (where applicable)** - [x] confirm cause. - [x] apply work-around if available - [ ] confirm if permanent solution is available and test in LAB - [ ] apply permanent solution in PROD",1.0,"Recent Python component upgrades break ansible k8s_info functionality - **Describe the issue** Recent OS patching revealed issues with ansible playbooks not able to run correctly and instead generate the following message. ``` An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: Host value http://localhost should start with https:// when talking to HTTPS endpoint fatal: [localhost]: FAILED! => changed=false ansible_facts: discovered_interpreter_python: /usr/bin/python module_stderr: |- Traceback (most recent call last): File ""/root/.ansible/tmp/ansible-tmp-1609394741.33-22201-173486061827444/AnsiballZ_k8s_info.py"", line 102, in _ansiballz_main() File ""/root/.ansible/tmp/ansible-tmp-1609394741.33-22201-173486061827444/AnsiballZ_k8s_info.py"", line 94, in _ansiballz_main invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS) File ""/root/.ansible/tmp/ansible-tmp-1609394741.33-22201-173486061827444/AnsiballZ_k8s_info.py"", line 40, in invoke_module runpy.run_module(mod_name='ansible.modules.clustering.k8s.k8s_info', init_globals=None, run_name='__main__', alter_sys=True) File ""/usr/lib64/python2.7/runpy.py"", line 176, in run_module fname, loader, pkg_name) File ""/usr/lib64/python2.7/runpy.py"", line 82, in _run_module_code mod_name, mod_fname, mod_loader, pkg_name) File ""/usr/lib64/python2.7/runpy.py"", line 72, in _run_code exec code in run_globals File ""/tmp/ansible_k8s_info_payload_C3cNgi/ansible_k8s_info_payload.zip/ansible/modules/clustering/k8s/k8s_info.py"", line 179, in File ""/tmp/ansible_k8s_info_payload_C3cNgi/ansible_k8s_info_payload.zip/ansible/modules/clustering/k8s/k8s_info.py"", line 175, in main File ""/tmp/ansible_k8s_info_payload_C3cNgi/ansible_k8s_info_payload.zip/ansible/modules/clustering/k8s/k8s_info.py"", line 148, in execute_module File ""/tmp/ansible_k8s_info_payload_C3cNgi/ansible_k8s_info_payload.zip/ansible/module_utils/k8s/common.py"", line 200, in get_api_client File ""/usr/lib/python2.7/site-packages/openshift/dynamic/client.py"", line 71, in __init__ self.__discoverer = discoverer(self, cache_file) File ""/usr/lib/python2.7/site-packages/openshift/dynamic/discovery.py"", line 259, in __init__ Discoverer.__init__(self, client, cache_file) File ""/usr/lib/python2.7/site-packages/openshift/dynamic/discovery.py"", line 31, in __init__ self.__init_cache() File 
""/usr/lib/python2.7/site-packages/openshift/dynamic/discovery.py"", line 78, in __init_cache self._load_server_info() File ""/usr/lib/python2.7/site-packages/openshift/dynamic/discovery.py"", line 165, in _load_server_info self.client.configuration.host) ValueError: Host value http://localhost should start with https:// when talking to HTTPS endpoint module_stdout: '' msg: |- MODULE FAILURE See stdout/stderr for the exact error rc: 1 ``` A sizable portion of the current ansible playbooks needed to complete configuration of a new cluster rely upon the k8s_info module working correctly, thus this issue needs to be addressed with a permanent solution if possible, and a work-around if viable to get the playbooks working again. **Which Sprint Goal is this issue related to?** **Additional context** **Definition of done Checklist (where applicable)** - [x] confirm cause. - [x] apply work-around if available - [ ] confirm if permanent solution is available and test in LAB - [ ] apply permanent solution in PROD",1,recent python component upgrades break ansible info functionality describe the issue recent os patching revealed issues with ansible playbooks not able to run correctly and instead generate the following message an exception occurred during task execution to see the full traceback use vvv the error was valueerror host value should start with https when talking to https endpoint fatal failed changed false ansible facts discovered interpreter python usr bin python module stderr traceback most recent call last file root ansible tmp ansible tmp ansiballz info py line in ansiballz main file root ansible tmp ansible tmp ansiballz info py line in ansiballz main invoke module zipped mod temp path ansiballz params file root ansible tmp ansible tmp ansiballz info py line in invoke module runpy run module mod name ansible modules clustering info init globals none run name main alter sys true file usr runpy py line in run module fname loader pkg name file usr runpy py line in run module code mod name mod fname mod loader pkg name file usr runpy py line in run code exec code in run globals file tmp ansible info payload ansible info payload zip ansible modules clustering info py line in file tmp ansible info payload ansible info payload zip ansible modules clustering info py line in main file tmp ansible info payload ansible info payload zip ansible modules clustering info py line in execute module file tmp ansible info payload ansible info payload zip ansible module utils common py line in get api client file usr lib site packages openshift dynamic client py line in init self discoverer discoverer self cache file file usr lib site packages openshift dynamic discovery py line in init discoverer init self client cache file file usr lib site packages openshift dynamic discovery py line in init self init cache file usr lib site packages openshift dynamic discovery py line in init cache self load server info file usr lib site packages openshift dynamic discovery py line in load server info self client configuration host valueerror host value should start with https when talking to https endpoint module stdout msg module failure see stdout stderr for the exact error rc a sizable portion of the current ansible playbooks needed to complete configuration of a new cluster rely upon the info module working correctly thus this issue needs to be addressed with a permanent solution if possible and a work around if viable to get the playbooks working again which sprint goal is this issue related to additional 
context definition of done checklist where applicable confirm cause apply work around if available confirm if permanent solution is available and test in lab apply permanent solution in prod,1 236905,7753575636.0,IssuesEvent,2018-05-31 01:29:43,Gloirin/m2gTest,https://api.github.com/repos/Gloirin/m2gTest,closed,"0006598: modlog filters should have the same label as the grid columns",Tinebase JavaScript high priority,"**Reported by pschuele on 8 Jun 2012 13:12** **Version:** Milan (2012-03-3) modlog filters should have the same label as the grid columns ",1.0,"0006598: modlog filters should have the same label as the grid columns - **Reported by pschuele on 8 Jun 2012 13:12** **Version:** Milan (2012-03-3) modlog filters should have the same label as the grid columns ",0, modlog filters should have the same label as the grid columns reported by pschuele on jun version milan modlog filters should have the same label as the grid columns ,0 152345,13452167323.0,IssuesEvent,2020-09-08 21:36:12,DeepRegNet/DeepReg,https://api.github.com/repos/DeepRegNet/DeepReg,closed,JOSS paper final proofread,documentation,"## Subject of the issue - update edu ch reference - update yang2020 reference - final proofread ",1.0,"JOSS paper final proofread - ## Subject of the issue - update edu ch reference - update yang2020 reference - final proofread ",0,joss paper final proofread subject of the issue update edu ch reference update reference final proofread ,0 4972,18157645628.0,IssuesEvent,2021-09-27 05:15:52,appsmithorg/appsmith,https://api.github.com/repos/appsmithorg/appsmith,closed,[Bug]The delete button functionality is not working in Generate page,Bug Critical Release Needs Triaging Platform Generate Page AutomationGap,"## Description [What happened]The delete button functionality is not working in Generate page ### Steps to reproduce the behaviour: [![LOOM DEMO](http://cdn.loom.com/sessions/thumbnails/62ab59f19943479484b69e447d31e795-00001.gif)](https://www.loom.com/share/62ab59f19943479484b69e447d31e795) ### Important Details - Version: [Cloud ] - OS: MacOS - Browser chrome - Environment release ",1.0,"[Bug]The delete button functionality is not working in Generate page - ## Description [What happened]The delete button functionality is not working in Generate page ### Steps to reproduce the behaviour: [![LOOM DEMO](http://cdn.loom.com/sessions/thumbnails/62ab59f19943479484b69e447d31e795-00001.gif)](https://www.loom.com/share/62ab59f19943479484b69e447d31e795) ### Important Details - Version: [Cloud ] - OS: MacOS - Browser chrome - Environment release ",1, the delete button functionality is not working in generate page description the delete button functionality is not working in generate page steps to reproduce the behaviour important details version os macos browser chrome environment release ,1 123571,17772264854.0,IssuesEvent,2021-08-30 14:54:43,kapseliboi/ac-web,https://api.github.com/repos/kapseliboi/ac-web,opened,CVE-2019-1010266 (Medium) detected in lodash-2.4.2.tgz,security vulnerability,"## CVE-2019-1010266 - Medium Severity Vulnerability
    Vulnerable Library - lodash-2.4.2.tgz

    A utility library delivering consistency, customization, performance, & extras.

    Library home page: https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz

    Path to dependency file: ac-web/package.json

    Path to vulnerable library: ac-web/node_modules/request-promise/node_modules/lodash/package.json

Dependency Hierarchy:
- request-promise-0.3.3.tgz (Root Library)
  - :x: **lodash-2.4.2.tgz** (Vulnerable Library)

    Found in HEAD commit: dfced36be0641d32ba1dbfcdd9969dd354b300c5

    Found in base branch: master

    Vulnerability Details

    lodash prior to 4.17.11 is affected by: CWE-400: Uncontrolled Resource Consumption. The impact is: Denial of service. The component is: Date handler. The attack vector is: Attacker provides very long strings, which the library attempts to match using a regular expression. The fixed version is: 4.17.11.
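The "attack vector" described above is the classic regular-expression denial-of-service (ReDoS) pattern. A small illustrative sketch of catastrophic backtracking — the pattern below is a textbook example, not lodash's actual date-handler regex:

```ts
// Illustrative ReDoS sketch (CWE-400): nested quantifiers force the regex
// engine into exponential backtracking when the input almost matches.
const evil = /^(a+)+$/;

for (const n of [15, 20, 25]) {
  const input = "a".repeat(n) + "b"; // trailing "b" defeats the match
  const start = Date.now();
  evil.test(input);
  console.log(`n=${n}: ${Date.now() - start} ms`);
}
// The time roughly doubles per extra character, so a "very long string"
// supplied by an attacker stalls the process -- a denial of service.
```

This is why the advisory's remedy is simply upgrading past the affected versions.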

    Publish Date: 2019-07-17

URL: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266

    CVSS 3 Score Details (6.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: Low
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High


    Suggested Fix

    Type: Upgrade version

    Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266

    Release Date: 2019-07-17

    Fix Resolution: 4.17.11
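Since the vulnerable copy sits in a nested path (`ac-web/node_modules/request-promise/node_modules/lodash`), upgrading the root dependency alone may not reach it. A hedged audit sketch that walks `node_modules` and flags lodash copies older than the fix resolution; the plain x.y.z version comparison is an assumption:

```ts
// Hypothetical audit sketch: flag nested lodash copies older than the fixed
// version named above. Uses only the Node standard library; assumes plain
// x.y.z version strings (no pre-release tags).
import { readdirSync, readFileSync, existsSync } from "node:fs";
import { join } from "node:path";

const FIXED = [4, 17, 11];

function olderThanFixed(version: string): boolean {
  const parts = version.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if ((parts[i] ?? 0) !== FIXED[i]) return (parts[i] ?? 0) < FIXED[i];
  }
  return false; // exactly the fixed version
}

function scan(dir: string): void {
  const pkg = join(dir, "lodash", "package.json");
  if (existsSync(pkg)) {
    const { version } = JSON.parse(readFileSync(pkg, "utf8"));
    if (olderThanFixed(version)) console.warn(`${pkg}: lodash ${version} < 4.17.11`);
  }
  // Recurse into each dependency's own nested node_modules directory.
  for (const e of readdirSync(dir, { withFileTypes: true })) {
    if (!e.isDirectory()) continue;
    const nested = join(dir, e.name, "node_modules");
    if (existsSync(nested)) scan(nested);
  }
}

scan("node_modules");
```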

    *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2019-1010266 (Medium) detected in lodash-2.4.2.tgz - ## CVE-2019-1010266 - Medium Severity Vulnerability
*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve-2019-1010266 (medium) detected in lodash-2.4.2.tgz,0 2622,12345333980.0,IssuesEvent,2020-05-15 08:45:58,wazuh/wazuh-qa,https://api.github.com/repos/wazuh/wazuh-qa,opened,Update FIM realtime mode in analysisd tests,automation core/fim,"## Description After wazuh/wazuh#4940 pull request was merged, _realtime_ FIM alerts appear without a hyphen. This was changed in _fim_ tests in #674, but it needs to be changed also in _analysisd_ tests.
Here are the affected files: * `tests/integration/test_analysisd/test_all_syscheckd_configurations/data/syscheck_events.yaml` * `tests/integration/test_analysisd/test_all_syscheckd_configurations/data/syscheck_events.yaml` * `tests/integration/test_analysisd/test_error_messages/data/error_messages.yaml` * `tests/integration/test_analysisd/test_event_messages/data/event_messages.yaml`",1,update fim realtime mode in analysisd tests description after wazuh wazuh pull request was merged realtime fim alerts appear without a hyphen this was changed in fim tests in but it needs to changed also in analysisd tests here are the affected files tests integration test analysisd test all syscheckd configurations data syscheck events yaml tests integration test analysisd test all syscheckd configurations data syscheck events yaml tests integration test analysisd test error messages data error messages yaml tests integration test analysisd test event messages data event messages yaml ,1 114516,4635282724.0,IssuesEvent,2016-09-29 06:23:51,nextchan/infinity-next,https://api.github.com/repos/nextchan/infinity-next,opened,Investigate Server Sent Events,enhancement priority: 3 - wishful,"https://github.com/tonyhhyip/laravel-sse Instead of using JavaScript to hide NSFW images, make pages SFW by default, and use server sent events in order to pull down NSFW posts and images. Tricky, but keeps things off of the client and on the server, so we keep non-javascript users happy.",1.0,"Investigate Server Sent Events - https://github.com/tonyhhyip/laravel-sse Instead of using JavaScript to hide NSFW images, make pages SFW by default, and use server sent events in order to pull down NSFW posts and images. Tricky, but keeps things off of the client and on the server, so we keep non-javascript users happy.",0,investigate server sent events instead of using javascript to hide nsfw images make pages sfw by default and use server sent events in order to pull down nsfw posts and images tricky but keeps things off of the client and on the server so we keep non javascript users happy ,0 3767,14533426640.0,IssuesEvent,2020-12-15 00:32:56,apache/trafficcontrol,https://api.github.com/repos/apache/trafficcontrol,opened,Upgrade Ansible playbook use of Traffic Ops from API v1 to API v3,automation improvement," ## I'm submitting a ... - improvement request (usability, performance, tech debt, etc.) ## Traffic Control components affected ... 
- Ansible playbooks - unknown ## Current behavior: The following references to the Traffic Ops API in our Ansible playbooks need to be updated: | HTTP Method | Endpoint | Line of code | | ----------- | -------- | ------------ | | `POST` | `api/1.3/user/login` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/influxdb_relay.yml#L34 | | `GET` | `api/1.3/servers` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/influxdb_relay.yml#L49 | | * | `1.3` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/dataset_loader/defaults/main.yml#L25 | | `GET` | `internal/api/1.3/federations.json` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/dataset_loader/defaults/main.yml#L743 | | `GET` | `api/1.3/profiles/name/{{name}}/parameters` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/dataset_loader/profile.parameter.conversion.md#L23 | | `POST` | `api/1.2/user/login` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/to_api/tasks/login.yml#L17 | | `GET` | `api/1.2/cdns` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/to_api/tasks/queue_updates.yml#L20 | | `POST` | `api/1.2/cdns/{{name}}/queue_update` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/to_api/tasks/queue_updates.yml#L32 | | `GET` | `api/1.2/servers/hostname/{{name}}/details` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/to_api/tasks/set_server_status.yml#L20 | | `PUT` | `api/1.2/servers/{{id}}/status` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/to_api/tasks/set_server_status.yml#L43 | | `GET` | `api/1.2/cdns/{{name}}/snapshot/new` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/to_api/tasks/snapshot.yml#L67 | | `GET` | `api/1.3/servers/checks` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/traffic_portal/defaults/main.yml#L85 | ## New behavior: Those lines of code should reference API v2, reference API v3, or be removed. See https://traffic-control-cdn.readthedocs.io/en/latest/api/migrating-from-v1.html#updating-endpoints-manually for which Traffic Ops API v3 endpoint to update each API v1 endpoint to. ",1.0,"Upgrade Ansible playbook use of Traffic Ops from API v1 to API v3 - ## I'm submitting a ... - improvement request (usability, performance, tech debt, etc.) ## Traffic Control components affected ... 
- Ansible playbooks - unknown ## Current behavior: The following references to the Traffic Ops API in our Ansible playbooks need to be updated: | HTTP Method | Endpoint | Line of code | | ----------- | -------- | ------------ | | `POST` | `api/1.3/user/login` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/influxdb_relay.yml#L34 | | `GET` | `api/1.3/servers` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/influxdb_relay.yml#L49 | | * | `1.3` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/dataset_loader/defaults/main.yml#L25 | | `GET` | `internal/api/1.3/federations.json` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/dataset_loader/defaults/main.yml#L743 | | `GET` | `api/1.3/profiles/name/{{name}}/parameters` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/dataset_loader/profile.parameter.conversion.md#L23 | | `POST` | `api/1.2/user/login` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/to_api/tasks/login.yml#L17 | | `GET` | `api/1.2/cdns` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/to_api/tasks/queue_updates.yml#L20 | | `POST` | `api/1.2/cdns/{{name}}/queue_update` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/to_api/tasks/queue_updates.yml#L32 | | `GET` | `api/1.2/servers/hostname/{{name}}/details` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/to_api/tasks/set_server_status.yml#L20 | | `PUT` | `api/1.2/servers/{{id}}/status` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/to_api/tasks/set_server_status.yml#L43 | | `GET` | `api/1.2/cdns/{{name}}/snapshot/new` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/to_api/tasks/snapshot.yml#L67 | | `GET` | `api/1.3/servers/checks` | https://github.com/apache/trafficcontrol/blob/df92555ddfa90760320c64ec619ac1c34af7f0ca/infrastructure/ansible/roles/traffic_portal/defaults/main.yml#L85 | ## New behavior: Those lines of code should reference API v2, reference API v3, or be removed. See https://traffic-control-cdn.readthedocs.io/en/latest/api/migrating-from-v1.html#updating-endpoints-manually for which Traffic Ops API v3 endpoint to update each API v1 endpoint to. 
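For the mechanical part of the endpoint table above, a rough helper sketch that rewrites known v1 prefixes inside the playbooks and flags anything it cannot map. The two sample mappings are illustrative assumptions; the authoritative v1-to-v3 table is the "migrating from v1" guide linked in this issue:

```ts
// Hedged migration helper: substring-rewrite v1 Traffic Ops URLs to v3 and
// warn about any v1 references the map does not cover, rather than guessing.
import { readdirSync, readFileSync, writeFileSync } from "node:fs";
import { join } from "node:path";

const ENDPOINT_MAP: Record<string, string> = {
  "api/1.3/user/login": "api/3.0/user/login", // assumed direct equivalent
  "api/1.2/cdns": "api/3.0/cdns",             // assumed direct equivalent
};

function* ymlFiles(dir: string): Generator<string> {
  for (const e of readdirSync(dir, { withFileTypes: true })) {
    const p = join(dir, e.name);
    if (e.isDirectory()) yield* ymlFiles(p);
    else if (e.name.endsWith(".yml")) yield p;
  }
}

for (const file of ymlFiles("infrastructure/ansible")) {
  const original = readFileSync(file, "utf8");
  let text = original;
  for (const [oldPath, newPath] of Object.entries(ENDPOINT_MAP)) {
    // Plain substring replacement, so a prefix rewrite also covers longer
    // paths that start with it (e.g. cdns/{{name}}/queue_update).
    text = text.split(oldPath).join(newPath);
  }
  text.split("\n").forEach((line, i) => {
    if (/api\/1\.[0-9]/.test(line)) {
      console.warn(`${file}:${i + 1}: unmapped v1 endpoint: ${line.trim()}`);
    }
  });
  if (text !== original) writeFileSync(file, text);
}
```

Running it once and reviewing the warnings gives a worklist of the endpoints that need a manual decision (reference v2, reference v3, or remove).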
",1,upgrade ansible playbook use of traffic ops from api to api stop if this issue identifies a security vulnerability do not submit it instead contact the apache traffic control security team at security trafficcontrol apache org and follow the guidelines at regarding vulnerability disclosure for support questions use the traffic control slack or traffic control mailing lists before submitting please search github for a similar issue or pr i m submitting a improvement request usability performance tech debt etc traffic control components affected ansible playbooks unknown current behavior the following references to the traffic ops api in our ansible playbooks need to be updated http method endpoint line of code post api user login get api servers get internal api federations json get api profiles name name parameters post api user login get api cdns post api cdns name queue update get api servers hostname name details put api servers id status get api cdns name snapshot new get api servers checks new behavior those lines of code should reference api reference api or be removed see for which traffic ops api endpoint to update each api endpoint to licensed to the apache software foundation asf under one or more contributor license agreements see the notice file distributed with this work for additional information regarding copyright ownership the asf licenses this file to you under the apache license version the license you may not use this file except in compliance with the license you may obtain a copy of the license at unless required by applicable law or agreed to in writing software distributed under the license is distributed on an as is basis without warranties or conditions of any kind either express or implied see the license for the specific language governing permissions and limitations under the license ,1 138453,5341460875.0,IssuesEvent,2017-02-17 02:59:47,mmisw/mmiorr,https://api.github.com/repos/mmisw/mmiorr,closed,linked-csv,enhancement imported Priority-Medium voc2rdf wontfix,"_From [caru...@gmail.com](https://code.google.com/u/113886747689301365533/) on April 24, 2013 20:28:10_ Investigate how to leverage/integrate/align with http://jenit.github.io/linked-csv/ in the vocabulary tool and as an output format. (thanks Matt J. via developers@dataone.org mailing list for the interesting links, the other one being http://blog.okfn.org/2013/04/24/frictionless-data-making-it-radically-easier-to-get-stuff-done-with-data/ ) _Original issue: http://code.google.com/p/mmisw/issues/detail?id=316_ ",1.0,"linked-csv - _From [caru...@gmail.com](https://code.google.com/u/113886747689301365533/) on April 24, 2013 20:28:10_ Investigate how to leverage/integrate/align with http://jenit.github.io/linked-csv/ in the vocabulary tool and as an output format. (thanks Matt J. 
via developers@dataone.org mailing list for the interesting links, the other one being http://blog.okfn.org/2013/04/24/frictionless-data-making-it-radically-easier-to-get-stuff-done-with-data/ ) _Original issue: http://code.google.com/p/mmisw/issues/detail?id=316_ ",0,linked csv from on april investigate how to leverage integrate align with in the vocabulary tool and as an output format thanks matt j via developers dataone org mailing list for the interesting links the other one being original issue ,0 61095,12145941944.0,IssuesEvent,2020-04-24 10:12:30,strangerstudios/pmpro-register-helper,https://api.github.com/repos/strangerstudios/pmpro-register-helper,closed,Some field types don't respect the class attribute,Difficulty: Easy Impact: Low Status: Needs Code,"radio buttons, grouped check boxes, and hidden fields should add a class attribute to the main html element in the getHTML method. https://github.com/strangerstudios/pmpro-register-helper/blob/dev/classes/class.field.php#L403 A workaround is to use the divclass property which adds the class to the wrapping div.",1.0,"Some field types don't respect the class attribute - radio buttons, grouped check boxes, and hidden fields should add a class attribute to the main html element in the getHTML method. https://github.com/strangerstudios/pmpro-register-helper/blob/dev/classes/class.field.php#L403 A workaround is to use the divclass property which adds the class to the wrapping div.",0,some field types don t respect the class attribute radio buttons grouped check boxes and hidden fields should add a class attribute to the main html element in the gethtml method a workaround is to use the divclass property which adds the class to the wrapping div ,0 8939,27242270375.0,IssuesEvent,2023-02-21 21:37:55,tm24fan8/Home-Assistant-Configs,https://api.github.com/repos/tm24fan8/Home-Assistant-Configs,closed,Bring dining room lamp in line with other lights,enhancement lighting automation,The scenes I made for this light are stupid and need to be reworked,1.0,Bring dining room lamp in line with other lights - The scenes I made for this light are stupid and need to be reworked,1,bring dining room lamp in line with other lights the scenes i made for this light are stupid and need to be reworked,1 13311,3701172875.0,IssuesEvent,2016-02-29 11:59:56,nilearn/nilearn,https://api.github.com/repos/nilearn/nilearn,closed,Strange organisation of examples,Discussion Documentation,"Currently the examples are organized as follows: ``` examples/01_plotting examples/plot_haxby_simple.py examples/02_decoding examples/plot_localizer_simple_analysis.py examples/03_connectivity examples/plot_nilearn_101.py examples/04_manipulating_images examples/plot_python_101.py examples/05_advanced examples/README.txt ``` Why the 01_, 02_, etc. prefix in basename of subdirectories? This naming was probably done for a reason, but looks really weird. ",1.0,"Strange organisation of examples - Currently the examples are organized as follows: ``` examples/01_plotting examples/plot_haxby_simple.py examples/02_decoding examples/plot_localizer_simple_analysis.py examples/03_connectivity examples/plot_nilearn_101.py examples/04_manipulating_images examples/plot_python_101.py examples/05_advanced examples/README.txt ``` Why the 01_, 02_, etc. prefix in basename of subdirectories? This naming was probably done for a reason, but looks really weird. 
",0,strange organisation of examples currently the examples are organized as follows examples plotting examples plot haxby simple py examples decoding examples plot localizer simple analysis py examples connectivity examples plot nilearn py examples manipulating images examples plot python py examples advanced examples readme txt why the etc prefix in basename of subdirectories this naming was probably done for a reason but looks really weird ,0 263514,23063959361.0,IssuesEvent,2022-07-25 12:26:09,eclipse-openj9/openj9,https://api.github.com/repos/eclipse-openj9/openj9,opened,DaaLoadTest net.openj9.test.decimals.TestDecimalData2 OOM,test failure,"Failure link ------------ From [an internal build](https://hyc-runtimes-jenkins.swg-devops.com/job/Test_openjdk8_j9_special.system_ppc64le_linux_testList_1/289/)(`ubu22ppclert-3`): ``` openjdk version ""1.8.0_342"" IBM Semeru Runtime Open Edition (build 1.8.0_342-b07) Eclipse OpenJ9 VM (build openj9-0.33.0-rc1a, JRE 1.8.0 Linux ppc64le-64-Bit Compressed References 20220721_409 (JIT enabled, AOT enabled) OpenJ9 - 04a55b45b OMR - b58aa2708 JCL - 459493948e based on jdk8u342-b07) ``` [Rerun in Grinder](https://hyc-runtimes-jenkins.swg-devops.com/job/Grinder/parambuild/?SDK_RESOURCE=upstream&TARGET=-f+parallelList.mk+testList_1&TEST_FLAG=&UPSTREAM_TEST_JOB_NAME=Test_openjdk8_j9_special.system_ppc64le_linux&DOCKER_REQUIRED=false&ACTIVE_NODE_TIMEOUT=0&VENDOR_TEST_DIRS=&EXTRA_DOCKER_ARGS=&TKG_OWNER_BRANCH=adoptium%3Amaster&OPENJ9_SYSTEMTEST_OWNER_BRANCH=eclipse%3Amaster&PLATFORM=ppc64le_linux&GENERATE_JOBS=true&KEEP_REPORTDIR=false&PERSONAL_BUILD=false&ADOPTOPENJDK_REPO=https%3A%2F%2Fgithub.com%2Fadoptium%2Faqa-tests.git&LABEL=&EXTRA_OPTIONS=&CUSTOMIZED_SDK_URL=+https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2Fbuild-scripts%2Fjobs%2Fjdk8u%2Fjdk8u-linux-ppc64le-openj9%2F409%2Fibm-semeru-open-debugimage_ppc64le_linux_8u342b07_openj9-0.33.0-rc1a.tar.gz+https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2Fbuild-scripts%2Fjobs%2Fjdk8u%2Fjdk8u-linux-ppc64le-openj9%2F409%2Fibm-semeru-open-jdk_ppc64le_linux_8u342b07_openj9-0.33.0-rc1a.tar.gz+https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2Fbuild-scripts%2Fjobs%2Fjdk8u%2Fjdk8u-linux-ppc64le-openj9%2F409%2Fibm-semeru-open-jre_ppc64le_linux_8u342b07_openj9-0.33.0-rc1a.tar.gz+https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2Fbuild-scripts%2Fjobs%2Fjdk8u%2Fjdk8u-linux-ppc64le-openj9%2F409%2Fibm-semeru-open-testimage_ppc64le_linux_8u342b07_openj9-0.33.0-rc1a.tar.gz&BUILD_IDENTIFIER=&ADOPTOPENJDK_BRANCH=v0.9.3-release&LIGHT_WEIGHT_CHECKOUT=false&USE_JRE=false&ARTIFACTORY_SERVER=na.artifactory.swg-devops&KEEP_WORKSPACE=false&USER_CREDENTIALS_ID=&JDK_VERSION=8&ITERATIONS=1&VENDOR_TEST_REPOS=&JDK_REPO=https%3A%2F%2Fgithub.com%2Fibmruntimes%2Fopenj9-openjdk-jdk8&RELEASE_TAG=v0.33.0-release&OPENJ9_BRANCH=v0.33.0-release&OPENJ9_SHA=&JCK_GIT_REPO=&VENDOR_TEST_BRANCHES=&OPENJ9_REPO=https%3A%2F%2Fgithub.com%2Feclipse-openj9%2Fopenj9.git&UPSTREAM_JOB_NAME=&CLOUD_PROVIDER=&CUSTOM_TARGET=&VENDOR_TEST_SHAS=&JDK_BRANCH=v0.33.0-release&LABEL_ADDITION=&ARTIFACTORY_REPO=&ARTIFACTORY_ROOT_DIR=&UPSTREAM_TEST_JOB_NUMBER=310&DOCKERIMAGE_TAG=&JDK_IMPL=openj9&TEST_TIME=&SSH_AGENT_CREDENTIAL=83181e25-eea4-4f55-8b3e-e79615733226&AUTO_DETECT=true&SLACK_CHANNEL=&DYNAMI
C_COMPILE=false&ADOPTOPENJDK_SYSTEMTEST_OWNER_BRANCH=adoptium%3Amaster&CUSTOMIZED_SDK_URL_CREDENTIAL_ID=4e18ffe7-b1b1-4272-9979-99769b68bcc2&ARCHIVE_TEST_RESULTS=false&NUM_MACHINES=&OPENJDK_SHA=&TRSS_URL=http%3A%2F%2Ftrss1.fyre.ibm.com&USE_TESTENV_PROPERTIES=true&BUILD_LIST=system&UPSTREAM_JOB_NUMBER=&STF_OWNER_BRANCH=adoptium%3Amaster&TIME_LIMIT=20&JVM_OPTIONS=&PARALLEL=None) - Change TARGET to run only the failed test targets. Optional info ------------- Failure output (captured from console output) --------------------------------------------- ``` [2022-07-22T02:30:37.248Z] variation: Mode101 [2022-07-22T02:30:37.248Z] JVM_OPTIONS: -Xjit -Xgcpolicy:optthruput -Xnocompressedrefs [2022-07-22T02:31:00.217Z] DLT 02:30:59.382 - Completed 6.7%. Number of tests started=2210 [2022-07-22T02:31:20.927Z] DLT 02:31:19.341 - Completed 13.3%. Number of tests started=4654 (+2444) [2022-07-22T02:31:38.238Z] DLT stderr JVMDUMP039I Processing dump event ""systhrow"", detail ""java/lang/OutOfMemoryError"" at 2022/07/22 02:31:36 - please wait. [2022-07-22T02:31:38.238Z] DLT stderr JVMDUMP032I JVM requested System dump using '/home/jenkins/workspace/Test_openjdk8_j9_special.system_ppc64le_linux_testList_1/aqa-tests/TKG/output_16584479858476/DaaLoadTest_all_special_5m_0/20220722-023037-DaaLoadTest/results/core.20220722.023136.681602.0001.dmp' in response to an event [2022-07-22T02:31:38.238Z] DLT stderr JVMDUMP010I System dump written to /home/jenkins/workspace/Test_openjdk8_j9_special.system_ppc64le_linux_testList_1/aqa-tests/TKG/output_16584479858476/DaaLoadTest_all_special_5m_0/20220722-023037-DaaLoadTest/results/core.20220722.023136.681602.0001.dmp [2022-07-22T02:31:38.239Z] DLT stderr JVMDUMP032I JVM requested Heap dump using '/home/jenkins/workspace/Test_openjdk8_j9_special.system_ppc64le_linux_testList_1/aqa-tests/TKG/output_16584479858476/DaaLoadTest_all_special_5m_0/20220722-023037-DaaLoadTest/results/heapdump.20220722.023136.681602.0002.phd' in response to an event [2022-07-22T02:31:38.239Z] DLT stderr JVMDUMP010I Heap dump written to /home/jenkins/workspace/Test_openjdk8_j9_special.system_ppc64le_linux_testList_1/aqa-tests/TKG/output_16584479858476/DaaLoadTest_all_special_5m_0/20220722-023037-DaaLoadTest/results/heapdump.20220722.023136.681602.0002.phd [2022-07-22T02:31:38.239Z] DLT stderr JVMDUMP032I JVM requested Java dump using '/home/jenkins/workspace/Test_openjdk8_j9_special.system_ppc64le_linux_testList_1/aqa-tests/TKG/output_16584479858476/DaaLoadTest_all_special_5m_0/20220722-023037-DaaLoadTest/results/javacore.20220722.023136.681602.0003.txt' in response to an event [2022-07-22T02:31:38.605Z] STF 02:31:37.718 - Found dump at: /home/jenkins/workspace/Test_openjdk8_j9_special.system_ppc64le_linux_testList_1/aqa-tests/TKG/output_16584479858476/DaaLoadTest_all_special_5m_0/20220722-023037-DaaLoadTest/results/javacore.20220722.023136.681602.0003.txt [2022-07-22T02:31:38.605Z] STF 02:31:37.718 - Found dump at: /home/jenkins/workspace/Test_openjdk8_j9_special.system_ppc64le_linux_testList_1/aqa-tests/TKG/output_16584479858476/DaaLoadTest_all_special_5m_0/20220722-023037-DaaLoadTest/results/core.20220722.023136.681602.0001.dmp [2022-07-22T02:31:38.605Z] STF 02:31:37.718 - Found dump at: /home/jenkins/workspace/Test_openjdk8_j9_special.system_ppc64le_linux_testList_1/aqa-tests/TKG/output_16584479858476/DaaLoadTest_all_special_5m_0/20220722-023037-DaaLoadTest/results/heapdump.20220722.023136.681602.0002.phd [2022-07-22T02:31:38.998Z] DLT 02:31:37.793 - First failure detected by thread: load-11. 
Not creating dumps as no dump generation is requested for this load test [2022-07-22T02:31:38.998Z] DLT 02:31:37.805 - Test failed [2022-07-22T02:31:38.998Z] DLT Failure num. = 1 [2022-07-22T02:31:38.998Z] DLT Test number = 14 [2022-07-22T02:31:38.998Z] DLT Test details = 'JUnit[net.openj9.test.decimals.TestDecimalData2]' [2022-07-22T02:31:38.998Z] DLT Suite number = 0 [2022-07-22T02:31:38.998Z] DLT Thread number = 11 [2022-07-22T02:31:38.998Z] DLT >>> Captured test output >>> [2022-07-22T02:31:38.999Z] DLT testFailure: testOtherConvertes_UDSL(net.openj9.test.decimals.TestDecimalData2): Java heap space [2022-07-22T02:31:38.999Z] DLT java.lang.OutOfMemoryError: Java heap space [2022-07-22T02:31:38.999Z] DLT at net.openj9.test.arithmetics.TestPerformance.setUpResultSpace(TestPerformance.java:386) [2022-07-22T02:31:38.999Z] DLT at net.openj9.test.arithmetics.TestPerformance.testDiv(TestPerformance.java:166) [2022-07-22T02:31:38.999Z] DLT at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) [2022-07-22T02:31:38.999Z] DLT at java.lang.reflect.Constructor.newInstance(Constructor.java:423) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266) [2022-07-22T02:31:38.999Z] DLT at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) [2022-07-22T02:31:38.999Z] DLT at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) [2022-07-22T02:31:38.999Z] DLT at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.Suite.runChild(Suite.java:128) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.Suite.runChild(Suite.java:27) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [2022-07-22T02:31:38.999Z] DLT at org.junit.runner.JUnitCore.run(JUnitCore.java:137) [2022-07-22T02:31:38.999Z] DLT at 
org.junit.runner.JUnitCore.run(JUnitCore.java:115) [2022-07-22T02:31:38.999Z] DLT at net.adoptopenjdk.loadTest.adaptors.JUnitAdaptor.executeTest(JUnitAdaptor.java:130) [2022-07-22T02:31:38.999Z] DLT at net.adoptopenjdk.loadTest.LoadTestRunner$2.run(LoadTestRunner.java:182) [2022-07-22T02:31:38.999Z] DLT at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [2022-07-22T02:31:38.999Z] DLT at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [2022-07-22T02:31:38.999Z] DLT at java.lang.Thread.run(Thread.java:827) [2022-07-22T02:31:38.999Z] DLT testFinished: testOtherConvertes_UDSL(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testOverflowED2Long(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testOverflowED2Long(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testConvertLongNormals(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testConvertLongNormals(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testBDToPD(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testBDToPD(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testi2PDExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testi2PDExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testConvertIntegerNormals(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testConvertIntegerNormals(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testInteger2ED2Integer(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testInteger2ED2Integer(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testNonExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testNonExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testSmallestLong2ED2Long(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testSmallestLong2ED2Long(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testBiggestInteger2UD2Integer(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testBiggestInteger2UD2Integer(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testConvertBigIntegerExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testConvertBigIntegerExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testOtherConverters(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testOtherConverters(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testConvertIntegerExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testConvertIntegerExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testInteger2ED2IntegerDecreasingPrecision(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: 
testInteger2ED2IntegerDecreasingPrecision(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testConvertBigDecimalExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testConvertBigDecimalExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT [2022-07-22T02:31:38.999Z] DLT JUnit Test Results for: net.openj9.test.decimals.TestDecimalData2 [2022-07-22T02:31:38.999Z] DLT Ran : 31 [2022-07-22T02:31:38.999Z] DLT Passed : 30 [2022-07-22T02:31:38.999Z] DLT Failed : 1 [2022-07-22T02:31:38.999Z] DLT Ignored: 0 [2022-07-22T02:31:38.999Z] DLT Result : FAILED [2022-07-22T02:31:38.999Z] DLT Test failed: [2022-07-22T02:31:38.999Z] DLT java.lang.OutOfMemoryError: Java heap space [2022-07-22T02:31:38.999Z] DLT at net.openj9.test.arithmetics.TestPerformance.setUpResultSpace(TestPerformance.java:386) [2022-07-22T02:31:38.999Z] DLT at net.openj9.test.arithmetics.TestPerformance.testDiv(TestPerformance.java:166) [2022-07-22T02:31:38.999Z] DLT at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) [2022-07-22T02:31:38.999Z] DLT at java.lang.reflect.Constructor.newInstance(Constructor.java:423) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266) [2022-07-22T02:31:38.999Z] DLT at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) [2022-07-22T02:31:38.999Z] DLT at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) [2022-07-22T02:31:38.999Z] DLT at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.Suite.runChild(Suite.java:128) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.Suite.runChild(Suite.java:27) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [2022-07-22T02:31:38.999Z] DLT at 
org.junit.runner.JUnitCore.run(JUnitCore.java:137) [2022-07-22T02:31:38.999Z] DLT at org.junit.runner.JUnitCore.run(JUnitCore.java:115) [2022-07-22T02:31:38.999Z] DLT at net.adoptopenjdk.loadTest.adaptors.JUnitAdaptor.executeTest(JUnitAdaptor.java:130) [2022-07-22T02:31:38.999Z] DLT at net.adoptopenjdk.loadTest.LoadTestRunner$2.run(LoadTestRunner.java:182) [2022-07-22T02:31:38.999Z] DLT at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [2022-07-22T02:31:38.999Z] DLT at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [2022-07-22T02:31:38.999Z] DLT at java.lang.Thread.run(Thread.java:827) [2022-07-22T02:31:38.999Z] DLT <<< [2022-07-22T02:31:38.999Z] DLT [2022-07-22T02:31:38.999Z] DLT 02:31:37.805 - Out of memory exception. Aborting test run [2022-07-22T02:31:41.246Z] DaaLoadTest_all_special_5m_0_FAILED ``` [50x internal grinder](https://hyc-runtimes-jenkins.swg-devops.com/job/Grinder/26073/)",1.0,"DaaLoadTest net.openj9.test.decimals.TestDecimalData2 OOM - Failure link ------------ From [an internal build](https://hyc-runtimes-jenkins.swg-devops.com/job/Test_openjdk8_j9_special.system_ppc64le_linux_testList_1/289/)(`ubu22ppclert-3`): ``` openjdk version ""1.8.0_342"" IBM Semeru Runtime Open Edition (build 1.8.0_342-b07) Eclipse OpenJ9 VM (build openj9-0.33.0-rc1a, JRE 1.8.0 Linux ppc64le-64-Bit Compressed References 20220721_409 (JIT enabled, AOT enabled) OpenJ9 - 04a55b45b OMR - b58aa2708 JCL - 459493948e based on jdk8u342-b07) ``` [Rerun in Grinder](https://hyc-runtimes-jenkins.swg-devops.com/job/Grinder/parambuild/?SDK_RESOURCE=upstream&TARGET=-f+parallelList.mk+testList_1&TEST_FLAG=&UPSTREAM_TEST_JOB_NAME=Test_openjdk8_j9_special.system_ppc64le_linux&DOCKER_REQUIRED=false&ACTIVE_NODE_TIMEOUT=0&VENDOR_TEST_DIRS=&EXTRA_DOCKER_ARGS=&TKG_OWNER_BRANCH=adoptium%3Amaster&OPENJ9_SYSTEMTEST_OWNER_BRANCH=eclipse%3Amaster&PLATFORM=ppc64le_linux&GENERATE_JOBS=true&KEEP_REPORTDIR=false&PERSONAL_BUILD=false&ADOPTOPENJDK_REPO=https%3A%2F%2Fgithub.com%2Fadoptium%2Faqa-tests.git&LABEL=&EXTRA_OPTIONS=&CUSTOMIZED_SDK_URL=+https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2Fbuild-scripts%2Fjobs%2Fjdk8u%2Fjdk8u-linux-ppc64le-openj9%2F409%2Fibm-semeru-open-debugimage_ppc64le_linux_8u342b07_openj9-0.33.0-rc1a.tar.gz+https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2Fbuild-scripts%2Fjobs%2Fjdk8u%2Fjdk8u-linux-ppc64le-openj9%2F409%2Fibm-semeru-open-jdk_ppc64le_linux_8u342b07_openj9-0.33.0-rc1a.tar.gz+https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2Fbuild-scripts%2Fjobs%2Fjdk8u%2Fjdk8u-linux-ppc64le-openj9%2F409%2Fibm-semeru-open-jre_ppc64le_linux_8u342b07_openj9-0.33.0-rc1a.tar.gz+https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2Fbuild-scripts%2Fjobs%2Fjdk8u%2Fjdk8u-linux-ppc64le-openj9%2F409%2Fibm-semeru-open-testimage_ppc64le_linux_8u342b07_openj9-0.33.0-rc1a.tar.gz&BUILD_IDENTIFIER=&ADOPTOPENJDK_BRANCH=v0.9.3-release&LIGHT_WEIGHT_CHECKOUT=false&USE_JRE=false&ARTIFACTORY_SERVER=na.artifactory.swg-devops&KEEP_WORKSPACE=false&USER_CREDENTIALS_ID=&JDK_VERSION=8&ITERATIONS=1&VENDOR_TEST_REPOS=&JDK_REPO=https%3A%2F%2Fgithub.com%2Fibmruntimes%2Fopenj9-openjdk-jdk8&RELEASE_TAG=v0.33.0-release&OPENJ9_BRANCH=v0.33.0-release&OPENJ9_SHA=&JCK_GIT_REPO=&VENDOR_TEST_BRANCHES=&OPENJ9_R
EPO=https%3A%2F%2Fgithub.com%2Feclipse-openj9%2Fopenj9.git&UPSTREAM_JOB_NAME=&CLOUD_PROVIDER=&CUSTOM_TARGET=&VENDOR_TEST_SHAS=&JDK_BRANCH=v0.33.0-release&LABEL_ADDITION=&ARTIFACTORY_REPO=&ARTIFACTORY_ROOT_DIR=&UPSTREAM_TEST_JOB_NUMBER=310&DOCKERIMAGE_TAG=&JDK_IMPL=openj9&TEST_TIME=&SSH_AGENT_CREDENTIAL=83181e25-eea4-4f55-8b3e-e79615733226&AUTO_DETECT=true&SLACK_CHANNEL=&DYNAMIC_COMPILE=false&ADOPTOPENJDK_SYSTEMTEST_OWNER_BRANCH=adoptium%3Amaster&CUSTOMIZED_SDK_URL_CREDENTIAL_ID=4e18ffe7-b1b1-4272-9979-99769b68bcc2&ARCHIVE_TEST_RESULTS=false&NUM_MACHINES=&OPENJDK_SHA=&TRSS_URL=http%3A%2F%2Ftrss1.fyre.ibm.com&USE_TESTENV_PROPERTIES=true&BUILD_LIST=system&UPSTREAM_JOB_NUMBER=&STF_OWNER_BRANCH=adoptium%3Amaster&TIME_LIMIT=20&JVM_OPTIONS=&PARALLEL=None) - Change TARGET to run only the failed test targets. Optional info ------------- Failure output (captured from console output) --------------------------------------------- ``` [2022-07-22T02:30:37.248Z] variation: Mode101 [2022-07-22T02:30:37.248Z] JVM_OPTIONS: -Xjit -Xgcpolicy:optthruput -Xnocompressedrefs [2022-07-22T02:31:00.217Z] DLT 02:30:59.382 - Completed 6.7%. Number of tests started=2210 [2022-07-22T02:31:20.927Z] DLT 02:31:19.341 - Completed 13.3%. Number of tests started=4654 (+2444) [2022-07-22T02:31:38.238Z] DLT stderr JVMDUMP039I Processing dump event ""systhrow"", detail ""java/lang/OutOfMemoryError"" at 2022/07/22 02:31:36 - please wait. [2022-07-22T02:31:38.238Z] DLT stderr JVMDUMP032I JVM requested System dump using '/home/jenkins/workspace/Test_openjdk8_j9_special.system_ppc64le_linux_testList_1/aqa-tests/TKG/output_16584479858476/DaaLoadTest_all_special_5m_0/20220722-023037-DaaLoadTest/results/core.20220722.023136.681602.0001.dmp' in response to an event [2022-07-22T02:31:38.238Z] DLT stderr JVMDUMP010I System dump written to /home/jenkins/workspace/Test_openjdk8_j9_special.system_ppc64le_linux_testList_1/aqa-tests/TKG/output_16584479858476/DaaLoadTest_all_special_5m_0/20220722-023037-DaaLoadTest/results/core.20220722.023136.681602.0001.dmp [2022-07-22T02:31:38.239Z] DLT stderr JVMDUMP032I JVM requested Heap dump using '/home/jenkins/workspace/Test_openjdk8_j9_special.system_ppc64le_linux_testList_1/aqa-tests/TKG/output_16584479858476/DaaLoadTest_all_special_5m_0/20220722-023037-DaaLoadTest/results/heapdump.20220722.023136.681602.0002.phd' in response to an event [2022-07-22T02:31:38.239Z] DLT stderr JVMDUMP010I Heap dump written to /home/jenkins/workspace/Test_openjdk8_j9_special.system_ppc64le_linux_testList_1/aqa-tests/TKG/output_16584479858476/DaaLoadTest_all_special_5m_0/20220722-023037-DaaLoadTest/results/heapdump.20220722.023136.681602.0002.phd [2022-07-22T02:31:38.239Z] DLT stderr JVMDUMP032I JVM requested Java dump using '/home/jenkins/workspace/Test_openjdk8_j9_special.system_ppc64le_linux_testList_1/aqa-tests/TKG/output_16584479858476/DaaLoadTest_all_special_5m_0/20220722-023037-DaaLoadTest/results/javacore.20220722.023136.681602.0003.txt' in response to an event [2022-07-22T02:31:38.605Z] STF 02:31:37.718 - Found dump at: /home/jenkins/workspace/Test_openjdk8_j9_special.system_ppc64le_linux_testList_1/aqa-tests/TKG/output_16584479858476/DaaLoadTest_all_special_5m_0/20220722-023037-DaaLoadTest/results/javacore.20220722.023136.681602.0003.txt [2022-07-22T02:31:38.605Z] STF 02:31:37.718 - Found dump at: 
/home/jenkins/workspace/Test_openjdk8_j9_special.system_ppc64le_linux_testList_1/aqa-tests/TKG/output_16584479858476/DaaLoadTest_all_special_5m_0/20220722-023037-DaaLoadTest/results/core.20220722.023136.681602.0001.dmp [2022-07-22T02:31:38.605Z] STF 02:31:37.718 - Found dump at: /home/jenkins/workspace/Test_openjdk8_j9_special.system_ppc64le_linux_testList_1/aqa-tests/TKG/output_16584479858476/DaaLoadTest_all_special_5m_0/20220722-023037-DaaLoadTest/results/heapdump.20220722.023136.681602.0002.phd [2022-07-22T02:31:38.998Z] DLT 02:31:37.793 - First failure detected by thread: load-11. Not creating dumps as no dump generation is requested for this load test [2022-07-22T02:31:38.998Z] DLT 02:31:37.805 - Test failed [2022-07-22T02:31:38.998Z] DLT Failure num. = 1 [2022-07-22T02:31:38.998Z] DLT Test number = 14 [2022-07-22T02:31:38.998Z] DLT Test details = 'JUnit[net.openj9.test.decimals.TestDecimalData2]' [2022-07-22T02:31:38.998Z] DLT Suite number = 0 [2022-07-22T02:31:38.998Z] DLT Thread number = 11 [2022-07-22T02:31:38.998Z] DLT >>> Captured test output >>> [2022-07-22T02:31:38.999Z] DLT testFailure: testOtherConvertes_UDSL(net.openj9.test.decimals.TestDecimalData2): Java heap space [2022-07-22T02:31:38.999Z] DLT java.lang.OutOfMemoryError: Java heap space [2022-07-22T02:31:38.999Z] DLT at net.openj9.test.arithmetics.TestPerformance.setUpResultSpace(TestPerformance.java:386) [2022-07-22T02:31:38.999Z] DLT at net.openj9.test.arithmetics.TestPerformance.testDiv(TestPerformance.java:166) [2022-07-22T02:31:38.999Z] DLT at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) [2022-07-22T02:31:38.999Z] DLT at java.lang.reflect.Constructor.newInstance(Constructor.java:423) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266) [2022-07-22T02:31:38.999Z] DLT at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) [2022-07-22T02:31:38.999Z] DLT at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) [2022-07-22T02:31:38.999Z] DLT at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.Suite.runChild(Suite.java:128) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.Suite.runChild(Suite.java:27) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) [2022-07-22T02:31:38.999Z] DLT 
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [2022-07-22T02:31:38.999Z] DLT at org.junit.runner.JUnitCore.run(JUnitCore.java:137) [2022-07-22T02:31:38.999Z] DLT at org.junit.runner.JUnitCore.run(JUnitCore.java:115) [2022-07-22T02:31:38.999Z] DLT at net.adoptopenjdk.loadTest.adaptors.JUnitAdaptor.executeTest(JUnitAdaptor.java:130) [2022-07-22T02:31:38.999Z] DLT at net.adoptopenjdk.loadTest.LoadTestRunner$2.run(LoadTestRunner.java:182) [2022-07-22T02:31:38.999Z] DLT at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [2022-07-22T02:31:38.999Z] DLT at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [2022-07-22T02:31:38.999Z] DLT at java.lang.Thread.run(Thread.java:827) [2022-07-22T02:31:38.999Z] DLT testFinished: testOtherConvertes_UDSL(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testOverflowED2Long(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testOverflowED2Long(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testConvertLongNormals(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testConvertLongNormals(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testBDToPD(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testBDToPD(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testi2PDExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testi2PDExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testConvertIntegerNormals(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testConvertIntegerNormals(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testInteger2ED2Integer(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testInteger2ED2Integer(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testNonExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testNonExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testSmallestLong2ED2Long(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testSmallestLong2ED2Long(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testBiggestInteger2UD2Integer(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testBiggestInteger2UD2Integer(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testConvertBigIntegerExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testConvertBigIntegerExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : 
testOtherConverters(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testOtherConverters(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testConvertIntegerExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testConvertIntegerExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testInteger2ED2IntegerDecreasingPrecision(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testInteger2ED2IntegerDecreasingPrecision(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testStarted : testConvertBigDecimalExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT testFinished: testConvertBigDecimalExceptions(net.openj9.test.decimals.TestDecimalData2) [2022-07-22T02:31:38.999Z] DLT [2022-07-22T02:31:38.999Z] DLT JUnit Test Results for: net.openj9.test.decimals.TestDecimalData2 [2022-07-22T02:31:38.999Z] DLT Ran : 31 [2022-07-22T02:31:38.999Z] DLT Passed : 30 [2022-07-22T02:31:38.999Z] DLT Failed : 1 [2022-07-22T02:31:38.999Z] DLT Ignored: 0 [2022-07-22T02:31:38.999Z] DLT Result : FAILED [2022-07-22T02:31:38.999Z] DLT Test failed: [2022-07-22T02:31:38.999Z] DLT java.lang.OutOfMemoryError: Java heap space [2022-07-22T02:31:38.999Z] DLT at net.openj9.test.arithmetics.TestPerformance.setUpResultSpace(TestPerformance.java:386) [2022-07-22T02:31:38.999Z] DLT at net.openj9.test.arithmetics.TestPerformance.testDiv(TestPerformance.java:166) [2022-07-22T02:31:38.999Z] DLT at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) [2022-07-22T02:31:38.999Z] DLT at java.lang.reflect.Constructor.newInstance(Constructor.java:423) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266) [2022-07-22T02:31:38.999Z] DLT at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) [2022-07-22T02:31:38.999Z] DLT at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) [2022-07-22T02:31:38.999Z] DLT at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.Suite.runChild(Suite.java:128) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.Suite.runChild(Suite.java:27) [2022-07-22T02:31:38.999Z] DLT at 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) [2022-07-22T02:31:38.999Z] DLT at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [2022-07-22T02:31:38.999Z] DLT at org.junit.runner.JUnitCore.run(JUnitCore.java:137) [2022-07-22T02:31:38.999Z] DLT at org.junit.runner.JUnitCore.run(JUnitCore.java:115) [2022-07-22T02:31:38.999Z] DLT at net.adoptopenjdk.loadTest.adaptors.JUnitAdaptor.executeTest(JUnitAdaptor.java:130) [2022-07-22T02:31:38.999Z] DLT at net.adoptopenjdk.loadTest.LoadTestRunner$2.run(LoadTestRunner.java:182) [2022-07-22T02:31:38.999Z] DLT at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [2022-07-22T02:31:38.999Z] DLT at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [2022-07-22T02:31:38.999Z] DLT at java.lang.Thread.run(Thread.java:827) [2022-07-22T02:31:38.999Z] DLT <<< [2022-07-22T02:31:38.999Z] DLT [2022-07-22T02:31:38.999Z] DLT 02:31:37.805 - Out of memory exception. Aborting test run [2022-07-22T02:31:41.246Z] DaaLoadTest_all_special_5m_0_FAILED ``` [50x internal grinder](https://hyc-runtimes-jenkins.swg-devops.com/job/Grinder/26073/)",0,daaloadtest net test decimals oom failure link from openjdk version ibm semeru runtime open edition build eclipse vm build jre linux bit compressed references jit enabled aot enabled omr jcl based on change target to run only the failed test targets optional info failure output captured from console output variation jvm options xjit xgcpolicy optthruput xnocompressedrefs dlt completed number of tests started dlt completed number of tests started dlt stderr processing dump event systhrow detail java lang outofmemoryerror at please wait dlt stderr jvm requested system dump using home jenkins workspace test special system linux testlist aqa tests tkg output daaloadtest all special daaloadtest results core dmp in response to an event dlt stderr system dump written to home jenkins workspace test special system linux testlist aqa tests tkg output daaloadtest all special daaloadtest results core dmp dlt stderr jvm requested heap dump using home jenkins workspace test special system linux testlist aqa tests tkg output daaloadtest all special daaloadtest results heapdump phd in response to an event dlt stderr heap dump written to home jenkins workspace test special system linux testlist aqa tests tkg output daaloadtest all special daaloadtest results heapdump phd dlt stderr jvm requested java dump using home jenkins workspace test special system linux testlist aqa tests tkg output daaloadtest all special daaloadtest results javacore txt in response to an event stf found dump at home jenkins workspace test special system linux testlist aqa tests tkg output daaloadtest all special daaloadtest results javacore txt stf found dump at home jenkins workspace test special system linux testlist aqa tests tkg output daaloadtest all special daaloadtest results core dmp stf found dump at home jenkins workspace test special system linux testlist aqa tests tkg output daaloadtest all special daaloadtest results heapdump phd dlt first failure detected by thread load not creating dumps as no 
dump generation is requested for this load test dlt test failed dlt failure num dlt test number dlt test details junit dlt suite number dlt thread number dlt captured test output dlt testfailure testotherconvertes udsl net test decimals java heap space dlt java lang outofmemoryerror java heap space dlt at net test arithmetics testperformance setupresultspace testperformance java dlt at net test arithmetics testperformance testdiv testperformance java dlt at sun reflect delegatingconstructoraccessorimpl newinstance delegatingconstructoraccessorimpl java dlt at java lang reflect constructor newinstance constructor java dlt at org junit runners createtest java dlt at org junit runners runreflectivecall java dlt at org junit internal runners model reflectivecallable run reflectivecallable java dlt at org junit runners methodblock java dlt at org junit runners runchild java dlt at org junit runners runchild java dlt at org junit runners parentrunner run parentrunner java dlt at org junit runners parentrunner schedule parentrunner java dlt at org junit runners parentrunner runchildren parentrunner java dlt at org junit runners parentrunner access parentrunner java dlt at org junit runners parentrunner evaluate parentrunner java dlt at org junit internal runners statements runbefores evaluate runbefores java dlt at org junit internal runners statements runafters evaluate runafters java dlt at org junit runners parentrunner run parentrunner java dlt at org junit runners suite runchild suite java dlt at org junit runners suite runchild suite java dlt at org junit runners parentrunner run parentrunner java dlt at org junit runners parentrunner schedule parentrunner java dlt at org junit runners parentrunner runchildren parentrunner java dlt at org junit runners parentrunner access parentrunner java dlt at org junit runners parentrunner evaluate parentrunner java dlt at org junit runners parentrunner run parentrunner java dlt at org junit runner junitcore run junitcore java dlt at org junit runner junitcore run junitcore java dlt at net adoptopenjdk loadtest adaptors junitadaptor executetest junitadaptor java dlt at net adoptopenjdk loadtest loadtestrunner run loadtestrunner java dlt at java util concurrent threadpoolexecutor runworker threadpoolexecutor java dlt at java util concurrent threadpoolexecutor worker run threadpoolexecutor java dlt at java lang thread run thread java dlt testfinished testotherconvertes udsl net test decimals dlt teststarted net test decimals dlt testfinished net test decimals dlt teststarted testconvertlongnormals net test decimals dlt testfinished testconvertlongnormals net test decimals dlt teststarted testbdtopd net test decimals dlt testfinished testbdtopd net test decimals dlt teststarted net test decimals dlt testfinished net test decimals dlt teststarted testconvertintegernormals net test decimals dlt testfinished testconvertintegernormals net test decimals dlt teststarted net test decimals dlt testfinished net test decimals dlt teststarted testnonexceptions net test decimals dlt testfinished testnonexceptions net test decimals dlt teststarted net test decimals dlt testfinished net test decimals dlt teststarted net test decimals dlt testfinished net test decimals dlt teststarted testconvertbigintegerexceptions net test decimals dlt testfinished testconvertbigintegerexceptions net test decimals dlt teststarted testotherconverters net test decimals dlt testfinished testotherconverters net test decimals dlt teststarted testconvertintegerexceptions net test decimals 
dlt testfinished testconvertintegerexceptions net test decimals dlt teststarted net test decimals dlt testfinished net test decimals dlt teststarted testconvertbigdecimalexceptions net test decimals dlt testfinished testconvertbigdecimalexceptions net test decimals dlt dlt junit test results for net test decimals dlt ran dlt passed dlt failed dlt ignored dlt result failed dlt test failed dlt java lang outofmemoryerror java heap space dlt at net test arithmetics testperformance setupresultspace testperformance java dlt at net test arithmetics testperformance testdiv testperformance java dlt at sun reflect delegatingconstructoraccessorimpl newinstance delegatingconstructoraccessorimpl java dlt at java lang reflect constructor newinstance constructor java dlt at org junit runners createtest java dlt at org junit runners runreflectivecall java dlt at org junit internal runners model reflectivecallable run reflectivecallable java dlt at org junit runners methodblock java dlt at org junit runners runchild java dlt at org junit runners runchild java dlt at org junit runners parentrunner run parentrunner java dlt at org junit runners parentrunner schedule parentrunner java dlt at org junit runners parentrunner runchildren parentrunner java dlt at org junit runners parentrunner access parentrunner java dlt at org junit runners parentrunner evaluate parentrunner java dlt at org junit internal runners statements runbefores evaluate runbefores java dlt at org junit internal runners statements runafters evaluate runafters java dlt at org junit runners parentrunner run parentrunner java dlt at org junit runners suite runchild suite java dlt at org junit runners suite runchild suite java dlt at org junit runners parentrunner run parentrunner java dlt at org junit runners parentrunner schedule parentrunner java dlt at org junit runners parentrunner runchildren parentrunner java dlt at org junit runners parentrunner access parentrunner java dlt at org junit runners parentrunner evaluate parentrunner java dlt at org junit runners parentrunner run parentrunner java dlt at org junit runner junitcore run junitcore java dlt at org junit runner junitcore run junitcore java dlt at net adoptopenjdk loadtest adaptors junitadaptor executetest junitadaptor java dlt at net adoptopenjdk loadtest loadtestrunner run loadtestrunner java dlt at java util concurrent threadpoolexecutor runworker threadpoolexecutor java dlt at java util concurrent threadpoolexecutor worker run threadpoolexecutor java dlt at java lang thread run thread java dlt dlt dlt out of memory exception aborting test run daaloadtest all special failed ,0 1736,10651744805.0,IssuesEvent,2019-10-17 11:04:34,MISP/MISP,https://api.github.com/repos/MISP/MISP,closed,E-mail alerts customized by user (subscription based),automation enhancement functionality,"Right now, a user can only receive alerts for all events or none at all. It is the publisher who chooses if the users will receive a notification (publish with or without e-mail). IMHO, it would be a lot more flexible to allow users to ""subscribe"" to a certain set of events: - By tag - By threat level - By Org - By Distribution values ",1.0,"E-mail alerts customized by user (subscription based) - Right now, a user can only receive alerts for all events or none at all. It is the publisher who chooses if the users will receive a notification (publish with or without e-mail). 
IMHO, it would be a lot more flexible to allow users to ""subscribe"" to a certain set of events: - By tag - By threat level - By Org - By Distribution values ",1,e mail alerts customized by user subscription based right now a user can only receive alerts for all events or none at all it is the publisher who chooses if the users will receive a notification publish with or without e mail imho it would be a lot more flexible to allow users to subscribe to a certain set of events by tag by threat level by org by distribution values ,1 1508,10244086594.0,IssuesEvent,2019-08-20 09:36:46,spacemeshos/go-spacemesh,https://api.github.com/repos/spacemeshos/go-spacemesh,opened,Update Kibana Longevity Dashboard,automation,"# Overview / Motivation In Longevity Dashboard we take the current epoch visualization from ""atx published"" log message. If all miners did not publish atxs the current epoch id information that is shown is wrong # The Task Need to get current epoch id from ""release tick"" message. (see: https://github.com/spacemeshos/go-spacemesh/issues/1346) # Implementation Notes TODO: Add links to relevant resources, specs, related issues, etc... # Contribution Guidelines Important: Issue assignment to developers will be by the order of their application and proficiency level according to the task's complexity. We will not assign tasks to developers who haven't introduced themselves on our Gitter [dev channel](https://gitter.im/spacemesh-os/Lobby) 1. Introduce yourself on go-spacemesh [dev chat channel](https://gitter.im/spacemesh-os/Lobby) - ask our team any question you may have about this task 2. Fork branch `develop` to your own repo and work in your repo 3. You must document all methods, enums and types with [godoc comments](https://blog.golang.org/godoc-documenting-go-code) 4. You must write go unit tests for all types and methods when submitting a component, and integration tests if you submit a feature 5. When ready for code review, submit a PR from your repo back to branch `develop` 6. Attach relevant issue to PR ",1.0,"Update Kibana Longevity Dashboard - # Overview / Motivation In Longevity Dashboard we take the current epoch visualization from ""atx published"" log message. If all miners did not publish atxs the current epoch id information that is shown is wrong # The Task Need to get current epoch id from ""release tick"" message. (see: https://github.com/spacemeshos/go-spacemesh/issues/1346) # Implementation Notes TODO: Add links to relevant resources, specs, related issues, etc... # Contribution Guidelines Important: Issue assignment to developers will be by the order of their application and proficiency level according to the task's complexity. We will not assign tasks to developers who haven't introduced themselves on our Gitter [dev channel](https://gitter.im/spacemesh-os/Lobby) 1. Introduce yourself on go-spacemesh [dev chat channel](https://gitter.im/spacemesh-os/Lobby) - ask our team any question you may have about this task 2. Fork branch `develop` to your own repo and work in your repo 3. You must document all methods, enums and types with [godoc comments](https://blog.golang.org/godoc-documenting-go-code) 4. You must write go unit tests for all types and methods when submitting a component, and integration tests if you submit a feature 5. When ready for code review, submit a PR from your repo back to branch `develop` 6. 
Attach relevant issue to PR ",1,update kibana longevity dashboard overview motivation in longevity dashboard we take the current epoch visualization from atx published log message if all miners did not publish atxs the current epoch id information that is showed is wrong the task need to get current epoch id from release tick message see implementation notes todo add links to relevant resources specs related issues etc contribution guidelines important issue assignment to developers will be by the order of their application and proficiency level according to the tasks complexity we will not assign tasks to developers who have nt introduced themselves on our gitter introduce yourself on go spacemesh ask our team any question you may have about this task fork branch develop to your own repo and work in your repo you must document all methods enums and types with you must write go unit tests for all types and methods when submitting a component and integration tests if you submit a feature when ready for code review submit a pr from your repo back to branch develop attach relevant issue to pr ,1 333,5502902314.0,IssuesEvent,2017-03-16 01:32:48,XIVStats/XIVStats,https://api.github.com/repos/XIVStats/XIVStats,opened,Evaluate issues with census mis-reporting data in 2017-03-16 test-run,automation bad-data bug critical site-issue,"_Initially discussed in #1_ For the test run of v1.3.0 of the gatherer program that was completed 2017-03-15, and the output of the XIVStats script run through to [2017-03-16](https://ffxivcensus.com/2017-03/) the statistics are extremely diminished on the figures for the beginning of [February](https://ffxivcensus.com/2017-02/). Taking some samples from the page: | Metric | February | Test Run | |--------|----------|-------| | World Players | 10.1 million | 8.1 million | | Eternal Bond Guest | 319k | 259k | | Eternal Bond Married | 121k | 99k | | ARR Soundtrack | 76k | 57k | From this small sample of metrics and comparing the two sites side by side, you can see that the results are obviously affected by some sort of issue. # Suggested Incident Response - [ ] @Pricetx - revert page for 2017-03 to the data from 2017-03-04, and restore the index page symlink. - [ ] @Pricetx - Investigate cause of issue with data being misreported, ascertain as to whether it was a network or system issue - [ ] @Pricetx / @ReidWeb - load a dump of the database from 2017-03-04 in once restored, and do a compare of the 'missing' data between the 2017-03-04 and 2017-03-16. Find missing IDs and manually check out and parse those pages - to ensure they still exist - it might be that they actually got deleted from the lodestone/characters deleted - but that's unlikely in such a large volume? - [ ] @Pricetx / @ReidWeb - verify that the highest player ID for 2017-03-16 was higher than that for 2017-03-04. # Possible Causes Listed in my evaluation of most to least likely. 1. Lodestone was rate limiting us - due to increased connections from 'image date parsing' - we were hitting this when we didn't hit it before 1. 
Server firewall/network experienced issues that caused requests to not reach the lodestone.",1.0,"Evaluate issues with census mis-reporting data in 2017-03-16 test-run - _Initially discussed in #1_ For the test run of v1.3.0 of the gatherer program that was completed 2017-03-15, and the output of the XIVStats script run through to [2017-03-16](https://ffxivcensus.com/2017-03/) the statistics are extremely diminished on the figures for the beginning of [February](https://ffxivcensus.com/2017-02/). Taking some samples from the page: | Metric | February | Test Run | |--------|----------|-------| | World Players | 10.1 million | 8.1 million | | Eternal Bond Guest | 319k | 259k | | Eternal Bond Married | 121k | 99k | | ARR Soundtrack | 76k | 57k | From this small sample of metrics and comparing the two sites side by side, you can see that the results are obviously affected by some sort of issue. # Suggested Incident Response - [ ] @Pricetx - revert page for 2017-03 to the data from 2017-03-04, and restore the index page symlink. - [ ] @Pricetx - Investigate cause of issue with data being misreported, ascertain as to whether it was a network or system issue - [ ] @Pricetx / @ReidWeb - load a dump of the database from 2017-03-04 in once restored, and do a compare of the 'missing' data between the 2017-03-04 and 2017-03-16. Find missing IDs and manually check out and parse those pages - to ensure they still exist - it might be that they actually got deleted from the lodestone/characters deleted - but that's unlikely in such a large volume? - [ ] @Pricetx / @ReidWeb - verify that the highest player ID for 2017-03-16 was higher than that for 2017-03-04. # Possible Causes Listed in my evaluation of most to least likely. 1. Lodestone was rate limiting us - due to increased connections from 'image date parsing' - we were hitting this when we didn't hit it before 1. 
Server firewall/network experienced issues that caused requests to not reach the lodestone.",1,evaluate issues with census mis reporting data in test run initially discussed in for the test run of of the gatherer program that was completed and the output of the xivstats script run through to the statistics are extremely diminished on the figures for the beginning of taking some samples from the page metric february test run world players million million eternal bond guest eternal bond married arr soundtrack from this small sample of metrics and comparing the two sites side by side you can see that the results are obviously affected by some sort of issue suggested incident response pricetx revert page for to the data from and restore the index page symlink pricetx investigate cause of issue with data being misreported ascertain as to whether it was a network or system issue pricetx reidweb load a dump of the database from in once restored and do a compare of the missing data between the and find missing ids and manually check out and parse those pages to ensure they still exist it might be that they actually got deleted from the lodestone characters deleted but that s unlikely in such a large volume pricetx reidweb verify that the highest player id for was higher than that for possible causes listed in my evaluation of most to least likely lodestone was rate limiting us due to increased connections from image date parsing we were hitting this when we didn t hit it before server firewall network experienced issues that caused requests to not reach the lodestone ,1 5520,19905100161.0,IssuesEvent,2022-01-25 11:58:13,pingcap/tiflow,https://api.github.com/repos/pingcap/tiflow,closed, [CDC:ErrCheckClusterVersionFromPD]failed to request PD%!(EXTRA string=response status: 503 Service Unavailable),type/bug severity/moderate component/cli found/automation area/ticdc,"### What did you do? - 4 pd, 2 tidb, 7 tikv, 2 cdc - test infra case: 360 pd_leader_switch_changefeed_admin_sync - when pd leader switch one by one,meanwhile changefeed pause/resume ### What did you expect to see? _No response_ ### What did you see instead? 
- cli chnangefeed resume failed: 2022-01-14T19:16:51.039Z INFO host/ticdc.go:220 Execute command on ticdc {""command"": ""/cdc cli changefeed resume \""--pd=http://upstream-pd.cdc-qihoo-360-testbed-tps-600887-1-433:2379\"" \""--changefeed-id=qihoo360-mysql-task\"""", ""timeout"": ""10s"", ""ticdc peer"": 0} [2022/01/14 19:16:51.050 +00:00] [INFO] [pd.go:32] [""pd leader""] [leader-name=upstream-pd-2] [2022/01/14 19:16:51.553 +00:00] [INFO] [pd.go:46] [""transfer pd leader success""] [leader-name=upstream-pd-1] [2022/01/14 19:16:52.215 +00:00] [ERROR] [ticdc.go:61] [""run changefeed operation failed""] [output=""{stdout: Usage:\n cdc cli changefeed resume [flags]\n\nFlags:\n -c, --changefeed-id string Replication task (changefeed) ID\n -h, --help help for resume\n --no-confirm Don't ask user whether to ignore ineligible table\n\nGlobal Flags:\n --ca string CA certificate path for TLS connection\n --cert string Certificate path for TLS connection\n -i, --interact Run cdc cli with readline\n --key string Private key path for TLS connection\n --log-level string log level (etc: debug|info|warn|error) (default \""warn\"")\n --pd string PD address, use ',' to separate multiple PDs (default \""http://127.0.0.1:2379\""), stderr: Error: [CDC:ErrCheckClusterVersionFromPD]failed to request PD%!(EXTRA string=response status: 503 Service Unavailable)\n[CDC:ErrCheckClusterVersionFromPD]failed to request PD%!(EXTRA string=response status: 503 Service Unavailable), ExitCode: 1""] [stack=""github.com/pingcap/test-infra/caselib/pkg/host.(*TiCDCHost).ChangeFeed\n\t/Users/lixia/source-code/test-infra/caselib/pkg/host/ticdc.go:61\ngithub.com/pingcap/test-infra/caselib/pkg/steps.(*changeFeedTask).Execute\n\t/Users/lixia/source-code/test-infra/caselib/pkg/steps/changefeed.go:49\ngithub.com/pingcap/test-infra/caselib/pkg/steps.(*loopTask).Execute\n\t/Users/lixia/source-code/test-infra/caselib/pkg/steps/task.go:95\ngithub.com/pingcap/test-infra/caselib/pkg/steps.(*Parallel).Execute.func1\n\t/Users/lixia/source-code/test-infra/caselib/pkg/steps/step.go:69\ngolang.org/x/sync/errgroup.(*Group).Go.func1\n\t/Users/lixia/go/pkg/mod/golang.org/x/sync@v0.0.0-20210220032951-036812b2e83c/errgroup/errgroup.go:57""] ### Versions of the cluster Upstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client): ```console 5.4.0 ``` TiCDC version (execute `cdc version`): ```console 5.4.0 ```",1.0," [CDC:ErrCheckClusterVersionFromPD]failed to request PD%!(EXTRA string=response status: 503 Service Unavailable) - ### What did you do? - 4 pd, 2 tidb, 7 tikv, 2 cdc - test infra case: 360 pd_leader_switch_changefeed_admin_sync - when pd leader switch one by one,meanwhile changefeed pause/resume ### What did you expect to see? _No response_ ### What did you see instead? 
- cli chnangefeed resume failed: 2022-01-14T19:16:51.039Z INFO host/ticdc.go:220 Execute command on ticdc {""command"": ""/cdc cli changefeed resume \""--pd=http://upstream-pd.cdc-qihoo-360-testbed-tps-600887-1-433:2379\"" \""--changefeed-id=qihoo360-mysql-task\"""", ""timeout"": ""10s"", ""ticdc peer"": 0} [2022/01/14 19:16:51.050 +00:00] [INFO] [pd.go:32] [""pd leader""] [leader-name=upstream-pd-2] [2022/01/14 19:16:51.553 +00:00] [INFO] [pd.go:46] [""transfer pd leader success""] [leader-name=upstream-pd-1] [2022/01/14 19:16:52.215 +00:00] [ERROR] [ticdc.go:61] [""run changefeed operation failed""] [output=""{stdout: Usage:\n cdc cli changefeed resume [flags]\n\nFlags:\n -c, --changefeed-id string Replication task (changefeed) ID\n -h, --help help for resume\n --no-confirm Don't ask user whether to ignore ineligible table\n\nGlobal Flags:\n --ca string CA certificate path for TLS connection\n --cert string Certificate path for TLS connection\n -i, --interact Run cdc cli with readline\n --key string Private key path for TLS connection\n --log-level string log level (etc: debug|info|warn|error) (default \""warn\"")\n --pd string PD address, use ',' to separate multiple PDs (default \""http://127.0.0.1:2379\""), stderr: Error: [CDC:ErrCheckClusterVersionFromPD]failed to request PD%!(EXTRA string=response status: 503 Service Unavailable)\n[CDC:ErrCheckClusterVersionFromPD]failed to request PD%!(EXTRA string=response status: 503 Service Unavailable), ExitCode: 1""] [stack=""github.com/pingcap/test-infra/caselib/pkg/host.(*TiCDCHost).ChangeFeed\n\t/Users/lixia/source-code/test-infra/caselib/pkg/host/ticdc.go:61\ngithub.com/pingcap/test-infra/caselib/pkg/steps.(*changeFeedTask).Execute\n\t/Users/lixia/source-code/test-infra/caselib/pkg/steps/changefeed.go:49\ngithub.com/pingcap/test-infra/caselib/pkg/steps.(*loopTask).Execute\n\t/Users/lixia/source-code/test-infra/caselib/pkg/steps/task.go:95\ngithub.com/pingcap/test-infra/caselib/pkg/steps.(*Parallel).Execute.func1\n\t/Users/lixia/source-code/test-infra/caselib/pkg/steps/step.go:69\ngolang.org/x/sync/errgroup.(*Group).Go.func1\n\t/Users/lixia/go/pkg/mod/golang.org/x/sync@v0.0.0-20210220032951-036812b2e83c/errgroup/errgroup.go:57""] ### Versions of the cluster Upstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client): ```console 5.4.0 ``` TiCDC version (execute `cdc version`): ```console 5.4.0 ```",1, failed to request pd extra string response status service unavailable what did you do pd tidb tikv cdc test infra case pd leader switch changefeed admin sync when pd leader switch one by one meanwhile changefeed pause resume what did you expect to see no response what did you see instead cli chnangefeed resume failed info host ticdc go execute command on ticdc command cdc cli changefeed resume pd changefeed id mysql task timeout ticdc peer n nflags n c changefeed id string replication task changefeed id n h help help for resume n no confirm don t ask user whether to ignore ineligible table n nglobal flags n ca string ca certificate path for tls connection n cert string certificate path for tls connection n i interact run cdc cli with readline n key string private key path for tls connection n log level string log level etc debug info warn error default warn n pd string pd address use to separate multiple pds default stderr error failed to request pd extra string response status service unavailable n failed to request pd extra string response status service unavailable exitcode versions of the cluster upstream tidb cluster 
version execute select tidb version in a mysql client console ticdc version execute cdc version console ,1 395979,27094914686.0,IssuesEvent,2023-02-15 01:26:50,bazelbuild/intellij,https://api.github.com/repos/bazelbuild/intellij,closed,Test Coverage Information,type: documentation more-data-needed,It would be useful to have a means of looking at coverage information and to document that,1.0,Test Coverage Information - It would be useful to have a means of looking at coverage information and to document that,0,test coverage information it would be useful to have a means of looking at coverage information and to document that,0 7516,25016674155.0,IssuesEvent,2022-11-03 19:28:28,aws/aws-cli,https://api.github.com/repos/aws/aws-cli,closed,aws cloudformation package - allow taking the file from STDIN,duplicate feature-request cloudformation package-deploy automation-exempt customization,"Currently, if we generate a CloudFormation template, we have to store it in a temporary location so that the `aws cloudformation package --template-file=path/to/tempfile`. Can we please add an option to accept the `template-file` from STDIN to allow this flow: `generate_cloudformation | aws cloudformation package`",1.0,"aws cloudformation package - allow taking the file from STDIN - Currently, if we generate a CloudFormation template, we have to store it in a temporary location so that the `aws cloudformation package --template-file=path/to/tempfile`. Can we please add an option to accept the `template-file` from STDIN to allow this flow: `generate_cloudformation | aws cloudformation package`",1,aws cloudformation package allow taking the file from stdin currently if we generate a cloudformation template we have to store it in a temporary location so that the aws cloudformation package template file path to tempfile can we please add an option to accept the template file from stdin to allow this flow generate cloudformation aws cloudformation package ,1 2045,11305392425.0,IssuesEvent,2020-01-18 05:07:51,home-assistant/home-assistant-polymer,https://api.github.com/repos/home-assistant/home-assistant-polymer,closed,Automation editor exposing !secret values,editor: automation,"Home Assistant release (hass --version): 0.85.0 Component/platform: Automation Editor Description of problem: When editing a file that includes a secret it replaces the !secret value with the actual value. Expected: It should leave the line as !secret value ",1.0,"Automation editor exposing !secret values - Home Assistant release (hass --version): 0.85.0 Component/platform: Automation Editor Description of problem: When editing a file that includes a secret it replaces the !secret value with the actual value. Expected: It should leave the line as !secret value ",1,automation editor exposing secret values home assistant release hass version component platform automation editor description of problem when editing a file that includes a secret it replaces the secret value with the actual value expected it should leave the line as secret value ,1 7812,25730461725.0,IssuesEvent,2022-12-07 19:53:05,kotools/libraries,https://api.github.com/repos/kotools/libraries,opened,Ubuntu 22 to 20,bug automation types,"## Description Downgrade the Ubuntu image used in the `types-integration.yml` file from the version 22 to 20. ## Checklist - [ ] Fix. ",1.0,"Ubuntu 22 to 20 - ## Description Downgrade the Ubuntu image used in the `types-integration.yml` file from the version 22 to 20. ## Checklist - [ ] Fix. 
",1,ubuntu to description downgrade the ubuntu image used in the types integration yml file from the version to checklist fix ,1 593299,17954688662.0,IssuesEvent,2021-09-13 05:35:07,googleapis/google-api-dotnet-client,https://api.github.com/repos/googleapis/google-api-dotnet-client,opened,The SSL connection could not be established,type: question priority: p3,"Hi, I am able to implement this in my small project and all is working in my local project hosted IIS Express but when I uploaded to my CentOS 7 server behind **Nginx Reverse proxy** all is working except when it redirect to my endpoint I get this error ~~~ The SSL connection could not be established ~~~ > https://prnt.sc/1s0sp7c I have tried this method of resolving : - UseForwardedHeaders - UseCertificateForwarding - Already install SSL Cert in Nginx But still I get that error, I was just wondering if anybody know to work around this issue",1.0,"The SSL connection could not be established - Hi, I am able to implement this in my small project and all is working in my local project hosted IIS Express but when I uploaded to my CentOS 7 server behind **Nginx Reverse proxy** all is working except when it redirect to my endpoint I get this error ~~~ The SSL connection could not be established ~~~ > https://prnt.sc/1s0sp7c I have tried this method of resolving : - UseForwardedHeaders - UseCertificateForwarding - Already install SSL Cert in Nginx But still I get that error, I was just wondering if anybody know to work around this issue",0,the ssl connection could not be established hi i am able to implement this in my small project and all is working in my local project hosted iis express but when i uploaded to my centos server behind nginx reverse proxy all is working except when it redirect to my endpoint i get this error the ssl connection could not be established i have tried this method of resolving useforwardedheaders usecertificateforwarding already install ssl cert in nginx but still i get that error i was just wondering if anybody know to work around this issue,0 12285,3594380268.0,IssuesEvent,2016-02-01 23:25:19,syl20bnr/spacemacs,https://api.github.com/repos/syl20bnr/spacemacs,closed,‘open the quickhelp’ suspended,Beginner friendly documentation :->,"OS: Slackware-current When I start Emacs, bottom line prompt ""open the quickhelp"". Emacs suspended animation without response, about 1 ~ 2 minutes after only can response keyboard action, which is what reason? Spacemacs for the first time, thank you.",1.0,"‘open the quickhelp’ suspended - OS: Slackware-current When I start Emacs, bottom line prompt ""open the quickhelp"". Emacs suspended animation without response, about 1 ~ 2 minutes after only can response keyboard action, which is what reason? Spacemacs for the first time, thank you.",0,‘open the quickhelp’ suspended os slackware current when i start emacs bottom line prompt open the quickhelp emacs suspended animation without response about minutes after only can response keyboard action which is what reason spacemacs for the first time thank you ,0 9575,2615162848.0,IssuesEvent,2015-03-01 06:42:17,chrsmith/reaver-wps,https://api.github.com/repos/chrsmith/reaver-wps,opened,Unable to associate,auto-migrated Priority-Triage Type-Defect,"``` 0. What version of Reaver are you using? (Only defects against the latest version will be considered.) 1.4 1. What operating system are you using (Linux is the only supported OS)? Xubuntu 12.04 2. Is your wireless card in monitor mode (yes/no)? Yes 3. 
What is the signal strength of the Access Point you are trying to crack? Different 4. What is the manufacturer and model # of the device you are trying to crack? Netgear wgr614 5. What is the entire command line string you are supplying to reaver? default one, nothing changed 6. Please describe what you think the issue is. Not supported driver 7. Paste the output from Reaver below. [+] Waiting for beacon from {BSSID} [+] Switching mon0 to channel 6 [!] WARNING: Failed to associate with {BSSID} [!] WARNING: Failed to associate with {BSSID} [!] WARNING: Failed to associate with {BSSID} ---Or switches between channel a lot and again spams [!] WARNING: Failed to associate with {BSSID} [!] WARNING: Failed to associate with {BSSID} [!] WARNING: Failed to associate with {BSSID} ``` Original issue reported on code.google.com by `hdbus...@gmail.com` on 9 Jun 2012 at 4:40",1.0,"Unable to associate - ``` 0. What version of Reaver are you using? (Only defects against the latest version will be considered.) 1.4 1. What operating system are you using (Linux is the only supported OS)? Xubuntu 12.04 2. Is your wireless card in monitor mode (yes/no)? Yes 3. What is the signal strength of the Access Point you are trying to crack? Different 4. What is the manufacturer and model # of the device you are trying to crack? Netgear wgr614 5. What is the entire command line string you are supplying to reaver? default one, nothing changed 6. Please describe what you think the issue is. Not supported driver 7. Paste the output from Reaver below. [+] Waiting for beacon from {BSSID} [+] Switching mon0 to channel 6 [!] WARNING: Failed to associate with {BSSID} [!] WARNING: Failed to associate with {BSSID} [!] WARNING: Failed to associate with {BSSID} ---Or switches between channel a lot and again spams [!] WARNING: Failed to associate with {BSSID} [!] WARNING: Failed to associate with {BSSID} [!] WARNING: Failed to associate with {BSSID} ``` Original issue reported on code.google.com by `hdbus...@gmail.com` on 9 Jun 2012 at 4:40",1,unable to associate what version of reaver are you using only defects against the latest version will be considered what operating system are you using linux is the only supported os xubuntu is your wireless card in monitor mode yes no yes what is the signal strength of the access point you are trying to crack different what is the manufacturer and model of the device you are trying to crack netgear what is the entire command line string you are supplying to reaver default one nothing changed please describe what you think the issue is not supported driver paste the output from reaver below waiting for beacon from bssid switching to channel warning failed to associate with bssid warning failed to associate with bssid warning failed to associate with bssid or switches between channel a lot and again spams warning failed to associate with bssid warning failed to associate with bssid warning failed to associate with bssid original issue reported on code google com by hdbus gmail com on jun at ,0 8853,27172331696.0,IssuesEvent,2023-02-17 20:41:02,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Instruction Not Updated,type:bug area:Docs automation:Closed,"As shown in the screenshot below, the response returned from OneDrive now will have one more field called `resourceId` in the JSON. It will also have another field called `statusDescription` when we are copying a drive item which is a folder. 
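To make the observed shape concrete, here is a minimal polling sketch in Python. Only `resourceId`, `statusDescription`, and the `errorCode` field mentioned next are taken from this report; the monitor URL and the `status` values are assumptions based on the usual long-running-operation pattern, not confirmed API surface.

```python
import time

import requests

# Hypothetical monitor URL, as returned in the copy response's Location header.
MONITOR_URL = "https://api.onedrive.com/monitor/<operation-id>"

while True:
    op = requests.get(MONITOR_URL).json()
    # statusDescription is the extra field reported above for folder copies.
    print(op.get("status"), op.get("statusDescription"))
    if op.get("errorCode"):
        # Present when the copy operation fails, per this report.
        raise RuntimeError(op["errorCode"])
    if op.get("status") == "completed":
        # resourceId is the other undocumented field reported above.
        print("new item id:", op.get("resourceId"))
        break
    time.sleep(1)
```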
In addition, if the operation of copy fails, then the JSON will have another field called `errorCode`. Finally, the ""Retrieve the results of the completed operation"" section is not finished by the author. Please look into it. Thank you for your time attending to this. =) ![image](https://user-images.githubusercontent.com/8535306/103223194-a5a9f880-4960-11eb-8e96-7c57106fbc2f.png) --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: ccad95fe-ccea-c668-17c3-90781dc6b416 * Version Independent ID: cdd73909-251b-ed82-6a3b-778c7f5ce08a * Content: [Calling long running actions - OneDrive API - OneDrive dev center](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/concepts/long-running-actions?view=odsp-graph-online) * Content Source: [docs/rest-api/concepts/long-running-actions.md](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/rest-api/concepts/long-running-actions.md) * Product: **onedrive** * GitHub Login: @rgregg * Microsoft Alias: **rgregg**",1.0,"Instruction Not Updated - As shown in the screenshot below, the response returned from OneDrive now will have one more field called `resourceId` in the JSON. It will also have another field called `statusDescription` when we are copying a drive item which is a folder. In addition, if the operation of copy fails, then the JSON will have another field called `errorCode`. Finally, the ""Retrieve the results of the completed operation"" section is not finished by the author. Please look into it. Thank you for your time attending to this. =) ![image](https://user-images.githubusercontent.com/8535306/103223194-a5a9f880-4960-11eb-8e96-7c57106fbc2f.png) --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: ccad95fe-ccea-c668-17c3-90781dc6b416 * Version Independent ID: cdd73909-251b-ed82-6a3b-778c7f5ce08a * Content: [Calling long running actions - OneDrive API - OneDrive dev center](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/concepts/long-running-actions?view=odsp-graph-online) * Content Source: [docs/rest-api/concepts/long-running-actions.md](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/rest-api/concepts/long-running-actions.md) * Product: **onedrive** * GitHub Login: @rgregg * Microsoft Alias: **rgregg**",1,instruction not updated as shown in the screenshot below the response returned from onedrive now will have one more field called resourceid in the json it will also have another field called statusdescription when we are copying a drive item which is a folder in addition if the operation of copy fails then the json will have another field called errorcode finally the retrieve the results of the completed operation section is not finished by the author please look into it thank you for your time attending to this document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id ccea version independent id content content source product onedrive github login rgregg microsoft alias rgregg ,1 101023,4106253656.0,IssuesEvent,2016-06-06 07:56:42,duckduckgo/zeroclickinfo-spice,https://api.github.com/repos/duckduckgo/zeroclickinfo-spice,closed,"Shorten: API Redirecting to HTTPS, Code Update Required",Bug Low-Hanging Fruit Maintainer Input Requested Maintainer Timeout Priority: High,"This IA has been taken offline because it started failing. 
The API now redirects to HTTPS so we need to update the `spice to` accordingly in order to get this IA online again. ------ IA Page: http://duck.co/ia/view/shorten [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @danjarvis",1.0,"Shorten: API Redirecting to HTTPS, Code Update Required - This IA has been taken offline because it started failing. The API now redirects to HTTPS so we need to update the `spice to` accordingly in order to get this IA online again. ------ IA Page: http://duck.co/ia/view/shorten [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @danjarvis",0,shorten api redirecting to https code update required this ia has been taken offline because it started failing the api now redirects to https so we need to updated the spice to accordingly in order to get this ia online again ia page danjarvis,0 255669,21943916629.0,IssuesEvent,2022-05-23 21:19:34,brave/brave-browser,https://api.github.com/repos/brave/brave-browser,closed,Claim ad Rewards notification is not dismissed even after claiming it,bug feature/rewards QA/Yes QA/Test-Plan-Specified," ## Description Claim ad Rewards notification is not dismissed even after claiming it ## Steps to Reproduce 1. clean profile 1.9.x 2. enable rewards 3. restore ad grants (I have restored production ad grants from my daily driver) 4. upgrade profile to 1.10.x 5. open NTP and click on `Claim my Rewards` in `Way to Go! Your Rewards from ads here` modal 6. `Claim my Rewards` dismissed in that tab 7. open NTP's again `Claim my Rewards` modal popup is shown again ## Actual result: Claim ad Rewards notification is not dismissed even after claiming it ![image](https://user-images.githubusercontent.com/38657976/83874404-04a0e980-a753-11ea-8abd-22e578cfa83c.png) ## Expected result: Claim ad Rewards notification should be dismissed after claiming it ## Reproduces how often: Always ## Brave version (brave://version info) Brave | 1.10.86 Chromium: 83.0.4103.61 (Official Build) (64-bit) -- | -- Revision | 94f915a8d7c408b09cc7352161ad592299f384d2-refs/branch-heads/4103@{#561} OS | Windows 10 OS Version 1803 (Build 17134.1006) ## Version/Channel Information: - Can you reproduce this issue with the current release? Yes - Can you reproduce this issue with the beta channel? Yes - Can you reproduce this issue with the dev channel? Yes - Can you reproduce this issue with the nightly channel? Not sure ## Other Additional Information: - Does the issue resolve itself when disabling Brave Shields? NA - Does the issue resolve itself when disabling Brave Rewards? 
NA - Is the issue reproducible on the latest version of Chrome? NA ## Miscellaneous Information: cc: @brave/legacy_qa @tmancey ",1.0,"Claim ad Rewards notification is not dismissed even after claiming it - ## Description Claim ad Rewards notification is not dismissed even after claiming it ## Steps to Reproduce 1. clean profile 1.9.x 2. enable rewards 3. restore ad grants (I have restored production ad grants from my daily driver) 4. upgrade profile to 1.10.x 5. open NTP and click on `Claim my Rewards` in `Way to Go! Your Rewards from ads here` modal 6. `Claim my Rewards` dismissed in that tab 7. open NTP's again `Claim my Rewards` modal popup is shown again ## Actual result: Claim ad Rewards notification is not dismissed even after claiming it ![image](https://user-images.githubusercontent.com/38657976/83874404-04a0e980-a753-11ea-8abd-22e578cfa83c.png) ## Expected result: Claim ad Rewards notification should be dismissed after claiming it ## Reproduces how often: Always ## Brave version (brave://version info) Brave | 1.10.86 Chromium: 83.0.4103.61 (Official Build) (64-bit) -- | -- Revision | 94f915a8d7c408b09cc7352161ad592299f384d2-refs/branch-heads/4103@{#561} OS | Windows 10 OS Version 1803 (Build 17134.1006) ## Version/Channel Information: - Can you reproduce this issue with the current release? Yes - Can you reproduce this issue with the beta channel? Yes - Can you reproduce this issue with the dev channel? Yes - Can you reproduce this issue with the nightly channel? Not sure ## Other Additional Information: - Does the issue resolve itself when disabling Brave Shields? NA - Does the issue resolve itself when disabling Brave Rewards? NA - Is the issue reproducible on the latest version of Chrome? NA ## Miscellaneous Information: cc: @brave/legacy_qa @tmancey ",0,claim ad rewards notification is not dismissed even after claiming it have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description claim ad rewards notification is not dismissed even after claiming it steps to reproduce clean profile x enable rewards restore ad grants i have restored production ad grants from my daily driver upgrade profile to x open ntp and click on claim my rewards in way to go your rewards from ads here modal claim my rewards dismissed in that tab open ntp s again claim my rewards modal popup is shown again actual result claim ad rewards notification is not dismissed even after claiming it expected result claim ad rewards notification should be dismissed after claiming it reproduces how often always brave version brave version info brave chromium   official build   bit revision refs branch heads os windows  os version build version channel information can you reproduce this issue with the current release yes can you reproduce this issue with the beta channel yes can you reproduce this issue with the dev channel yes can you reproduce this issue with the nightly channel not sure other additional information does the issue resolve itself when disabling brave shields na does the issue resolve itself when disabling brave rewards na is the issue reproducible on the latest version of chrome na miscellaneous information cc brave legacy qa tmancey ,0 177920,6588574676.0,IssuesEvent,2017-09-14 
04:09:04,FreeAndFair/ColoradoRLA,https://api.github.com/repos/FreeAndFair/ColoradoRLA,reopened,Revised CVR file format; new error messages needed for upload,CDOS Priority client feature server,"To comply with Colorado law protecting voter anonymity, CDOS will ask Counties to remove the ""counting group"" column from the CVR file before hashing and uploading the file. To do: 1. alter server-side as necessary to process file without the column 2. if the column of the CVR file has not been deleted, refuse to import contents to db 3. if the column of the CVR file has not been deleted, give County user an error message: ""Import failed. Please remove Counting Group column, rehash and re-upload.""",1.0,"Revised CVR file format; new error messages needed for upload - To comply with Colorado law protecting voter anonymity, CDOS will ask Counties to remove the ""counting group"" column from the CVR file before hashing and uploading the file. To do: 1. alter server-side as necessary to process file without the column 2. if the column of the CVR file has not been deleted, refuse to import contents to db 3. if the column of the CVR file has not been deleted, give County user an error message: ""Import failed. Please remove Counting Group column, rehash and re-upload.""",0,revised cvr file format new error messages needed for upload to comply with colorado law protecting voter anonymity cdos will ask counties to remove the counting group column from the cvr file before hashing and uploading the file to do alter server side as necessary to process file without the column if the column of the cvr file has not been deleted refuse to import contents to db if the column of the cvr file has not been deleted give county user an error message import failed please remove counting group column rehash and re upload ,0 10120,31724602409.0,IssuesEvent,2023-09-10 20:03:38,AzureAD/microsoft-authentication-library-for-objc,https://api.github.com/repos/AzureAD/microsoft-authentication-library-for-objc,opened,Automation tests failure,automation failure,"@AzureAD/appleidentity Automation failed for [AzureAD/microsoft-authentication-library-for-objc](https://github.com/AzureAD/microsoft-authentication-library-for-objc) ran against commit : Merge pull request #1831 from AzureAD/mipetriu/main_to_dev [4158c0129d29ab8f5951f040cf9e0dce4877a815] Pipeline URL : [https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1170321&view=logs](https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1170321&view=logs)",1.0,"Automation tests failure - @AzureAD/appleidentity Automation failed for [AzureAD/microsoft-authentication-library-for-objc](https://github.com/AzureAD/microsoft-authentication-library-for-objc) ran against commit : Merge pull request #1831 from AzureAD/mipetriu/main_to_dev [4158c0129d29ab8f5951f040cf9e0dce4877a815] Pipeline URL : [https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1170321&view=logs](https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1170321&view=logs)",1,automation tests failure azuread appleidentity automation failed for ran against commit merge pull request from azuread mipetriu main to dev pipeline url ,1 78957,9811513829.0,IssuesEvent,2019-06-13 00:05:45,Automattic/wp-calypso,https://api.github.com/repos/Automattic/wp-calypso,opened,Inconsistent spacing between form fields on Account Settings page,Design [Type] Enhancement,"#### Steps to reproduce 1. Starting at URL: https://wordpress.com/me/account 2. 
Observe how the spacing between some of the form fields is inconsistent. E.g. between ""Email Address"" and ""Primary Site"" vs ""Primary Site"" and ""Web Address"". #### What I expected The fields to have even spacing between them throughout the form. #### What happened instead The spacing is inconsistent #### Browser / OS version All browsers and versions. #### Screenshot / Video ",1.0,"Inconsistent spacing between form fields on Account Settings page - #### Steps to reproduce 1. Starting at URL: https://wordpress.com/me/account 2. Observe how the spacing between some of the form fields is inconsistent. E.g. between ""Email Address"" and ""Primary Site"" vs ""Primary Site"" and ""Web Address"". #### What I expected The fields to have even spacing between them throughout the form. #### What happened instead The spacing is inconsistent #### Browser / OS version All browsers and versions. #### Screenshot / Video ",0,inconsistent spacing between form fields on account settings page steps to reproduce starting at url observe how the spacing between some of the form fields is inconsistent e g between email address and primary site vs primary site and web address what i expected the fields to have even spacing between them throughout the form what happened instead the spacing is inconsistent browser os version all browsers and versions screenshot video img width alt screen shot at pm src ,0 145527,11697687032.0,IssuesEvent,2020-03-06 12:22:03,SPW-DIG/metawal-core-geonetwork,https://api.github.com/repos/SPW-DIG/metawal-core-geonetwork,closed,ISO 19139 / Information related to the projection system,Env prod - OK Env test - OK Env valid - OK criticité.mineur,"I am creating this ticket following the critical problem encountered by the Géoportail after the change to the structure of the tags describing the projection system. In summary, here is my understanding of the operational chain behind the current behaviour of the download basket: 1. 
The projection system is encoded in Metawal in the ISO19115-3 standard, which makes it easy to encode the projection system with its value, its reference URL, a full label and the projection type (see http://metawal.wallonie.be/geonetwork/srv/api/records/12b56944-5a6a-4e0e-b8d8-41c47390816f/formatters/xml?approved=true) 2. This information is sent via the CSW service in the ISO19139 standard (the standard imposed by the INSPIRE directive), on which the Géoportail relies to obtain its information. This is where we keep just the value with the 'mw' schema, and value + URL with the 'gmd' schema. 3. The XML is parsed and loaded into the DB 4. The Géoportail interface interprets this information 5. A label is looked up from the URL 6. The download basket uses this label 7. The FME data-extraction process relies on this label It still seems abnormal to me that a simple change such as switching from a characterstring to an anchor can cause a problem as critical as bringing down the download process on the Géoportail. In short, I think we could lighten the processing chain somewhat if we proposed keeping the ""label"" (mcc:description/gco:characterstring in 115-3) during the conversion to ISO 19139. My questions are the following: Is this allowed in 19139? Is it INSPIRE-compliant? If not, is this the kind of thing we could implement in the ""mw"" schema? ",0,"ISO 19139 / Information related to the projection system - I am creating this ticket following the critical problem encountered by the Géoportail after the change to the structure of the tags describing the projection system. In summary, here is my understanding of the operational chain behind the current behaviour of the download basket: 1. The projection system is encoded in Metawal in the ISO19115-3 standard, which makes it easy to encode the projection system with its value, its reference URL, a full label and the projection type (see http://metawal.wallonie.be/geonetwork/srv/api/records/12b56944-5a6a-4e0e-b8d8-41c47390816f/formatters/xml?approved=true) 2. This information is sent via the CSW service in the ISO19139 standard (the standard imposed by the INSPIRE directive), on which the Géoportail relies to obtain its information. This is where we keep just the value with the 'mw' schema, and value + URL with the 'gmd' schema. 3. The XML is parsed and loaded into the DB 4. The Géoportail interface interprets this information 5. A label is looked up from the URL 6. The download basket uses this label 7. The FME data-extraction process relies on this label It still seems abnormal to me that a simple change such as switching from a characterstring to an anchor can cause a problem as critical as bringing down the download process on the Géoportail. In short, I think we could lighten the processing chain somewhat if we proposed keeping the ""label"" (mcc:description/gco:characterstring in 115-3) during the conversion to ISO 19139. My questions are the following: Is this allowed in 19139? Is it INSPIRE-compliant? If not, is this the kind of thing we could implement in the ""mw"" schema? 
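An illustrative consumer-side fallback for the question above, sketched with lxml: read the reference-system code from a 19139 record, prefer the gmx:Anchor form (label plus xlink:href), and fall back to a plain gco:CharacterString. The namespace URIs are the standard ISO 19139 ones; the local file name is a placeholder.

```python
from lxml import etree

NS = {
    "gmd": "http://www.isotc211.org/2005/gmd",
    "gco": "http://www.isotc211.org/2005/gco",
    "gmx": "http://www.isotc211.org/2005/gmx",
}
XLINK_HREF = "{http://www.w3.org/1999/xlink}href"

tree = etree.parse("record-19139.xml")  # placeholder: a local copy of the record
code = tree.find(".//gmd:RS_Identifier/gmd:code", NS)

anchor = code.find("gmx:Anchor", NS) if code is not None else None
if anchor is not None:
    # Anchor form: human-readable label plus a reference URL.
    label, url = anchor.text, anchor.get(XLINK_HREF)
else:
    # CharacterString form: value only, no URL to resolve a label from.
    label = code.findtext("gco:CharacterString", namespaces=NS) if code is not None else None
    url = None

print(label, url)
```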
",0,iso informations liées au système de projection je créé ce ticket suit au problème critique rencontré par le géoportail suite à la modification de la structure des balises concernant les système de projection en résumé voici ce que j ai compris de la chaîne opérationnelle permettant le fonctionnement actuel du panier de téléchargement encodage dans metawal du système de projection dans la norme permettant d’encoder facilement le système de projection avec sa valeur son url de référence un label complet et le type de projection voir on envoie cette information via le service csw dans la norme norme imposée par la directive inspire standard sur lequel s’appuie le géoportail pour obtenir les infos c’est ici que l’on conserve juste la valeur avec le schéma ‘mw’ et valeur url avec le schéma ‘gmd’ on parse le xml et on l’envoie en db l’interface du géoportail interprète ces infos on va rechercher une info de label grâce à l’url le panier de téléchargement utilise ce label le process fme d’extract de la donnée s’appuie sur ce label il me paraît quand même anormal qu une simple modification comme le passage d un characterstring à une anchor puisse poser un problème aussi critique que l arrêt de fonctionnement du process de téléchargement sur le géoportail bref je pense qu on pourrait un minimum alléger la chaîne de traitement si on proposait de conserver le label mcc description gco characterstring en lors du passage en iso mes questions sont les suivantes est ce autorisé en est ce conforme inspire si pas est ce que c est le genre de chose qu on pourrait implémenter dans le schéma mw ,0 9134,27603832100.0,IssuesEvent,2023-03-09 11:42:02,red-hat-storage/ocs-ci,https://api.github.com/repos/red-hat-storage/ocs-ci,opened,Add OBC support for 4.10 UI testing,MCG ui_automation,"The `views.py::locators.4.10` dict doesn't have an `obc` entry so ` test_obc_creation_and_deletion` fails with a KeyError while trying to initialize ObcUI: ``` class ObcUI(PageNavigator): """""" A class representation for abstraction of OBC-related OpenShift UI actions """""" def __init__(self, driver): super().__init__(driver) ocs_version = f""{version.get_ocs_version_from_csv(only_major_minor=True)}"" self.obc_loc = locators[ocs_version][""obc""] ... ... 
Message: KeyError: 'obc' ``` Runs: [test_obc_creation_and_deletion[openshift-storage.noobaa.io-noobaa-default-bucket-class-Actions]-02/24/2023](https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/432/9240/409505/409641/409643/log?item0Params=filter.eq.hasStats%3Dtrue%26filter.eq.hasChildren%3Dfalse%26filter.in.issueType%3Dti001%252Cti_1h7tquhpjupuu%252Cti_u7ukrfvrt1yu%252Cti_qxkzvw4t6ipf%252Cti_1h7u8s8jf8tvb) [test_obc_creation_and_deletion[openshift-storage.noobaa.io-noobaa-default-bucket-class-three_dots]-02/24/2023](https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/432/9240/409505/409641/409642/log?item0Params=filter.eq.hasStats%3Dtrue%26filter.eq.hasChildren%3Dfalse%26filter.in.issueType%3Dti001%252Cti_1h7tquhpjupuu%252Cti_u7ukrfvrt1yu%252Cti_qxkzvw4t6ipf%252Cti_1h7u8s8jf8tvb) [test_obc_creation_and_deletion[openshift-storage.noobaa.io-noobaa-default-bucket-class-three_dots]-02/25/2023](https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/432/9273/411686/411822/411824/log?item0Params=filter.eq.hasStats%3Dtrue%26filter.eq.hasChildren%3Dfalse%26filter.in.issueType%3Dti001%252Cti_1h7tquhpjupuu%252Cti_u7ukrfvrt1yu%252Cti_qxkzvw4t6ipf%252Cti_1h7u8s8jf8tvb) [test_obc_creation_and_deletion[openshift-storage.noobaa.io-noobaa-default-bucket-class-three_dots]-02/25/2023](https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/432/9273/411686/411822/411823/log?item0Params=filter.eq.hasStats%3Dtrue%26filter.eq.hasChildren%3Dfalse%26filter.in.issueType%3Dti001%252Cti_1h7tquhpjupuu%252Cti_u7ukrfvrt1yu%252Cti_qxkzvw4t6ipf%252Cti_1h7u8s8jf8tvb)",1.0,"Add OBC support for 4.10 UI testing - The `views.py::locators.4.10` dict doesn't have an `obc` entry so ` test_obc_creation_and_deletion` fails with a KeyError while trying to initialize ObcUI: ``` class ObcUI(PageNavigator): """""" A class representation for abstraction of OBC-related OpenShift UI actions """""" def __init__(self, driver): super().__init__(driver) ocs_version = f""{version.get_ocs_version_from_csv(only_major_minor=True)}"" self.obc_loc = locators[ocs_version][""obc""] ... ... 
Message: KeyError: 'obc' ``` Runs: [test_obc_creation_and_deletion[openshift-storage.noobaa.io-noobaa-default-bucket-class-Actions]-02/24/2023](https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/432/9240/409505/409641/409643/log?item0Params=filter.eq.hasStats%3Dtrue%26filter.eq.hasChildren%3Dfalse%26filter.in.issueType%3Dti001%252Cti_1h7tquhpjupuu%252Cti_u7ukrfvrt1yu%252Cti_qxkzvw4t6ipf%252Cti_1h7u8s8jf8tvb) [test_obc_creation_and_deletion[openshift-storage.noobaa.io-noobaa-default-bucket-class-three_dots]-02/24/2023](https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/432/9240/409505/409641/409642/log?item0Params=filter.eq.hasStats%3Dtrue%26filter.eq.hasChildren%3Dfalse%26filter.in.issueType%3Dti001%252Cti_1h7tquhpjupuu%252Cti_u7ukrfvrt1yu%252Cti_qxkzvw4t6ipf%252Cti_1h7u8s8jf8tvb) [test_obc_creation_and_deletion[openshift-storage.noobaa.io-noobaa-default-bucket-class-three_dots]-02/25/2023](https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/432/9273/411686/411822/411824/log?item0Params=filter.eq.hasStats%3Dtrue%26filter.eq.hasChildren%3Dfalse%26filter.in.issueType%3Dti001%252Cti_1h7tquhpjupuu%252Cti_u7ukrfvrt1yu%252Cti_qxkzvw4t6ipf%252Cti_1h7u8s8jf8tvb) [test_obc_creation_and_deletion[openshift-storage.noobaa.io-noobaa-default-bucket-class-three_dots]-02/25/2023](https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/432/9273/411686/411822/411823/log?item0Params=filter.eq.hasStats%3Dtrue%26filter.eq.hasChildren%3Dfalse%26filter.in.issueType%3Dti001%252Cti_1h7tquhpjupuu%252Cti_u7ukrfvrt1yu%252Cti_qxkzvw4t6ipf%252Cti_1h7u8s8jf8tvb)",1,add obc support for ui testing the views py locators dict doesn t have an obc entry so test obc creation and deletion fails with a keyerror while trying to initialize obcui class obcui pagenavigator a class representation for abstraction of obc related openshift ui actions def init self driver super init driver ocs version f version get ocs version from csv only major minor true self obc loc locators message keyerror obc runs ,1 4027,15194296369.0,IssuesEvent,2021-02-16 03:13:57,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,closed,[Automation API] support setting list and nested fields to SetConfig ,area/automation-api kind/enhancement,"Working with list and netsted struct is supported on the cli with the `path` flag, i.e.,`pulumi config set --path`. But currently the auto.Stack.SetConfig API supports only ""key-value"". Maybe add an extra bool parameter to SetConfig, and set config with the `--path` flag when it is true.",1.0,"[Automation API] support setting list and nested fields to SetConfig - Working with list and netsted struct is supported on the cli with the `path` flag, i.e.,`pulumi config set --path`. But currently the auto.Stack.SetConfig API supports only ""key-value"". 
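For context before the proposal that follows, a minimal sketch of the flat key/value call that the Automation API exposes today, written against the Python flavor of the API (stack, project, and config names are placeholders). The nested form being requested exists only on the CLI, shown in the trailing comment.

```python
from pulumi import automation as auto

def program():
    pass  # resources would be declared here

stack = auto.create_or_select_stack(
    stack_name="dev", project_name="demo", program=program
)

# Today: flat key/value only.
stack.set_config("aws:region", auto.ConfigValue(value="us-west-2"))

# What the issue asks to expose, currently CLI-only:
#   pulumi config set --path 'data[0].active' true
```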
Maybe add an extra bool parameter to SetConfig, and set config with the `--path` flag when it is true.",1, support setting list and nested fields to setconfig working with list and netsted struct is supported on the cli with the path flag i e pulumi config set path but currently the auto stack setconfig api supports only key value maybe add an extra bool parameter to setconfig and set config with the path flag when it is true ,1 5516,19863636774.0,IssuesEvent,2022-01-22 06:58:07,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Jobs with status MaintenanceWindowExceeded not shown in Log Analytics,automation/svc triaged cxp doc-enhancement update-management/subsvc Pri2,"Hi, I've noted that I am not able to find jobs with ""MaintenanceWindowExceeded"", even though I see them in this way in the Azure Portal, when I do distinct on the UpdateRunProgress by the InstallationStaatus over the last 7 days: UpdateRunProgress | distinct InstallationStatus all I got is: NotStarted NotIncluded Succeeded Download Failed Install Failed However I got some jobs visible in the portal as ""MaintenanceWindowExceeded"" from 16.01 and 18.01- is there something I am doing wrong in the KQL? ![image](https://user-images.githubusercontent.com/19157221/149971139-0e70de3e-42b9-45ae-8c5b-421ac770ed20.png) [Enter feedback here] --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 478e860a-46f6-3e08-4138-8f6d8c8de57c * Version Independent ID: 69e9f9ea-614d-7822-c9c8-430bf1d7c83a * Content: [Query Azure Automation Update Management logs](https://docs.microsoft.com/en-us/azure/automation/update-management/query-logs) * Content Source: [articles/automation/update-management/query-logs.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/update-management/query-logs.md) * Service: **automation** * Sub-service: **update-management** * GitHub Login: @SGSneha * Microsoft Alias: **v-ssudhir**",1.0,"Jobs with status MaintenanceWindowExceeded not shown in Log Analytics - Hi, I've noted that I am not able to find jobs with ""MaintenanceWindowExceeded"", even though I see them in this way in the Azure Portal, when I do distinct on the UpdateRunProgress by the InstallationStaatus over the last 7 days: UpdateRunProgress | distinct InstallationStatus all I got is: NotStarted NotIncluded Succeeded Download Failed Install Failed However I got some jobs visible in the portal as ""MaintenanceWindowExceeded"" from 16.01 and 18.01- is there something I am doing wrong in the KQL? ![image](https://user-images.githubusercontent.com/19157221/149971139-0e70de3e-42b9-45ae-8c5b-421ac770ed20.png) [Enter feedback here] --- #### Document Details ⚠ *Do not edit this section. 
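An aside on the KQL question above: widening the time range and aggregating explicitly makes it easier to see whether MaintenanceWindowExceeded rows ever land in UpdateRunProgress. A sketch with the azure-monitor-query SDK; the workspace id is a placeholder, and whether the service writes that status to this table at all is exactly what the question is about.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Count every InstallationStatus seen in the last 7 days.
QUERY = """
UpdateRunProgress
| summarize n = count() by InstallationStatus
"""

response = client.query_workspace(
    workspace_id="<workspace-guid>",  # placeholder
    query=QUERY,
    timespan=timedelta(days=7),
)
for table in response.tables:  # partial results ignored for brevity
    for row in table.rows:
        print(list(row))
```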
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 478e860a-46f6-3e08-4138-8f6d8c8de57c * Version Independent ID: 69e9f9ea-614d-7822-c9c8-430bf1d7c83a * Content: [Query Azure Automation Update Management logs](https://docs.microsoft.com/en-us/azure/automation/update-management/query-logs) * Content Source: [articles/automation/update-management/query-logs.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/update-management/query-logs.md) * Service: **automation** * Sub-service: **update-management** * GitHub Login: @SGSneha * Microsoft Alias: **v-ssudhir**",1,jobs with status maintenancewindowexceeded not shown in log analytics hi i ve noted that i am not able to find jobs with maintenancewindowexceeded even though i see them in this way in the azure portal when i do distinct on the updaterunprogress by the installationstaatus over the last days updaterunprogress distinct installationstatus all i got is notstarted notincluded succeeded download failed install failed however i got some jobs visible in the portal as maintenancewindowexceeded from and is there something i am doing wrong in the kql document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service update management github login sgsneha microsoft alias v ssudhir ,1 15545,2859705308.0,IssuesEvent,2015-06-03 12:21:33,ddavison/sublime-tabs,https://api.github.com/repos/ddavison/sublime-tabs,closed,Switching to previous/next tab does not always work as expected,I-defect R-awaiting answer,"I haven't found a reliable way to reproduce it yet but it has happened to me several times so I'm pretty sure there is a bug somewhere. So `at some point` going to the previous or next tab (using the keyboard shortcuts) is no longer switching to the tab on the left or right of the current tab and instead jumps to a completely different tab. Hope this describes accurately the behaviour I'm seeing sometimes. Let me know if you have any idea of what the problem might be, that might help me to make it reproducible.",1.0,"Switching to previous/next tab does not always work as expected - I haven't found a reliable way to reproduce it yet but it has happened to me several times so I'm pretty sure there is a bug somewhere. So `at some point` going to the previous or next tab (using the keyboard shortcuts) is no longer switching to the tab on the left or right of the current tab and instead jumps to a completely different tab. Hope this describes accurately the behaviour I'm seeing sometimes. 
Let me know if you have any idea of what the problem might be, that might help me to make it reproducible.",0,switching to previous next tab does not always work as expected i haven t found a reliable way to reproduce it yet but it has happened to me several times so i m pretty sure there is a bug somewhere so at some point going to the previous or next tab using the keyboard shortcuts is no longer switching to the tab on the left or right of the current tab and instead jumps to a completely different tab hope this describes accurately the behaviour i m seeing sometimes let me know if you have any idea of what the problem might be that might help me to make it reproducible ,0 975,8949218694.0,IssuesEvent,2019-01-25 06:37:46,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Change tracking issue (currently have open ticket with Azure Support),automation/svc cxp in-progress product-question triaged,"While I've generally had a good experience with Azure Automation and Change Tracking in particular, I've currently got a ticket in with support because I am unable to track file changes that occur at H:\Shares\[sharename]\*. I did notice the documentation has been updated to contain the following statement under Known Issues: ""For Windows files, Change Tracking does not currently detect when a new file has been added to a tracked folder path"" What I'm wondering is: Is this being addressed? This would make Change Tracking more useful to me, as I would be better able to stay on top of the changes that are occurring in my shares. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 6b338aff-04ac-4867-2aea-e9e12bb750cc * Version Independent ID: 4dc5a70f-90db-de1b-78bd-c1af42d3a89f * Content: [Track changes with Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-change-tracking#known-issues) * Content Source: [articles/automation/automation-change-tracking.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-change-tracking.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1.0,"Change tracking issue (currently have open ticket with Azure Support) - While I've generally had a good experience with Azure Automation and Change Tracking in particular, I've currently got a ticket in with support because I am unable to track file changes that occur at H:\Shares\[sharename]\*. I did notice the documentation has been updated to contain the following statement under Known Issues: ""For Windows files, Change Tracking does not currently detect when a new file has been added to a tracked folder path"" What I'm wondering is: Is this being addressed? This would make Change Tracking more useful to me, as I would be better able to stay on top of the changes that are occurring in my shares. --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 6b338aff-04ac-4867-2aea-e9e12bb750cc * Version Independent ID: 4dc5a70f-90db-de1b-78bd-c1af42d3a89f * Content: [Track changes with Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-change-tracking#known-issues) * Content Source: [articles/automation/automation-change-tracking.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-change-tracking.md) * Service: **automation** * GitHub Login: @georgewallace * Microsoft Alias: **gwallace**",1,change tracking issue currently have open ticket with azure support while i ve generally had a good experience with azure automation and change tracking in particular i ve currently got a ticket in with support because i am unable to track file changes that occur at h shares i did notice the documentation has been updated to contain the following statement under known issues for windows files change tracking does not currently detect when a new file has been added to a tracked folder path what i m wondering is is this being addressed this would make change tracking more useful to me as i would be better able to stay on top of the changes that are occurring in my shares document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login georgewallace microsoft alias gwallace ,1 107500,16761597391.0,IssuesEvent,2021-06-13 22:26:46,gms-ws-demo/nibrs,https://api.github.com/repos/gms-ws-demo/nibrs,closed,CVE-2020-1951 (Medium) detected in tika-parsers-1.18.jar - autoclosed,security vulnerability,"## CVE-2020-1951 - Medium Severity Vulnerability
    Vulnerable Library - tika-parsers-1.18.jar

    Apache Tika is a toolkit for detecting and extracting metadata and structured text content from various documents using existing parser libraries.

    Path to dependency file: nibrs/tools/nibrs-staging-data/pom.xml

    Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tika/tika-parsers/1.18/tika-parsers-1.18.jar,nibrs/web/nibrs-web/target/nibrs-web/WEB-INF/lib/tika-parsers-1.18.jar

    Dependency Hierarchy: - :x: **tika-parsers-1.18.jar** (Vulnerable Library)

    Found in HEAD commit: 9fb1c19bd26c2113d1961640de126a33eacdc946

    Found in base branch: master

    Vulnerability Details

    A carefully crafted or corrupt PSD file can cause an infinite loop in Apache Tika's PSDParser in versions 1.0-1.23.
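    Because the loop is CPU-bound, an application that feeds untrusted files to Tika can at least bound how long any single parse may run. The sketch below is a generic mitigation, not part of this advisory: the file name and the 30-second budget are illustrative assumptions, and only stock Tika entry points are used.

    ```java
    // Hedged sketch: bound the time a single Tika parse may take so one
    // crafted PSD cannot pin a worker thread forever.
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.concurrent.*;

    import org.apache.tika.metadata.Metadata;
    import org.apache.tika.parser.AutoDetectParser;
    import org.apache.tika.parser.ParseContext;
    import org.apache.tika.sax.BodyContentHandler;

    public class BoundedParse {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newSingleThreadExecutor();
            Path file = Path.of("suspect.psd"); // hypothetical input

            Future<String> text = pool.submit(() -> {
                try (InputStream in = Files.newInputStream(file)) {
                    BodyContentHandler handler = new BodyContentHandler(-1); // no write limit
                    Metadata metadata = new Metadata();
                    new AutoDetectParser().parse(in, handler, metadata, new ParseContext());
                    return handler.toString();
                }
            });

            try {
                System.out.println(text.get(30, TimeUnit.SECONDS));
            } catch (TimeoutException e) {
                text.cancel(true); // best effort; a tight parse loop may ignore interruption
                System.err.println("Parse exceeded time budget: " + file);
            } finally {
                pool.shutdownNow();
            }
        }
    }
    ```

    The cancel is best effort, since a thread stuck in the parser loop may never observe interruption; the timeout only contains the damage, and the version upgrade suggested below is the actual fix.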

    Publish Date: 2020-03-23

    URL: CVE-2020-1951

    CVSS 3 Score Details (5.5)

    Base Score Metrics:
    - Exploitability Metrics:
      - Attack Vector: Local
      - Attack Complexity: Low
      - Privileges Required: None
      - User Interaction: Required
    - Scope: Unchanged
    - Impact Metrics:
      - Confidentiality Impact: None
      - Integrity Impact: None
      - Availability Impact: High
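    The 5.5 follows mechanically from those metrics. The sketch below is a verification aid, not scanner output: it applies the published CVSS v3.1 base-score formula, with the specification's weights for each metric value, and arrives at the same number.

    ```java
    // Reproduces the 5.5 base score from the vector AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H.
    // All constants are the CVSS v3.1 specification's published weights, not values
    // taken from this report.
    public class Cvss31Check {
        public static void main(String[] args) {
            double av = 0.55, ac = 0.77, pr = 0.85, ui = 0.62; // Local, Low, None, Required
            double c = 0.0, i = 0.0, a = 0.56;                 // None, None, High

            double iss = 1 - (1 - c) * (1 - i) * (1 - a);      // impact sub-score = 0.56
            double impact = 6.42 * iss;                        // coefficient for Scope: Unchanged
            double exploitability = 8.22 * av * ac * pr * ui;

            // CVSS rounds the base score up to one decimal place.
            double base = Math.ceil(Math.min(impact + exploitability, 10.0) * 10.0) / 10.0;
            System.out.println(base); // prints 5.5
        }
    }
    ```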

    For more information on CVSS3 Scores, click here.

    Suggested Fix

    Type: Upgrade version

    Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1951

    Release Date: 2020-03-23

    Fix Resolution: org.apache.tika:tika-parsers:1.24
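    The coordinates above are an ordinary Maven version bump. A minimal sketch of the change in a consuming pom.xml, assuming the module declares the parser bundle directly (which the single-entry dependency hierarchy above suggests):

    ```xml
    <!-- Version taken from the Fix Resolution above; exact placement
         (direct dependency vs. dependencyManagement) depends on the build. -->
    <dependency>
      <groupId>org.apache.tika</groupId>
      <artifactId>tika-parsers</artifactId>
      <version>1.24</version>
    </dependency>
    ```

    Running `mvn dependency:tree -Dincludes=org.apache.tika:tika-parsers` afterwards confirms that no other path still resolves to 1.18.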

    *** - [ ] Check this box to open an automated fix PR ",True,"CVE-2020-1951 (Medium) detected in tika-parsers-1.18.jar - autoclosed - ## CVE-2020-1951 - Medium Severity Vulnerability
    Vulnerable Library - tika-parsers-1.18.jar

    Apache Tika is a toolkit for detecting and extracting metadata and structured text content from various documents using existing parser libraries.

    Path to dependency file: nibrs/tools/nibrs-staging-data/pom.xml

    Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tika/tika-parsers/1.18/tika-parsers-1.18.jar,nibrs/web/nibrs-web/target/nibrs-web/WEB-INF/lib/tika-parsers-1.18.jar

    Dependency Hierarchy: - :x: **tika-parsers-1.18.jar** (Vulnerable Library)

    Found in HEAD commit: 9fb1c19bd26c2113d1961640de126a33eacdc946

    Found in base branch: master

    Vulnerability Details

    A carefully crafted or corrupt PSD file can cause an infinite loop in Apache Tika's PSDParser in versions 1.0-1.23.

    Publish Date: 2020-03-23

    URL: CVE-2020-1951

    CVSS 3 Score Details (5.5)

    Base Score Metrics:
    - Exploitability Metrics:
      - Attack Vector: Local
      - Attack Complexity: Low
      - Privileges Required: None
      - User Interaction: Required
    - Scope: Unchanged
    - Impact Metrics:
      - Confidentiality Impact: None
      - Integrity Impact: None
      - Availability Impact: High

    For more information on CVSS3 Scores, click here.

    Suggested Fix

    Type: Upgrade version

    Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1951

    Release Date: 2020-03-23

    Fix Resolution: org.apache.tika:tika-parsers:1.24

    *** - [ ] Check this box to open an automated fix PR ",0,cve medium detected in tika parsers jar autoclosed cve medium severity vulnerability vulnerable library tika parsers jar apache tika is a toolkit for detecting and extracting metadata and structured text content from various documents using existing parser libraries path to dependency file nibrs tools nibrs staging data pom xml path to vulnerable library home wss scanner repository org apache tika tika parsers tika parsers jar nibrs web nibrs web target nibrs web web inf lib tika parsers jar canner repository org apache tika tika parsers tika parsers jar home wss scanner repository org apache tika tika parsers tika parsers jar home wss scanner repository org apache tika tika parsers tika parsers jar home wss scanner repository org apache tika tika parsers tika parsers jar home wss scanner repository org apache tika tika parsers tika parsers jar home wss scanner repository org apache tika tika parsers tika parsers jar home wss scanner repository org apache tika tika parsers tika parsers jar home wss scanner repository org apache tika tika parsers tika parsers jar home wss scanner repository org apache tika tika parsers tika parsers jar canner repository org apache tika tika parsers tika parsers jar home wss scanner repository org apache tika tika parsers tika parsers jar dependency hierarchy x tika parsers jar vulnerable library found in head commit a href found in base branch master vulnerability details a carefully crafted or corrupt psd file can cause an infinite loop in apache tika s psdparser in versions publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tika tika parsers check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org apache tika tika parsers isminimumfixversionavailable true minimumfixversion org apache tika tika parsers basebranches vulnerabilityidentifier cve vulnerabilitydetails a carefully crafted or corrupt psd file can cause an infinite loop in apache tika psdparser in versions vulnerabilityurl ,0 4664,17117960493.0,IssuesEvent,2021-07-11 18:55:37,submariner-io/submariner,https://api.github.com/repos/submariner-io/submariner,closed,Use cilium/team-manager for config-file based GH Org Member/Team management,P3 S automation wontfix,"It seems the Cilium team has built and open sourced a tool we've wished for a number of times, to enable config-file based management of our GitHub Organization's Members and Teams. https://github.com/cilium/team-manager Investigate using it for Submariner, build consensus, and add the tool. ",1.0,"Use cilium/team-manager for config-file based GH Org Member/Team management - It seems the Cilium team has built and open sourced a tool we've wished for a number of times, to enable config-file based management of our GitHub Organization's Members and Teams. https://github.com/cilium/team-manager Investigate using it for Submariner, build consensus, and add the tool. 
",1,use cilium team manager for config file based gh org member team management it seems the cilium team has built and open sourced a tool we ve wished for a number of times to enable config file based management of our github organization s members and teams investigate using it for submariner build consensus and add the tool ,1 50729,21334061115.0,IssuesEvent,2022-04-18 12:28:32,elastic/kibana,https://api.github.com/repos/elastic/kibana,closed,FilterBar field selector is slow when many fields are present,Feature:Filters performance loe:hours Team:AppServicesSv impact:low,"**Kibana version:** master **Elasticsearch version:** **Server OS version:** **Browser version:** **Browser OS version:** **Original install method (e.g. download page, yum, from source, etc.):** **Describe the bug:** **Steps to reproduce:** 1. Load Discover with an index that contains several thousands of fields 2. Click Add Filter 3. The Field selector becomes laggy **Expected behavior:** The field selector becomes smoohter **Screenshots (if relevant):** ![Discover - Elastic (10)](https://user-images.githubusercontent.com/3016806/100750786-325ba800-33ef-11eb-9ae8-d06875055d41.gif) **Errors in browser console (if relevant):** **Provide logs and/or server output (if relevant):** **Any additional context:** ",1.0,"FilterBar field selector is slow when many fields are present - **Kibana version:** master **Elasticsearch version:** **Server OS version:** **Browser version:** **Browser OS version:** **Original install method (e.g. download page, yum, from source, etc.):** **Describe the bug:** **Steps to reproduce:** 1. Load Discover with an index that contains several thousands of fields 2. Click Add Filter 3. The Field selector becomes laggy **Expected behavior:** The field selector becomes smoohter **Screenshots (if relevant):** ![Discover - Elastic (10)](https://user-images.githubusercontent.com/3016806/100750786-325ba800-33ef-11eb-9ae8-d06875055d41.gif) **Errors in browser console (if relevant):** **Provide logs and/or server output (if relevant):** **Any additional context:** ",0,filterbar field selector is slow when many fields are present kibana version master elasticsearch version server os version browser version browser os version original install method e g download page yum from source etc describe the bug steps to reproduce load discover with an index that contains several thousands of fields click add filter the field selector becomes laggy expected behavior the field selector becomes smoohter screenshots if relevant errors in browser console if relevant provide logs and or server output if relevant any additional context ,0 3840,14690454017.0,IssuesEvent,2021-01-02 15:20:47,home-assistant/core,https://api.github.com/repos/home-assistant/core,closed,Automations that trigger based on a templated time do not load in 0.115,integration: automation integration: template stale,"## The problem Automations that trigger based on a state that's not ready at startup don't load properly in 0.115. 
This worked in 0.114 ## Environment - Home Assistant Core release with the issue: 0.115.2 - Last working Home Assistant Core release (if known): 0.114 - Operating environment (OS/Container/Supervised/Core): Core - Integration causing this issue: templates - Link to integration documentation on our website: ## Problem-relevant `configuration.yaml` ```yaml sensor: - platform: template sensors: shabbat_mode_start_time: friendly_name: ""Shabbat Mode Start Time"" value_template: >- {% if states.sensor.jewish_calendar_upcoming_candle_lighting %} {%- if is_state(""input_select.early_shabbat"", ""Yes"") or ( is_state(""input_select.early_shabbat"", ""Auto"") and states.sensor.time.last_updated.timetuple().tm_isdst ) -%} {% set plag = strptime(states('sensor.jewish_calendar_plag_hamincha'), '%Y-%m-%d %H:%M:%S%z') %} {{ strptime(states('sensor.jewish_calendar_upcoming_candle_lighting'), '%Y-%m-%d %H:%M:%S%z').replace(hour=plag.hour, minute=plag.minute) }} {% else %} {{ strptime(states('sensor.jewish_calendar_upcoming_candle_lighting'), '%Y-%m-%d %H:%M:%S%z') }} {% endif %} {%- else -%} Sensor not loaded {%- endif -%} ``` ```yaml - id: automation_with_sensor_timestamp trigger: - platform: template value_template: '{{ (as_timestamp(states.sensor.time.last_changed) - as_timestamp(states.sensor.shabbat_mode_start_time.state)) > 0 }}' action: ... ``` ## Traceback/Error logs ```txt 2020-09-26 21:31:18 INFO (MainThread) [homeassistant.components.automation.shabbat_downstairs_on] Initialized trigger [Shabbat] Downstairs on 2020-09-26 21:31:18 ERROR (MainThread) [homeassistant] Error doing job: Task exception was never retrieved Traceback (most recent call last): File ""/usr/local/lib/python3.8/site-packages/homeassistant/components/automation/__init__.py"", line 459, in async_enable_automation self._async_detach_triggers = await self._async_attach_triggers(True) File ""/usr/local/lib/python3.8/site-packages/homeassistant/components/automation/__init__.py"", line 490, in _async_attach_triggers return await async_initialize_triggers( File ""/usr/local/lib/python3.8/site-packages/homeassistant/helpers/trigger.py"", line 78, in async_initialize_triggers removes = await asyncio.gather(*triggers) File ""/usr/local/lib/python3.8/site-packages/homeassistant/components/template/trigger.py"", line 101, in async_attach_trigger info = async_track_template_result( File ""/usr/local/lib/python3.8/site-packages/homeassistant/helpers/event.py"", line 792, in async_track_template_result tracker.async_setup() File ""/usr/local/lib/python3.8/site-packages/homeassistant/helpers/event.py"", line 518, in async_setup self._info[template] = template.async_render_to_info(variables) File ""/usr/local/lib/python3.8/site-packages/homeassistant/helpers/template.py"", line 306, in async_render_to_info render_info._result = self.async_render(variables, **kwargs) File ""/usr/local/lib/python3.8/site-packages/homeassistant/helpers/template.py"", line 285, in async_render return compiled.render(kwargs).strip() File ""/usr/local/lib/python3.8/site-packages/jinja2/environment.py"", line 1090, in render self.environment.handle_exception() File ""/usr/local/lib/python3.8/site-packages/jinja2/environment.py"", line 832, in handle_exception reraise(*rewrite_traceback_stack(source=source)) File ""/usr/local/lib/python3.8/site-packages/jinja2/_compat.py"", line 28, in reraise raise value.with_traceback(tb) File ""