Dataset Preview
The full dataset viewer is not available. Only showing a preview of the rows.
The dataset generation failed because of a cast error
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 3 new columns ({'lang', 'proj', 'idx'})
This happened while the json dataset builder was generating data using
hf://datasets/auphong2707/review-code-generation-code-changes/data/Comment_Generation/msg-valid.jsonl (at revision 59bb4f0c541d4f81332662e8adc1640a5d433a84)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
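The second remedy the message suggests can be sketched as README front-matter declaring one configuration per group of files that share a schema. This is a minimal illustration only; the config names and glob patterns below are hypothetical and would need to match the dataset's actual layout:

```yaml
configs:
- config_name: code_changes
  data_files: data/Code_Refinement/*.jsonl
- config_name: comment_generation
  data_files: data/Comment_Generation/*.jsonl
```

With separate configurations, files with the extra `lang`, `proj`, and `idx` columns no longer need to cast to the schema inferred from the other files.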
Traceback: Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
writer.write_table(table)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 644, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2272, in table_cast
return cast_table_to_schema(table, schema)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
raise CastError(
datasets.table.CastError: Couldn't cast
patch: string
y: int64
oldf: string
idx: int64
id: int64
msg: string
proj: string
lang: string
to
{'oldf': Value('string'), 'patch': Value('string'), 'msg': Value('string'), 'id': Value('int64'), 'y': Value('int64')}
because column names don't match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1456, in compute_config_parquet_and_info_response
parquet_operations = convert_to_parquet(builder)
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1055, in convert_to_parquet
builder.download_and_prepare(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 894, in download_and_prepare
self._download_and_prepare(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 970, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1702, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1833, in _prepare_split_single
raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 3 new columns ({'lang', 'proj', 'idx'})
This happened while the json dataset builder was generating data using
hf://datasets/auphong2707/review-code-generation-code-changes/data/Comment_Generation/msg-valid.jsonl (at revision 59bb4f0c541d4f81332662e8adc1640a5d433a84)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
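The first remedy is to make every split file carry the same columns as the declared schema. A minimal sketch using only the standard library (the file paths are hypothetical) that drops the extra `lang`, `proj`, and `idx` keys before re-uploading:

```python
import json

# Columns in the schema the builder inferred from the other data files.
KEEP = {"oldf", "patch", "msg", "id", "y"}

def normalize_jsonl(src_path, dst_path):
    """Rewrite a JSONL file so each row keeps only the shared columns."""
    with open(src_path, encoding="utf-8") as src, \
         open(dst_path, "w", encoding="utf-8") as dst:
        for line in src:
            if not line.strip():
                continue
            row = json.loads(line)
            # Drop columns such as 'lang', 'proj', 'idx' that the
            # other files in the dataset do not have.
            dst.write(json.dumps({k: row[k] for k in KEEP if k in row}) + "\n")
```

Running this once over the offending `msg-valid.jsonl` (and any sibling files with extra columns) would let all files cast to a single schema.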
| oldf (string) | patch (string) | msg (string) | id (int64) | y (int64) |
|---|---|---|---|---|
//    |  /           |
//    ' /   __| _` | __|  _ \   __|
//    . \  |   (   | |   (   |\__ `
//   _|\_\_|  \__,_|\__|\___/ ____/
// Multi-Physics
//
// License: BSD License
// Kratos default license: kratos/license.txt
//
// Main authors: Vicente Mataix Ferrandiz
//
// System includes
#include <limits>
// External includes
// Project includes
#include "testing/testing.h"
// Utility includes
#include "utilities/math_utils.h"
namespace Kratos
{
namespace Testing
{
/// Tests
/** Checks if the area of the triangle is calculated correctly using Heron equation.
* Checks if the area of the triangle is calculated correctly using Heron equation.
*/
KRATOS_TEST_CASE_IN_SUITE(MathUtilsHeronTest, KratosCoreMathUtilsFastSuite)
{
constexpr double tolerance = 1e-6;
const double area = MathUtils<double>::Heron<false>(std::sqrt(2.0), 1.0, 1.0);
KRATOS_CHECK_NEAR(area, 0.5, tolerance);
}
/** Checks if it gives you the absolute value of a given value
 * Checks if it gives you the absolute value of a given value
 */
KRATOS_TEST_CASE_IN_SUITE(MathUtilsAbsTest, KratosCoreMathUtilsFastSuite)
{
const double absolute = MathUtils<double>::Abs(-1.0);
KRATOS_CHECK_EQUAL(absolute, 1.0);
}
/** Checks if it gives you the minimum of two given values
 * Checks if it gives you the minimum of two given values
 */
KRATOS_TEST_CASE_IN_SUITE(MathUtilsMinTest, KratosCoreMathUtilsFastSuite)
{
const double min = MathUtils<double>::Min(0.0,1.0);
KRATOS_CHECK_EQUAL(min, 0.0);
}
/** Checks if it gives you the maximum of two given values
 * Checks if it gives you the maximum of two given values
 */
KRATOS_TEST_CASE_IN_SUITE(MathUtilsMaxTest, KratosCoreMathUtilsFastSuite)
{
const double max = MathUtils<double>::Max(0.0,1.0);
KRATOS_CHECK_EQUAL(max, 1.0);
}
/** Checks if it calculates the determinant of a 1x1, 2x2, 3x3 and 4x4 matrix
* Checks if it calculates the determinant of a 1x1, 2x2, 3x3 and 4x4 matrix
*/
KRATOS_TEST_CASE_IN_SUITE(MathUtilsDetMatTest, KratosCoreMathUtilsFastSuite)
{
constexpr double tolerance = 1e-6;
boost::numeric::ublas::bounded_matrix<double, 1, 1> mat11 = ZeroMatrix(1, 1);
mat11(0,0) = 1.0;
double det = MathUtils<double>::DetMat<1>(mat11);
KRATOS_CHECK_NEAR(det, 1.0, tolerance);
boost::numeric::ublas::bounded_matrix<double, 2, 2> mat22 = ZeroMatrix(2, 2);
mat22(0,0) = 1.0;
mat22(1,1) = 1.0;
det = MathUtils<double>::DetMat<2>(mat22);
KRATOS_CHECK_NEAR(det, 1.0, tolerance);
boost::numeric::ublas::bounded_matrix<double, 3, 3> mat33 = ZeroMatrix(3, 3);
mat33(0,0) = 1.0;
mat33(1,1) = 1.0;
mat33(2,2) = 1.0;
det = MathUtils<double>::DetMat<3>(mat33);
KRATOS_CHECK_NEAR(det, 1.0, tolerance);
boost::numeric::ublas::bounded_matrix<double, 4, 4> mat44 = ZeroMatrix(4, 4);
mat44(0,0) = 1.0;
mat44(1,1) = 1.0;
mat44(2,2) = 1.0;
mat44(3,3) = 1.0;
det = MathUtils<double>::DetMat<4>(mat44);
KRATOS_CHECK_NEAR(det, 1.0, tolerance);
}
/** Checks if it calculates the generalized determinant of a non-square matrix
* Checks if it calculates the generalized determinant of a non-square matrix
*/
KRATOS_TEST_CASE_IN_SUITE(MathUtilsGenDetMatTest, KratosCoreMathUtilsFastSuite)
{
constexpr double tolerance = 1e-6;
Matrix mat23 = ZeroMatrix(2, 3);
mat23(0,0) = 1.0;
mat23(1,1) = 1.0;
double det = MathUtils<double>::GeneralizedDet(mat23);
KRATOS_CHECK_NEAR(det, 1.0, tolerance);
Matrix mat55 = ZeroMatrix(5, 5);
mat55(0,0) = 1.0;
mat55(1,1) = 1.0;
mat55(2,2) = 1.0;
mat55(3,3) = 1.0;
mat55(2,3) = - 1.0;
mat55(3,2) = 1.0;
mat55(4,4) = 2.0;
det = MathUtils<double>::Det(mat55);
KRATOS_CHECK_NEAR(det, 4.0, tolerance);
}
/** Checks if it calculates the inverse of a 1x1, 2x2, 3x3 and 4x4 matrix
* Checks if it calculates the inverse of a 1x1, 2x2, 3x3 and 4x4 matrix
*/
KRATOS_TEST_CASE_IN_SUITE(MathUtilsInvMatTest, KratosCoreMathUtilsFastSuite)
{
constexpr double tolerance = 1e-6;
boost::numeric::ublas::bounded_matrix<double, 1, 1> mat11;
mat11(0,0) = 0.896308;
double det;
const boost::numeric::ublas::bounded_matrix<double, 1, 1> inv11 = MathUtils<double>::InvertMatrix<1>(mat11, det);
const boost::numeric::ublas::bounded_matrix<double, 1, 1> I11 = prod(inv11, mat11);
KRATOS_CHECK_NEAR(I11(0,0), 1.0, tolerance);
boost::numeric::ublas::bounded_matrix<double, 2, 2> mat22;
mat22(0,0) = 0.670005;
mat22(0,1) = 0.853367;
mat22(1,0) = 1.47006;
mat22(1,1) = 1.00029;
const boost::numeric::ublas::bounded_matrix<double, 2, 2> inv22 = MathUtils<double>::InvertMatrix<2>(mat22, det);
const boost::numeric::ublas::bounded_matrix<double, 2, 2> I22 = prod(inv22, mat22);
for (unsigned int i = 0; i < 2; i++)
{
for (unsigned int j = 0; j < 2; j++)
{
if (i == j)
{
KRATOS_CHECK_NEAR(I22(i,j), 1.0, tolerance);
}
else
{
KRATOS_CHECK_NEAR(I22(i,j), 0.0, tolerance);
}
}
}
boost::numeric::ublas::bounded_matrix<double, 3, 3> mat33;
mat33(0,0) = 0.678589;
mat33(0,1) = 0.386213;
mat33(0,2) = 0.371126;
mat33(1,0) = 1.01524;
mat33(1,1) = 0.403437;
mat33(1,2) = 1.03755;
mat33(2,0) = 0.450516;
mat33(2,1) = 1.08225;
mat33(2,2) = 0.972831;
const boost::numeric::ublas::bounded_matrix<double, 3, 3> inv33 = MathUtils<double>::InvertMatrix<3>(mat33, det);
const boost::numeric::ublas::bounded_matrix<double, 3, 3> I33 = prod(inv33, mat33);
for (unsigned int i = 0; i < 3; i++)
{
for (unsigned int j = 0; j < 3; j++)
{
if (i == j)
{
KRATOS_CHECK_NEAR(I33(i,j), 1.0, tolerance);
}
else
{
KRATOS_CHECK_NEAR(I33(i,j), 0.0, tolerance);
}
}
}
boost::numeric::ublas::bounded_matrix<double, 4, 4> mat44;
mat44(0,0) = 0.00959158;
mat44(0,1) = 0.466699;
mat44(0,2) = 0.167357;
mat44(0,3) = 0.255465;
mat44(1,0) = 1.6356;
mat44(1,1) = 0.387988;
mat44(1,2) = 1.17823;
mat44(1,3) = 1.38661;
mat44(2,0) = 2.57105;
mat44(2,1) = 1.63057;
mat44(2,2) = 2.5713;
mat44(2,3) = 1.73297;
mat44(3,0) = 3.40005;
mat44(3,1) = 1.94218;
mat44(3,2) = 2.58081;
mat44(3,3) = 3.3083;
const boost::numeric::ublas::bounded_matrix<double, 4, 4> inv44 = MathUtils<double>::InvertMatrix<4>(mat44, det);
const boost::numeric::ublas::bounded_matrix<double, 4, 4> I44 = prod(inv44, mat44);
for (unsigned int i = 0; i < 4; i++)
{
for (unsigned int j = 0; j < 4; j++)
{
if (i == j)
{
KRATOS_CHECK_NEAR(I44(i,j), 1.0, tolerance);
}
else
{
KRATOS_CHECK_NEAR(I44(i,j), 0.0, tolerance);
}
}
}
}
/** Checks if it calculates the inverse of square matrices from 1x1 up to 5x5
 * Checks if it calculates the inverse of square matrices from 1x1 up to 5x5
 */
KRATOS_TEST_CASE_IN_SUITE(MathUtilsInvertMatrixTest, KratosCoreMathUtilsFastSuite)
{
constexpr double tolerance = 1e-6;
double det;
Matrix inv;
Matrix I;
unsigned int i_dim = 1;
Matrix mat = ZeroMatrix(i_dim, i_dim);
mat(0,0) = 0.346432;
MathUtils<double>::InvertMatrix(mat,inv, det);
I = prod(inv, mat);
for (unsigned int i = 0; i < i_dim; i++)
{
for (unsigned int j = 0; j < i_dim; j++)
{
if (i == j)
{
KRATOS_CHECK_NEAR(I(i,j), 1.0, tolerance);
}
else
{
KRATOS_CHECK_NEAR(I(i,j), 0.0, tolerance);
}
}
}
i_dim = 2;
mat.resize(i_dim, i_dim, false);
mat(0,0) = 0.833328;
mat(0,1) = 0.491166;
mat(1,0) = 0.81167;
mat(1,1) = 1.17205;
MathUtils<double>::InvertMatrix(mat,inv, det);
I = prod(inv, mat);
for (unsigned int i = 0; i < i_dim; i++)
{
for (unsigned int j = 0; j < i_dim; j++)
{
if (i == j)
{
KRATOS_CHECK_NEAR(I(i,j), 1.0, tolerance);
}
else
{
KRATOS_CHECK_NEAR(I(i,j), 0.0, tolerance);
}
}
}
i_dim = 3;
mat.resize(i_dim, i_dim, false);
mat(0,0) = 0.371083;
mat(0,1) = 0.392607;
mat(0,2) = 0.306494;
mat(1,0) = 0.591012;
mat(1,1) = 1.00733;
mat(1,2) = 1.07727;
mat(2,0) = 0.0976054;
mat(2,1) = 2.54893;
mat(2,2) = 1.23981;
MathUtils<double>::InvertMatrix(mat,inv, det);
I = prod(inv, mat);
for (unsigned int i = 0; i < i_dim; i++)
{
for (unsigned int j = 0; j < i_dim; j++)
{
if (i == j)
{
KRATOS_CHECK_NEAR(I(i,j), 1.0, tolerance);
}
else
{
KRATOS_CHECK_NEAR(I(i,j), 0.0, tolerance);
}
}
}
i_dim = 4;
mat.resize(i_dim, i_dim, false);
mat(0,1) = 0.979749;
mat(0,2) = 0.494393;
mat(0,3) = 0.23073;
mat(1,0) = 1.79224;
mat(1,1) = 0.198842;
mat(1,2) = 0.074485;
mat(1,3) = 1.45717;
mat(2,0) = 1.6039;
mat(2,1) = 0.673926;
mat(2,2) = 2.63817;
mat(2,3) = 1.0287;
mat(3,0) = 0.366503;
mat(3,1) = 3.02634;
mat(3,2) = 1.24104;
mat(3,3) = 3.62022;
MathUtils<double>::InvertMatrix(mat,inv, det);
I = prod(inv, mat);
for (unsigned int i = 0; i < i_dim; i++)
{
for (unsigned int j = 0; j < i_dim; j++)
{
if (i == j)
{
KRATOS_CHECK_NEAR(I(i,j), 1.0, tolerance);
}
else
{
KRATOS_CHECK_NEAR(I(i,j), 0.0, tolerance);
}
}
}
i_dim = 5;
mat.resize(i_dim, i_dim, false);
mat = ZeroMatrix(5, 5);
mat(0,0) = 1.0;
mat(1,1) = 1.0;
mat(2,2) = 1.0;
mat(3,3) = 1.0;
mat(2,3) = - 1.0;
mat(3,2) = 1.0;
mat(4,4) = 2.0;
MathUtils<double>::InvertMatrix(mat,inv, det);
KRATOS_CHECK_NEAR(det, 4.0, tolerance);
I = prod(inv, mat);
for (unsigned int i = 0; i < i_dim; i++)
{
for (unsigned int j = 0; j < i_dim; j++)
{
if (i == j)
{
KRATOS_CHECK_NEAR(I(i,j), 1.0, tolerance);
}
else
{
KRATOS_CHECK_NEAR(I(i,j), 0.0, tolerance);
}
}
}
}
/** Checks if it calculates correctly the generalized inverse of a non-square matrix
 * Checks if it calculates correctly the generalized inverse of a non-square matrix
 */
KRATOS_TEST_CASE_IN_SUITE(MathUtilsGeneralizedInvertMatrixTest, KratosCoreMathUtilsFastSuite)
{
constexpr double tolerance = 1e-6;
// We check the Left inverse
const unsigned int i_dim = 2;
const unsigned int j_dim = 3;
Matrix mat = ZeroMatrix(i_dim, j_dim);
mat(0,0) = 0.770724;
mat(1,0) = 0.573294;
mat(0,1) = 1.27699;
mat(1,1) = 1.57776;
mat(0,2) = 1.30216;
mat(1,2) = 2.66483;
double det;
Matrix inv;
MathUtils<double>::GeneralizedInvertMatrix(mat,inv, det);
Matrix I = prod(mat, inv);
for (unsigned int i = 0; i < i_dim; i++)
{
for (unsigned int j = 0; j < i_dim; j++)
{
if (i == j)
{
KRATOS_CHECK_NEAR(I(i,j), 1.0, tolerance);
}
else
{
KRATOS_CHECK_NEAR(I(i,j), 0.0, tolerance);
}
}
}
// We check the Right inverse
mat.resize(j_dim, i_dim);
mat = ZeroMatrix(j_dim, i_dim);
mat(0,0) = 0.786075;
mat(1,0) = 0.91272;
mat(2,0) = 0.745604;
mat(0,1) = 0.992728;
mat(1,1) = 1.82324;
mat(2,1) = 0.19581;
MathUtils<double>::GeneralizedInvertMatrix(mat,inv, det);
I = prod(inv, mat);
for (unsigned int i = 0; i < i_dim; i++)
{
for (unsigned int j = 0; j < i_dim; j++)
{
if (i == j)
{
KRATOS_CHECK_NEAR(I(i,j), 1.0, tolerance);
}
else
{
KRATOS_CHECK_NEAR(I(i,j), 0.0, tolerance);
}
}
}
}
/** Checks if it calculates the sign function
* Checks if it calculates the sign function
*/
KRATOS_TEST_CASE_IN_SUITE(MathUtilsSignTest, KratosCoreMathUtilsFastSuite)
{
int sign = MathUtils<double>::Sign(-1.0);
KRATOS_CHECK_EQUAL(sign, -1);
sign = MathUtils<double>::Sign(1.0);
KRATOS_CHECK_EQUAL(sign, 1);
}
/** Checks if it calculates the eigen decomposition of a 3x3 system
* Checks if it calculates the eigen decomposition of a 3x3 system
*/
KRATOS_TEST_CASE_IN_SUITE(MathUtilsEigenTest, KratosCoreMathUtilsFastSuite)
{
constexpr double tolerance = 1e-6;
boost::numeric::ublas::bounded_matrix<double, 3, 3> mat33;
boost::numeric::ublas::bounded_matrix<double, 3, 3> eigenmat33;
boost::numeric::ublas::bounded_matrix<double, 3, 3> vectormat33;
mat33(0,0) = 0.678589;
mat33(0,1) = 0.386213;
mat33(0,2) = 0.371126;
mat33(1,0) = mat33(0,1);
mat33(1,1) = 0.403437;
mat33(1,2) = 1.03755;
mat33(2,0) = mat33(0,2);
mat33(2,1) = mat33(1,2);
mat33(2,2) = 0.972831;
bool converged = MathUtils<double>::EigenSystem<3>(mat33, vectormat33, eigenmat33);
boost::numeric::ublas::bounded_matrix<double, 3, 3> auxmat33 = prod(trans(vectormat33), eigenmat33);
auxmat33 = prod(auxmat33, vectormat33);
for (unsigned int i = 0; i < 3; i++)
{
for (unsigned int j = i; j < 3; j++)
{
KRATOS_CHECK_NEAR(auxmat33(i,j), mat33(i,j), tolerance);
}
}
KRATOS_CHECK_EQUAL(converged, true);
}
/** Checks if it calculates the dot product
* Checks if it calculates the dot product
*/
KRATOS_TEST_CASE_IN_SUITE(MathUtilsDotTest, KratosCoreMathUtilsFastSuite)
{
Vector a = ZeroVector(3);
a[1] = 1.0;
Vector b = ZeroVector(3);
b[0] = 1.0;
const double c = MathUtils<double>::Dot3(a, b);
const double d = MathUtils<double>::Dot(a, b);
KRATOS_CHECK_EQUAL(c, 0.0);
KRATOS_CHECK_EQUAL(d, 0.0);
}
/** Checks if it calculates the norm of a vector
* Checks if it calculates the norm of a vector
*/
KRATOS_TEST_CASE_IN_SUITE(MathUtilsNormTest, KratosCoreMathUtilsFastSuite)
{
array_1d<double, 3> a = ZeroVector(3);
a[0] = 1.0;
const double b = MathUtils<double>::Norm3(a);
const double c = MathUtils<double>::Norm(a);
KRATOS_CHECK_EQUAL(b, 1.0);
KRATOS_CHECK_EQUAL(c, 1.0);
}
/** Checks if it calculates the cross product
* Checks if it calculates the cross product
*/
KRATOS_TEST_CASE_IN_SUITE(MathUtilsCrossTest, KratosCoreMathUtilsFastSuite)
{
array_1d<double, 3> a = ZeroVector(3);
a[1] = 2.0;
array_1d<double, 3> b = ZeroVector(3);
b[0] = 1.0;
const array_1d<double, 3> c = MathUtils<double>::CrossProduct(a, b);
const array_1d<double, 3> d = MathUtils<double>::UnitCrossProduct(a, b);
KRATOS_CHECK_EQUAL(c[2], 2.0);
KRATOS_CHECK_EQUAL(d[2], 1.0);
}
/** Checks if it calculates the tensor product
* Checks if it calculates the tensor product
*/
KRATOS_TEST_CASE_IN_SUITE(MathUtilsTensorTest, KratosCoreMathUtilsFastSuite)
{
Vector a = ZeroVector(3);
a[1] = 2.0;
Vector b = ZeroVector(3);
b[0] = 1.0;
const Matrix c = MathUtils<double>::TensorProduct3(a, b);
KRATOS_CHECK_EQUAL(c(0,0), 0.0);
KRATOS_CHECK_EQUAL(c(1,0), 2.0);
KRATOS_CHECK_EQUAL(c(0,1), 0.0);
KRATOS_CHECK_EQUAL(c(1,1), 0.0);
}
/** Checks if it calculates the matrix operations
* Checks if it calculates the matrix operations
*/
KRATOS_TEST_CASE_IN_SUITE(MathUtilsMatrixOperationsTest, KratosCoreMathUtilsFastSuite)
{
Matrix a = IdentityMatrix(3);
Matrix b = IdentityMatrix(3);
MathUtils<double>::AddMatrix(a, b, 0 ,0);
KRATOS_CHECK_EQUAL(a(0,0), 2.0);
KRATOS_CHECK_EQUAL(a(1,0), 0.0);
KRATOS_CHECK_EQUAL(a(0,1), 0.0);
KRATOS_CHECK_EQUAL(a(1,1), 2.0);
MathUtils<double>::SubtractMatrix(a, b, 0 ,0);
KRATOS_CHECK_EQUAL(a(0,0), 1.0);
KRATOS_CHECK_EQUAL(a(1,0), 0.0);
KRATOS_CHECK_EQUAL(a(0,1), 0.0);
KRATOS_CHECK_EQUAL(a(1,1), 1.0);
MathUtils<double>::WriteMatrix(a, b, 0 ,0);
KRATOS_CHECK_EQUAL(a(0,0), 1.0);
KRATOS_CHECK_EQUAL(a(1,0), 0.0);
KRATOS_CHECK_EQUAL(a(0,1), 0.0);
KRATOS_CHECK_EQUAL(a(1,1), 1.0);
}
} // namespace Testing
} // namespace Kratos.
|
@@ -595,8 +595,10 @@ namespace Kratos
array_1d<double, 3> b = ZeroVector(3);
b[0] = 1.0;
- const array_1d<double, 3> c = MathUtils<double>::CrossProduct(a, b);
- const array_1d<double, 3> d = MathUtils<double>::UnitCrossProduct(a, b);
+ array_1d<double, 3> c, d;
+
+ MathUtils<double>::CrossProduct(c, b, a);
+ MathUtils<double>::UnitCrossProduct(d, b, a);
KRATOS_CHECK_EQUAL(c[2], 2.0);
KRATOS_CHECK_EQUAL(d[2], 1.0);
|
I assumed that for CrossProduct the values were inverted as well... Is that right?
| 90,017 | 1 |
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# cython: profile=True
"""Worker operations executor.
For internal use only; no backwards-compatibility guarantees.
"""
import sys
import traceback
import six
from apache_beam.internal import util
from apache_beam.pvalue import TaggedOutput
from apache_beam.transforms import DoFn
from apache_beam.transforms import core
from apache_beam.transforms.core import RestrictionProvider
from apache_beam.transforms.window import GlobalWindow
from apache_beam.transforms.window import TimestampedValue
from apache_beam.transforms.window import WindowFn
from apache_beam.utils.windowed_value import WindowedValue
class NameContext(object):
"""Holds the name information for a step."""
def __init__(self, step_name):
"""Creates a new step NameContext.
Args:
step_name: The name of the step.
"""
self.step_name = step_name
def __eq__(self, other):
return self.step_name == other.step_name
def __ne__(self, other):
return not self == other
def __repr__(self):
return 'NameContext(%s)' % self.__dict__
def __hash__(self):
return hash(self.step_name)
def metrics_name(self):
"""Returns the step name used for metrics reporting."""
return self.step_name
def logging_name(self):
"""Returns the step name used for logging."""
return self.step_name
# TODO(BEAM-4028): Move DataflowNameContext to Dataflow internal code.
class DataflowNameContext(NameContext):
"""Holds the name information for a step in Dataflow.
This includes a step_name (e.g. s2), a user_name (e.g. Foo/Bar/ParDo(Fab)),
and a system_name (e.g. s2-shuffle-read34)."""
def __init__(self, step_name, user_name, system_name):
"""Creates a new step NameContext.
Args:
step_name: The internal name of the step (e.g. s2).
user_name: The full user-given name of the step (e.g. Foo/Bar/ParDo(Far)).
system_name: The step name in the optimized graph (e.g. s2-1).
"""
super(DataflowNameContext, self).__init__(step_name)
self.user_name = user_name
self.system_name = system_name
def __eq__(self, other):
return (self.step_name == other.step_name and
self.user_name == other.user_name and
self.system_name == other.system_name)
def __ne__(self, other):
return not self == other
def __hash__(self):
return hash((self.step_name, self.user_name, self.system_name))
def __repr__(self):
return 'DataflowNameContext(%s)' % self.__dict__
def logging_name(self):
"""Stackdriver logging relies on user-given step names (e.g. Foo/Bar)."""
return self.user_name
class LoggingContext(object):
"""For internal use only; no backwards-compatibility guarantees."""
def enter(self):
pass
def exit(self):
pass
class Receiver(object):
"""For internal use only; no backwards-compatibility guarantees.
An object that consumes a WindowedValue.
This class can be efficiently used to pass values between the
sdk and worker harnesses.
"""
def receive(self, windowed_value):
raise NotImplementedError
class MethodWrapper(object):
"""For internal use only; no backwards-compatibility guarantees.
Represents a method that can be invoked by `DoFnInvoker`."""
def __init__(self, obj_to_invoke, method_name):
"""
Initializes a ``MethodWrapper``.
Args:
obj_to_invoke: the object that contains the method. Has to either be a
`DoFn` object or a `RestrictionProvider` object.
method_name: name of the method as a string.
"""
if not isinstance(obj_to_invoke, (DoFn, RestrictionProvider)):
raise ValueError('\'obj_to_invoke\' has to be either a \'DoFn\' or '
'a \'RestrictionProvider\'. Received %r instead.'
% obj_to_invoke)
args, _, _, defaults = core.get_function_arguments(
obj_to_invoke, method_name)
defaults = defaults if defaults else []
method_value = getattr(obj_to_invoke, method_name)
self.method_value = method_value
self.args = args
self.defaults = defaults
class DoFnSignature(object):
"""Represents the signature of a given ``DoFn`` object.
Signature of a ``DoFn`` provides a view of the properties of a given ``DoFn``.
Among other things, this will give an extensible way for (1) accessing the
structure of the ``DoFn`` including methods and method parameters
(2) identifying features that a given ``DoFn`` supports, for example, whether
a given ``DoFn`` is a Splittable ``DoFn`` (
https://s.apache.org/splittable-do-fn) (3) validating a ``DoFn`` based on the
feature set offered by it.
"""
def __init__(self, do_fn):
# We add a property here for all methods defined by Beam DoFn features.
assert isinstance(do_fn, core.DoFn)
self.do_fn = do_fn
self.process_method = MethodWrapper(do_fn, 'process')
self.start_bundle_method = MethodWrapper(do_fn, 'start_bundle')
self.finish_bundle_method = MethodWrapper(do_fn, 'finish_bundle')
restriction_provider = self._get_restriction_provider(do_fn)
self.initial_restriction_method = (
MethodWrapper(restriction_provider, 'initial_restriction')
if restriction_provider else None)
self.restriction_coder_method = (
MethodWrapper(restriction_provider, 'restriction_coder')
if restriction_provider else None)
self.create_tracker_method = (
MethodWrapper(restriction_provider, 'create_tracker')
if restriction_provider else None)
self.split_method = (
MethodWrapper(restriction_provider, 'split')
if restriction_provider else None)
self._validate()
def _get_restriction_provider(self, do_fn):
result = _find_param_with_default(self.process_method,
default_as_type=RestrictionProvider)
return result[1] if result else None
def _validate(self):
self._validate_process()
self._validate_bundle_method(self.start_bundle_method)
self._validate_bundle_method(self.finish_bundle_method)
def _validate_process(self):
"""Validate that none of the DoFnParameters are repeated in the function
"""
for param in core.DoFn.DoFnParams:
assert self.process_method.defaults.count(param) <= 1
def _validate_bundle_method(self, method_wrapper):
"""Validate that none of the DoFnParameters are used in the function
"""
for param in core.DoFn.DoFnParams:
assert param not in method_wrapper.defaults
def is_splittable_dofn(self):
return any([isinstance(default, RestrictionProvider) for default in
self.process_method.defaults])
class DoFnInvoker(object):
"""An abstraction that can be used to execute DoFn methods.
A DoFnInvoker describes a particular way for invoking methods of a DoFn
represented by a given DoFnSignature."""
def __init__(self, output_processor, signature):
self.output_processor = output_processor
self.signature = signature
@staticmethod
def create_invoker(
signature,
output_processor=None,
context=None, side_inputs=None, input_args=None, input_kwargs=None,
process_invocation=True):
""" Creates a new DoFnInvoker based on given arguments.
Args:
output_processor: an OutputProcessor for receiving elements produced by
invoking functions of the DoFn.
signature: a DoFnSignature for the DoFn being invoked.
context: Context to be used when invoking the DoFn (deprecated).
side_inputs: side inputs to be used when invoking the process method.
input_args: arguments to be used when invoking the process method. Some
of the arguments given here might be placeholders (for
example for side inputs) that get filled before invoking the
process method.
input_kwargs: keyword arguments to be used when invoking the process
method. Some of the keyword arguments given here might be
placeholders (for example for side inputs) that get filled
before invoking the process method.
process_invocation: If True, this function may return an invoker that
performs extra optimizations for invoking process()
method efficiently.
"""
side_inputs = side_inputs or []
default_arg_values = signature.process_method.defaults
use_simple_invoker = not process_invocation or (
not side_inputs and not input_args and not input_kwargs and
not default_arg_values)
if use_simple_invoker:
return SimpleInvoker(output_processor, signature)
else:
return PerWindowInvoker(
output_processor,
signature, context, side_inputs, input_args, input_kwargs)
def invoke_process(self, windowed_value, restriction_tracker=None,
output_processor=None,
additional_args=None, additional_kwargs=None):
"""Invokes the DoFn.process() function.
Args:
windowed_value: a WindowedValue object that gives the element for which
process() method should be invoked along with the window
the element belongs to.
output_processor: if provided, the given OutputProcessor will be used.
additional_args: additional arguments to be passed to the current
`DoFn.process()` invocation, usually as side inputs.
additional_kwargs: additional keyword arguments to be passed to the
current `DoFn.process()` invocation.
"""
raise NotImplementedError
def invoke_start_bundle(self):
"""Invokes the DoFn.start_bundle() method.
"""
self.output_processor.start_bundle_outputs(
self.signature.start_bundle_method.method_value())
def invoke_finish_bundle(self):
"""Invokes the DoFn.finish_bundle() method.
"""
self.output_processor.finish_bundle_outputs(
self.signature.finish_bundle_method.method_value())
def invoke_split(self, element, restriction):
return self.signature.split_method.method_value(element, restriction)
def invoke_initial_restriction(self, element):
return self.signature.initial_restriction_method.method_value(element)
def invoke_restriction_coder(self):
return self.signature.restriction_coder_method.method_value()
def invoke_create_tracker(self, restriction):
return self.signature.create_tracker_method.method_value(restriction)
def _find_param_with_default(
method, default_as_value=None, default_as_type=None):
if ((default_as_value and default_as_type) or
not (default_as_value or default_as_type)):
raise ValueError(
'Exactly one of \'default_as_value\' and \'default_as_type\' should be '
'provided. Received %r and %r.' % (default_as_value, default_as_type))
defaults = method.defaults
ret = None
for i, value in enumerate(defaults):
if default_as_value and value == default_as_value:
ret = (method.args[len(method.args) - len(defaults) + i], value)
elif default_as_type and isinstance(value, default_as_type):
index = len(method.args) - len(defaults) + i
ret = (method.args[index], value)
return ret
class SimpleInvoker(DoFnInvoker):
"""An invoker that processes elements ignoring windowing information."""
def __init__(self, output_processor, signature):
super(SimpleInvoker, self).__init__(output_processor, signature)
self.process_method = signature.process_method.method_value
def invoke_process(self, windowed_value, restriction_tracker=None,
output_processor=None,
additional_args=None, additional_kwargs=None):
if not output_processor:
output_processor = self.output_processor
output_processor.process_outputs(
windowed_value, self.process_method(windowed_value.value))
class PerWindowInvoker(DoFnInvoker):
"""An invoker that processes elements considering windowing information."""
def __init__(self, output_processor, signature, context,
side_inputs, input_args, input_kwargs):
super(PerWindowInvoker, self).__init__(output_processor, signature)
self.side_inputs = side_inputs
self.context = context
self.process_method = signature.process_method.method_value
default_arg_values = signature.process_method.defaults
self.has_windowed_inputs = (
not all(si.is_globally_windowed() for si in side_inputs) or
(core.DoFn.WindowParam in default_arg_values))
# Try to prepare all the arguments that can just be filled in
# without any additional work in the process function.
# Also cache all the placeholders needed in the process function.
# Flag to cache additional arguments on the first element if all
# inputs are within the global window.
self.cache_globally_windowed_args = not self.has_windowed_inputs
input_args = input_args if input_args else []
input_kwargs = input_kwargs if input_kwargs else {}
arguments = signature.process_method.args
defaults = signature.process_method.defaults
# Create placeholder for element parameter of DoFn.process() method.
self_in_args = int(signature.do_fn.is_process_bounded())
class ArgPlaceholder(object):
def __init__(self, placeholder):
self.placeholder = placeholder
if core.DoFn.ElementParam not in default_arg_values:
args_to_pick = len(arguments) - len(default_arg_values) - 1 - self_in_args
args_with_placeholders = (
[ArgPlaceholder(core.DoFn.ElementParam)] + input_args[:args_to_pick])
else:
args_to_pick = len(arguments) - len(defaults) - self_in_args
args_with_placeholders = input_args[:args_to_pick]
# Fill the OtherPlaceholders for context, window or timestamp
remaining_args_iter = iter(input_args[args_to_pick:])
for a, d in zip(arguments[-len(defaults):], defaults):
if d == core.DoFn.ElementParam:
args_with_placeholders.append(ArgPlaceholder(d))
elif d == core.DoFn.WindowParam:
args_with_placeholders.append(ArgPlaceholder(d))
elif d == core.DoFn.TimestampParam:
args_with_placeholders.append(ArgPlaceholder(d))
elif d == core.DoFn.SideInputParam:
# If no more args are present then the value must be passed via kwarg
try:
args_with_placeholders.append(next(remaining_args_iter))
except StopIteration:
if a not in input_kwargs:
raise ValueError("Value for sideinput %s not provided" % a)
else:
# If no more args are present then the value must be passed via kwarg
try:
args_with_placeholders.append(next(remaining_args_iter))
except StopIteration:
pass
args_with_placeholders.extend(list(remaining_args_iter))
# Stash the list of placeholder positions for performance
self.placeholders = [(i, x.placeholder) for (i, x) in enumerate(
args_with_placeholders)
if isinstance(x, ArgPlaceholder)]
self.args_for_process = args_with_placeholders
self.kwargs_for_process = input_kwargs
def invoke_process(self, windowed_value, restriction_tracker=None,
output_processor=None,
additional_args=None, additional_kwargs=None):
if not additional_args:
additional_args = []
if not additional_kwargs:
additional_kwargs = {}
if not output_processor:
output_processor = self.output_processor
self.context.set_element(windowed_value)
# Call the process function once per window if the DoFn has windowed
# side inputs or accesses the window parameter. Otherwise call it only
# once, since none of the arguments change between windows.
if restriction_tracker:
restriction_tracker_param = _find_param_with_default(
self.signature.process_method,
default_as_type=core.RestrictionProvider)[0]
if not restriction_tracker_param:
raise ValueError(
'A RestrictionTracker %r was provided but DoFn does not have a '
'RestrictionTrackerParam defined' % restriction_tracker)
additional_kwargs[restriction_tracker_param] = restriction_tracker
if self.has_windowed_inputs and len(windowed_value.windows) != 1:
for w in windowed_value.windows:
self._invoke_per_window(
WindowedValue(windowed_value.value, windowed_value.timestamp, (w,)),
additional_args, additional_kwargs, output_processor)
else:
self._invoke_per_window(
windowed_value, additional_args, additional_kwargs, output_processor)
def _invoke_per_window(
self, windowed_value, additional_args,
additional_kwargs, output_processor):
if self.has_windowed_inputs:
window, = windowed_value.windows
side_inputs = [si[window] for si in self.side_inputs]
side_inputs.extend(additional_args)
args_for_process, kwargs_for_process = util.insert_values_in_args(
self.args_for_process, self.kwargs_for_process,
side_inputs)
elif self.cache_globally_windowed_args:
# Attempt to cache additional args if all inputs are globally
# windowed inputs when processing the first element.
self.cache_globally_windowed_args = False
# Fill in sideInputs if they are globally windowed
global_window = GlobalWindow()
self.args_for_process, self.kwargs_for_process = (
util.insert_values_in_args(
self.args_for_process, self.kwargs_for_process,
[si[global_window] for si in self.side_inputs]))
args_for_process, kwargs_for_process = (
self.args_for_process, self.kwargs_for_process)
else:
args_for_process, kwargs_for_process = (
self.args_for_process, self.kwargs_for_process)
# TODO(sourabhbajaj): Investigate why we can't use `is` instead of ==
for i, p in self.placeholders:
if p == core.DoFn.ElementParam:
args_for_process[i] = windowed_value.value
elif p == core.DoFn.WindowParam:
args_for_process[i] = window
elif p == core.DoFn.TimestampParam:
args_for_process[i] = windowed_value.timestamp
if additional_kwargs:
if kwargs_for_process is None:
kwargs_for_process = additional_kwargs
else:
for key in additional_kwargs:
kwargs_for_process[key] = additional_kwargs[key]
if kwargs_for_process:
output_processor.process_outputs(
windowed_value,
self.process_method(*args_for_process, **kwargs_for_process))
else:
output_processor.process_outputs(
windowed_value, self.process_method(*args_for_process))
class DoFnRunner(Receiver):
"""For internal use only; no backwards-compatibility guarantees.
A helper class for executing ParDo operations.
"""
def __init__(self,
fn,
args,
kwargs,
side_inputs,
windowing,
tagged_receivers=None,
step_name=None,
logging_context=None,
state=None,
scoped_metrics_container=None):
"""Initializes a DoFnRunner.
Args:
fn: user DoFn to invoke
args: positional side input arguments (static and placeholder), if any
kwargs: keyword side input arguments (static and placeholder), if any
side_inputs: list of sideinput.SideInputMaps for deferred side inputs
windowing: windowing properties of the output PCollection(s)
tagged_receivers: a dict of tag name to Receiver objects
step_name: the name of this step
logging_context: a LoggingContext object
state: handle for accessing DoFn state
scoped_metrics_container: Context switcher for metrics container
"""
# Need to support multiple iterations.
side_inputs = list(side_inputs)
from apache_beam.metrics.execution import ScopedMetricsContainer
self.scoped_metrics_container = (
scoped_metrics_container or ScopedMetricsContainer())
self.step_name = step_name
self.logging_context = logging_context or LoggingContext()
self.context = DoFnContext(step_name, state=state)
do_fn_signature = DoFnSignature(fn)
# Optimize for the common case.
main_receivers = tagged_receivers[None]
output_processor = _OutputProcessor(
windowing.windowfn, main_receivers, tagged_receivers)
self.do_fn_invoker = DoFnInvoker.create_invoker(
do_fn_signature, output_processor, self.context, side_inputs, args,
kwargs)
def receive(self, windowed_value):
self.process(windowed_value)
def process(self, windowed_value):
try:
self.logging_context.enter()
self.scoped_metrics_container.enter()
self.do_fn_invoker.invoke_process(windowed_value)
except BaseException as exn:
self._reraise_augmented(exn)
finally:
self.scoped_metrics_container.exit()
self.logging_context.exit()
def _invoke_bundle_method(self, bundle_method):
try:
self.logging_context.enter()
self.scoped_metrics_container.enter()
self.context.set_element(None)
bundle_method()
except BaseException as exn:
self._reraise_augmented(exn)
finally:
self.scoped_metrics_container.exit()
self.logging_context.exit()
def start(self):
self._invoke_bundle_method(self.do_fn_invoker.invoke_start_bundle)
def finish(self):
self._invoke_bundle_method(self.do_fn_invoker.invoke_finish_bundle)
def _reraise_augmented(self, exn):
if getattr(exn, '_tagged_with_step', False) or not self.step_name:
raise
step_annotation = " [while running '%s']" % self.step_name
# To emulate exception chaining (not available in Python 2).
original_traceback = sys.exc_info()[2]
try:
# Attempt to construct the same kind of exception
# with an augmented message.
new_exn = type(exn)(exn.args[0] + step_annotation, *exn.args[1:])
new_exn._tagged_with_step = True # Could raise attribute error.
except: # pylint: disable=bare-except
# If anything goes wrong, construct a RuntimeError whose message
# records the original exception's type and message.
new_exn = RuntimeError(
traceback.format_exception_only(type(exn), exn)[-1].strip()
+ step_annotation)
new_exn._tagged_with_step = True
six.reraise(type(new_exn), new_exn, original_traceback)
class OutputProcessor(object):
def process_outputs(self, windowed_input_element, results):
raise NotImplementedError
class _OutputProcessor(OutputProcessor):
"""Processes output produced by DoFn method invocations."""
def __init__(self, window_fn, main_receivers, tagged_receivers):
"""Initializes ``_OutputProcessor``.
Args:
window_fn: a windowing function (WindowFn).
main_receivers: the main Receiver object for the undeclared main output.
tagged_receivers: a dict of tag name to Receiver objects.
"""
self.window_fn = window_fn
self.main_receivers = main_receivers
self.tagged_receivers = tagged_receivers
def process_outputs(self, windowed_input_element, results):
"""Dispatch the result of process computation to the appropriate receivers.
A value wrapped in a TaggedOutput object will be unwrapped and
then dispatched to the appropriate indexed output.
"""
if results is None:
return
for result in results:
tag = None
if isinstance(result, TaggedOutput):
tag = result.tag
if not isinstance(tag, six.string_types):
raise TypeError('In %s, tag %s is not a string' % (self, tag))
result = result.value
if isinstance(result, WindowedValue):
windowed_value = result
if (windowed_input_element is not None
and len(windowed_input_element.windows) != 1):
windowed_value.windows *= len(windowed_input_element.windows)
elif isinstance(result, TimestampedValue):
assign_context = WindowFn.AssignContext(result.timestamp, result.value)
windowed_value = WindowedValue(
result.value, result.timestamp,
self.window_fn.assign(assign_context))
if len(windowed_input_element.windows) != 1:
windowed_value.windows *= len(windowed_input_element.windows)
else:
windowed_value = windowed_input_element.with_value(result)
if tag is None:
self.main_receivers.receive(windowed_value)
else:
self.tagged_receivers[tag].receive(windowed_value)
def start_bundle_outputs(self, results):
"""Validate that start_bundle does not output any elements"""
if results is None:
return
raise RuntimeError(
'Start Bundle should not output any elements but got %s' % results)
def finish_bundle_outputs(self, results):
"""Dispatch the result of finish_bundle to the appropriate receivers.
A value wrapped in a TaggedOutput object will be unwrapped and
then dispatched to the appropriate indexed output.
"""
if results is None:
return
for result in results:
tag = None
if isinstance(result, TaggedOutput):
tag = result.tag
if not isinstance(tag, six.string_types):
raise TypeError('In %s, tag %s is not a string' % (self, tag))
result = result.value
if isinstance(result, WindowedValue):
windowed_value = result
else:
raise RuntimeError(
'Finish Bundle should only output WindowedValue type '
'but got %s' % type(result))
if tag is None:
self.main_receivers.receive(windowed_value)
else:
self.tagged_receivers[tag].receive(windowed_value)
class _NoContext(WindowFn.AssignContext):
"""An uninspectable WindowFn.AssignContext."""
NO_VALUE = object()
def __init__(self, value, timestamp=NO_VALUE):
self.value = value
self._timestamp = timestamp
@property
def timestamp(self):
if self._timestamp is self.NO_VALUE:
raise ValueError('No timestamp in this context.')
else:
return self._timestamp
@property
def existing_windows(self):
raise ValueError('No existing_windows in this context.')
class DoFnState(object):
"""For internal use only; no backwards-compatibility guarantees.
Keeps track of state that DoFns want, currently, user counters.
"""
def __init__(self, counter_factory):
self.step_name = ''
self._counter_factory = counter_factory
def counter_for(self, aggregator):
"""Looks up the counter for this aggregator, creating one if necessary."""
return self._counter_factory.get_aggregator_counter(
self.step_name, aggregator)
# TODO(robertwb): Replace core.DoFnContext with this.
class DoFnContext(object):
"""For internal use only; no backwards-compatibility guarantees."""
def __init__(self, label, element=None, state=None):
self.label = label
self.state = state
if element is not None:
self.set_element(element)
def set_element(self, windowed_value):
self.windowed_value = windowed_value
@property
def element(self):
if self.windowed_value is None:
raise AttributeError('element not accessible in this context')
else:
return self.windowed_value.value
@property
def timestamp(self):
if self.windowed_value is None:
raise AttributeError('timestamp not accessible in this context')
else:
return self.windowed_value.timestamp
@property
def windows(self):
if self.windowed_value is None:
raise AttributeError('windows not accessible in this context')
else:
return self.windowed_value.windows
|
@@ -22,8 +22,13 @@
For internal use only; no backwards-compatibility guarantees.
"""
+from __future__ import absolute_import
+
import sys
import traceback
+from builtins import next
+from builtins import object
+from builtins import zip
import six
|
I think we should we avoid `import six` for consistency with the approach followed elsewhere. What do you think, @RobbeSneyders ? Looks like we are using `six.reraise` in a few places and `six.text_type` in apiclient.py.
| 144,914
| 1
|
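The review comment on this row suggests dropping `import six`. On Python 3, the `six.reraise(type(new_exn), new_exn, original_traceback)` call in `_reraise_augmented` can be replaced by the built-in `BaseException.with_traceback`. A minimal sketch, assuming Python 3 only; the helper name `reraise_with_step` is illustrative, not Beam's API:

```python
import sys


def reraise_with_step(exn, step_name):
    """Re-raise exn with the step name appended, preserving the traceback.

    Must be called from inside an ``except`` block so sys.exc_info() is set.
    """
    step_annotation = " [while running '%s']" % step_name
    original_traceback = sys.exc_info()[2]
    try:
        # Attempt to construct the same kind of exception with an
        # augmented message, as _reraise_augmented does.
        new_exn = type(exn)(str(exn) + step_annotation)
    except Exception:
        # Fall back to RuntimeError if the type is not re-constructible.
        new_exn = RuntimeError(
            '%s: %s%s' % (type(exn).__name__, exn, step_annotation))
    # Python 3 replacement for six.reraise: attach the traceback directly.
    raise new_exn.with_traceback(original_traceback)
```

With this, the `six.reraise(type(new_exn), new_exn, original_traceback)` line becomes `raise new_exn.with_traceback(original_traceback)` and the `six` import can go.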
# frozen_string_literal: true
require 'view/game/part/blocker'
require 'view/game/part/borders'
require 'view/game/part/cities'
require 'view/game/part/label'
require 'view/game/part/location_name'
require 'view/game/part/revenue'
require 'view/game/part/towns'
require 'view/game/part/track'
require 'view/game/part/upgrades'
module View
module Game
class Tile < Snabberb::Component
needs :tile
needs :routes, default: [], store: true
# helper method to pass @tile and @region_use to every part
def render_tile_part(part_class, **kwargs)
h(part_class, region_use: @region_use, tile: @tile, **kwargs)
end
# if false, then the revenue is rendered by Part::Cities or Part::Towns
def should_render_revenue?
revenue = @tile.revenue_to_render
return false if revenue.empty?
return false if revenue.first.is_a?(Numeric) && (@tile.cities + @tile.towns).one?
return false if revenue.uniq.size > 1
return false if @tile.cities.sum(&:slots) < 3 && @tile.stops.size == 2
true
end
def render
# hash mapping the different regions to a number representing how much
# they've been used; it gets passed to the different tile parts and is
# modified before being passed on to the next one
@region_use = Hash.new(0)
children = []
render_revenue = should_render_revenue?
children << render_tile_part(Part::Track, routes: @routes) if @tile.exits.any?
children << render_tile_part(Part::Cities, show_revenue: !render_revenue) if @tile.cities.any?
children << render_tile_part(Part::Towns, routes: @routes) if @tile.towns.any?
# OO tiles have different rules...
rendered_loc_name = render_tile_part(Part::LocationName) if @tile.location_name && @tile.cities.size > 1
children << render_tile_part(Part::Revenue) if render_revenue
children << render_tile_part(Part::Label) if @tile.label
children << render_tile_part(Part::Upgrades) if @tile.upgrades.any?
children << render_tile_part(Part::Blocker)
rendered_loc_name = render_tile_part(Part::LocationName) if @tile.location_name && (@tile.cities.size <= 1)
@tile.reservations.each { |x| children << render_tile_part(Part::Reservation, reservation: x) }
children << render_tile_part(Part::Icons) if @tile.icons.any?
# borders should always be the top layer
children << h(Part::Borders, tile: @tile) if @tile.borders.any?
children << rendered_loc_name if rendered_loc_name
children.flatten!
h('g.tile', children)
end
end
end
end
|
@@ -25,13 +25,16 @@ module View
def should_render_revenue?
revenue = @tile.revenue_to_render
+ # special case: city with multi-revenue - no choice but to draw separate revenue
+ return true if revenue.any? { |r| !r.is_a?(Numeric) }
+
return false if revenue.empty?
return false if revenue.first.is_a?(Numeric) && (@tile.cities + @tile.towns).one?
return false if revenue.uniq.size > 1
- return false if @tile.cities.sum(&:slots) < 3 && @tile.stops.size == 2
+ return false if @tile.cities.sum(&:slots) < 3 && (@tile.cities + @tile.towns).size == 2
true
end
|
we call cities + towns . size a lot, maybe make a helper method on tiles
| 12,959
| 1
|
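The reviewer on this row notes that `(cities + towns).size` is computed in several places and suggests a helper method on tiles. A minimal Ruby sketch of that refactor; the class shape and the helper name `city_towns` are assumptions for illustration, not the project's actual tile API:

```ruby
# Minimal stand-in for a tile, with the suggested helper extracted.
class Tile
  attr_reader :cities, :towns

  def initialize(cities:, towns:)
    @cities = cities
    @towns = towns
  end

  # Hypothetical helper: every revenue-bearing city or town on the tile,
  # so callers write tile.city_towns.size instead of (cities + towns).size.
  def city_towns
    cities + towns
  end
end
```

With such a helper, the checks in `should_render_revenue?` would read `@tile.city_towns.one?` and `@tile.city_towns.size == 2`.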
"/*\n * (C) Copyright 2016-2021 Intel Corporation.\n *\n * SPDX-License-Identifier: BSD-2-Clause-Pat(...TRUNCATED)
| "@@ -947,6 +947,7 @@ out:\n \t\tD_ERROR(\"pool \"DF_UUID\" event %d failed: rc %d\\n\",\n \t\t\tDP_U(...TRUNCATED)
|
This will be removed.
| 171,235
| 1
|
"// Licensed to Elasticsearch B.V. under one or more contributor\n// license agreements. See the NOT(...TRUNCATED)
| "@@ -160,6 +160,11 @@ func (r *routeBuilder) profileHandler() (request.Handler, error) {\n \treturn (...TRUNCATED)
| "nit: `firehoseLogHandler` vs. `firehoseMiddleware` looks like a naming inconsistency? (`log` is not(...TRUNCATED)
| 32,732
| 1
|
"# The actions necessary for managing grade entry forms.\n\nclass GradeEntryFormsController < Applic(...TRUNCATED)
| "@@ -416,7 +416,8 @@ class GradeEntryFormsController < ApplicationController\n end\n end\n(...TRUNCATED)
|
Trailing whitespace detected.
| 38,006
| 1
|
"/*\n * SonarQube Java\n * Copyright (C) 2012-2018 SonarSource SA\n * mailto:info AT sonarsource DOT(...TRUNCATED)
| "@@ -40,6 +40,8 @@ import org.sonar.plugins.java.api.semantic.Symbol;\n \n public class BytecodeComp(...TRUNCATED)
|
Confirmed the issue on SQ side.
| 18,300
| 1
|
"'use strict';\n\n/**\n * @ngdoc function\n * @module ng\n * @name angular.injector\n * @kind functi(...TRUNCATED)
| "@@ -359,6 +359,8 @@ function annotate(fn, strictDi, name) {\n * * {@link auto.$provide#service ser(...TRUNCATED)
| "service decorator is not correct, as not only services can be decorated. You should call it **decor(...TRUNCATED)
| 77,485
| 1
|
"\"\"\"Generate and work with PEP 425 Compatibility Tags.\"\"\"\nfrom __future__ import absolute_imp(...TRUNCATED)
| "@@ -158,6 +158,8 @@ def get_supported(versions=None, noarch=False):\n \n abis.append('none')\n (...TRUNCATED)
| "Renaming this variable `arch` and flipping the values/logic in the surrounding code would make this(...TRUNCATED)
| 27,839
| 1
|
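The pip review above suggests renaming the negative `noarch` flag to a positive `arch` and flipping the surrounding logic. A generic sketch of that flip; the tag values and function shape here are illustrative assumptions, not pip's actual `pep425tags` code:

```python
def supported_tags(arch=True):
    """Return (python, abi, platform) tag triples.

    Using a positive ``arch`` flag lets the condition below read
    directly, instead of the double negative ``if not noarch:``.
    """
    tags = [('py3', 'none', 'any')]
    if arch:
        # Platform-specific tags only make sense when an
        # architecture-dependent build is requested.
        tags.insert(0, ('cp39', 'abi3', 'manylinux1_x86_64'))
    return tags
```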
"/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license a(...TRUNCATED)
| "@@ -641,8 +641,10 @@ public class DoFnOperator<InputT, OutputT>\n @Override\n public final void(...TRUNCATED)
|
Is this required on every element? I'd rather trigger this only if we set / remove a hold.
| 224,505
| 1
|
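The last review comment asks that the watermark-hold update fire only when a hold is actually set or removed, not on every element. A small Python sketch of that change-detection pattern; the names are illustrative, not the Flink runner's `DoFnOperator` API:

```python
class HoldTracker:
    """Notifies downstream only when the set of holds actually changes."""

    def __init__(self):
        self._holds = set()
        self.notifications = 0  # how many times downstream was updated

    def _notify(self):
        # Stand-in for recomputing and emitting the output watermark.
        self.notifications += 1

    def set_hold(self, timestamp):
        if timestamp not in self._holds:  # skip no-op updates per element
            self._holds.add(timestamp)
            self._notify()

    def remove_hold(self, timestamp):
        if timestamp in self._holds:
            self._holds.remove(timestamp)
            self._notify()
```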
End of preview.