deep-analysis-research committed on
Commit db68f2f · verified · 1 Parent(s): e154e0f

update en&ja README

Files changed (4)
  1. .gitattributes +1 -0
  2. README-ja.md +109 -0
  3. README.md +65 -94
  4. tech-dev.png +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tech-dev.png filter=lfs diff=lfs merge=lfs -text
README-ja.md ADDED
@@ -0,0 +1,109 @@
+ ---
+ license: apache-2.0
+ license_link: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/LICENSE
+ language:
+ - en
+ pipeline_tag: text-generation
+ base_model: Qwen/Qwen2.5-32B
+ tags:
+ - chat
+ library_name: transformers
+ ---
+
+ # Flux-Japanese-Qwen2.5-32B-Instruct-V1.0
+ [[English](./README.md)] [**Japanese**]
+ Flux-Japanese-Qwen2.5-32B-Instruct-V1.0 is an open-weight large language model with 32 billion parameters. It features deep knowledge of Japanese together with strong reasoning and language capabilities. It was trained from Qwen2.5-32B-Instruct and is released under the Apache 2.0 open-source license.
+
+ # 🏆 Open-Japanese-LLM-Leaderboard Rank 1
+ On the [Open LLM Japanese LLM Leaderboard](https://huggingface.co/spaces/deep-analysis-research/open-japanese-llm-leaderboard), the results are as follows:
+ - Base model: Qwen2.5-32B-Instruct (average score: 0.6553)
+ - Previous top-ranked model: D2IL-Japanese-Qwen2.5-32B-Instruct-v0.1 (average score: 0.7100)
+ - This model: Flux-Japanese-Qwen2.5-32B-V1.0 (average score: 0.7417)
+ Compared with the base model Qwen2.5-32B-Instruct, this model improves substantially on most tasks, with especially large gains in FA (Fundamental Analysis), SUM (Summarization), and CG (Code Generation).
+
+ | Tasks | Qwen2.5-32B-Instruct | D2IL-Japanese-Qwen2.5-32B-Instruct-v0.1 | Flux-Japanese-Qwen2.5-32B-Instruct-V1.0 |
+ |----------------------------------|----------------------|------------------------------------------|------------------------------------------|
+ | NLI - Natural Language Inference | 0.8106 | 0.8793 | 0.8846 (+0.0740) |
+ | QA - Question Answering | 0.5410 | 0.5897 | 0.5965 (+0.0555) |
+ | RC - Reading Comprehension | 0.9047 | 0.9005 | 0.9261 (+0.0214) |
+ | MC - Multiple-Choice QA | 0.8966 | 0.9139 | 0.9128 (+0.0162) |
+ | EL - Entity Linking | 0.5894 | 0.6782 | 0.6975 (+0.1081) |
+ | FA - Fundamental Analysis | 0.2737 | 0.4321 | 0.5185 (+0.2448) |
+ | MR - Mathematical Reasoning | 0.9440 | 0.9380 | 0.9420 (-0.0020) |
+ | MT - Machine Translation | 0.8479 | 0.7954 | 0.8389 (-0.0090) |
+ | HE - Human Exam Questions | 0.7757 | 0.7902 | 0.7987 (+0.0230) |
+ | CG - Code Generation | 0.5281 | 0.6084 | 0.7610 (+0.2329) |
+ | SUM - Summarization | 0.0970 | 0.2843 | 0.2827 (+0.1857) |
+ | **Average** | **0.6553** | **0.7100** | **0.7417 (+0.0864)** |
+
+
+ # 🚀 Consistent General Performance
+ While training substantially improved its Japanese capabilities, Flux-Japanese-Qwen2.5-32B-Instruct-V1.0 retains its performance on general and English-language tasks, staying within 1% of the base model Qwen2.5-32B-Instruct. The evaluation is based on [simple-evals](https://github.com/deep-analysis-research/simple-evals).
+
+ | Tasks | Dataset | Qwen2.5-32B-Instruct | Flux-Japanese-Qwen2.5-32B-Instruct-V1.0 |
+ |---------------------|----------------|----------------------|------------------------------------------|
+ | General Tasks | MMLU-redux | 80.37 | 80.03 (-0.34) |
+ | | GPQA-Diamond | 46.11 | 47.32 (+1.21) |
+ | | MMLU | 82.84 | 83.39 (+0.55) |
+ | Math Tasks | MATH-500 | 78.14 | 78.50 (+0.36) |
+ | | AIME24 | 17.06 | 17.92 (+0.86) |
+ | | AIME25 | 16.25 | 14.58 (-1.67) |
+ | | MT-AIME24 | 12.73 | 12.97 (+0.24) |
+ | Multilingual Tasks | Multi-IF | 71.85 | 63.45 (-8.40) |
+ | | INCLUDE | 65.16 | 64.64 (-0.52) |
+ | | MMMLU | 73.43 | 74.08 (+0.65) |
+ | Coding Tasks | HumanEval | 87.93 | 86.51 (-1.42) |
+ | Alignment Tasks | IFEval | 78.37 | 77.46 (-0.91) |
+ | **Average** | | **59.17** | **58.40 (-0.77)** |
+
+
+ # ⚙️ Technical Development
+
+ ![](./tech-dev.png)
+
+ - **Phase 1: Interpretability Analysis & Pinpoint Tuning.** To improve Japanese capabilities (knowledge, reasoning, and language), mechanistic interpretability (MI) techniques are used to identify the relevant pathways and circuits, and Pinpoint Tuning is then applied to only 5% of the parameters. This yields three expert models, each specialized in Japanese knowledge, reasoning, or language (see the sketch after this list).
+ - **Phase 2: Pinpoint Merging.** Pinpoint Merging is applied to the three expert models to build a unified, Japanese-specialized model with expert-level performance across Japanese knowledge, reasoning, and language [[Code of Pinpoint Merging](https://github.com/deep-analysis-research/SLTA)].
+
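+ The exact tuning code is not published on this card, so the following is only a minimal illustrative sketch of the Pinpoint Tuning idea: assume a precomputed boolean mask (the hypothetical `circuit_mask` below) marking the ~5% of parameters selected by the interpretability analysis, and zero all other gradients so that only those parameters are updated.
+ ```python
+ import torch
+
+ def pinpoint_tuning_step(model, batch, optimizer, circuit_mask):
+     """One update step touching only the masked ~5% of parameters.
+
+     circuit_mask maps parameter name -> bool tensor of the same shape,
+     True where the interpretability analysis selected the weight.
+     Illustrative only; the actual training code may differ.
+     """
+     outputs = model(**batch)  # batch must include labels so a loss is returned
+     outputs.loss.backward()
+     with torch.no_grad():
+         for name, param in model.named_parameters():
+             if param.grad is None:
+                 continue
+             # Zero the gradient everywhere outside the identified circuit.
+             param.grad.mul_(circuit_mask[name].to(param.grad.dtype))
+     optimizer.step()
+     optimizer.zero_grad()
+ ```
+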
+ # 🚩 Quickstart
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ device = "cuda"  # the device to load the model onto
+
+ model = AutoModelForCausalLM.from_pretrained(
+     "Deep-Analysis-Research/Flux-Japanese-Qwen2.5-32B-V1.0",
+     torch_dtype="auto",
+     device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained("Deep-Analysis-Research/Flux-Japanese-Qwen2.5-32B-V1.0")
+
+ # Japanese prompt: "Please give a brief introduction to large language models."
+ prompt = "大規模言語モデルについて簡単に紹介してください。"
+ messages = [
+     {"role": "user", "content": prompt}
+ ]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(device)
+
+ generated_ids = model.generate(
+     model_inputs.input_ids,
+     max_new_tokens=512
+ )
+ # Keep only the newly generated tokens, dropping the echoed prompt.
+ generated_ids = [
+     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ ```
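+ The call above uses the model's default decoding settings; standard `generate` arguments override them per call. A minimal illustrative example (the specific values are placeholders, not tuned recommendations from the model authors):
+ ```python
+ generated_ids = model.generate(
+     model_inputs.input_ids,
+     max_new_tokens=512,
+     do_sample=True,    # sample instead of greedy decoding
+     temperature=0.7,   # soften the next-token distribution
+     top_p=0.9          # nucleus sampling cutoff
+ )
+ ```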
+ # 💡 Terms of Use
+ We have employed a range of technical approaches to improve the safety and reliability of this model, including measures to reduce bias and harmful responses.
+ However, like all large language models (LLMs), this model may still produce unintended responses that are inaccurate, misleading, or biased. By downloading, using, or interacting with this model, you acknowledge and agree to the following:
+ 1. Prohibited Uses
+ - You may not use this model for illegal, unlawful, or malicious activities, including but not limited to fraud, abuse, harassment, privacy violations, and the creation or dissemination of malicious content.
+ 2. User Responsibility
+ - You are solely responsible for how you use this model and for any outcomes arising from its use.
+ - The authors, publishers, and institutions involved in releasing this model accept no liability for any consequences arising from its use.
+ 3. No Warranty
+ - This model is provided "as is", without warranty of any kind.
README.md CHANGED
@@ -10,58 +10,69 @@ tags:
  library_name: transformers
  ---
  
- # Qwen2.5-32B-Instruct
- 
- ## Introduction
- 
- Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- 
- - Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- - Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- - **Long-context support** up to 128K tokens, with generation of up to 8K tokens.
- - **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
- 
- **This repo contains the instruction-tuned 32B Qwen2.5 model**, which has the following features:
- - Type: Causal Language Models
- - Training Stage: Pretraining & Post-training
- - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- - Number of Parameters: 32.5B
- - Number of Parameters (Non-Embedding): 31.0B
- - Number of Layers: 64
- - Number of Attention Heads (GQA): 40 for Q and 8 for KV
- - Context Length: Full 131,072 tokens and generation 8192 tokens
- - Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
- 
- For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
- 
- ## Requirements
- 
- The code of Qwen2.5 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
- 
- With `transformers<4.37.0`, you will encounter the following error:
- ```
- KeyError: 'qwen2'
- ```
- 
- ## Quickstart
- 
- Here is a code snippet with `apply_chat_template` showing how to load the tokenizer and model and how to generate content.
- 
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
- 
- model_name = "Qwen/Qwen2.5-32B-Instruct"
  
  model = AutoModelForCausalLM.from_pretrained(
-     model_name,
      torch_dtype="auto",
      device_map="auto"
  )
- tokenizer = AutoTokenizer.from_pretrained(model_name)
  
- prompt = "Give me a short introduction to large language model."
  messages = [
-     {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
      {"role": "user", "content": prompt}
  ]
  text = tokenizer.apply_chat_template(
@@ -69,10 +80,10 @@ text = tokenizer.apply_chat_template(
      tokenize=False,
      add_generation_prompt=True
  )
- model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
  
  generated_ids = model.generate(
-     **model_inputs,
      max_new_tokens=512
  )
  generated_ids = [
@@ -81,52 +92,12 @@ generated_ids = [
  
  response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
  ```
- 
- ### Processing Long Texts
- 
- The current `config.json` is set for context length up to 32,768 tokens.
- To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
- 
- For supported frameworks, you could add the following to `config.json` to enable YaRN:
- ```json
- {
-   ...,
-   "rope_scaling": {
-     "factor": 4.0,
-     "original_max_position_embeddings": 32768,
-     "type": "yarn"
-   }
- }
- ```
- 
- For deployment, we recommend using vLLM.
- Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
- Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
- We advise adding the `rope_scaling` configuration only when processing long contexts is required.
- 
- ## Evaluation & Performance
- 
- Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
- 
- For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
- 
- ## Citation
- 
- If you find our work helpful, feel free to cite us.
- 
- ```
- @misc{qwen2.5,
-     title = {Qwen2.5: A Party of Foundation Models},
-     url = {https://qwenlm.github.io/blog/qwen2.5/},
-     author = {Qwen Team},
-     month = {September},
-     year = {2024}
- }
- 
- @article{qwen2,
-     title={Qwen2 Technical Report},
-     author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
-     journal={arXiv preprint arXiv:2407.10671},
-     year={2024}
- }
- ```
  library_name: transformers
  ---
  
+ # Flux-Japanese-Qwen2.5-32B-Instruct-V1.0
+ [**English**] [[Japanese](./README-ja.md)]
+ Flux-Japanese-Qwen2.5-32B-Instruct-V1.0 is a 32-billion-parameter open-weight large language model with strong performance in Japanese knowledge, reasoning, and language. It is trained from Qwen2.5-32B-Instruct and released under the Apache 2.0 open-source license.
+
+ # 🏆 Open-Japanese-LLM-Leaderboard Rank 1
+ On the [Open LLM Japanese LLM Leaderboard](https://huggingface.co/spaces/deep-analysis-research/open-japanese-llm-leaderboard), Qwen2.5-32B-Instruct scores 0.6553, the former top-ranked D2IL-Japanese-Qwen2.5-32B-Instruct-v0.1 scores 0.7100, and Flux-Japanese-Qwen2.5-32B-V1.0 scores 0.7417. Compared with the original Qwen2.5-32B-Instruct, Flux-Japanese-Qwen2.5-32B-V1.0 demonstrates significant gains across most tasks, with especially strong improvements in FA (Fundamental Analysis), SUM (Summarization), and CG (Code Generation).
+ | Tasks | Qwen2.5-32B-Instruct | D2IL-Japanese-Qwen2.5-32B-Instruct-v0.1 | Flux-Japanese-Qwen2.5-32B-Instruct-V1.0 |
+ |----------------------------------|----------------------|------------------------------------------|------------------------------------------|
+ | NLI - Natural Language Inference | 0.8106 | 0.8793 | 0.8846 (+0.0740) |
+ | QA - Question Answering | 0.5410 | 0.5897 | 0.5965 (+0.0555) |
+ | RC - Reading Comprehension | 0.9047 | 0.9005 | 0.9261 (+0.0214) |
+ | MC - Multiple-Choice QA | 0.8966 | 0.9139 | 0.9128 (+0.0162) |
+ | EL - Entity Linking | 0.5894 | 0.6782 | 0.6975 (+0.1081) |
+ | FA - Fundamental Analysis | 0.2737 | 0.4321 | 0.5185 (+0.2448) |
+ | MR - Mathematical Reasoning | 0.9440 | 0.9380 | 0.9420 (-0.0020) |
+ | MT - Machine Translation | 0.8479 | 0.7954 | 0.8389 (-0.0090) |
+ | HE - Human Exam Questions | 0.7757 | 0.7902 | 0.7987 (+0.0230) |
+ | CG - Code Generation | 0.5281 | 0.6084 | 0.7610 (+0.2329) |
+ | SUM - Summarization | 0.0970 | 0.2843 | 0.2827 (+0.1857) |
+ | **Average** | **0.6553** | **0.7100** | **0.7417 (+0.0864)** |
+
+
+ # 🚀 Consistent General Performance
+ While Flux-Japanese-Qwen2.5-32B-Instruct-V1.0 has been specifically tuned for Japanese, its performance on general capabilities and English tasks remains within 1% of Qwen2.5-32B-Instruct, indicating negligible impact. The evaluation is based on [simple-evals](https://github.com/deep-analysis-research/simple-evals).
+
+ | Tasks | Dataset | Qwen2.5-32B-Instruct | Flux-Japanese-Qwen2.5-32B-Instruct-V1.0 |
+ |---------------------|----------------|----------------------|------------------------------------------|
+ | General Tasks | MMLU-redux | 80.37 | 80.03 (-0.34) |
+ | | GPQA-Diamond | 46.11 | 47.32 (+1.21) |
+ | | MMLU | 82.84 | 83.39 (+0.55) |
+ | Math Tasks | MATH-500 | 78.14 | 78.50 (+0.36) |
+ | | AIME24 | 17.06 | 17.92 (+0.86) |
+ | | AIME25 | 16.25 | 14.58 (-1.67) |
+ | | MT-AIME24 | 12.73 | 12.97 (+0.24) |
+ | Multilingual Tasks | Multi-IF | 71.85 | 63.45 (-8.40) |
+ | | INCLUDE | 65.16 | 64.64 (-0.52) |
+ | | MMMLU | 73.43 | 74.08 (+0.65) |
+ | Coding Tasks | HumanEval | 87.93 | 86.51 (-1.42) |
+ | Alignment Tasks | IFEval | 78.37 | 77.46 (-0.91) |
+ | **Average** | | **59.17** | **58.40 (-0.77)** |
+
+
+ # ⚙️ Technical Development
+
+ ![](./tech-dev.png)
+
+ - **Phase 1: Interpretability Analysis & Pinpoint Tuning.** For Japanese knowledge, reasoning, and language, leverage mechanistic interpretability techniques to identify independent pathways/circuits, and apply targeted pinpoint tuning to only 5% of the parameters. This produces three expert models specialized respectively in Japanese knowledge, reasoning, and language.
+ - **Phase 2: Pinpoint Merging.** Perform pinpoint parameter merging on the three expert models to obtain a unified model that reaches expert-level performance across Japanese knowledge, reasoning, and language [[Code of Pinpoint Merging](https://github.com/deep-analysis-research/SLTA)]; an illustrative sketch follows this list.
+
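+ The linked SLTA repository contains the actual merging code; as rough intuition only, the sketch below merges experts by copying each expert's parameter deltas back into the base model and averaging positions where experts overlap. The function `pinpoint_merge` and its `threshold` heuristic are hypothetical simplifications, not the released algorithm.
+ ```python
+ import torch
+
+ def pinpoint_merge(base_state, expert_states, threshold=0.0):
+     """Merge expert state_dicts into a base state_dict.
+
+     Positions whose absolute delta from the base exceeds `threshold`
+     are treated as pinpointed; overlapping pinpoints are averaged.
+     Illustrative only; see the SLTA repository for the real method.
+     """
+     merged = {name: tensor.clone() for name, tensor in base_state.items()}
+     for name, base_param in base_state.items():
+         delta_sum = torch.zeros_like(base_param, dtype=torch.float32)
+         count = torch.zeros_like(base_param, dtype=torch.float32)
+         for expert in expert_states:
+             delta = expert[name].float() - base_param.float()
+             mask = (delta.abs() > threshold).float()  # pinpointed positions
+             delta_sum += delta * mask
+             count += mask
+         # Average overlapping deltas; untouched positions keep base values.
+         merged[name] += (delta_sum / count.clamp(min=1.0)).to(base_param.dtype)
+     return merged
+ ```
+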
+ # 🚩 Quickstart
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
+ device = "cuda"  # the device to load the model onto
  
  model = AutoModelForCausalLM.from_pretrained(
+     "Deep-Analysis-Research/Flux-Japanese-Qwen2.5-32B-V1.0",
      torch_dtype="auto",
      device_map="auto"
  )
+ tokenizer = AutoTokenizer.from_pretrained("Deep-Analysis-Research/Flux-Japanese-Qwen2.5-32B-V1.0")
  
+ prompt = "大規模言語モデルについて簡単に紹介してください。"
  messages = [
      {"role": "user", "content": prompt}
  ]
  text = tokenizer.apply_chat_template(
      tokenize=False,
      add_generation_prompt=True
  )
+ model_inputs = tokenizer([text], return_tensors="pt").to(device)
  
  generated_ids = model.generate(
+     model_inputs.input_ids,
      max_new_tokens=512
  )
  generated_ids = [
  
  response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
  ```
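+ For token-by-token output, `transformers` provides `TextStreamer`, which can be passed to `generate`. A minimal sketch, reusing `model`, `tokenizer`, and `model_inputs` from the snippet above:
+ ```python
+ from transformers import TextStreamer
+
+ # Stream decoded text to stdout as tokens are produced,
+ # skipping the echoed prompt and special tokens.
+ streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
+ generated_ids = model.generate(
+     model_inputs.input_ids,
+     max_new_tokens=512,
+     streamer=streamer
+ )
+ ```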
+ # 💡 Terms of Use
+ We have employed various techniques to reduce bias, harmful outputs, and other risks in the model. While these efforts help improve safety and reliability, the model, like all Large Language Models, may still generate inaccurate, misleading, biased, or otherwise undesirable content. By downloading, using, or interacting with this model, you acknowledge these limitations and agree to the following:
+ 1. Prohibited Uses
+ - You may NOT use this model for any illegal, unlawful, or harmful activities, including but not limited to fraud, abuse, harassment, privacy violations, or the creation/dissemination of malicious content.
+ 2. User Responsibility
+ - You are solely responsible for how you use the model and for any outcomes that result from its use.
+ - The authors and institutions involved in releasing this model do NOT accept liability for any consequences arising from its use.
+ 3. No Warranty
+ - The model is provided “as is” without any warranties or guarantees.
tech-dev.png ADDED

Git LFS Details

  • SHA256: 12642d10ef5500edc408a30393b2f552066e567771be2956a2018d5e541a9a20
  • Pointer size: 131 Bytes
  • Size of remote file: 372 kB