|
|
--- |
|
|
language: en |
|
|
license: apache-2.0 |
|
|
datasets: |
|
|
- ESGBERT/environmental_2k |
|
|
tags: |
|
|
- ESG |
|
|
- environmental |
|
|
--- |
|
|
|
|
|
# Model Card for EnvRoBERTa-environmental |
|
|
|
|
|
## Model Description |
|
|
|
|
|
Based on [this paper](https://www.sciencedirect.com/science/article/pii/S1544612324000096), this is the EnvRoBERTa-environmental language model, trained to classify environmental text samples in the ESG domain.
|
|
|
|
|
*Note: We generally recommend choosing the [EnvironmentalBERT-environmental](https://huggingface.co/ESGBERT/EnvironmentalBERT-environmental) model, since it is quicker, less resource-intensive, and only marginally worse in performance.*
|
|
|
|
|
Using the [EnvRoBERTa-base](https://huggingface.co/ESGBERT/EnvRoBERTa-base) model as a starting point, the EnvRoBERTa-environmental language model is further fine-tuned on a 2k environmental dataset to detect environmental text samples.
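
For illustration, a minimal fine-tuning sketch along these lines is shown below, using the Hugging Face `Trainer` API. This is not the authors' exact training setup: the hyperparameters, the `num_labels=2` classification head, and the dataset column names (`text` and `label`) are assumptions, so check the [environmental_2k](https://huggingface.co/datasets/ESGBERT/environmental_2k) dataset card before running it.

```python
# Hypothetical fine-tuning sketch, NOT the authors' exact training setup.
# Assumes the dataset has a "train" split with "text" and "label" columns.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("ESGBERT/environmental_2k")

tokenizer = AutoTokenizer.from_pretrained("ESGBERT/EnvRoBERTa-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "ESGBERT/EnvRoBERTa-base", num_labels=2  # binary: environmental vs. not
)

def tokenize(batch):
    # Pad to a fixed length so the default data collator can batch tensors
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="envroberta-environmental",
    num_train_epochs=3,              # assumed; the paper's schedule may differ
    per_device_train_batch_size=16,  # assumed
    learning_rate=2e-5,              # assumed
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized["train"])
trainer.train()
```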
|
|
|
|
|
## How to Get Started With the Model |
|
|
|
|
|
See these tutorials on Medium for a guide on [model usage](https://medium.com/@schimanski.tobi/analyzing-esg-with-ai-and-nlp-tutorial-1-report-analysis-towards-esg-risks-and-opportunities-8daa2695f6c5?source=friends_link&sk=423e30ac2f50ee4695d258c2c4d54aa5), [large-scale analysis](https://medium.com/@schimanski.tobi/analyzing-esg-with-ai-and-nlp-tutorial-2-large-scale-analyses-of-environmental-actions-0735cc8dc9c2?source=friends_link&sk=13a5aa1999fbb11e9eed4a0c26c40efa), and [fine-tuning](https://medium.com/@schimanski.tobi/analyzing-esg-with-ai-and-nlp-tutorial-3-fine-tune-your-own-models-e3692fc0b3c0?source=friends_link&sk=49dc9f00768e43242fc1a76aa0969c70). |
|
|
|
|
|
You can use the model with a pipeline for text classification: |
|
|
|
|
|
```python |
|
|
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline |
|
|
|
|
|
tokenizer_name = "ESGBERT/EnvRoBERTa-environmental" |
|
|
model_name = "ESGBERT/EnvRoBERTa-environmental" |
|
|
|
|
|
model = AutoModelForSequenceClassification.from_pretrained(model_name) |
|
|
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, model_max_length=512)
|
|
|
|
|
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer) # set device=0 to use GPU |
|
|
|
|
|
# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline |
|
|
print(pipe("Scope 1 emissions are reported here on a like-for-like basis against the 2013 baseline and exclude emissions from additional vehicles used during repairs.", padding=True, truncation=True)) |
|
|
``` |
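
The pipeline returns one dictionary per input, containing the predicted label and a confidence score. As a small usage sketch (the example sentences here are made up for illustration), you can also pass a list of texts to classify several samples in one call:

```python
# Classify several sentences at once; the pipeline returns one
# {"label": ..., "score": ...} dict per input sentence.
sentences = [
    "We reduced our water consumption by 15% compared to last year.",
    "The board approved the quarterly dividend payment.",
]
for result in pipe(sentences, padding=True, truncation=True):
    print(result["label"], round(result["score"], 4))
```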
|
|
|
|
|
## More details can be found in the paper |
|
|
|
|
|
```bibtex |
|
|
@article{schimanski_ESGBERT_2024, |
|
|
title = {Bridging the gap in ESG measurement: Using NLP to quantify environmental, social, and governance communication}, |
|
|
journal = {Finance Research Letters}, |
|
|
volume = {61}, |
|
|
pages = {104979}, |
|
|
year = {2024}, |
|
|
issn = {1544-6123}, |
|
|
doi = {10.1016/j.frl.2024.104979},
|
|
url = {https://www.sciencedirect.com/science/article/pii/S1544612324000096}, |
|
|
author = {Tobias Schimanski and Andrin Reding and Nico Reding and Julia Bingler and Mathias Kraus and Markus Leippold}, |
|
|
} |
|
|
``` |