dheeena committed
Commit 93948e2 · verified · 1 Parent(s): add9d34

Add files using upload-large-folder tool

Files changed (50)
  1. .env +2 -0
  2. .gitignore +3 -0
  3. README.md +141 -0
  4. containers/etl/Dockerfile +37 -0
  5. containers/etl/__init__.py +0 -0
  6. containers/etl/common.py +119 -0
  7. containers/etl/requirements.txt +2 -0
  8. containers/etl/run.py +38 -0
  9. containers/jupyter/Dockerfile +28 -0
  10. containers/jupyter/requirements.txt +1 -0
  11. containers/physionet/Dockerfile +11 -0
  12. containers/physionet/entrypoint.sh +11 -0
  13. containers/physionet/run.sh +6 -0
  14. containers/streamlit/Dockerfile +31 -0
  15. containers/streamlit/README.md +13 -0
  16. containers/streamlit/app.py +30 -0
  17. containers/streamlit/requirements.txt +3 -0
  18. containers/train/Dockerfile +31 -0
  19. containers/train/requirements.txt +3 -0
  20. containers/train/run.py +83 -0
  21. docker-compose.yaml +42 -0
  22. resources/baseline.png +0 -0
  23. venv/.gitignore +2 -0
  24. venv/bin/Activate.ps1 +248 -0
  25. venv/bin/activate +76 -0
  26. venv/bin/activate.csh +27 -0
  27. venv/bin/activate.fish +69 -0
  28. venv/bin/f2py +7 -0
  29. venv/bin/hf +7 -0
  30. venv/bin/huggingface-cli +7 -0
  31. venv/bin/normalizer +7 -0
  32. venv/bin/numpy-config +7 -0
  33. venv/bin/pip +7 -0
  34. venv/bin/pip3 +7 -0
  35. venv/bin/pip3.13 +7 -0
  36. venv/bin/tiny-agents +7 -0
  37. venv/bin/tqdm +7 -0
  38. venv/bin/transformers +7 -0
  39. venv/bin/transformers-cli +7 -0
  40. venv/lib/python3.13/site-packages/huggingface_hub-0.36.0.dist-info/INSTALLER +1 -0
  41. venv/lib/python3.13/site-packages/huggingface_hub-0.36.0.dist-info/LICENSE +201 -0
  42. venv/lib/python3.13/site-packages/huggingface_hub-0.36.0.dist-info/METADATA +334 -0
  43. venv/lib/python3.13/site-packages/huggingface_hub-0.36.0.dist-info/RECORD +336 -0
  44. venv/lib/python3.13/site-packages/huggingface_hub-0.36.0.dist-info/REQUESTED +0 -0
  45. venv/lib/python3.13/site-packages/huggingface_hub-0.36.0.dist-info/entry_points.txt +8 -0
  46. venv/lib/python3.13/site-packages/typing_extensions.py +0 -0
  47. venv/pyvenv.cfg +5 -0
  48. volumes/notebooks/.gitignore +1 -0
  49. volumes/notebooks/etl.ipynb +191 -0
  50. volumes/physionet/.gitignore +2 -0
.env ADDED
@@ -0,0 +1,2 @@
+ PHYSIONET_PASSWORD=my_password
+ PHYSIONET_USER=my_username
.gitignore ADDED
@@ -0,0 +1,3 @@
+ *.env
+ *.jpg
+ *.ipynb_checkpoints
README.md ADDED
@@ -0,0 +1,141 @@
+ # A new generative model for radiology
+
+ tl;dr you should include text inputs along with images
+
+ [![YourActionName Actions Status](https://github.com/nathansutton/prerad/workflows/CI/badge.svg)](https://github.com/nathansutton/prerad/actions)
+
+ Available @ https://nathansutton-prerad.hf.space
+
+ Machine learning in radiology has come a long way. For a long time the goal was simply to make a probability estimate of different conditions available to the radiologist at the time of interpretation. As evidence, see any of the hundreds of AI vendors that have commercialized computer vision algorithms. On the academic frontier, recent advances have made it possible to generate realistic-sounding radiology reports directly from an image. The first paper I found describing such a model was from 2017, but there have been many more recently with the onset of transformers.
+
+ However, every example architecture I have found suffers from the same structural problem.
+
+ __They aren't answering a clinical question.__
+
+ When a provider orders an imaging study, they are asking the radiologist to combine their education and experience to answer a clinical question. Unlike most specialist consultations, the format is asynchronous. Occasional telephone or face-to-face consults do occur for time-sensitive findings, but this is not the norm. Instead, the radiology report is the primary product of this provider-radiologist consultation. This document's purpose is to answer the clinical question, or indication. It usually has a succinct 'impression' section at the end. For example…
+
+ ```
+ INDICATION: 36yo M with hypoxia // ?pna, aspiration.
+
+ FINDINGS:
+
+ PA and lateral views of the chest provided. The lungs are adequately aerated.
+ There is a focal consolidation at the left lung base adjacent to the lateral
+ hemidiaphragm. There is mild vascular engorgement. There is bilateral apical
+ pleural thickening. The cardiomediastinal silhouette is remarkable for
+ arch calcifications. The heart is top normal in size.
+
+ IMPRESSION:
+
+ Focal consolidation at the left lung base, possibly representing aspiration
+ or pneumonia. Central vascular engorgement.
+ ```
+
+ __Incorrect Architectures__
+ Medical images alone are not sufficient to explain why a provider ordered an imaging study. For example, a provider might order a chest x-ray because their patient is presenting with shortness of breath and they suspect pneumonia. In another case, they might suspect a fracture after a motor vehicle collision and order a chest x-ray to rule out a broken rib. The same image could answer either of these clinical questions.
+ In my search I found over ten studies since 2017 describing model architectures that could conditionally generate entire radiology reports from an image (Jing et al. 2017, Li et al. 2018, Xue et al. 2018, Singh et al. 2019, Yuan et al. 2019, Chen et al. 2020, Miura et al. 2020, Fenglin et al. 2021, Nooralahzadeh et al. 2021, Sirshar et al. 2022, Chen et al. 2022, Yang et al. 2022). Unfortunately, none had text inputs. The one exception was a recent paper that included the full-text radiology report as a textual input alongside the image, but its goal was to remove hallucinated references by cleaning up the data used for model training (Ramesh et al. 2022). These oversights are a problem because all the variation in the generated reports comes from the images. Such models cannot answer a clinical question posed in text by the ordering provider.
+
+ ![](./resources/baseline.png)
+
+ __Better Architectures__
+ To realistically describe what a radiologist does when writing a report, a deep learning model needs to accept the same inputs. This means the conditional generation of radiology reports should include both image and text inputs and produce a text output. This year Salesforce released a new transformer architecture particularly suited to this type of multi-modal problem (Li et al. 2022). BLIP pairs a dual text-and-vision encoder with a text decoder. This allows it to continue generating new text for a radiology report from a given prompt's starting point. Lucky for us, the first paragraph of most radiology reports is the clinical question!
+ This makes conditionally generating a radiology report possible in a couple of lines of code.
+
+ ```
+ from PIL import Image
+ from transformers import BlipForConditionalGeneration, BlipProcessor
+
+ # read in the model
+ processor = BlipProcessor.from_pretrained("nathansutton/generate-cxr")
+ model = BlipForConditionalGeneration.from_pretrained("nathansutton/generate-cxr")
+
+ # your data
+ my_image = 'my-chest-x-ray.jpg'
+ my_indication = 'RLL crackles, eval for pneumonia'
+
+ # process the inputs
+ inputs = processor(
+     images=Image.open(my_image),
+     text='indication:' + my_indication,
+     return_tensors="pt"
+ )
+
+ # generate an entire radiology report
+ output = model.generate(**inputs, max_length=512)
+ report = processor.decode(output[0], skip_special_tokens=True)
+ ```
+
+ __Simplified Application__
+ Starting from the base BLIP image-captioning model, I fine-tuned a causal language model to generate radiology reports from a chest x-ray and a clinical prompt. The data used for fine-tuning were derived from the MIMIC critical care database. Specifically, I cross-referenced the original radiology reports in the MIMIC-CXR project with the JPG images available in the MIMIC-CXR-JPG project.
+ More information on how to reproduce these labels can be found in the corresponding GitHub repository.
+
+ Does it work? Let's go back to our original radiology report and perturb it with two different clinical indications. On the left we show the original question for this image ('question pneumonia') and on the right a fictitious concern ('question pneumothorax'). The original reference report is quoted above. You can play around with your own de-identified images in an interactive web application graciously hosted by Hugging Face Spaces.
+
+ ![](./resources/streamlit.png)
+
+ The same image with two different clinical prompts. By changing the prompt to 'chest pain, history of pneumothorax' the model changed its answer in response to a different clinical question.
+
+ This simplified example was meant to demonstrate one concept: conditionally generated radiology reports should include text inputs alongside the medical images to answer a clinical question. Obviously this kind of automation is not intended to replace radiologists, but it could help them quickly template their reports so they aren't starting from scratch.
+
+ In the bigger scheme of things, I have seen this kind of disparity between the questions technologists are answering and the questions providers are asking repeated across healthcare. Bring a provider on board to collaborate, and you'll be rewarded with genuinely useful models.
+
+ ## Data
+ All data were derived from MIMIC and require signing a data use agreement with PhysioNet. None are provided here.
+
+ ## Services
+
+ This repository exposes four components that are useful in a data science proof of concept.
+ - A container that downloads the MIMIC-CXR and MIMIC-CXR-JPG data from PhysioNet into a mounted volume (./volumes/physionet)
+ - A container that transforms the raw PhysioNet data into jsonlines files ready for training
+ - A container running Jupyter notebooks with common machine learning libraries (available @ localhost:8888). Any notebooks will persist in a mounted volume (./volumes/notebooks)
+ - A container running Streamlit that serves predictions from the model based on user inputs (available @ localhost:8501)
+
+ ## Usage
+
+ turn on the application
+ ```
+ docker-compose up
+ ```
+
+ download the data from physionet (passing any argument downloads the data; no arguments does nothing)
+ ```
+ docker-compose run physionet True
+ ```
+
+ run the etl migrations
+ ```
+ docker-compose run etl
+ ```
+
+ train the model
+ ```
+ docker-compose run train
+ ```
+
+ ## Structure
+
+ ```
+ |-- containers       # code
+ |   |-- etl          # transforms raw data from physionet into jsonlines files
+ |   |-- jupyter      # interactive notebooks
+ |   |-- physionet    # download the MIMIC-CXR and MIMIC-CXR-JPG data from physionet
+ |   |-- streamlit    # a small streamlit application to demo the model functionality
+ |   |-- train        # fine-tune the BLIP model on the prepared data
+ |-- volumes          # persistent data
+ |   |-- notebooks    # jupyter notebooks persisted here
+ |   |-- physionet    # physionet data is persisted here
+ ```
+
+ ## References
+
+ - Chen, Zhihong, et al. "Generating radiology reports via memory-driven transformer." arXiv preprint arXiv:2010.16056 (2020).
+ - Chen, Zhihong, et al. "Cross-modal memory networks for radiology report generation." arXiv preprint arXiv:2204.13258 (2022).
+ - Jing, Baoyu, Pengtao Xie, and Eric Xing. "On the automatic generation of medical imaging reports." arXiv preprint arXiv:1711.08195 (2017).
+ - Li, Junnan, et al. "BLIP: Bootstrapping language-image pre-training for unified vision-language understanding and generation." International Conference on Machine Learning. PMLR, 2022.
+ - Li, Yuan, et al. "Hybrid retrieval-generation reinforced agent for medical image report generation." Advances in Neural Information Processing Systems 31 (2018).
+ - Liu, Fenglin, et al. "Exploring and distilling posterior and prior knowledge for radiology report generation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
+ - Miura, Yasuhide, et al. "Improving factual completeness and consistency of image-to-text radiology report generation." arXiv preprint arXiv:2010.10042 (2020).
+ - Nooralahzadeh, Farhad, et al. "Progressive transformer-based generation of radiology reports." arXiv preprint arXiv:2102.09777 (2021).
+ - Ramesh, Vignav, Nathan A. Chi, and Pranav Rajpurkar. "Improving radiology report generation systems by removing hallucinated references to non-existent priors." Machine Learning for Health. PMLR, 2022.
+ - Singh, Sonit, et al. "From chest x-rays to radiology reports: a multimodal machine learning approach." 2019 Digital Image Computing: Techniques and Applications (DICTA). IEEE, 2019.
+ - Sirshar, Mehreen, et al. "Attention based automated radiology report generation using CNN and LSTM." PLoS ONE 17.1 (2022): e0262209.
+ - Xue, Yuan, et al. "Multimodal recurrent model with attention for automated radiology report generation." Medical Image Computing and Computer Assisted Intervention–MICCAI 2018. Springer, 2018.
+ - Yang, Shuxin, et al. "Knowledge matters: Chest radiology report generation with general and specific knowledge." Medical Image Analysis 80 (2022): 102510.
+ - Yuan, Jianbo, et al. "Automatic radiology report generation based on multi-view image fusion and medical concept enrichment." Medical Image Computing and Computer Assisted Intervention–MICCAI 2019. Springer, 2019.
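Editor's note: the perturbation experiment described in the README above changes only the text prompt between the two runs; the image tensor is identical. A minimal sketch of the prompt construction (the two indication strings come from the README's own examples; everything else here is illustrative):

```python
# the image is fixed; only the clinical question handed to the processor changes
indications = [
    "?pna, aspiration",                      # the original clinical question
    "chest pain, history of pneumothorax",   # the fictitious perturbation
]

# the model expects the indication prefixed exactly as during fine-tuning
prompts = ["indication:" + ind for ind in indications]

print(prompts[0])  # indication:?pna, aspiration
```

Each prompt would then be passed as the `text=` argument to the processor alongside the same image, producing two different generated reports.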
containers/etl/Dockerfile ADDED
@@ -0,0 +1,37 @@
+ FROM python:3.9-buster
+
+ RUN \
+     apt-get update && \
+     apt-get -y upgrade && \
+     apt-get clean && \
+     rm -rf /var/lib/apt/lists/*
+
+ RUN useradd --create-home app
+ WORKDIR /home/app
+
+ COPY requirements.txt /home/app/
+ COPY __init__.py /home/app/
+ COPY common.py /home/app/
+ COPY run.py /home/app/
+
+ RUN \
+     chown app:app /home/app/requirements.txt && \
+     chmod 0755 /home/app/requirements.txt && \
+     chown app:app /home/app/__init__.py && \
+     chmod 0755 /home/app/__init__.py && \
+     chown app:app /home/app/run.py && \
+     chmod 0755 /home/app/run.py && \
+     chown app:app /home/app/common.py && \
+     chmod 0755 /home/app/common.py
+
+ USER app
+
+ ENV VIRTUAL_ENV=/home/app/venv
+ RUN python3 -m venv $VIRTUAL_ENV
+ ENV PATH="$VIRTUAL_ENV/bin:$PATH"
+
+ RUN \
+     pip install --upgrade pip && \
+     pip install -r requirements.txt
+
+ CMD ["python", "run.py", "worker", "-l", "info"]
containers/etl/__init__.py ADDED
File without changes
containers/etl/common.py ADDED
@@ -0,0 +1,119 @@
+ import re
+ import pandas as pd
+ import os
+ import random
+ import jsonlines
+
+ def flatten_json(data: dict) -> dict:
+     """recursively flatten json elements, from https://www.geeksforgeeks.org/flattening-json-objects-in-python/"""
+     out = {}
+
+     def flatten(x, name=""):
+         # if the nested key-value pair is of dict type
+         if type(x) is dict:
+             for a in x:
+                 flatten(x[a], name + a + "_")
+         # if the nested key-value pair is of list type
+         elif type(x) is list:
+             i = 0
+             for a in x:
+                 flatten(a, name + str(i) + "_")
+                 i += 1
+         else:
+             out[name[:-1]] = x
+
+     flatten(data)
+     return out
+
+
+ def construct_report(string: str) -> tuple:
+
+     # normalize section headers
+     keywords = [x.replace(":", "").lower() for x in re.findall("[A-Z0-9][A-Z0-9. ]*:", string)]
+
+     # split the report into sections
+     paragraphs = re.findall(r"(\w+)*: *(.*?)(?=\s*(?:\w+:|$))", string.lower())
+     sections = []
+     for header, paragraph in paragraphs:
+         if header in [x.replace(" ", "_").replace("/", "_") for x in keywords]:
+             sections.append(":".join([header, ". ".join([x.strip() for x in paragraph.split(". ") if x])]))
+         else:
+             sections.append(" - ".join([header, ". ".join([x.strip() for x in paragraph.split(". ") if x])]))
+     sections = list(map(lambda a: a + "." if a[-1] != "." else a, sections))
+     paragraphs = re.findall(r"(\w+) *: *(.*?)(?=\s*(?:\w+:|$))", " ".join(sections))
+
+     report = {}
+     for header, paragraph in paragraphs:
+         sentence = paragraph.replace(" ", ". ").replace("..", ".").replace(" - .", " - ")
+         sentence = re.split(r"(?<!\w\.\w.)(?<![A-Z][a-z]\.)(?<=\.|\?)\s", sentence)
+         sentence = [x.strip() for x in sentence if len(x) > 2]
+         report[header.replace("_", " ")] = [x.replace("_", " ") for x in sentence]
+     report = flatten_json(report)
+     topic = [x.split("_")[0] for x in report.keys()]
+     body = [x for x in report.values()]
+     report = pd.DataFrame(list(zip(topic, body)))
+     try:
+         report.columns = ["paragraph", "sentence"]
+         report["ranking"] = report.index
+         report["screen"] = report["sentence"].apply(lambda x: 1 if 'interval change' in x or 'compar' in x or 'prior' in x or 'improved from' in x else 0)
+         reason = re.sub(" +", " ", " ".join([": ".join([key, value]) for (key, value) in collapse_report(report).items() if key in ['indication', 'history']]))
+         text = re.sub(" +", " ", " ".join([": ".join([key, value]) for (key, value) in collapse_report(report[report.screen == 0]).items() if key in ['findings', 'impression']]))
+         if 'findings' in text and 'impression' in text:
+             return reason, text
+         else:
+             return None, None
+     except ValueError:
+         return None, None
+
+
+ def collapse_report(report: pd.DataFrame) -> dict:
+     """take a report dataframe and return the paragraphs in its sections as key:value pairs"""
+     out = pd.merge(
+         report['paragraph'].drop_duplicates(),
+         report.groupby(['paragraph'])['sentence'].transform(lambda x: ' '.join(x)).drop_duplicates(),
+         left_index=True,
+         right_index=True
+     )
+     structure = dict()
+     for index, row in out.iterrows():
+         structure[row['paragraph']] = row['sentence']
+     return structure
+
+
+ def extract_transform(row: dict) -> None:
+
+     report_root = "./physionet.org/files/mimic-cxr/2.0.0/files"
+     image_root = "./physionet.org/files/mimic-cxr-jpg/2.0.0/files"
+
+     try:
+         scans = os.listdir(os.path.join(image_root, row["part"], row["patient"]))
+         scans = [x for x in scans if 'txt' not in x]
+         for scan in scans:
+             report = os.path.join(report_root, row["part"], row["patient"], scan + ".txt")
+             if os.path.exists(report):
+                 with open(report, "r") as f:
+                     original = f.read()
+                 transformed = re.sub(" +", " ", original.replace("FINAL REPORT", "").strip().replace("\n \n", ".").replace("\n", " ")).replace(" . ", " ").replace("..", ".").replace("CHEST RADIOGRAPHS.", " ").strip()
+                 if len(transformed) > 0:
+                     reason, text = construct_report(transformed)
+                     images = [os.path.join(image_root, row["part"], row["patient"], scan, x) for x in os.listdir(os.path.join(image_root, row["part"], row["patient"], scan))]
+                     images = [x for x in images if os.path.exists(x)]
+                     random.shuffle(images)  # shuffle so we can reasonably sample 1 image per study
+                     with jsonlines.open("dataset.jsonl", "a") as writer:
+                         for image in images:
+                             writer.write({
+                                 "fold": row["patient"][0:3],
+                                 "image": image,
+                                 "study": image.split("/")[-2],
+                                 "original": transformed,
+                                 "report": report,
+                                 "patient": row["patient"],
+                                 "reason": reason,
+                                 "text": " ".join([reason, text]) if reason is not None and text is not None else None
+                             })
+     except FileNotFoundError:
+         pass
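Editor's note: the recursive `flatten_json` above is what turns each report section's sentence list into numbered keys (`impression_0`, `impression_1`, …) before the section names are recovered with `split("_")[0]`. A standalone sketch of that behavior, re-implementing the same flattening outside the container (the sample report dict is made up):

```python
def flatten_json(data: dict) -> dict:
    """recursively flatten nested dicts/lists into underscore-joined keys"""
    out = {}

    def flatten(x, name=""):
        if type(x) is dict:
            for a in x:
                flatten(x[a], name + a + "_")
        elif type(x) is list:
            for i, a in enumerate(x):
                flatten(a, name + str(i) + "_")
        else:
            # drop the trailing underscore accumulated above
            out[name[:-1]] = x

    flatten(data)
    return out

# sentences within a section become numbered keys
report = {"impression": ["no pneumonia", "stable heart size"], "indication": ["fever"]}
print(flatten_json(report))
# {'impression_0': 'no pneumonia', 'impression_1': 'stable heart size', 'indication_0': 'fever'}
```

Taking `key.split("_")[0]` on each flattened key then recovers the section name, which is exactly how `construct_report` rebuilds its `paragraph` column.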
containers/etl/requirements.txt ADDED
@@ -0,0 +1,2 @@
+ jsonlines>=3.1.0,<3.2
+ pandas>=1.5.3,<1.6
containers/etl/run.py ADDED
@@ -0,0 +1,38 @@
+ import os
+ import jsonlines
+ from concurrent.futures import ThreadPoolExecutor
+ from common import extract_transform
+
+ def run():
+
+     # remove previous executions
+     if os.path.exists("/opt/physionet/dataset.jsonl"):
+         os.remove("/opt/physionet/dataset.jsonl")
+
+     if os.path.exists("/opt/physionet/control.jsonl"):
+         os.remove("/opt/physionet/control.jsonl")
+
+     # create a control dictionary
+     root = "/opt/physionet/physionet.org/files/mimic-cxr/2.0.0/files"
+     with jsonlines.open("/opt/physionet/control.jsonl", "w") as writer:
+         parts = os.listdir(root)
+         for part in parts:
+             patients = os.listdir(os.path.join(root, part))
+             for patient in patients:
+                 scan = [x for x in os.listdir(os.path.join(root, part, patient)) if x.endswith('.txt')]
+                 writer.write({"part": part, "patient": patient, "scan": scan})
+
+     # parse each record
+     with ThreadPoolExecutor(max_workers=4) as executor:
+         with jsonlines.open("/opt/physionet/control.jsonl", "r") as reader:
+             executor.map(extract_transform, reader)
+
+
+ # only run if the files have been downloaded
+ if __name__ == "__main__":
+     try:
+         if len(os.listdir('/opt/physionet/physionet.org/files/mimic-cxr/2.0.0/files')) > 0:
+             run()
+     except OSError:
+         print("not downloaded yet")
containers/jupyter/Dockerfile ADDED
@@ -0,0 +1,28 @@
+ FROM python:3.9-buster
+
+ RUN \
+     apt-get update && \
+     apt-get -y upgrade && \
+     apt-get clean && \
+     rm -rf /var/lib/apt/lists/*
+
+ RUN useradd --create-home app
+ WORKDIR /home/app
+
+ COPY requirements.txt /home/app/
+
+ RUN \
+     chown app:app /home/app/requirements.txt && \
+     chmod 0755 /home/app/requirements.txt
+
+ USER app
+
+ ENV VIRTUAL_ENV=/home/app/venv
+ RUN python3 -m venv $VIRTUAL_ENV
+ ENV PATH="$VIRTUAL_ENV/bin:$PATH"
+
+ RUN \
+     pip install --upgrade pip && \
+     pip install -r requirements.txt
+
+ CMD ["jupyter", "notebook", "--notebook-dir=/opt/notebooks", "--ip='*'", "--port=8888", "--no-browser"]
containers/jupyter/requirements.txt ADDED
@@ -0,0 +1 @@
+ jupyter>=1.0.0,<1.1
containers/physionet/Dockerfile ADDED
@@ -0,0 +1,11 @@
+ FROM debian:buster
+
+ RUN apt-get update -y && \
+     apt-get -y install parallel wget && \
+     apt-get -y autoclean && \
+     apt-get -y autoremove && \
+     rm -rf /var/lib/apt/lists/*
+
+ COPY entrypoint.sh /opt/entrypoint.sh
+
+ ENTRYPOINT ["/bin/bash", "/opt/entrypoint.sh"]
containers/physionet/entrypoint.sh ADDED
@@ -0,0 +1,11 @@
+ #!/bin/bash
+ set -e
+
+ if [ $# -eq 0 ]
+ then
+     echo no download requested
+ else
+     cd /opt/physionet
+     wget -A .txt -r -nc -c -np --user $PHYSIONET_USER --password $PHYSIONET_PASSWORD https://physionet.org/files/mimic-cxr/2.0.0/files/
+     seq 10 19 | parallel -j4 wget -A .jpg -r -nc -c -np --user $PHYSIONET_USER --password $PHYSIONET_PASSWORD https://physionet.org/files/mimic-cxr-jpg/2.0.0/files/p{}/
+ fi
containers/physionet/run.sh ADDED
@@ -0,0 +1,6 @@
+
+ if ${1:-false}; then
+     # get the JPG
+     # spread out over 4 cores
+     echo True
+ fi
containers/streamlit/Dockerfile ADDED
@@ -0,0 +1,31 @@
+ FROM python:3.9-buster
+
+ RUN \
+     apt-get update && \
+     apt-get -y upgrade && \
+     apt-get clean && \
+     rm -rf /var/lib/apt/lists/*
+
+ RUN useradd --create-home app
+
+ COPY requirements.txt /home/app/
+ COPY app.py /home/app/
+
+ RUN \
+     chown app:app /home/app/requirements.txt && \
+     chmod 0755 /home/app/requirements.txt && \
+     chown app:app /home/app/app.py && \
+     chmod 0755 /home/app/app.py
+
+ USER app
+ WORKDIR /home/app
+
+ ENV VIRTUAL_ENV=/home/app/venv
+ RUN python3 -m venv $VIRTUAL_ENV
+ ENV PATH="$VIRTUAL_ENV/bin:$PATH"
+
+ RUN \
+     pip install --upgrade pip && \
+     pip install -r requirements.txt
+
+ CMD ["streamlit", "run", "app.py"]
containers/streamlit/README.md ADDED
@@ -0,0 +1,13 @@
+ ---
+ title: Prerad
+ emoji: ⚡
+ colorFrom: red
+ colorTo: gray
+ sdk: streamlit
+ sdk_version: 1.17.0
+ app_file: app.py
+ pinned: false
+ license: apache-2.0
+ ---
+
+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
containers/streamlit/app.py ADDED
@@ -0,0 +1,30 @@
+ import streamlit as st
+ from PIL import Image
+ from transformers import BlipForConditionalGeneration, BlipProcessor
+
+ processor = BlipProcessor.from_pretrained("nathansutton/generate-cxr")
+ model = BlipForConditionalGeneration.from_pretrained("nathansutton/generate-cxr")
+
+ def humanize_report(report: str) -> str:
+     report = report.replace("impression :", "IMPRESSION:\n").replace("findings :", "FINDINGS:\n").replace("indication :", "INDICATION:\n")
+     sentences = [x.split("\n") for x in report.split(".") if x]
+     sentences = [item for sublist in sentences for item in sublist]
+     sentences = [x.strip().capitalize() if ':' not in x else x for x in sentences]
+     return ". ".join(sentences).replace(":.", ":").replace("IMPRESSION:", "\n\nIMPRESSION:\n\n").replace("FINDINGS:", "\n\nFINDINGS:\n\n").replace("INDICATION:", "INDICATION:\n\n")
+
+ indication = st.text_input("What is the indication for this study")
+ img_file_buffer = st.file_uploader("Upload a single view from a Chest X-Ray (JPG preferred)")
+ if img_file_buffer is not None and indication is not None:
+
+     image = Image.open(img_file_buffer)
+     st.image(image, use_column_width=True)
+     inputs = processor(
+         images=image,
+         text='indication:' + indication,
+         return_tensors="pt"
+     )
+     output = model.generate(**inputs, max_length=512)
+     report = processor.decode(output[0], skip_special_tokens=True)
+     st.write(humanize_report(report))
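Editor's note: `humanize_report` can be sanity-checked without loading the model, since the decoder emits lowercase `section : text` strings. A minimal sketch, copying the function as defined in app.py and feeding it a made-up report string:

```python
def humanize_report(report: str) -> str:
    # uppercase the lowercase section headers emitted by the decoder
    report = report.replace("impression :", "IMPRESSION:\n").replace("findings :", "FINDINGS:\n").replace("indication :", "INDICATION:\n")
    # split into sentences, keeping headers on their own entries
    sentences = [x.split("\n") for x in report.split(".") if x]
    sentences = [item for sublist in sentences for item in sublist]
    # capitalize sentences but leave the header lines alone
    sentences = [x.strip().capitalize() if ':' not in x else x for x in sentences]
    return ". ".join(sentences).replace(":.", ":").replace("IMPRESSION:", "\n\nIMPRESSION:\n\n").replace("FINDINGS:", "\n\nFINDINGS:\n\n").replace("INDICATION:", "INDICATION:\n\n")

# a made-up string in the shape the fine-tuned decoder produces
raw = "indication : fever. findings : the lungs are clear. impression : no pneumonia."
print(humanize_report(raw))
```

The output is the same text reflowed into INDICATION / FINDINGS / IMPRESSION blocks with sentence-case bodies, which is what the Streamlit app renders.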
containers/streamlit/requirements.txt ADDED
@@ -0,0 +1,3 @@
+ streamlit>=1.17.0,<1.18
+ transformers[torch]>=4.26.0,<4.27
+ pillow>=9.4.0,<9.5
containers/train/Dockerfile ADDED
@@ -0,0 +1,31 @@
+ FROM python:3.9-buster
+
+ RUN \
+     apt-get update && \
+     apt-get -y upgrade && \
+     apt-get clean && \
+     rm -rf /var/lib/apt/lists/*
+
+ RUN useradd --create-home app
+ WORKDIR /home/app
+
+ COPY requirements.txt /home/app/
+ COPY run.py /home/app/
+
+ RUN \
+     chown app:app /home/app/requirements.txt && \
+     chmod 0755 /home/app/requirements.txt && \
+     chown app:app /home/app/run.py && \
+     chmod 0755 /home/app/run.py
+
+ USER app
+
+ ENV VIRTUAL_ENV=/home/app/venv
+ RUN python3 -m venv $VIRTUAL_ENV
+ ENV PATH="$VIRTUAL_ENV/bin:$PATH"
+
+ RUN \
+     pip install --upgrade pip && \
+     pip install -r requirements.txt
+
+ CMD ["python", "run.py", "worker", "-l", "info"]
containers/train/requirements.txt ADDED
@@ -0,0 +1,3 @@
+ transformers[torch]>=4.26.0,<4.27
+ pillow>=9.4.0,<9.5
+ datasets>=2.9.0,<2.10
containers/train/run.py ADDED
@@ -0,0 +1,83 @@
+ import pandas as pd
+ from datasets import Dataset, Image
+ import torch
+ from transformers import Trainer, TrainingArguments
+ from transformers import DataCollatorForLanguageModeling
+ from transformers import BlipProcessor, BlipForConditionalGeneration
+
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+
+ # initialize the model from pretrained weights
+ repo = "Salesforce/blip-image-captioning-large"
+ processor = BlipProcessor.from_pretrained(repo)
+ tokenizer = processor.tokenizer
+ model = BlipForConditionalGeneration.from_pretrained(repo)
+
+ # load the data configuration and split into train/valid
+ dt = pd.read_json("dataset.jsonl", lines=True).dropna()
+ dt["train"] = dt["fold"].apply(lambda x: 0 if x in ['p19'] else 1)  # ~10% of the data held out
+ dt["patient"] = dt["patient"].apply(lambda x: x[0:5])
+ train = dt[dt.train == 1]
+ valid = dt[dt.train == 0]
+
+ # create datasets
+ train_dataset = Dataset.from_dict({
+     "image": train["image"].to_list(),
+     "fold": train["fold"].to_list(),
+     "text": train["text"].to_list(),
+     "reason": train["reason"].to_list(),
+     "id": [x.split("/")[-1].replace(".jpg", "") for x in train["image"].to_list()]
+ }).cast_column("image", Image())
+
+ valid_dataset = Dataset.from_dict({
+     "image": valid["image"].to_list(),
+     "fold": valid["fold"].to_list(),
+     "text": valid["text"].to_list(),
+     "reason": valid["reason"].to_list(),
+     "id": [x.split("/")[-1].replace(".jpg", "") for x in valid["image"].to_list()]
+ }).cast_column("image", Image())
+
+ def transform(example_batch):
+     return processor(
+         images=[image for image in example_batch["image"]],
+         text=[text for text in example_batch["text"]],
+         return_tensors="np",
+         padding='max_length',
+         max_length=512
+     )
+
+ # apply the processor lazily
+ train_prepared = train_dataset.shuffle(seed=42).with_transform(transform)
+ valid_prepared = valid_dataset.shuffle(seed=42).with_transform(transform)
+
+ # debug: " ".join(processor.batch_decode(train_prepared[0]["input_ids"])).replace(" ##","")
+ training_args = TrainingArguments(
+     num_train_epochs=5,
+     evaluation_strategy="epoch",
+     save_steps=1000,
+     logging_steps=100,
+     per_device_eval_batch_size=2,
+     per_device_train_batch_size=2,
+     gradient_accumulation_steps=8,
+     lr_scheduler_type='cosine_with_restarts',
+     warmup_ratio=0.1,
+     learning_rate=5e-5,
+     save_total_limit=1,
+     output_dir="/opt/models/generate-cxr-checkpoints"
+ )
+
+ data_collator = DataCollatorForLanguageModeling(
+     tokenizer=tokenizer,
+     mlm=False
+ )
+
+ trainer = Trainer(
+     model=model,
+     tokenizer=processor,
+     args=training_args,
+     train_dataset=train_prepared,
+     eval_dataset=valid_prepared,
+     data_collator=data_collator,
+ )
+
+ trainer.train()
+ trainer.save_model("/opt/models/generate-cxr")
docker-compose.yaml ADDED
@@ -0,0 +1,42 @@
+version: '3'
+services:
+
+  # download data
+  physionet:
+    build: "./containers/physionet"
+    container_name: physionet
+    env_file:
+      - .env
+    volumes:
+      - ./volumes/physionet:/opt/physionet
+
+  # etl
+  etl:
+    build: "./containers/etl"
+    container_name: etl
+    env_file:
+      - .env
+    volumes:
+      - ./volumes/physionet:/opt/physionet
+
+  # interactive notebooks
+  jupyter:
+    build: "./containers/jupyter"
+    container_name: jupyter
+    volumes:
+      - ./volumes/notebooks:/opt/notebooks
+      - ./volumes/physionet:/opt/physionet
+      - ./volumes/models:/opt/models
+    ports:
+      - "8888:8888"
+
+  # interactive application
+  streamlit:
+    build: "./containers/streamlit"
+    container_name: streamlit
+    ports:
+      - "8501:8501"
+    volumes:
+      - ./volumes/models:/opt/models
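The file listing includes `containers/train` (whose `run.py` saves models under `/opt/models`), but the compose file defines no service for it. A hypothetical service block in the same style, should training be wired into compose (the service name and mounts here are assumptions, not part of the repository):

```yaml
  # model training (hypothetical; mirrors the other services)
  train:
    build: "./containers/train"
    container_name: train
    env_file:
      - .env
    volumes:
      - ./volumes/physionet:/opt/physionet
      - ./volumes/models:/opt/models
```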
resources/baseline.png ADDED
venv/.gitignore ADDED
@@ -0,0 +1,2 @@
+# Created by venv; see https://docs.python.org/3/library/venv.html
+*
venv/bin/Activate.ps1 ADDED
@@ -0,0 +1,248 @@
1
+ <#
2
+ .Synopsis
3
+ Activate a Python virtual environment for the current PowerShell session.
4
+
5
+ .Description
6
+ Pushes the python executable for a virtual environment to the front of the
7
+ $Env:PATH environment variable and sets the prompt to signify that you are
8
+ in a Python virtual environment. Makes use of the command line switches as
9
+ well as the `pyvenv.cfg` file values present in the virtual environment.
10
+
11
+ .Parameter VenvDir
12
+ Path to the directory that contains the virtual environment to activate. The
13
+ default value for this is the parent of the directory that the Activate.ps1
14
+ script is located within.
15
+
16
+ .Parameter Prompt
17
+ The prompt prefix to display when this virtual environment is activated. By
18
+ default, this prompt is the name of the virtual environment folder (VenvDir)
19
+ surrounded by parentheses and followed by a single space (ie. '(.venv) ').
20
+
21
+ .Example
22
+ Activate.ps1
23
+ Activates the Python virtual environment that contains the Activate.ps1 script.
24
+
25
+ .Example
26
+ Activate.ps1 -Verbose
27
+ Activates the Python virtual environment that contains the Activate.ps1 script,
28
+ and shows extra information about the activation as it executes.
29
+
30
+ .Example
31
+ Activate.ps1 -VenvDir C:\Users\MyUser\Common\.venv
32
+ Activates the Python virtual environment located in the specified location.
33
+
34
+ .Example
35
+ Activate.ps1 -Prompt "MyPython"
36
+ Activates the Python virtual environment that contains the Activate.ps1 script,
37
+ and prefixes the current prompt with the specified string (surrounded in
38
+ parentheses) while the virtual environment is active.
39
+
40
+ .Notes
41
+ On Windows, it may be required to enable this Activate.ps1 script by setting the
42
+ execution policy for the user. You can do this by issuing the following PowerShell
43
+ command:
44
+
45
+ PS C:\> Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
46
+
47
+ For more information on Execution Policies:
48
+ https://go.microsoft.com/fwlink/?LinkID=135170
49
+
50
+ #>
51
+ Param(
52
+ [Parameter(Mandatory = $false)]
53
+ [String]
54
+ $VenvDir,
55
+ [Parameter(Mandatory = $false)]
56
+ [String]
57
+ $Prompt
58
+ )
59
+
60
+ <# Function declarations --------------------------------------------------- #>
61
+
62
+ <#
63
+ .Synopsis
64
+ Remove all shell session elements added by the Activate script, including the
65
+ addition of the virtual environment's Python executable from the beginning of
66
+ the PATH variable.
67
+
68
+ .Parameter NonDestructive
69
+ If present, do not remove this function from the global namespace for the
70
+ session.
71
+
72
+ #>
73
+ function global:deactivate ([switch]$NonDestructive) {
74
+ # Revert to original values
75
+
76
+ # The prior prompt:
77
+ if (Test-Path -Path Function:_OLD_VIRTUAL_PROMPT) {
78
+ Copy-Item -Path Function:_OLD_VIRTUAL_PROMPT -Destination Function:prompt
79
+ Remove-Item -Path Function:_OLD_VIRTUAL_PROMPT
80
+ }
81
+
82
+ # The prior PYTHONHOME:
83
+ if (Test-Path -Path Env:_OLD_VIRTUAL_PYTHONHOME) {
84
+ Copy-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME -Destination Env:PYTHONHOME
85
+ Remove-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME
86
+ }
87
+
88
+ # The prior PATH:
89
+ if (Test-Path -Path Env:_OLD_VIRTUAL_PATH) {
90
+ Copy-Item -Path Env:_OLD_VIRTUAL_PATH -Destination Env:PATH
91
+ Remove-Item -Path Env:_OLD_VIRTUAL_PATH
92
+ }
93
+
94
+ # Just remove the VIRTUAL_ENV altogether:
95
+ if (Test-Path -Path Env:VIRTUAL_ENV) {
96
+ Remove-Item -Path env:VIRTUAL_ENV
97
+ }
98
+
99
+ # Just remove VIRTUAL_ENV_PROMPT altogether.
100
+ if (Test-Path -Path Env:VIRTUAL_ENV_PROMPT) {
101
+ Remove-Item -Path env:VIRTUAL_ENV_PROMPT
102
+ }
103
+
104
+ # Just remove the _PYTHON_VENV_PROMPT_PREFIX altogether:
105
+ if (Get-Variable -Name "_PYTHON_VENV_PROMPT_PREFIX" -ErrorAction SilentlyContinue) {
106
+ Remove-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Scope Global -Force
107
+ }
108
+
109
+ # Leave deactivate function in the global namespace if requested:
110
+ if (-not $NonDestructive) {
111
+ Remove-Item -Path function:deactivate
112
+ }
113
+ }
114
+
115
+ <#
116
+ .Description
117
+ Get-PyVenvConfig parses the values from the pyvenv.cfg file located in the
118
+ given folder, and returns them in a map.
119
+
120
+ For each line in the pyvenv.cfg file, if that line can be parsed into exactly
121
+ two strings separated by `=` (with any amount of whitespace surrounding the =)
122
+ then it is considered a `key = value` line. The left hand string is the key,
123
+ the right hand is the value.
124
+
125
+ If the value starts with a `'` or a `"` then the first and last character is
126
+ stripped from the value before being captured.
127
+
128
+ .Parameter ConfigDir
129
+ Path to the directory that contains the `pyvenv.cfg` file.
130
+ #>
131
+ function Get-PyVenvConfig(
132
+ [String]
133
+ $ConfigDir
134
+ ) {
135
+ Write-Verbose "Given ConfigDir=$ConfigDir, obtain values in pyvenv.cfg"
136
+
137
+ # Ensure the file exists, and issue a warning if it doesn't (but still allow the function to continue).
138
+ $pyvenvConfigPath = Join-Path -Resolve -Path $ConfigDir -ChildPath 'pyvenv.cfg' -ErrorAction Continue
139
+
140
+ # An empty map will be returned if no config file is found.
141
+ $pyvenvConfig = @{ }
142
+
143
+ if ($pyvenvConfigPath) {
144
+
145
+ Write-Verbose "File exists, parse `key = value` lines"
146
+ $pyvenvConfigContent = Get-Content -Path $pyvenvConfigPath
147
+
148
+ $pyvenvConfigContent | ForEach-Object {
149
+ $keyval = $PSItem -split "\s*=\s*", 2
150
+ if ($keyval[0] -and $keyval[1]) {
151
+ $val = $keyval[1]
152
+
153
+ # Remove extraneous quotations around a string value.
154
+ if ("'""".Contains($val.Substring(0, 1))) {
155
+ $val = $val.Substring(1, $val.Length - 2)
156
+ }
157
+
158
+ $pyvenvConfig[$keyval[0]] = $val
159
+ Write-Verbose "Adding Key: '$($keyval[0])'='$val'"
160
+ }
161
+ }
162
+ }
163
+ return $pyvenvConfig
164
+ }
165
+
166
+
167
+ <# Begin Activate script --------------------------------------------------- #>
168
+
169
+ # Determine the containing directory of this script
170
+ $VenvExecPath = Split-Path -Parent $MyInvocation.MyCommand.Definition
171
+ $VenvExecDir = Get-Item -Path $VenvExecPath
172
+
173
+ Write-Verbose "Activation script is located in path: '$VenvExecPath'"
174
+ Write-Verbose "VenvExecDir Fullname: '$($VenvExecDir.FullName)"
175
+ Write-Verbose "VenvExecDir Name: '$($VenvExecDir.Name)"
176
+
177
+ # Set values required in priority: CmdLine, ConfigFile, Default
178
+ # First, get the location of the virtual environment, it might not be
179
+ # VenvExecDir if specified on the command line.
180
+ if ($VenvDir) {
181
+ Write-Verbose "VenvDir given as parameter, using '$VenvDir' to determine values"
182
+ }
183
+ else {
184
+ Write-Verbose "VenvDir not given as a parameter, using parent directory name as VenvDir."
185
+ $VenvDir = $VenvExecDir.Parent.FullName.TrimEnd("\\/")
186
+ Write-Verbose "VenvDir=$VenvDir"
187
+ }
188
+
189
+ # Next, read the `pyvenv.cfg` file to determine any required value such
190
+ # as `prompt`.
191
+ $pyvenvCfg = Get-PyVenvConfig -ConfigDir $VenvDir
192
+
193
+ # Next, set the prompt from the command line, or the config file, or
194
+ # just use the name of the virtual environment folder.
195
+ if ($Prompt) {
196
+ Write-Verbose "Prompt specified as argument, using '$Prompt'"
197
+ }
198
+ else {
199
+ Write-Verbose "Prompt not specified as argument to script, checking pyvenv.cfg value"
200
+ if ($pyvenvCfg -and $pyvenvCfg['prompt']) {
201
+ Write-Verbose " Setting based on value in pyvenv.cfg='$($pyvenvCfg['prompt'])'"
202
+ $Prompt = $pyvenvCfg['prompt'];
203
+ }
204
+ else {
205
+ Write-Verbose " Setting prompt based on parent's directory's name. (Is the directory name passed to venv module when creating the virtual environment)"
206
+ Write-Verbose " Got leaf-name of $VenvDir='$(Split-Path -Path $venvDir -Leaf)'"
207
+ $Prompt = Split-Path -Path $venvDir -Leaf
208
+ }
209
+ }
210
+
211
+ Write-Verbose "Prompt = '$Prompt'"
212
+ Write-Verbose "VenvDir='$VenvDir'"
213
+
214
+ # Deactivate any currently active virtual environment, but leave the
215
+ # deactivate function in place.
216
+ deactivate -nondestructive
217
+
218
+ # Now set the environment variable VIRTUAL_ENV, used by many tools to determine
219
+ # that there is an activated venv.
220
+ $env:VIRTUAL_ENV = $VenvDir
221
+
222
+ $env:VIRTUAL_ENV_PROMPT = $Prompt
223
+
224
+ if (-not $Env:VIRTUAL_ENV_DISABLE_PROMPT) {
225
+
226
+ Write-Verbose "Setting prompt to '$Prompt'"
227
+
228
+ # Set the prompt to include the env name
229
+ # Make sure _OLD_VIRTUAL_PROMPT is global
230
+ function global:_OLD_VIRTUAL_PROMPT { "" }
231
+ Copy-Item -Path function:prompt -Destination function:_OLD_VIRTUAL_PROMPT
232
+ New-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Description "Python virtual environment prompt prefix" -Scope Global -Option ReadOnly -Visibility Public -Value $Prompt
233
+
234
+ function global:prompt {
235
+ Write-Host -NoNewline -ForegroundColor Green "($_PYTHON_VENV_PROMPT_PREFIX) "
236
+ _OLD_VIRTUAL_PROMPT
237
+ }
238
+ }
239
+
240
+ # Clear PYTHONHOME
241
+ if (Test-Path -Path Env:PYTHONHOME) {
242
+ Copy-Item -Path Env:PYTHONHOME -Destination Env:_OLD_VIRTUAL_PYTHONHOME
243
+ Remove-Item -Path Env:PYTHONHOME
244
+ }
245
+
246
+ # Add the venv to the PATH
247
+ Copy-Item -Path Env:PATH -Destination Env:_OLD_VIRTUAL_PATH
248
+ $Env:PATH = "$VenvExecDir$([System.IO.Path]::PathSeparator)$Env:PATH"
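`Get-PyVenvConfig`'s `key = value` parsing (split each line on the first `=` with surrounding whitespace, then strip one layer of enclosing quotes) can be sketched in Python. This is a hypothetical helper for illustration, not part of the venv scripts:

```python
import re

def parse_pyvenv_cfg(text):
    """Sketch of the pyvenv.cfg parsing performed by Get-PyVenvConfig."""
    config = {}
    for line in text.splitlines():
        # Split into exactly two parts on the first '=', tolerating whitespace.
        parts = re.split(r"\s*=\s*", line, maxsplit=1)
        if len(parts) == 2 and parts[0] and parts[1]:
            value = parts[1]
            # Strip one layer of surrounding single or double quotes.
            if value[0] in "'\"":
                value = value[1:-1]
            config[parts[0]] = value
    return config
```

Given `home = /usr/bin` and `prompt = 'venv'`, this returns `home` unquoted as-is and `prompt` with its quotes removed.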
venv/bin/activate ADDED
@@ -0,0 +1,76 @@
1
+ # This file must be used with "source bin/activate" *from bash*
2
+ # You cannot run it directly
3
+
4
+ deactivate () {
5
+ # reset old environment variables
6
+ if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then
7
+ PATH="${_OLD_VIRTUAL_PATH:-}"
8
+ export PATH
9
+ unset _OLD_VIRTUAL_PATH
10
+ fi
11
+ if [ -n "${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then
12
+ PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}"
13
+ export PYTHONHOME
14
+ unset _OLD_VIRTUAL_PYTHONHOME
15
+ fi
16
+
17
+ # Call hash to forget past locations. Without forgetting
18
+ # past locations the $PATH changes we made may not be respected.
19
+ # See "man bash" for more details. hash is usually a builtin of your shell
20
+ hash -r 2> /dev/null
21
+
22
+ if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then
23
+ PS1="${_OLD_VIRTUAL_PS1:-}"
24
+ export PS1
25
+ unset _OLD_VIRTUAL_PS1
26
+ fi
27
+
28
+ unset VIRTUAL_ENV
29
+ unset VIRTUAL_ENV_PROMPT
30
+ if [ ! "${1:-}" = "nondestructive" ] ; then
31
+ # Self destruct!
32
+ unset -f deactivate
33
+ fi
34
+ }
35
+
36
+ # unset irrelevant variables
37
+ deactivate nondestructive
38
+
39
+ # on Windows, a path can contain colons and backslashes and has to be converted:
40
+ case "$(uname)" in
41
+ CYGWIN*|MSYS*|MINGW*)
42
+ # transform D:\path\to\venv to /d/path/to/venv on MSYS and MINGW
43
+ # and to /cygdrive/d/path/to/venv on Cygwin
44
+ VIRTUAL_ENV=$(cygpath /home/dheena/prerad/venv)
45
+ export VIRTUAL_ENV
46
+ ;;
47
+ *)
48
+ # use the path as-is
49
+ export VIRTUAL_ENV=/home/dheena/prerad/venv
50
+ ;;
51
+ esac
52
+
53
+ _OLD_VIRTUAL_PATH="$PATH"
54
+ PATH="$VIRTUAL_ENV/"bin":$PATH"
55
+ export PATH
56
+
57
+ VIRTUAL_ENV_PROMPT=venv
58
+ export VIRTUAL_ENV_PROMPT
59
+
60
+ # unset PYTHONHOME if set
61
+ # this will fail if PYTHONHOME is set to the empty string (which is bad anyway)
62
+ # could use `if (set -u; : $PYTHONHOME) ;` in bash
63
+ if [ -n "${PYTHONHOME:-}" ] ; then
64
+ _OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}"
65
+ unset PYTHONHOME
66
+ fi
67
+
68
+ if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then
69
+ _OLD_VIRTUAL_PS1="${PS1:-}"
70
+ PS1="("venv") ${PS1:-}"
71
+ export PS1
72
+ fi
73
+
74
+ # Call hash to forget past commands. Without forgetting
75
+ # past commands the $PATH changes we made may not be respected
76
+ hash -r 2> /dev/null
venv/bin/activate.csh ADDED
@@ -0,0 +1,27 @@
+# This file must be used with "source bin/activate.csh" *from csh*.
+# You cannot run it directly.
+
+# Created by Davide Di Blasi <[email protected]>.
+# Ported to Python 3.3 venv by Andrew Svetlov <[email protected]>
+
+alias deactivate 'test $?_OLD_VIRTUAL_PATH != 0 && setenv PATH "$_OLD_VIRTUAL_PATH" && unset _OLD_VIRTUAL_PATH; rehash; test $?_OLD_VIRTUAL_PROMPT != 0 && set prompt="$_OLD_VIRTUAL_PROMPT" && unset _OLD_VIRTUAL_PROMPT; unsetenv VIRTUAL_ENV; unsetenv VIRTUAL_ENV_PROMPT; test "\!:*" != "nondestructive" && unalias deactivate'
+
+# Unset irrelevant variables.
+deactivate nondestructive
+
+setenv VIRTUAL_ENV /home/dheena/prerad/venv
+
+set _OLD_VIRTUAL_PATH="$PATH"
+setenv PATH "$VIRTUAL_ENV/"bin":$PATH"
+setenv VIRTUAL_ENV_PROMPT venv
+
+
+set _OLD_VIRTUAL_PROMPT="$prompt"
+
+if (! "$?VIRTUAL_ENV_DISABLE_PROMPT") then
+    set prompt = "("venv") $prompt:q"
+endif
+
+alias pydoc python -m pydoc
+
+rehash
venv/bin/activate.fish ADDED
@@ -0,0 +1,69 @@
1
+ # This file must be used with "source <venv>/bin/activate.fish" *from fish*
2
+ # (https://fishshell.com/). You cannot run it directly.
3
+
4
+ function deactivate -d "Exit virtual environment and return to normal shell environment"
5
+ # reset old environment variables
6
+ if test -n "$_OLD_VIRTUAL_PATH"
7
+ set -gx PATH $_OLD_VIRTUAL_PATH
8
+ set -e _OLD_VIRTUAL_PATH
9
+ end
10
+ if test -n "$_OLD_VIRTUAL_PYTHONHOME"
11
+ set -gx PYTHONHOME $_OLD_VIRTUAL_PYTHONHOME
12
+ set -e _OLD_VIRTUAL_PYTHONHOME
13
+ end
14
+
15
+ if test -n "$_OLD_FISH_PROMPT_OVERRIDE"
16
+ set -e _OLD_FISH_PROMPT_OVERRIDE
17
+ # prevents error when using nested fish instances (Issue #93858)
18
+ if functions -q _old_fish_prompt
19
+ functions -e fish_prompt
20
+ functions -c _old_fish_prompt fish_prompt
21
+ functions -e _old_fish_prompt
22
+ end
23
+ end
24
+
25
+ set -e VIRTUAL_ENV
26
+ set -e VIRTUAL_ENV_PROMPT
27
+ if test "$argv[1]" != "nondestructive"
28
+ # Self-destruct!
29
+ functions -e deactivate
30
+ end
31
+ end
32
+
33
+ # Unset irrelevant variables.
34
+ deactivate nondestructive
35
+
36
+ set -gx VIRTUAL_ENV /home/dheena/prerad/venv
37
+
38
+ set -gx _OLD_VIRTUAL_PATH $PATH
39
+ set -gx PATH "$VIRTUAL_ENV/"bin $PATH
40
+ set -gx VIRTUAL_ENV_PROMPT venv
41
+
42
+ # Unset PYTHONHOME if set.
43
+ if set -q PYTHONHOME
44
+ set -gx _OLD_VIRTUAL_PYTHONHOME $PYTHONHOME
45
+ set -e PYTHONHOME
46
+ end
47
+
48
+ if test -z "$VIRTUAL_ENV_DISABLE_PROMPT"
49
+ # fish uses a function instead of an env var to generate the prompt.
50
+
51
+ # Save the current fish_prompt function as the function _old_fish_prompt.
52
+ functions -c fish_prompt _old_fish_prompt
53
+
54
+ # With the original prompt function renamed, we can override with our own.
55
+ function fish_prompt
56
+ # Save the return status of the last command.
57
+ set -l old_status $status
58
+
59
+ # Output the venv prompt; color taken from the blue of the Python logo.
60
+ printf "%s(%s)%s " (set_color 4B8BBE) venv (set_color normal)
61
+
62
+ # Restore the return status of the previous command.
63
+ echo "exit $old_status" | .
64
+ # Output the original/"old" prompt.
65
+ _old_fish_prompt
66
+ end
67
+
68
+ set -gx _OLD_FISH_PROMPT_OVERRIDE "$VIRTUAL_ENV"
69
+ end
venv/bin/f2py ADDED
@@ -0,0 +1,7 @@
+#!/home/dheena/prerad/venv/bin/python
+import sys
+from numpy.f2py.f2py2e import main
+if __name__ == '__main__':
+    if sys.argv[0].endswith('.exe'):
+        sys.argv[0] = sys.argv[0][:-4]
+    sys.exit(main())
venv/bin/hf ADDED
@@ -0,0 +1,7 @@
+#!/home/dheena/prerad/venv/bin/python
+import sys
+from huggingface_hub.cli.hf import main
+if __name__ == '__main__':
+    if sys.argv[0].endswith('.exe'):
+        sys.argv[0] = sys.argv[0][:-4]
+    sys.exit(main())
venv/bin/huggingface-cli ADDED
@@ -0,0 +1,7 @@
+#!/home/dheena/prerad/venv/bin/python
+import sys
+from huggingface_hub.commands.huggingface_cli import main
+if __name__ == '__main__':
+    if sys.argv[0].endswith('.exe'):
+        sys.argv[0] = sys.argv[0][:-4]
+    sys.exit(main())
venv/bin/normalizer ADDED
@@ -0,0 +1,7 @@
+#!/home/dheena/prerad/venv/bin/python
+import sys
+from charset_normalizer.cli import cli_detect
+if __name__ == '__main__':
+    if sys.argv[0].endswith('.exe'):
+        sys.argv[0] = sys.argv[0][:-4]
+    sys.exit(cli_detect())
venv/bin/numpy-config ADDED
@@ -0,0 +1,7 @@
+#!/home/dheena/prerad/venv/bin/python
+import sys
+from numpy._configtool import main
+if __name__ == '__main__':
+    if sys.argv[0].endswith('.exe'):
+        sys.argv[0] = sys.argv[0][:-4]
+    sys.exit(main())
venv/bin/pip ADDED
@@ -0,0 +1,7 @@
+#!/home/dheena/prerad/venv/bin/python
+import sys
+from pip._internal.cli.main import main
+if __name__ == '__main__':
+    if sys.argv[0].endswith('.exe'):
+        sys.argv[0] = sys.argv[0][:-4]
+    sys.exit(main())
venv/bin/pip3 ADDED
@@ -0,0 +1,7 @@
+#!/home/dheena/prerad/venv/bin/python
+import sys
+from pip._internal.cli.main import main
+if __name__ == '__main__':
+    if sys.argv[0].endswith('.exe'):
+        sys.argv[0] = sys.argv[0][:-4]
+    sys.exit(main())
venv/bin/pip3.13 ADDED
@@ -0,0 +1,7 @@
+#!/home/dheena/prerad/venv/bin/python
+import sys
+from pip._internal.cli.main import main
+if __name__ == '__main__':
+    if sys.argv[0].endswith('.exe'):
+        sys.argv[0] = sys.argv[0][:-4]
+    sys.exit(main())
venv/bin/tiny-agents ADDED
@@ -0,0 +1,7 @@
+#!/home/dheena/prerad/venv/bin/python
+import sys
+from huggingface_hub.inference._mcp.cli import app
+if __name__ == '__main__':
+    if sys.argv[0].endswith('.exe'):
+        sys.argv[0] = sys.argv[0][:-4]
+    sys.exit(app())
venv/bin/tqdm ADDED
@@ -0,0 +1,7 @@
+#!/home/dheena/prerad/venv/bin/python
+import sys
+from tqdm.cli import main
+if __name__ == '__main__':
+    if sys.argv[0].endswith('.exe'):
+        sys.argv[0] = sys.argv[0][:-4]
+    sys.exit(main())
venv/bin/transformers ADDED
@@ -0,0 +1,7 @@
+#!/home/dheena/prerad/venv/bin/python
+import sys
+from transformers.commands.transformers_cli import main
+if __name__ == '__main__':
+    if sys.argv[0].endswith('.exe'):
+        sys.argv[0] = sys.argv[0][:-4]
+    sys.exit(main())
venv/bin/transformers-cli ADDED
@@ -0,0 +1,7 @@
+#!/home/dheena/prerad/venv/bin/python
+import sys
+from transformers.commands.transformers_cli import main_cli
+if __name__ == '__main__':
+    if sys.argv[0].endswith('.exe'):
+        sys.argv[0] = sys.argv[0][:-4]
+    sys.exit(main_cli())
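Every console-script shim above shares the same Windows-compatibility preamble: if `sys.argv[0]` carries an `.exe` suffix, strip it before dispatching to the entry point. Isolated as a standalone function (the function name is hypothetical, for illustration only):

```python
def normalize_argv0(argv0):
    """Strip a Windows '.exe' suffix from argv[0], as the venv shims do."""
    if argv0.endswith('.exe'):
        # '.exe' is 4 characters; drop it so the tool sees its plain name.
        return argv0[:-4]
    return argv0
```

For example, `normalize_argv0("pip.exe")` yields `"pip"`, while names without the suffix pass through unchanged.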
venv/lib/python3.13/site-packages/huggingface_hub-0.36.0.dist-info/INSTALLER ADDED
@@ -0,0 +1 @@
+pip
venv/lib/python3.13/site-packages/huggingface_hub-0.36.0.dist-info/LICENSE ADDED
@@ -0,0 +1,201 @@
1
+ Apache License
2
+ Version 2.0, January 2004
3
+ http://www.apache.org/licenses/
4
+
5
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6
+
7
+ 1. Definitions.
8
+
9
+ "License" shall mean the terms and conditions for use, reproduction,
10
+ and distribution as defined by Sections 1 through 9 of this document.
11
+
12
+ "Licensor" shall mean the copyright owner or entity authorized by
13
+ the copyright owner that is granting the License.
14
+
15
+ "Legal Entity" shall mean the union of the acting entity and all
16
+ other entities that control, are controlled by, or are under common
17
+ control with that entity. For the purposes of this definition,
18
+ "control" means (i) the power, direct or indirect, to cause the
19
+ direction or management of such entity, whether by contract or
20
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
21
+ outstanding shares, or (iii) beneficial ownership of such entity.
22
+
23
+ "You" (or "Your") shall mean an individual or Legal Entity
24
+ exercising permissions granted by this License.
25
+
26
+ "Source" form shall mean the preferred form for making modifications,
27
+ including but not limited to software source code, documentation
28
+ source, and configuration files.
29
+
30
+ "Object" form shall mean any form resulting from mechanical
31
+ transformation or translation of a Source form, including but
32
+ not limited to compiled object code, generated documentation,
33
+ and conversions to other media types.
34
+
35
+ "Work" shall mean the work of authorship, whether in Source or
36
+ Object form, made available under the License, as indicated by a
37
+ copyright notice that is included in or attached to the work
38
+ (an example is provided in the Appendix below).
39
+
40
+ "Derivative Works" shall mean any work, whether in Source or Object
41
+ form, that is based on (or derived from) the Work and for which the
42
+ editorial revisions, annotations, elaborations, or other modifications
43
+ represent, as a whole, an original work of authorship. For the purposes
44
+ of this License, Derivative Works shall not include works that remain
45
+ separable from, or merely link (or bind by name) to the interfaces of,
46
+ the Work and Derivative Works thereof.
47
+
48
+ "Contribution" shall mean any work of authorship, including
49
+ the original version of the Work and any modifications or additions
50
+ to that Work or Derivative Works thereof, that is intentionally
51
+ submitted to Licensor for inclusion in the Work by the copyright owner
52
+ or by an individual or Legal Entity authorized to submit on behalf of
53
+ the copyright owner. For the purposes of this definition, "submitted"
54
+ means any form of electronic, verbal, or written communication sent
55
+ to the Licensor or its representatives, including but not limited to
56
+ communication on electronic mailing lists, source code control systems,
57
+ and issue tracking systems that are managed by, or on behalf of, the
58
+ Licensor for the purpose of discussing and improving the Work, but
59
+ excluding communication that is conspicuously marked or otherwise
60
+ designated in writing by the copyright owner as "Not a Contribution."
61
+
62
+ "Contributor" shall mean Licensor and any individual or Legal Entity
63
+ on behalf of whom a Contribution has been received by Licensor and
64
+ subsequently incorporated within the Work.
65
+
66
+ 2. Grant of Copyright License. Subject to the terms and conditions of
67
+ this License, each Contributor hereby grants to You a perpetual,
68
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69
+ copyright license to reproduce, prepare Derivative Works of,
70
+ publicly display, publicly perform, sublicense, and distribute the
71
+ Work and such Derivative Works in Source or Object form.
72
+
73
+ 3. Grant of Patent License. Subject to the terms and conditions of
74
+ this License, each Contributor hereby grants to You a perpetual,
75
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76
+ (except as stated in this section) patent license to make, have made,
77
+ use, offer to sell, sell, import, and otherwise transfer the Work,
78
+ where such license applies only to those patent claims licensable
79
+ by such Contributor that are necessarily infringed by their
80
+ Contribution(s) alone or by combination of their Contribution(s)
81
+ with the Work to which such Contribution(s) was submitted. If You
82
+ institute patent litigation against any entity (including a
83
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
84
+ or a Contribution incorporated within the Work constitutes direct
85
+ or contributory patent infringement, then any patent licenses
86
+ granted to You under this License for that Work shall terminate
87
+ as of the date such litigation is filed.
88
+
89
+ 4. Redistribution. You may reproduce and distribute copies of the
90
+ Work or Derivative Works thereof in any medium, with or without
91
+ modifications, and in Source or Object form, provided that You
92
+ meet the following conditions:
93
+
94
+ (a) You must give any other recipients of the Work or
95
+ Derivative Works a copy of this License; and
96
+
97
+ (b) You must cause any modified files to carry prominent notices
98
+ stating that You changed the files; and
99
+
100
+ (c) You must retain, in the Source form of any Derivative Works
101
+ that You distribute, all copyright, patent, trademark, and
102
+ attribution notices from the Source form of the Work,
103
+ excluding those notices that do not pertain to any part of
104
+ the Derivative Works; and
105
+
106
+ (d) If the Work includes a "NOTICE" text file as part of its
107
+ distribution, then any Derivative Works that You distribute must
108
+ include a readable copy of the attribution notices contained
109
+ within such NOTICE file, excluding those notices that do not
110
+ pertain to any part of the Derivative Works, in at least one
111
+ of the following places: within a NOTICE text file distributed
112
+ as part of the Derivative Works; within the Source form or
113
+ documentation, if provided along with the Derivative Works; or,
114
+ within a display generated by the Derivative Works, if and
115
+ wherever such third-party notices normally appear. The contents
116
+ of the NOTICE file are for informational purposes only and
117
+ do not modify the License. You may add Your own attribution
118
+ notices within Derivative Works that You distribute, alongside
119
+ or as an addendum to the NOTICE text from the Work, provided
120
+ that such additional attribution notices cannot be construed
121
+ as modifying the License.
122
+
123
+ You may add Your own copyright statement to Your modifications and
124
+ may provide additional or different license terms and conditions
125
+ for use, reproduction, or distribution of Your modifications, or
126
+ for any such Derivative Works as a whole, provided Your use,
127
+ reproduction, and distribution of the Work otherwise complies with
128
+ the conditions stated in this License.
129
+
130
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
131
+ any Contribution intentionally submitted for inclusion in the Work
132
+ by You to the Licensor shall be under the terms and conditions of
133
+ this License, without any additional terms or conditions.
134
+ Notwithstanding the above, nothing herein shall supersede or modify
135
+ the terms of any separate license agreement you may have executed
136
+ with Licensor regarding such Contributions.
137
+
138
+ 6. Trademarks. This License does not grant permission to use the trade
139
+ names, trademarks, service marks, or product names of the Licensor,
140
+ except as required for reasonable and customary use in describing the
141
+ origin of the Work and reproducing the content of the NOTICE file.
142
+
143
+ 7. Disclaimer of Warranty. Unless required by applicable law or
144
+ agreed to in writing, Licensor provides the Work (and each
145
+ Contributor provides its Contributions) on an "AS IS" BASIS,
146
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147
+ implied, including, without limitation, any warranties or conditions
148
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149
+ PARTICULAR PURPOSE. You are solely responsible for determining the
150
+ appropriateness of using or redistributing the Work and assume any
151
+ risks associated with Your exercise of permissions under this License.
152
+
153
+ 8. Limitation of Liability. In no event and under no legal theory,
154
+ whether in tort (including negligence), contract, or otherwise,
155
+ unless required by applicable law (such as deliberate and grossly
156
+ negligent acts) or agreed to in writing, shall any Contributor be
157
+ liable to You for damages, including any direct, indirect, special,
158
+ incidental, or consequential damages of any character arising as a
159
+ result of this License or out of the use or inability to use the
160
+ Work (including but not limited to damages for loss of goodwill,
161
+ work stoppage, computer failure or malfunction, or any and all
162
+ other commercial damages or losses), even if such Contributor
163
+ has been advised of the possibility of such damages.
164
+
165
+ 9. Accepting Warranty or Additional Liability. While redistributing
166
+ the Work or Derivative Works thereof, You may choose to offer,
167
+ and charge a fee for, acceptance of support, warranty, indemnity,
168
+ or other liability obligations and/or rights consistent with this
169
+ License. However, in accepting such obligations, You may act only
170
+ on Your own behalf and on Your sole responsibility, not on behalf
171
+ of any other Contributor, and only if You agree to indemnify,
172
+ defend, and hold each Contributor harmless for any liability
173
+ incurred by, or claims asserted against, such Contributor by reason
174
+ of your accepting any such warranty or additional liability.
175
+
176
+ END OF TERMS AND CONDITIONS
177
+
178
+ APPENDIX: How to apply the Apache License to your work.
179
+
180
+ To apply the Apache License to your work, attach the following
181
+ boilerplate notice, with the fields enclosed by brackets "[]"
182
+ replaced with your own identifying information. (Don't include
183
+ the brackets!) The text should be enclosed in the appropriate
184
+ comment syntax for the file format. We also recommend that a
185
+ file or class name and description of purpose be included on the
186
+ same "printed page" as the copyright notice for easier
187
+ identification within third-party archives.
188
+
189
+ Copyright [yyyy] [name of copyright owner]
190
+
191
+ Licensed under the Apache License, Version 2.0 (the "License");
192
+ you may not use this file except in compliance with the License.
193
+ You may obtain a copy of the License at
194
+
195
+ http://www.apache.org/licenses/LICENSE-2.0
196
+
197
+ Unless required by applicable law or agreed to in writing, software
198
+ distributed under the License is distributed on an "AS IS" BASIS,
199
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200
+ See the License for the specific language governing permissions and
201
+ limitations under the License.
venv/lib/python3.13/site-packages/huggingface_hub-0.36.0.dist-info/METADATA ADDED
@@ -0,0 +1,334 @@
+ Metadata-Version: 2.1
+ Name: huggingface-hub
+ Version: 0.36.0
+ Summary: Client library to download and publish models, datasets and other repos on the huggingface.co hub
+ Home-page: https://github.com/huggingface/huggingface_hub
+ Author: Hugging Face, Inc.
+ Author-email: [email protected]
+ License: Apache
+ Keywords: model-hub machine-learning models natural-language-processing deep-learning pytorch pretrained-models
+ Platform: UNKNOWN
+ Classifier: Intended Audience :: Developers
+ Classifier: Intended Audience :: Education
+ Classifier: Intended Audience :: Science/Research
+ Classifier: License :: OSI Approved :: Apache Software License
+ Classifier: Operating System :: OS Independent
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3 :: Only
+ Classifier: Programming Language :: Python :: 3.8
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
+ Requires-Python: >=3.8.0
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: filelock
+ Requires-Dist: fsspec>=2023.5.0
+ Requires-Dist: packaging>=20.9
+ Requires-Dist: pyyaml>=5.1
+ Requires-Dist: requests
+ Requires-Dist: tqdm>=4.42.1
+ Requires-Dist: typing-extensions>=3.7.4.3
+ Requires-Dist: hf-xet<2.0.0,>=1.1.3; platform_machine == "x86_64" or platform_machine == "amd64" or platform_machine == "arm64" or platform_machine == "aarch64"
+ Provides-Extra: all
+ Requires-Dist: InquirerPy==0.3.4; extra == "all"
+ Requires-Dist: aiohttp; extra == "all"
+ Requires-Dist: authlib>=1.3.2; extra == "all"
+ Requires-Dist: fastapi; extra == "all"
+ Requires-Dist: httpx; extra == "all"
+ Requires-Dist: itsdangerous; extra == "all"
+ Requires-Dist: jedi; extra == "all"
+ Requires-Dist: Jinja2; extra == "all"
+ Requires-Dist: pytest<8.2.2,>=8.1.1; extra == "all"
+ Requires-Dist: pytest-cov; extra == "all"
+ Requires-Dist: pytest-env; extra == "all"
+ Requires-Dist: pytest-xdist; extra == "all"
+ Requires-Dist: pytest-vcr; extra == "all"
+ Requires-Dist: pytest-asyncio; extra == "all"
+ Requires-Dist: pytest-rerunfailures<16.0; extra == "all"
+ Requires-Dist: pytest-mock; extra == "all"
+ Requires-Dist: urllib3<2.0; extra == "all"
+ Requires-Dist: soundfile; extra == "all"
+ Requires-Dist: Pillow; extra == "all"
+ Requires-Dist: gradio>=4.0.0; extra == "all"
+ Requires-Dist: numpy; extra == "all"
+ Requires-Dist: ruff>=0.9.0; extra == "all"
+ Requires-Dist: libcst>=1.4.0; extra == "all"
+ Requires-Dist: ty; extra == "all"
+ Requires-Dist: typing-extensions>=4.8.0; extra == "all"
+ Requires-Dist: types-PyYAML; extra == "all"
+ Requires-Dist: types-requests; extra == "all"
+ Requires-Dist: types-simplejson; extra == "all"
+ Requires-Dist: types-toml; extra == "all"
+ Requires-Dist: types-tqdm; extra == "all"
+ Requires-Dist: types-urllib3; extra == "all"
+ Requires-Dist: mypy<1.15.0,>=1.14.1; python_version == "3.8" and extra == "all"
+ Requires-Dist: mypy==1.15.0; python_version >= "3.9" and extra == "all"
+ Provides-Extra: cli
+ Requires-Dist: InquirerPy==0.3.4; extra == "cli"
+ Provides-Extra: dev
+ Requires-Dist: InquirerPy==0.3.4; extra == "dev"
+ Requires-Dist: aiohttp; extra == "dev"
+ Requires-Dist: authlib>=1.3.2; extra == "dev"
+ Requires-Dist: fastapi; extra == "dev"
+ Requires-Dist: httpx; extra == "dev"
+ Requires-Dist: itsdangerous; extra == "dev"
+ Requires-Dist: jedi; extra == "dev"
+ Requires-Dist: Jinja2; extra == "dev"
+ Requires-Dist: pytest<8.2.2,>=8.1.1; extra == "dev"
+ Requires-Dist: pytest-cov; extra == "dev"
+ Requires-Dist: pytest-env; extra == "dev"
+ Requires-Dist: pytest-xdist; extra == "dev"
+ Requires-Dist: pytest-vcr; extra == "dev"
+ Requires-Dist: pytest-asyncio; extra == "dev"
+ Requires-Dist: pytest-rerunfailures<16.0; extra == "dev"
+ Requires-Dist: pytest-mock; extra == "dev"
+ Requires-Dist: urllib3<2.0; extra == "dev"
+ Requires-Dist: soundfile; extra == "dev"
+ Requires-Dist: Pillow; extra == "dev"
+ Requires-Dist: gradio>=4.0.0; extra == "dev"
+ Requires-Dist: numpy; extra == "dev"
+ Requires-Dist: ruff>=0.9.0; extra == "dev"
+ Requires-Dist: libcst>=1.4.0; extra == "dev"
+ Requires-Dist: ty; extra == "dev"
+ Requires-Dist: typing-extensions>=4.8.0; extra == "dev"
+ Requires-Dist: types-PyYAML; extra == "dev"
+ Requires-Dist: types-requests; extra == "dev"
+ Requires-Dist: types-simplejson; extra == "dev"
+ Requires-Dist: types-toml; extra == "dev"
+ Requires-Dist: types-tqdm; extra == "dev"
+ Requires-Dist: types-urllib3; extra == "dev"
+ Requires-Dist: mypy<1.15.0,>=1.14.1; python_version == "3.8" and extra == "dev"
+ Requires-Dist: mypy==1.15.0; python_version >= "3.9" and extra == "dev"
+ Provides-Extra: fastai
+ Requires-Dist: toml; extra == "fastai"
+ Requires-Dist: fastai>=2.4; extra == "fastai"
+ Requires-Dist: fastcore>=1.3.27; extra == "fastai"
+ Provides-Extra: hf_transfer
+ Requires-Dist: hf-transfer>=0.1.4; extra == "hf-transfer"
+ Provides-Extra: hf_xet
+ Requires-Dist: hf-xet<2.0.0,>=1.1.2; extra == "hf-xet"
+ Provides-Extra: inference
+ Requires-Dist: aiohttp; extra == "inference"
+ Provides-Extra: mcp
+ Requires-Dist: mcp>=1.8.0; extra == "mcp"
+ Requires-Dist: typer; extra == "mcp"
+ Requires-Dist: aiohttp; extra == "mcp"
+ Provides-Extra: oauth
+ Requires-Dist: authlib>=1.3.2; extra == "oauth"
+ Requires-Dist: fastapi; extra == "oauth"
+ Requires-Dist: httpx; extra == "oauth"
+ Requires-Dist: itsdangerous; extra == "oauth"
+ Provides-Extra: quality
+ Requires-Dist: ruff>=0.9.0; extra == "quality"
+ Requires-Dist: libcst>=1.4.0; extra == "quality"
+ Requires-Dist: ty; extra == "quality"
+ Requires-Dist: mypy<1.15.0,>=1.14.1; python_version == "3.8" and extra == "quality"
+ Requires-Dist: mypy==1.15.0; python_version >= "3.9" and extra == "quality"
+ Provides-Extra: tensorflow
+ Requires-Dist: tensorflow; extra == "tensorflow"
+ Requires-Dist: pydot; extra == "tensorflow"
+ Requires-Dist: graphviz; extra == "tensorflow"
+ Provides-Extra: tensorflow-testing
+ Requires-Dist: tensorflow; extra == "tensorflow-testing"
+ Requires-Dist: keras<3.0; extra == "tensorflow-testing"
+ Provides-Extra: testing
+ Requires-Dist: InquirerPy==0.3.4; extra == "testing"
+ Requires-Dist: aiohttp; extra == "testing"
+ Requires-Dist: authlib>=1.3.2; extra == "testing"
+ Requires-Dist: fastapi; extra == "testing"
+ Requires-Dist: httpx; extra == "testing"
+ Requires-Dist: itsdangerous; extra == "testing"
+ Requires-Dist: jedi; extra == "testing"
+ Requires-Dist: Jinja2; extra == "testing"
+ Requires-Dist: pytest<8.2.2,>=8.1.1; extra == "testing"
+ Requires-Dist: pytest-cov; extra == "testing"
+ Requires-Dist: pytest-env; extra == "testing"
+ Requires-Dist: pytest-xdist; extra == "testing"
+ Requires-Dist: pytest-vcr; extra == "testing"
+ Requires-Dist: pytest-asyncio; extra == "testing"
+ Requires-Dist: pytest-rerunfailures<16.0; extra == "testing"
+ Requires-Dist: pytest-mock; extra == "testing"
+ Requires-Dist: urllib3<2.0; extra == "testing"
+ Requires-Dist: soundfile; extra == "testing"
+ Requires-Dist: Pillow; extra == "testing"
+ Requires-Dist: gradio>=4.0.0; extra == "testing"
+ Requires-Dist: numpy; extra == "testing"
+ Provides-Extra: torch
+ Requires-Dist: torch; extra == "torch"
+ Requires-Dist: safetensors[torch]; extra == "torch"
+ Provides-Extra: typing
+ Requires-Dist: typing-extensions>=4.8.0; extra == "typing"
+ Requires-Dist: types-PyYAML; extra == "typing"
+ Requires-Dist: types-requests; extra == "typing"
+ Requires-Dist: types-simplejson; extra == "typing"
+ Requires-Dist: types-toml; extra == "typing"
+ Requires-Dist: types-tqdm; extra == "typing"
+ Requires-Dist: types-urllib3; extra == "typing"
+
+ <p align="center">
+ <picture>
+ <source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/huggingface_hub-dark.svg">
+ <source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/huggingface_hub.svg">
+ <img alt="huggingface_hub library logo" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/huggingface_hub.svg" width="352" height="59" style="max-width: 100%;">
+ </picture>
+ <br/>
+ <br/>
+ </p>
+
+ <p align="center">
+ <i>The official Python client for the Hugging Face Hub.</i>
+ </p>
+
+ <p align="center">
+ <a href="https://huggingface.co/docs/huggingface_hub/en/index"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/huggingface_hub/index.svg?down_color=red&down_message=offline&up_message=online&label=doc"></a>
+ <a href="https://github.com/huggingface/huggingface_hub/releases"><img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/huggingface_hub.svg"></a>
+ <a href="https://github.com/huggingface/huggingface_hub"><img alt="PyPi version" src="https://img.shields.io/pypi/pyversions/huggingface_hub.svg"></a>
+ <a href="https://pypi.org/project/huggingface-hub"><img alt="PyPI - Downloads" src="https://img.shields.io/pypi/dm/huggingface_hub"></a>
+ <a href="https://codecov.io/gh/huggingface/huggingface_hub"><img alt="Code coverage" src="https://codecov.io/gh/huggingface/huggingface_hub/branch/main/graph/badge.svg?token=RXP95LE2XL"></a>
+ </p>
+
+ <h4 align="center">
+ <p>
+ <b>English</b> |
+ <a href="https://github.com/huggingface/huggingface_hub/blob/main/i18n/README_de.md">Deutsch</a> |
+ <a href="https://github.com/huggingface/huggingface_hub/blob/main/i18n/README_hi.md">हिंदी</a> |
+ <a href="https://github.com/huggingface/huggingface_hub/blob/main/i18n/README_ko.md">한국어</a> |
+ <a href="https://github.com/huggingface/huggingface_hub/blob/main/i18n/README_cn.md">中文(简体)</a>
+ </p>
+ </h4>
+
+ ---
+
+ **Documentation**: <a href="https://hf.co/docs/huggingface_hub" target="_blank">https://hf.co/docs/huggingface_hub</a>
+
+ **Source Code**: <a href="https://github.com/huggingface/huggingface_hub" target="_blank">https://github.com/huggingface/huggingface_hub</a>
+
+ ---
+
+ ## Welcome to the huggingface_hub library
+
+ The `huggingface_hub` library allows you to interact with the [Hugging Face Hub](https://huggingface.co/), a platform democratizing open-source Machine Learning for creators and collaborators. Discover pre-trained models and datasets for your projects, or play with the thousands of machine learning apps hosted on the Hub. You can also create and share your own models, datasets, and demos with the community. The `huggingface_hub` library provides a simple way to do all these things with Python.
+
+ ## Key features
+
+ - [Download files](https://huggingface.co/docs/huggingface_hub/en/guides/download) from the Hub.
+ - [Upload files](https://huggingface.co/docs/huggingface_hub/en/guides/upload) to the Hub.
+ - [Manage your repositories](https://huggingface.co/docs/huggingface_hub/en/guides/repository).
+ - [Run Inference](https://huggingface.co/docs/huggingface_hub/en/guides/inference) on deployed models.
+ - [Search](https://huggingface.co/docs/huggingface_hub/en/guides/search) for models, datasets and Spaces.
+ - [Share Model Cards](https://huggingface.co/docs/huggingface_hub/en/guides/model-cards) to document your models.
+ - [Engage with the community](https://huggingface.co/docs/huggingface_hub/en/guides/community) through PRs and comments.
+
+ ## Installation
+
+ Install the `huggingface_hub` package with [pip](https://pypi.org/project/huggingface-hub/):
+
+ ```bash
+ pip install huggingface_hub
+ ```
+
+ If you prefer, you can also install it with [conda](https://huggingface.co/docs/huggingface_hub/en/installation#install-with-conda).
+
+ To keep the package minimal by default, `huggingface_hub` comes with optional dependencies useful for some use cases. For example, if you want the complete experience for Inference, run:
+
+ ```bash
+ pip install "huggingface_hub[inference]"
+ ```
+
+ To learn more about installation and optional dependencies, check out the [installation guide](https://huggingface.co/docs/huggingface_hub/en/installation).
+
+ ## Quick start
+
+ ### Download files
+
+ Download a single file:
+
+ ```py
+ from huggingface_hub import hf_hub_download
+
+ hf_hub_download(repo_id="tiiuae/falcon-7b-instruct", filename="config.json")
+ ```
+
+ Or an entire repository:
+
+ ```py
+ from huggingface_hub import snapshot_download
+
+ snapshot_download("stabilityai/stable-diffusion-2-1")
+ ```
+
+ Files are downloaded to a local cache folder. More details in [this guide](https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache).
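The cache location itself is configurable through environment variables. As a stdlib-only sketch (the resolution order below is stated here for illustration; the cache guide is the authoritative reference), the default cache directory can be resolved from `HF_HUB_CACHE` and `HF_HOME`:

```python
import os
from pathlib import Path

def default_hf_hub_cache() -> Path:
    """Resolve the Hub cache directory (illustrative resolution order)."""
    # HF_HUB_CACHE points directly at the hub cache folder...
    if cache := os.environ.get("HF_HUB_CACHE"):
        return Path(cache)
    # ...otherwise HF_HOME is the parent of a "hub" subfolder...
    if home := os.environ.get("HF_HOME"):
        return Path(home) / "hub"
    # ...falling back to the default location under the user's home.
    return Path.home() / ".cache" / "huggingface" / "hub"

print(default_hf_hub_cache())
```

`hf_hub_download` and `snapshot_download` return paths inside this cache, so repeated calls reuse files already on disk.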
+
+ ### Login
+
+ The Hugging Face Hub uses tokens to authenticate applications (see [docs](https://huggingface.co/docs/hub/security-tokens)). To log in on your machine, run the following CLI command:
+
+ ```bash
+ hf auth login
+ # or using an environment variable
+ hf auth login --token $HUGGINGFACE_TOKEN
+ ```
+
+ ### Create a repository
+
+ ```py
+ from huggingface_hub import create_repo
+
+ create_repo(repo_id="super-cool-model")
+ ```
+
+ ### Upload files
+
+ Upload a single file:
+
+ ```py
+ from huggingface_hub import upload_file
+
+ upload_file(
+ path_or_fileobj="/home/lysandre/dummy-test/README.md",
+ path_in_repo="README.md",
+ repo_id="lysandre/test-model",
+ )
+ ```
+
+ Or an entire folder:
+
+ ```py
+ from huggingface_hub import upload_folder
+
+ upload_folder(
+ folder_path="/path/to/local/space",
+ repo_id="username/my-cool-space",
+ repo_type="space",
+ )
+ ```
+
+ For more details, check out the [upload guide](https://huggingface.co/docs/huggingface_hub/en/guides/upload).
+
+ ## Integrating with the Hub
+
+ We're partnering with cool open-source ML libraries to provide free model hosting and versioning. You can find the existing integrations [here](https://huggingface.co/docs/hub/libraries).
+
+ The advantages are:
+
+ - Free model or dataset hosting for libraries and their users.
+ - Built-in file versioning, even with very large files, thanks to a git-based approach.
+ - In-browser widgets to play with the uploaded models.
+ - Anyone can upload a new model for your library; they just need to add the corresponding tag for the model to be discoverable.
+ - Fast downloads! We use CloudFront (a CDN) to geo-replicate downloads so they're blazing fast from anywhere on the globe.
+ - Usage stats and more features to come.
+
+ If you would like to integrate your library, feel free to open an issue to begin the discussion. We wrote a [step-by-step guide](https://huggingface.co/docs/hub/adding-a-library) with ❤️ showing how to do this integration.
+
+ ## Contributions (feature requests, bugs, etc.) are super welcome 💙💚💛💜🧡❤️
+
+ Everyone is welcome to contribute, and we value everybody's contribution. Code is not the only way to help the community.
+ Answering questions, helping others, reaching out, and improving the documentation are immensely valuable to the community.
+ We wrote a [contribution guide](https://github.com/huggingface/huggingface_hub/blob/main/CONTRIBUTING.md) to summarize
+ how to get started contributing to this repository.
+
+
venv/lib/python3.13/site-packages/huggingface_hub-0.36.0.dist-info/RECORD ADDED
@@ -0,0 +1,336 @@
+ ../../../bin/hf,sha256=hYcvtyYu0U20AOsOnRUsPayoH6ntXkSPouKQjbXK41Q,213
+ ../../../bin/huggingface-cli,sha256=aMwOrmnSAieCYaLVnx97sGJlNQgkQVajQLYeaX_veo0,231
+ ../../../bin/tiny-agents,sha256=WYW6slshx3NBjSOE1HtLo9IpSHChr9TQYFPYnm2ptzU,223
+ huggingface_hub-0.36.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+ huggingface_hub-0.36.0.dist-info/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
+ huggingface_hub-0.36.0.dist-info/METADATA,sha256=Js6AVj00AvKTW32IGDW5saZNiX3C-Yb1K_QRRkLeca0,14822
+ huggingface_hub-0.36.0.dist-info/RECORD,,
+ huggingface_hub-0.36.0.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ huggingface_hub-0.36.0.dist-info/WHEEL,sha256=tZoeGjtWxWRfdplE7E3d45VPlLNQnvbKiYnx7gwAy8A,92
+ huggingface_hub-0.36.0.dist-info/entry_points.txt,sha256=HIzLhjwPTO7U_ncpW4AkmzAuaadr1ajmYagW5mdb5TM,217
+ huggingface_hub-0.36.0.dist-info/top_level.txt,sha256=8KzlQJAY4miUvjAssOAJodqKOw3harNzuiwGQ9qLSSk,16
+ huggingface_hub/__init__.py,sha256=AXr4hh9b3lZcbOnyMUV_Y6JXn2Q1iMCVYPnq1FYPhOQ,52675
+ huggingface_hub/__pycache__/__init__.cpython-313.pyc,,
+ huggingface_hub/__pycache__/_commit_api.cpython-313.pyc,,
+ huggingface_hub/__pycache__/_commit_scheduler.cpython-313.pyc,,
+ huggingface_hub/__pycache__/_inference_endpoints.cpython-313.pyc,,
+ huggingface_hub/__pycache__/_jobs_api.cpython-313.pyc,,
+ huggingface_hub/__pycache__/_local_folder.cpython-313.pyc,,
+ huggingface_hub/__pycache__/_login.cpython-313.pyc,,
+ huggingface_hub/__pycache__/_oauth.cpython-313.pyc,,
+ huggingface_hub/__pycache__/_snapshot_download.cpython-313.pyc,,
+ huggingface_hub/__pycache__/_space_api.cpython-313.pyc,,
+ huggingface_hub/__pycache__/_tensorboard_logger.cpython-313.pyc,,
+ huggingface_hub/__pycache__/_upload_large_folder.cpython-313.pyc,,
+ huggingface_hub/__pycache__/_webhooks_payload.cpython-313.pyc,,
+ huggingface_hub/__pycache__/_webhooks_server.cpython-313.pyc,,
+ huggingface_hub/__pycache__/community.cpython-313.pyc,,
+ huggingface_hub/__pycache__/constants.cpython-313.pyc,,
+ huggingface_hub/__pycache__/dataclasses.cpython-313.pyc,,
+ huggingface_hub/__pycache__/errors.cpython-313.pyc,,
+ huggingface_hub/__pycache__/fastai_utils.cpython-313.pyc,,
+ huggingface_hub/__pycache__/file_download.cpython-313.pyc,,
+ huggingface_hub/__pycache__/hf_api.cpython-313.pyc,,
+ huggingface_hub/__pycache__/hf_file_system.cpython-313.pyc,,
+ huggingface_hub/__pycache__/hub_mixin.cpython-313.pyc,,
+ huggingface_hub/__pycache__/inference_api.cpython-313.pyc,,
+ huggingface_hub/__pycache__/keras_mixin.cpython-313.pyc,,
+ huggingface_hub/__pycache__/lfs.cpython-313.pyc,,
+ huggingface_hub/__pycache__/repocard.cpython-313.pyc,,
+ huggingface_hub/__pycache__/repocard_data.cpython-313.pyc,,
+ huggingface_hub/__pycache__/repository.cpython-313.pyc,,
+ huggingface_hub/_commit_api.py,sha256=pGESDsicpWMeZnct-71635KgTfvUoyok_hPl9ZgIIWI,41010
+ huggingface_hub/_commit_scheduler.py,sha256=P64poLZoTJnSyR39SN6w5s9bLyngKstWee03fpoVETQ,14660
+ huggingface_hub/_inference_endpoints.py,sha256=ahmbPcEXsJ_JcMb9TDgdkD8Z2z9uytkFG3_1o6dTm8g,17598
+ huggingface_hub/_jobs_api.py,sha256=OFcbChcXsLvaX4oGumsHscZKAzsueYIhh0Z6Y4ycpio,10883
+ huggingface_hub/_local_folder.py,sha256=2iHXNgIT3UdSt2PvCovd0NzgVxTRypKb-rvAFLK-gZU,17305
+ huggingface_hub/_login.py,sha256=TWNkZpMPkDuttQ36uoi-ozLQ1IcXVsZ42tbcQ-b-h0Q,20248
+ huggingface_hub/_oauth.py,sha256=75ya9toHxC0WRKsLOAI212CrssRjTSxs16mHWWNMb3w,18714
+ huggingface_hub/_snapshot_download.py,sha256=b-NzYQcvktsAirIfGQKgzQwu8w0S6lhBTvnJ5S6saw8,16166
+ huggingface_hub/_space_api.py,sha256=jb6rF8qLtjaNU12D-8ygAPM26xDiHCu8CHXHowhGTmg,5470
+ huggingface_hub/_tensorboard_logger.py,sha256=tUdQzx-wXF4yjoGJG2izqZrn-IPMflMBWMkl1sKYzo0,8420
+ huggingface_hub/_upload_large_folder.py,sha256=l2YWLZttOw69EGdihT3y_Nhr5mweLGooZG9L8smNoHY,30066
+ huggingface_hub/_webhooks_payload.py,sha256=Xm3KaK7tCOGBlXkuZvbym6zjHXrT1XCrbUFWuXiBmNY,3617
+ huggingface_hub/_webhooks_server.py,sha256=RLrQuCHlDH_qUQJQOm11fKFDEhIUR2IxwazuKy-T9Uo,15672
+ huggingface_hub/cli/__init__.py,sha256=xzX1qgAvrtAX4gP59WrPlvOZFLuzuTgcjvanQvcpgHc,928
+ huggingface_hub/cli/__pycache__/__init__.cpython-313.pyc,,
+ huggingface_hub/cli/__pycache__/_cli_utils.cpython-313.pyc,,
+ huggingface_hub/cli/__pycache__/auth.cpython-313.pyc,,
+ huggingface_hub/cli/__pycache__/cache.cpython-313.pyc,,
+ huggingface_hub/cli/__pycache__/download.cpython-313.pyc,,
+ huggingface_hub/cli/__pycache__/hf.cpython-313.pyc,,
+ huggingface_hub/cli/__pycache__/jobs.cpython-313.pyc,,
+ huggingface_hub/cli/__pycache__/lfs.cpython-313.pyc,,
+ huggingface_hub/cli/__pycache__/repo.cpython-313.pyc,,
+ huggingface_hub/cli/__pycache__/repo_files.cpython-313.pyc,,
+ huggingface_hub/cli/__pycache__/system.cpython-313.pyc,,
+ huggingface_hub/cli/__pycache__/upload.cpython-313.pyc,,
+ huggingface_hub/cli/__pycache__/upload_large_folder.cpython-313.pyc,,
+ huggingface_hub/cli/_cli_utils.py,sha256=Nt6CjbkYqQQRuh70bUXVA6rZpbZt_Sa1WqBUxjQLu6g,2095
+ huggingface_hub/cli/auth.py,sha256=XSsbU7-_TS5IXdASkgUCdQeoXVG82VUyGYvOS4oLLRs,7317
+ huggingface_hub/cli/cache.py,sha256=fQjYfbRUapeHsK10Y6w_Ixu9JKyuZyM7pJzExJGd_2c,15855
+ huggingface_hub/cli/download.py,sha256=8b5wqhMYg3X9tar9EEeWdPZk9um1kZTI_WgBqyiatqs,7141
+ huggingface_hub/cli/hf.py,sha256=SQ73_SXEQnWVJkhKT_6bwNQBHQXGOdI5qqlTTtI0XH0,2328
+ huggingface_hub/cli/jobs.py,sha256=eA6Q7iy_-7vjU4SjYPvn71b2aVo2qt3q-pVxLyXCWqg,44317
+ huggingface_hub/cli/lfs.py,sha256=J9MkKOGUW6GjBrKs2zZUCOaAGxpatxsEoSbBjuhDJV8,7230
+ huggingface_hub/cli/repo.py,sha256=CuOqQZ7WELLk9Raf3tnyXILt9e93OrlS8Dyxx3BqdQA,10618
+ huggingface_hub/cli/repo_files.py,sha256=9oeeQJx8Z0ygbTElw1o5T6dGtRbeolcXENt_ouEBvjk,4844
+ huggingface_hub/cli/system.py,sha256=eLSYME7ywt5Ae3tYQnS43Tai2pR2JLtA1KGImzPt5pM,1707
+ huggingface_hub/cli/upload.py,sha256=lOHR_JzfM2XL_pYK3Z1HlGnaAI-fw7xGY46Lccvbsy4,14362
+ huggingface_hub/cli/upload_large_folder.py,sha256=w4RIW0yZKTnNnhDOB6yISnIo_h_Hy13KwWVzrFzczpY,6164
+ huggingface_hub/commands/__init__.py,sha256=AkbM2a-iGh0Vq_xAWhK3mu3uZ44km8-X5uWjKcvcrUQ,928
+ huggingface_hub/commands/__pycache__/__init__.cpython-313.pyc,,
+ huggingface_hub/commands/__pycache__/_cli_utils.cpython-313.pyc,,
+ huggingface_hub/commands/__pycache__/delete_cache.cpython-313.pyc,,
+ huggingface_hub/commands/__pycache__/download.cpython-313.pyc,,
+ huggingface_hub/commands/__pycache__/env.cpython-313.pyc,,
+ huggingface_hub/commands/__pycache__/huggingface_cli.cpython-313.pyc,,
+ huggingface_hub/commands/__pycache__/lfs.cpython-313.pyc,,
+ huggingface_hub/commands/__pycache__/repo.cpython-313.pyc,,
+ huggingface_hub/commands/__pycache__/repo_files.cpython-313.pyc,,
+ huggingface_hub/commands/__pycache__/scan_cache.cpython-313.pyc,,
+ huggingface_hub/commands/__pycache__/tag.cpython-313.pyc,,
+ huggingface_hub/commands/__pycache__/upload.cpython-313.pyc,,
+ huggingface_hub/commands/__pycache__/upload_large_folder.cpython-313.pyc,,
+ huggingface_hub/commands/__pycache__/user.cpython-313.pyc,,
+ huggingface_hub/commands/__pycache__/version.cpython-313.pyc,,
+ huggingface_hub/commands/_cli_utils.py,sha256=ePYTIEWnU677nPvdNC5AdYcEB1400L6qYEUxMkVUzME,2329
+ huggingface_hub/commands/delete_cache.py,sha256=035yACUtVUIG8tEtc5vexDoFFphzdk5IXkFTlD4WMiw,17738
+ huggingface_hub/commands/download.py,sha256=0QY9ho7eiAPvFndBPttGtH6vXNk3r9AioltNwc8h1Z4,8310
+ huggingface_hub/commands/env.py,sha256=qv4SmjuzUz9exo4RDMY2HqabLCKE1oRb55cBA6LN9R4,1342
+ huggingface_hub/commands/huggingface_cli.py,sha256=gDi7JueyiLD0bGclTEYfHPQWpAY_WBdPfHT7vkqa5v0,2654
+ huggingface_hub/commands/lfs.py,sha256=xdbnNRO04UuQemEhUGT809jFgQn9Rj-SnyT_0Ph-VYg,7342
+ huggingface_hub/commands/repo.py,sha256=WcRDFqUYKB0Kz0zFopegiG614ot6VOYTAf6jht0BMss,6042
+ huggingface_hub/commands/repo_files.py,sha256=ftjLCC3XCY-AMmiYiZPIdRMmIqZbqVZw-BSjBLcZup4,5054
+ huggingface_hub/commands/scan_cache.py,sha256=gQlhBZgWkUzH4wrIYnvgV7CA4C7rvV2SuY0x2JCB7g0,8675
+ huggingface_hub/commands/tag.py,sha256=4fgQuXJHG59lTVyOjIUZjxdJDL4JZW4q10XDPSo-gss,6382
+ huggingface_hub/commands/upload.py,sha256=eAJIig4ljtO9FRyGjiz6HbHS-Q4MOQziRgzjQrl5Koo,14576
+ huggingface_hub/commands/upload_large_folder.py,sha256=_1id84BFtbL8HgFRKZ-el_uPrijamz1qWlzO16KbUAc,6254
+ huggingface_hub/commands/user.py,sha256=dDpi0mLYvTeYf0fhPVQyEJsn7Wrk6gWvR5YHC6RgebU,7516
+ huggingface_hub/commands/version.py,sha256=rGpCbvxImY9eQqXrshYt609Iws27R75WARmKQrIo6Ok,1390
+ huggingface_hub/community.py,sha256=exJxrySnXURAijkVOcreuwM5JAuuz2L1xTSDkd223wk,12365
+ huggingface_hub/constants.py,sha256=nILseAp4rqLu_KQTZDpPGOhepVAPanD7azbomAvovj0,10313
+ huggingface_hub/dataclasses.py,sha256=rjQfuX9MeTXZQrCQC8JvkjpARDehOiSluE7Kz1L7Ueg,17337
+ huggingface_hub/errors.py,sha256=D7Lw0Jjrf8vfmD0B26LEvg-JWkU8Zq0KDPJOzFY4QLw,11201
+ huggingface_hub/fastai_utils.py,sha256=m7wwWk-TdhIB1CJMigAzzUBP4eLQALutEzwjWf9Ej-o,16755
+ huggingface_hub/file_download.py,sha256=-vUQWkPGlzxNqevwU4OzlGOa0ZgvS1U_RGnuO_PwqN8,78957
+ huggingface_hub/hf_api.py,sha256=REMm9AFgUtyizI6tkEy6glX2Aa7-TH7-uWhlhl0q0fE,487935
+ huggingface_hub/hf_file_system.py,sha256=uLeublBZhWd4309fE3eFHIN8G7RCrX2_6_gr0BYjuzQ,48338
+ huggingface_hub/hub_mixin.py,sha256=Ii3w9o7XgGbj6UNPnieW5IDfaCd8OEKpIH1hRkncRDQ,38208
+ huggingface_hub/inference/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ huggingface_hub/inference/__pycache__/__init__.cpython-313.pyc,,
+ huggingface_hub/inference/__pycache__/_client.cpython-313.pyc,,
+ huggingface_hub/inference/__pycache__/_common.cpython-313.pyc,,
+ huggingface_hub/inference/_client.py,sha256=9cAIkBFuzFC5f6jVp62MJNDSUcPqxsFluhQLi6FqXdc,157536
+ huggingface_hub/inference/_common.py,sha256=dI3OPg0320OOB0FRy_kqftW9F3ghEnBVA5Gi4VaSctg,15778
+ huggingface_hub/inference/_generated/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ huggingface_hub/inference/_generated/__pycache__/__init__.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/__pycache__/_async_client.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/_async_client.py,sha256=DSOAXJ_TxRubPisWnVKzepXalDA7PcE-NG3oczo8iMw,163445
+ huggingface_hub/inference/_generated/types/__init__.py,sha256=9WvrGQ8aThtKSNzZF06j-CIE2ZuItne8FFnea1p1u38,6557
+ huggingface_hub/inference/_generated/types/__pycache__/__init__.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/audio_classification.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/audio_to_audio.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/automatic_speech_recognition.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/base.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/chat_completion.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/depth_estimation.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/document_question_answering.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/feature_extraction.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/fill_mask.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/image_classification.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/image_segmentation.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/image_to_image.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/image_to_text.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/image_to_video.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/object_detection.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/question_answering.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/sentence_similarity.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/summarization.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/table_question_answering.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/text2text_generation.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/text_classification.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/text_generation.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/text_to_audio.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/text_to_image.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/text_to_speech.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/text_to_video.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/token_classification.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/translation.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/video_classification.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/visual_question_answering.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/zero_shot_classification.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/zero_shot_image_classification.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/__pycache__/zero_shot_object_detection.cpython-313.pyc,,
+ huggingface_hub/inference/_generated/types/audio_classification.py,sha256=Jg3mzfGhCSH6CfvVvgJSiFpkz6v4nNA0G4LJXacEgNc,1573
+ huggingface_hub/inference/_generated/types/audio_to_audio.py,sha256=2Ep4WkePL7oJwcp5nRJqApwviumGHbft9HhXE9XLHj4,891
+ huggingface_hub/inference/_generated/types/automatic_speech_recognition.py,sha256=8CEphr6rvRHgq1L5Md3tq14V0tEAmzJkemh1_7gSswo,5515
+ huggingface_hub/inference/_generated/types/base.py,sha256=4XG49q0-2SOftYQ8HXQnWLxiJktou-a7IoG3kdOv-kg,6751
+ huggingface_hub/inference/_generated/types/chat_completion.py,sha256=j1Y8G4g5yGs4g7N4sXWbipF8TwkQG0J-ftL9OxejkBw,11254
+ huggingface_hub/inference/_generated/types/depth_estimation.py,sha256=rcpe9MhYMeLjflOwBs3KMZPr6WjOH3FYEThStG-FJ3M,929
+ huggingface_hub/inference/_generated/types/document_question_answering.py,sha256=6BEYGwJcqGlah4RBJDAvWFTEXkO0mosBiMy82432nAM,3202
+ huggingface_hub/inference/_generated/types/feature_extraction.py,sha256=NMWVL_TLSG5SS5bdt1-fflkZ75UMlMKeTMtmdnUTADc,1537
+ huggingface_hub/inference/_generated/types/fill_mask.py,sha256=OrTgQ7Ndn0_dWK5thQhZwTOHbQni8j0iJcx9llyhRds,1708
+ huggingface_hub/inference/_generated/types/image_classification.py,sha256=A-Y024o8723_n8mGVos4TwdAkVL62McGeL1iIo4VzNs,1585
+ huggingface_hub/inference/_generated/types/image_segmentation.py,sha256=vrkI4SuP1Iq_iLXc-2pQhYY3SHN4gzvFBoZqbUHxU7o,1950
+ huggingface_hub/inference/_generated/types/image_to_image.py,sha256=snvGbmCdqchxGef25MceD7LSKAmVkIgnoX5t71rdlAQ,2290
+ huggingface_hub/inference/_generated/types/image_to_text.py,sha256=OaFEBAfgT-fOVzJ7xVermGf7VODhrc9-Jg38WrM7-2o,4810
+ huggingface_hub/inference/_generated/types/image_to_video.py,sha256=bC-L_cNsDhk4s_IdSiprJ9d1NeMGePLcUp7UPpco21w,2240
+ huggingface_hub/inference/_generated/types/object_detection.py,sha256=VuFlb1281qTXoSgJDmquGz-VNfEZLo2H0Rh_F6MF6ts,2000
+ huggingface_hub/inference/_generated/types/question_answering.py,sha256=zw38a9_9l2k1ifYZefjkioqZ4asfSRM9M4nU3gSCmAQ,2898
+ huggingface_hub/inference/_generated/types/sentence_similarity.py,sha256=w5Nj1g18eBzopZwxuDLI-fEsyaCK2KrHA5yf_XfSjgo,1052
+ huggingface_hub/inference/_generated/types/summarization.py,sha256=WGGr8uDLrZg8JQgF9ZMUP9euw6uZo6zwkVZ-IfvCFI0,1487
183
+ huggingface_hub/inference/_generated/types/table_question_answering.py,sha256=cJnIPA2fIbQP2Ejn7X_esY48qGWoXg30fnNOqCXiOVQ,2293
184
+ huggingface_hub/inference/_generated/types/text2text_generation.py,sha256=v-418w1JNNSZ2tuW9DUl6a36TQQCADa438A3ufvcbOw,1609
185
+ huggingface_hub/inference/_generated/types/text_classification.py,sha256=FarAjygLEfPofLfKeabzJ7PKEBItlHGoUNUOzyLRpL4,1445
186
+ huggingface_hub/inference/_generated/types/text_generation.py,sha256=28u-1zU7elk2teP3y4u1VAtDDHzY0JZ2KEEJe5d5uvg,5922
187
+ huggingface_hub/inference/_generated/types/text_to_audio.py,sha256=1HR9Q6s9MXqtKGTvHPLGVMum5-eg7O-Pgv6Nd0v8_HU,4741
188
+ huggingface_hub/inference/_generated/types/text_to_image.py,sha256=sGGi1Fa0n5Pmd6G3I-F2SBJcJ1M7Gmqnng6sfi0AVzs,1903
189
+ huggingface_hub/inference/_generated/types/text_to_speech.py,sha256=ROFuR32ijROCeqbv81Jos0lmaA8SRWyIUsWrdD4yWow,4760
190
+ huggingface_hub/inference/_generated/types/text_to_video.py,sha256=yHXVNs3t6aYO7visrBlB5cH7kjoysxF9510aofcf_18,1790
191
+ huggingface_hub/inference/_generated/types/token_classification.py,sha256=iblAcgfxXeaLYJ14NdiiCMIQuBlarUknLkXUklhvcLI,1915
192
+ huggingface_hub/inference/_generated/types/translation.py,sha256=xww4X5cfCYv_F0oINWLwqJRPCT6SV3VBAJuPjTs_j7o,1763
193
+ huggingface_hub/inference/_generated/types/video_classification.py,sha256=TyydjQw2NRLK9sDGzJUVnkDeo848ebmCx588Ur8I9q0,1680
194
+ huggingface_hub/inference/_generated/types/visual_question_answering.py,sha256=AWrQ6qo4gZa3PGedaNpzDFqx5yOYyjhnUB6iuZEj_uo,1673
195
+ huggingface_hub/inference/_generated/types/zero_shot_classification.py,sha256=BAiebPjsqoNa8EU35Dx0pfIv8W2c4GSl-TJckV1MaxQ,1738
196
+ huggingface_hub/inference/_generated/types/zero_shot_image_classification.py,sha256=8J9n6VqFARkWvPfAZNWEG70AlrMGldU95EGQQwn06zI,1487
197
+ huggingface_hub/inference/_generated/types/zero_shot_object_detection.py,sha256=GUd81LIV7oEbRWayDlAVgyLmY596r1M3AW0jXDp1yTA,1630
198
+ huggingface_hub/inference/_mcp/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ huggingface_hub/inference/_mcp/__pycache__/__init__.cpython-313.pyc,,
+ huggingface_hub/inference/_mcp/__pycache__/_cli_hacks.cpython-313.pyc,,
+ huggingface_hub/inference/_mcp/__pycache__/agent.cpython-313.pyc,,
+ huggingface_hub/inference/_mcp/__pycache__/cli.cpython-313.pyc,,
+ huggingface_hub/inference/_mcp/__pycache__/constants.cpython-313.pyc,,
+ huggingface_hub/inference/_mcp/__pycache__/mcp_client.cpython-313.pyc,,
+ huggingface_hub/inference/_mcp/__pycache__/types.cpython-313.pyc,,
+ huggingface_hub/inference/_mcp/__pycache__/utils.cpython-313.pyc,,
+ huggingface_hub/inference/_mcp/_cli_hacks.py,sha256=KX9HZJPa1p8ngY3mtYGGlVUXfg4vYbbBRs-8HLToP04,3284
+ huggingface_hub/inference/_mcp/agent.py,sha256=jqvQwOajY41RIhCtD-XgVfuWbTouSYCQkIWJ1gHRrJQ,4262
+ huggingface_hub/inference/_mcp/cli.py,sha256=AmSUT6wXlE6EWmI0SfQgTWYnL07322zGwwk2yMZZlBc,9640
+ huggingface_hub/inference/_mcp/constants.py,sha256=kldRfaidXMdyMl_jLosaQomgWDv4shvnFe3dnQNwXSU,2511
+ huggingface_hub/inference/_mcp/mcp_client.py,sha256=9rcwOO7L2Ih0oGLkeY9o5gbkwEBmsDkHKf4XAmp4Mvc,16784
+ huggingface_hub/inference/_mcp/types.py,sha256=3gq-P_mrmvPI6KWBqjCxavtMPiGz10YXog7wg4oJYAo,941
+ huggingface_hub/inference/_mcp/utils.py,sha256=KFsGOC8dytS3VgaugBzibdteWasZ9CAnp83U2SyIlMw,4188
+ huggingface_hub/inference/_providers/__init__.py,sha256=UxPnzOdVcJgroPEatuahb4fsHaObUYPrwUCzv5ADCa4,9019
+ huggingface_hub/inference/_providers/__pycache__/__init__.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/_common.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/black_forest_labs.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/cerebras.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/clarifai.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/cohere.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/fal_ai.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/featherless_ai.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/fireworks_ai.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/groq.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/hf_inference.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/hyperbolic.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/nebius.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/novita.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/nscale.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/openai.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/publicai.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/replicate.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/sambanova.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/scaleway.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/together.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/__pycache__/zai_org.cpython-313.pyc,,
+ huggingface_hub/inference/_providers/_common.py,sha256=brZJ1CUxDKooPdmVlm4cuKjvaW_refVY0Y7CbGQe7e4,12373
+ huggingface_hub/inference/_providers/black_forest_labs.py,sha256=FIukZoIFt_FDrTTDfpF-Vko5sXnmH0QvVIsMtV2Jzm8,2852
+ huggingface_hub/inference/_providers/cerebras.py,sha256=QOJ-1U-os7uE7p6eUnn_P_APq-yQhx28be7c3Tq2EuA,210
+ huggingface_hub/inference/_providers/clarifai.py,sha256=1cEXQwhGk4DRKiPCQUa5y-L6okTo4781EImQC8yJVOw,380
+ huggingface_hub/inference/_providers/cohere.py,sha256=O3tC-qIUL91mx_mE8bOHCtDWcQuKOUauhUoXSUBUCZ8,1253
+ huggingface_hub/inference/_providers/fal_ai.py,sha256=pCr5qP6R1W1CrEw-_nKdNuP3UqsUi58yL18w4r7mXRo,9989
+ huggingface_hub/inference/_providers/featherless_ai.py,sha256=QxBz-32O4PztxixrIjrfKuTOzvfqyUi-cVsw0Hf_zlY,1382
+ huggingface_hub/inference/_providers/fireworks_ai.py,sha256=Id226ITfPkOcFMFzly3MW9l-dZl9l4qizL4JEHWkBFk,1215
+ huggingface_hub/inference/_providers/groq.py,sha256=JTk2JV4ZOlaohho7zLAFQtk92kGVsPmLJ1hmzcwsqvQ,315
+ huggingface_hub/inference/_providers/hf_inference.py,sha256=0yi3cR-EJ4HYx3mSzOsMOTVmvVBkaajTzTfKB8JXQpk,9540
+ huggingface_hub/inference/_providers/hyperbolic.py,sha256=OQIBi2j3aNvuaSQ8BUK1K1PVeRXdrxc80G-6YmBa-ns,1985
+ huggingface_hub/inference/_providers/nebius.py,sha256=VJpTF2JZ58rznc9wxdk-57vwF8sV2vESw_WkXjXqCho,3580
+ huggingface_hub/inference/_providers/novita.py,sha256=HGVC8wPraRQUuI5uBoye1Y4Wqe4X116B71GhhbWy5yM,2514
+ huggingface_hub/inference/_providers/nscale.py,sha256=qWUsWinQmUbNUqehyKn34tVoWehu8gd-OZ2F4uj2SWM,1802
+ huggingface_hub/inference/_providers/openai.py,sha256=GCVYeNdjWIgpQQ7E_Xv8IebmdhTi0S6WfFosz3nLtps,1089
+ huggingface_hub/inference/_providers/publicai.py,sha256=1I2W6rORloB5QHSvky4njZO2XKLTwA-kPdNoauoT5rg,210
+ huggingface_hub/inference/_providers/replicate.py,sha256=otVfPkfBtlWrpjQub4V__t7g_w8Ewc7ZU3efiOauW-I,3820
+ huggingface_hub/inference/_providers/sambanova.py,sha256=Unt3H3jr_kgI9vzRjmmW1DFyoEuPkKCcgIIloiOj3j8,2037
+ huggingface_hub/inference/_providers/scaleway.py,sha256=Jy81kXWbXCHBpx6xmyzdEfXGSyhUfjKOLHuDSvhHWGo,1209
+ huggingface_hub/inference/_providers/together.py,sha256=KHF19CS3qXS7G1-CwcMiD8Z5wzPKEKi4F2DzqAthbBE,3439
+ huggingface_hub/inference/_providers/zai_org.py,sha256=plGzMZuLrChZvgpS3CCPqI6ImotZZxNLgfxnR7v6tw8,646
+ huggingface_hub/inference_api.py,sha256=b4-NhPSn9b44nYKV8tDKXodmE4JVdEymMWL4CVGkzlE,8323
+ huggingface_hub/keras_mixin.py,sha256=gDm8PBcTqYhfrEvhu1_ptxzxbVOF3h0wAArn90UyzRA,19547
+ huggingface_hub/lfs.py,sha256=v0mTThnULTmFv8MVWfrkQEwkiFXzWWx7xyp2VLf-EPo,17020
+ huggingface_hub/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ huggingface_hub/repocard.py,sha256=8tmR7SYVQZ4iBFYCmOj0yl6Ohc9Vv136s-KQKkxBq7U,34865
+ huggingface_hub/repocard_data.py,sha256=hr4ReFpEQMNdh_9Dx-L-IJoI1ElHyk-h-8ZRqwVYYOE,34082
+ huggingface_hub/repository.py,sha256=axZcbAh4ufXEaMgPbrS1WWgvshd-mFvYnRZAZ_yYljQ,54541
+ huggingface_hub/serialization/__init__.py,sha256=kn-Fa-m4FzMnN8lNsF-SwFcfzug4CucexybGKyvZ8S0,1041
+ huggingface_hub/serialization/__pycache__/__init__.cpython-313.pyc,,
+ huggingface_hub/serialization/__pycache__/_base.cpython-313.pyc,,
+ huggingface_hub/serialization/__pycache__/_dduf.cpython-313.pyc,,
+ huggingface_hub/serialization/__pycache__/_tensorflow.cpython-313.pyc,,
+ huggingface_hub/serialization/__pycache__/_torch.cpython-313.pyc,,
+ huggingface_hub/serialization/_base.py,sha256=VGQ4Z9Abg2gsL_1rTGSS9p-3tkkG9eaERjlzBTLGkdU,8109
+ huggingface_hub/serialization/_dduf.py,sha256=s42239rLiHwaJE36QDEmS5GH7DSmQ__BffiHJO5RjIg,15424
+ huggingface_hub/serialization/_tensorflow.py,sha256=Ea3wN1bKgyb_9opj-FtH-WpIp0ptkovKimroZOudX5c,3608
+ huggingface_hub/serialization/_torch.py,sha256=dw3RMkr0CYAr_TwPG_rma-ueHBRTXpfEJtrVKAvvtN4,45143
+ huggingface_hub/templates/datasetcard_template.md,sha256=W-EMqR6wndbrnZorkVv56URWPG49l7MATGeI015kTvs,5503
+ huggingface_hub/templates/modelcard_template.md,sha256=4AqArS3cqdtbit5Bo-DhjcnDFR-pza5hErLLTPM4Yuc,6870
+ huggingface_hub/utils/__init__.py,sha256=ORfVkn5D0wuLIq12jjhTzn5_c4F8fRPxB7TG-iednuQ,3722
+ huggingface_hub/utils/__pycache__/__init__.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_auth.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_cache_assets.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_cache_manager.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_chunk_utils.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_datetime.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_deprecation.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_dotenv.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_experimental.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_fixes.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_git_credential.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_headers.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_hf_folder.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_http.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_lfs.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_pagination.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_paths.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_runtime.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_safetensors.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_subprocess.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_telemetry.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_typing.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_validators.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_xet.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/_xet_progress_reporting.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/endpoint_helpers.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/insecure_hashlib.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/logging.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/sha.cpython-313.pyc,,
+ huggingface_hub/utils/__pycache__/tqdm.cpython-313.pyc,,
+ huggingface_hub/utils/_auth.py,sha256=Ixve2vxdftHXXk2R2vfyLzlVoDT39Tkq-Hrou9KCUvw,8286
+ huggingface_hub/utils/_cache_assets.py,sha256=kai77HPQMfYpROouMBQCr_gdBCaeTm996Sqj0dExbNg,5728
+ huggingface_hub/utils/_cache_manager.py,sha256=XbeYoZMj8_JCl6eqRviHO6DxGSS29r5Pj38xLlao96Y,34364
+ huggingface_hub/utils/_chunk_utils.py,sha256=MH7-6FwCDZ8noV6dGRytCOJGSfcZmDBvsvVotdI8TvQ,2109
+ huggingface_hub/utils/_datetime.py,sha256=kCS5jaKV25kOncX1xujbXsz5iDLcjLcLw85semGNzxQ,2770
+ huggingface_hub/utils/_deprecation.py,sha256=HZhRGGUX_QMKBBBwHHlffLtmCSK01TOpeXHefZbPfwI,4872
+ huggingface_hub/utils/_dotenv.py,sha256=RzHqC8HgzVxE-N4DFBcnemvX0NHmXcV0My2ASK0U1OQ,2017
+ huggingface_hub/utils/_experimental.py,sha256=3-c8irbn9sJr2CwWbzhGkIrdXKg8_x7BifhHFy32ei8,2470
+ huggingface_hub/utils/_fixes.py,sha256=xQV1QkUn2WpLqLjtXNiyn9gh-454K6AF-Q3kwkYAQD8,4437
+ huggingface_hub/utils/_git_credential.py,sha256=ao9rq-rVHn8lghSVZEjDAX4kIkNi7bayY361TDSgSpg,4619
+ huggingface_hub/utils/_headers.py,sha256=w4ayq4hLGaZ3B7nwdEi5Zu23SmmDuOwv58It78wkakk,8868
+ huggingface_hub/utils/_hf_folder.py,sha256=WNjTnu0Q7tqcSS9EsP4ssCJrrJMcCvAt8P_-LEtmOU8,2487
+ huggingface_hub/utils/_http.py,sha256=Q7W1YoT2k47duPb9ib_FGPEXwn8B3BtrwNvjF5ZYW_w,25581
+ huggingface_hub/utils/_lfs.py,sha256=EC0Oz6Wiwl8foRNkUOzrETXzAWlbgpnpxo5a410ovFY,3957
+ huggingface_hub/utils/_pagination.py,sha256=EX5tRasSuQDaKbXuGYbInBK2odnSWNHgzw2tSgqeBRI,1906
+ huggingface_hub/utils/_paths.py,sha256=w1ZhFmmD5ykWjp_hAvhjtOoa2ZUcOXJrF4a6O3QpAWo,5042
+ huggingface_hub/utils/_runtime.py,sha256=L7SOYezdxKcwd4DovAY0UGY3qt27toXO-QjceIDwExk,11634
+ huggingface_hub/utils/_safetensors.py,sha256=GW3nyv7xQcuwObKYeYoT9VhURVzG1DZTbKBKho8Bbos,4458
+ huggingface_hub/utils/_subprocess.py,sha256=u9FFUDE7TrzQTiuEzlUnHx7S2P57GbYRV8u16GJwrFw,4625
+ huggingface_hub/utils/_telemetry.py,sha256=54LXeIJU5pEGghPAh06gqNAR-UoxOjVLvKqAQscwqZs,4890
+ huggingface_hub/utils/_typing.py,sha256=z-134-HG_qJc0cjdSXkmDm3vIRyF5aEfbZgJCB_Qp2Y,3628
+ huggingface_hub/utils/_validators.py,sha256=u8AacmA9xCCyer8efmzl1EpQUWTe3zVzsWSJSv3uxTU,9190
+ huggingface_hub/utils/_xet.py,sha256=f8qfk8YKePAeGUL6lQiQ1w_3bcs78oWwbeACYdUeg5k,7312
+ huggingface_hub/utils/_xet_progress_reporting.py,sha256=JK64hv8orABfNnk1_Wd0YyD_5FfeyVeBvelKpjaNIvs,6169
+ huggingface_hub/utils/endpoint_helpers.py,sha256=9VtIAlxQ5H_4y30sjCAgbu7XCqAtNLC7aRYxaNn0hLI,2366
+ huggingface_hub/utils/insecure_hashlib.py,sha256=iAaepavFZ5Dhfa5n8KozRfQprKmvcjSnt3X58OUl9fQ,1142
+ huggingface_hub/utils/logging.py,sha256=N6NXaCcbPbZSF-Oe-TY3ZnmkpmdFVyTOV8ASo-yVXLE,4916
+ huggingface_hub/utils/sha.py,sha256=OFnNGCba0sNcT2gUwaVCJnldxlltrHHe0DS_PCpV3C4,2134
+ huggingface_hub/utils/tqdm.py,sha256=xAKcyfnNHsZ7L09WuEM5Ew5-MDhiahLACbbN2zMmcLs,10671
venv/lib/python3.13/site-packages/huggingface_hub-0.36.0.dist-info/REQUESTED ADDED
File without changes
venv/lib/python3.13/site-packages/huggingface_hub-0.36.0.dist-info/entry_points.txt ADDED
@@ -0,0 +1,8 @@
+ [console_scripts]
+ hf = huggingface_hub.cli.hf:main
+ huggingface-cli = huggingface_hub.commands.huggingface_cli:main
+ tiny-agents = huggingface_hub.inference._mcp.cli:app
+
+ [fsspec.specs]
+ hf=huggingface_hub.HfFileSystem
+
venv/lib/python3.13/site-packages/typing_extensions.py ADDED
The diff for this file is too large to render. See raw diff
 
venv/pyvenv.cfg ADDED
@@ -0,0 +1,5 @@
+ home = /usr/bin
+ include-system-site-packages = false
+ version = 3.13.7
+ executable = /usr/bin/python3.13
+ command = /usr/bin/python -m venv /home/dheena/prerad/venv
volumes/notebooks/.gitignore ADDED
@@ -0,0 +1 @@
+ .ipynb_checkpoints
volumes/notebooks/etl.ipynb ADDED
@@ -0,0 +1,191 @@
+ {
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "d12005fa",
+ "metadata": {},
+ "outputs": [
+ {
+ "ename": "",
+ "evalue": "",
+ "output_type": "error",
+ "traceback": [
+ "\u001b[1;31mRunning cells with 'blip' requires ipykernel package.\n",
+ "\u001b[1;31mRun the following command to install 'ipykernel' into the Python environment. \n",
+ "\u001b[1;31mCommand: 'conda install -n blip ipykernel --update-deps --force-reinstall'"
+ ]
+ }
+ ],
+ "source": [
+ "import pandas as pd "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "e7eebe96",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "part p10\n",
+ "patient p10000032\n",
+ "scan [s50414267.txt, s53189527.txt, s53911762.txt, ...\n",
+ "Name: 0, dtype: object"
+ ]
+ },
+ "execution_count": 8,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# one row in the control dictionary\n",
+ "pd.read_json(\"/opt/physionet/control.jsonl\",lines=True).iloc[0]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "id": "1d191897",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "fold p10\n",
+ "image /opt/physionet/physionet.org/files/mimic-cxr-j...\n",
+ "original FINAL REPORT\\...\n",
+ "report /opt/physionet/physionet.org/files/mimic-cxr/2...\n",
+ "patient p10000764\n",
+ "text findings: pa and lateral views of the chest pr...\n",
+ "indication indication: unknown year old male with hypoxia...\n",
+ "Name: 0, dtype: object"
+ ]
+ },
+ "execution_count": 10,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# each row contains the labels and metadata needed to train the transformer\n",
+ "example = pd.read_json(\"/opt/physionet/dataset.jsonl\",lines=True).iloc[0]\n",
+ "example"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "id": "dd14ab31",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ " FINAL REPORT\n",
+ " EXAMINATION: CHEST (PA AND LAT)\n",
+ " \n",
+ " INDICATION: ___M with hypoxia // ?pna, aspiration.\n",
+ " \n",
+ " COMPARISON: None\n",
+ " \n",
+ " FINDINGS: \n",
+ " \n",
+ " PA and lateral views of the chest provided. The lungs are adequately\n",
+ " aerated.\n",
+ " \n",
+ " There is a focal consolidation at the left lung base adjacent to the lateral\n",
+ " hemidiaphragm. There is mild vascular engorgement. There is bilateral apical\n",
+ " pleural thickening.\n",
+ " \n",
+ " The cardiomediastinal silhouette is remarkable for aortic arch calcifications.\n",
+ " The heart is top normal in size.\n",
+ " \n",
+ " IMPRESSION: \n",
+ " \n",
+ " Focal consolidation at the left lung base, possibly representing aspiration or\n",
+ " pneumonia.\n",
+ " \n",
+ " Central vascular engorgement.\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# original text\n",
+ "print(example[\"original\"])"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "id": "ea5792df",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "findings: pa and lateral views of the chest provided. the lungs are adequately aerated. there is a focal consolidation at the left lung base adjacent to the lateral hemidiaphragm. there is mild vascular engorgement. there is bilateral apical pleural thickening. the cardiomediastinal silhouette is remarkable for aortic arch calcifications. the heart is top normal in size. impression: focal consolidation at the left lung base, possibly representing aspiration or pneumonia. central vascular engorgement.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# text used as a label for the model\n",
+ "# no indication\n",
+ "# no technique\n",
+ "# no comparison\n",
+ "print(example[\"text\"])"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "id": "67e5bb00",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "indication: unknown year old male with hypoxia // question pna, aspiration.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# used as an INPUT to the model along with the image\n",
+ "print(example[\"indication\"])"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "blip",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.8 (main, Nov 24 2022, 08:08:27) [Clang 14.0.6 ]"
+ },
+ "vscode": {
+ "interpreter": {
+ "hash": "d2929fa862ca5c20be7df7418b9bcb368752100a819a60622976f7f091b1ba7c"
+ }
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+ }
volumes/physionet/.gitignore ADDED
@@ -0,0 +1,2 @@
+ *
+ !.gitignore