Publicly Available Clinical BERT Embeddings
Paper: arXiv 1904.03323
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("obi/deid_bert_i2b2")
model = AutoModelForTokenClassification.from_pretrained("obi/deid_bert_i2b2")
```

PHI label distribution in the I2B2 dataset (train set: 790 notes; test set: 514 notes):

| PHI LABEL | TRAIN COUNT | TRAIN % | TEST COUNT | TEST % |
|---|---|---|---|---|
| DATE | 7502 | 43.69 | 4980 | 44.14 |
| STAFF | 3149 | 18.34 | 2004 | 17.76 |
| HOSP | 1437 | 8.37 | 875 | 7.76 |
| AGE | 1233 | 7.18 | 764 | 6.77 |
| LOC | 1206 | 7.02 | 856 | 7.59 |
| PATIENT | 1316 | 7.66 | 879 | 7.79 |
| PHONE | 317 | 1.85 | 217 | 1.92 |
| ID | 881 | 5.13 | 625 | 5.54 |
| PATORG | 124 | 0.72 | 82 | 0.73 |
| | 4 | 0.02 | 1 | 0.01 |
| OTHERPHI | 2 | 0.01 | 0 | 0 |
| TOTAL | 17171 | 100 | 11283 | 100 |
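The percentage columns are each label's share of the total PHI annotations. A quick sketch of that computation, using the train-set counts from the table above:

```python
# PHI label counts from the I2B2 train set (table above)
train_counts = {
    "DATE": 7502, "STAFF": 3149, "HOSP": 1437, "AGE": 1233,
    "LOC": 1206, "PATIENT": 1316, "PHONE": 317, "ID": 881,
    "PATORG": 124, "OTHERPHI": 2,
}
total = sum(train_counts.values()) + 4  # +4 for the unlabeled row in the table

def share(count: int, total: int) -> float:
    """Percentage of all PHI annotations, rounded to two decimals."""
    return round(100 * count / total, 2)

print(total)                              # 17171
print(share(train_counts["DATE"], total))  # 43.69
```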
Training details: the steps used to train this model can be found here: Training. The `model_name_or_path` argument was set to `emilyalsentzer/Bio_ClinicalBERT`.

For questions, post a GitHub issue on the repo: Robust DeID.
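To make the training setup concrete, here is a hedged sketch of a fine-tuning config in the style of Hugging Face token-classification scripts. Only `model_name_or_path` comes from this model card; every other field (file names, output path) is an illustrative placeholder, not a value from the Robust DeID training scripts.

```json
{
  "model_name_or_path": "emilyalsentzer/Bio_ClinicalBERT",
  "task_name": "ner",
  "train_file": "train.json",
  "validation_file": "validation.json",
  "output_dir": "./deid_bert_i2b2"
}
```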
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="obi/deid_bert_i2b2")
```
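By default the pipeline emits one dict per predicted token. Below is a hedged sketch of grouping consecutive B-/I- tags into entity spans; the `predictions` list is made-up illustrative data in the shape the pipeline returns, not real model output (running `pipe(text)` requires downloading the model):

```python
# Illustrative token-level predictions; pipe(text) returns dicts of this shape
# (keys include "word", "entity", "start", "end").
predictions = [
    {"word": "John", "entity": "B-PATIENT", "start": 0, "end": 4},
    {"word": "Smith", "entity": "I-PATIENT", "start": 5, "end": 10},
    {"word": "01/02/2020", "entity": "B-DATE", "start": 23, "end": 33},
]

def group_entities(preds):
    """Merge consecutive B-/I- tokens with the same label into one span."""
    spans = []
    for p in preds:
        prefix, label = p["entity"].split("-", 1)
        if prefix == "I" and spans and spans[-1]["label"] == label:
            spans[-1]["end"] = p["end"]  # extend the currently open span
        else:
            spans.append({"label": label, "start": p["start"], "end": p["end"]})
    return spans

print(group_entities(predictions))
# [{'label': 'PATIENT', 'start': 0, 'end': 10}, {'label': 'DATE', 'start': 23, 'end': 33}]
```

Note that `transformers` can do this grouping for you by passing `aggregation_strategy="simple"` when constructing the pipeline.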