docs/_posts/ahmet-mesut/2024-02-26-snomed_umls_mapper_en.md
---
layout: model
title: Mapping SNOMED Codes with Their Corresponding UMLS Codes
author: John Snow Labs
name: snomed_umls_mapper
date: 2024-02-26
tags: [licensed, en, umls, snomed, mapping]
task: Chunk Mapping
language: en
edition: Healthcare NLP 5.2.1
spark_version: 3.0
supported: true
annotator: ChunkMapperModel
article_header:
type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

This pretrained model maps SNOMED codes to their corresponding UMLS codes.

## Predicted Entities

`umls_code`

{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/clinical/models/snomed_umls_mapper_en_5.2.1_3.0_1708984203970.zip){:.button.button-orange.button-orange-trans.arr.button-icon.hidden}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/clinical/models/snomed_umls_mapper_en_5.2.1_3.0_1708984203970.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}

```python
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")

sbert_embedder = BertSentenceEmbeddings.pretrained("sbiobert_base_cased_mli", "en", "clinical/models")\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")\
.setCaseSensitive(False)

snomed_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_snomed_drug", "en", "clinical/models")\
.setInputCols(["sbert_embeddings"]) \
.setOutputCol("snomed_code")\
.setDistanceFunction("EUCLIDEAN")

resolver2chunk = Resolution2Chunk()\
.setInputCols(["snomed_code"])\
.setOutputCol("snomed2chunk")

chunkerMapper = ChunkMapperModel.pretrained("snomed_umls_mapper", "en", "clinical/models")\
.setInputCols(["umls2chunk"])\
.setOutputCol("mappings")

pipeline = Pipeline(stages = [
documentAssembler,
sbert_embedder,
snomed_resolver,
resolver2chunk,
chunkerMapper])


data = spark.createDataFrame([["acebutolol"],["aspirin"]]).toDF("text")

mapper_model = pipeline.fit(data)
result = mapper_model.transform(data)
```
```scala
val documentAssembler = new DocumentAssembler()
.setInputCol("text")
.setOutputCol("ner_chunk")

val sbert_embedder = BertSentenceEmbeddings.pretrained("sbiobert_base_cased_mli","en","clinical/models")
.setInputCols(Array("ner_chunk"))
.setOutputCol("sbert_embeddings")
.setCaseSensitive(false)

val snomed_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_snomed_drug","en","clinical/models")
.setInputCols(Array("sbert_embeddings"))
.setOutputCol("snomed_code")
.setDistanceFunction("EUCLIDEAN")

val resolver2chunk = new Resolution2Chunk()
.setInputCols(Array("snomed_code"))
.setOutputCol("snomed2chunk")

val chunkerMapper = ChunkMapperModel.pretrained("snomed_umls_mapper","en","clinical/models")
.setInputCols(Array("snomed2chunk"))
.setOutputCol("mappings")

val pipeline = new Pipeline().setStages(Array(
documentAssembler,
sbert_embedder,
snomed_resolver,
resolver2chunk,
chunkerMapper))

val data = Seq("acebutolol", "aspirin").toDF("text")

val mapper_model = pipeline.fit(data)
val result = mapper_model.transform(data)
```
</div>
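
For quick lookups on single strings, the fitted pipeline can also be wrapped in Spark NLP's `LightPipeline`. The snippet below is a minimal sketch using the `mapper_model` fitted above; `fullAnnotate` returns the annotations of every stage, including the `mappings` output of the mapper.

```python
from sparknlp.base import LightPipeline

# Wrap the fitted PipelineModel for fast in-memory inference on single texts.
light_model = LightPipeline(mapper_model)

# fullAnnotate keeps every stage's annotations, so both the resolved SNOMED
# code and the mapped UMLS code (in the "mappings" column) are available.
annotations = light_model.fullAnnotate("aspirin")

for mapping in annotations[0]["mappings"]:
    print(mapping.result, mapping.metadata)
```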

## Results

```bash
+----------+-----------+---------+
|chunk |snomed_code|umls_code|
+----------+-----------+---------+
|acebutolol|372815001 |C0000946 |
|aspirin |387458008 |C0004057 |
+----------+-----------+---------+

```
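
The table above can be assembled from `result` by zipping and exploding the chunk, resolver, and mapper output columns. The following is one way to do it, shown as a sketch; the column names follow the pipeline defined earlier.

```python
from pyspark.sql import functions as F

# Pair each input chunk with its resolved SNOMED code and mapped UMLS code,
# then explode so that every mapping becomes one row.
result.select(
    F.explode(
        F.arrays_zip(
            result.ner_chunk.result.alias("chunk"),
            result.snomed_code.result.alias("snomed_code"),
            result.mappings.result.alias("umls_code"),
        )
    ).alias("col")
).select("col.chunk", "col.snomed_code", "col.umls_code").show(truncate=False)
```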

{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|snomed_umls_mapper|
|Compatibility:|Healthcare NLP 5.2.1+|
|License:|Licensed|
|Edition:|Official|
|Input Labels:|[ner_chunk]|
|Output Labels:|[mappings]|
|Language:|en|
|Size:|7.5 MB|
docs/_posts/ahmet-mesut/2024-02-26-umls_snomed_mapper_en.md
---
layout: model
title: Mapping UMLS Codes with Their Corresponding SNOMED Codes
author: John Snow Labs
name: umls_snomed_mapper
date: 2024-02-26
tags: [en, licensed, umls, snomed, mapping]
task: Chunk Mapping
language: en
edition: Healthcare NLP 5.2.1
spark_version: 3.0
supported: true
annotator: ChunkMapperModel
article_header:
type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

This pretrained model maps UMLS codes to their corresponding SNOMED codes.

## Predicted Entities

`snomed_code`

{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/clinical/models/umls_snomed_mapper_en_5.2.1_3.0_1708981559297.zip){:.button.button-orange.button-orange-trans.arr.button-icon.hidden}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/clinical/models/umls_snomed_mapper_en_5.2.1_3.0_1708981559297.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}

```python
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")

sbert_embedder = BertSentenceEmbeddings.pretrained("sbiobert_base_cased_mli", "en", "clinical/models")\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")\
.setCaseSensitive(False)

umls_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_umls_clinical_drugs", "en", "clinical/models")\
.setInputCols(["sbert_embeddings"]) \
.setOutputCol("umls_code")\
.setDistanceFunction("EUCLIDEAN")

resolver2chunk = Resolution2Chunk()\
.setInputCols(["umls_code"])\
.setOutputCol("umls2chunk")

chunkerMapper = ChunkMapperModel.pretrained("umls_snomed_mapper", "en", "clinical/models")\
.setInputCols(["umls2chunk"])\
.setOutputCol("mappings")

pipeline = Pipeline(stages = [
documentAssembler,
sbert_embedder,
umls_resolver,
resolver2chunk,
chunkerMapper])

data = spark.createDataFrame([["acebutolol"],["aspirin"]]).toDF("text")

mapper_model = pipeline.fit(data)
result = mapper_model.transform(data)
```
```scala
val documentAssembler = new DocumentAssembler()
.setInputCol("text")
.setOutputCol("ner_chunk")

val sbert_embedder = BertSentenceEmbeddings.pretrained("sbiobert_base_cased_mli","en","clinical/models")
.setInputCols(Array("ner_chunk"))
.setOutputCol("sbert_embeddings")
.setCaseSensitive(false)

val umls_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_umls_clinical_drugs","en","clinical/models")
.setInputCols(Array("sbert_embeddings"))
.setOutputCol("umls_code")
.setDistanceFunction("EUCLIDEAN")

val resolver2chunk = new Resolution2Chunk()
.setInputCols(Array("umls_code"))
.setOutputCol("umls2chunk")

val chunkerMapper = ChunkMapperModel.pretrained("umls_snomed_mapper","en","clinical/models")
.setInputCols(Array("umls2chunk"))
.setOutputCol("mappings")

val pipeline = new Pipeline().setStages(Array(
documentAssembler,
sbert_embedder,
umls_resolver,
resolver2chunk,
chunkerMapper))

val data = Seq("acebutolol", "aspirin").toDF("text")

val mapper_model = pipeline.fit(data)

val result = mapper_model.transform(data)
```
</div>
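
For ad-hoc lookups on single strings, the fitted pipeline can also be wrapped in Spark NLP's `LightPipeline`. The snippet below is a minimal sketch using the `mapper_model` fitted above; `annotate` returns only the result strings of each output column, which is enough to read off the mapped SNOMED code from the `mappings` column.

```python
from sparknlp.base import LightPipeline

# Wrap the fitted PipelineModel for fast in-memory inference on single texts.
light_model = LightPipeline(mapper_model)

# The "mappings" column holds the SNOMED code(s) mapped from the resolved UMLS code.
print(light_model.annotate("aspirin")["mappings"])
```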

## Results

```bash
+----------+---------+-----------+
|chunk |umls_code|snomed_code|
+----------+---------+-----------+
|acebutolol|C0000946 |68088000 |
|aspirin |C0004057 |319770009 |
+----------+---------+-----------+
```
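
The table above can be assembled from `result` by zipping and exploding the chunk, resolver, and mapper output columns. The following is one way to do it, shown as a sketch; the column names follow the pipeline defined earlier.

```python
from pyspark.sql import functions as F

# Pair each input chunk with its resolved UMLS code and mapped SNOMED code,
# then explode so that every mapping becomes one row.
result.select(
    F.explode(
        F.arrays_zip(
            result.ner_chunk.result.alias("chunk"),
            result.umls_code.result.alias("umls_code"),
            result.mappings.result.alias("snomed_code"),
        )
    ).alias("col")
).select("col.chunk", "col.umls_code", "col.snomed_code").show(truncate=False)
```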

{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|umls_snomed_mapper|
|Compatibility:|Healthcare NLP 5.2.1+|
|License:|Licensed|
|Edition:|Official|
|Input Labels:|[ner_chunk]|
|Output Labels:|[mappings]|
|Language:|en|
|Size:|6.9 MB|