Live Demo · Pre-trained Models · Report Bug
A live demonstration of multilingual Text-Image retrieval using M-CLIP can be found here!
To set up the environment, use the provided environment.yaml file, which lists all the necessary dependencies. Create the conda environment by running:

```bash
conda env create -f environment.yaml
```

Every text encoder is a transformer available for download from the Arabic-Clip Huggingface Organization, with an additional linear layer on top.
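As a minimal sketch of how such an encoder could be loaded and used, the snippet below follows the interface of the upstream Multilingual-CLIP package (`multilingual_clip`); the checkpoint name is an upstream M-CLIP example and serves only as a placeholder, so substitute a model from the Arabic-Clip Huggingface Organization.

```python
# Sketch: load a text encoder and compute sentence embeddings, assuming the
# upstream multilingual_clip package API. The checkpoint name is a placeholder;
# replace it with a model from the Arabic-Clip Huggingface Organization.
from multilingual_clip import pt_multilingual_clip
import transformers

model_name = "M-CLIP/XLM-Roberta-Large-Vit-B-32"  # placeholder checkpoint

# The transformer and its additional linear projection layer are downloaded together.
model = pt_multilingual_clip.MultilingualCLIP.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)

texts = ["قطتان تجلسان على أريكة", "Two cats sitting on a couch"]
embeddings = model.forward(texts, tokenizer)
print(embeddings.shape)  # (num_texts, projection_dim)
```

For text-image retrieval, these text embeddings would be compared against image embeddings produced by the corresponding CLIP image encoder, as in the upstream Multilingual-CLIP setup.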
- Multilingual-CLIP
- Stability.ai for providing much appreciated compute during training.
- CLIP
- OpenAI
- Huggingface
- Best Readme Template
- "Two Cats" Image by pl1602
Distributed under the MIT License. See LICENSE for more information.
