commit d7bac542af
parent 6de7c584f4
Author: GitJournal
Date:   2024-01-11 03:14:08 +08:00

2 changed files with 16 additions and 16 deletions

View File

@@ -7,7 +7,7 @@
   "0": {
     "filepath": "/README.md",
     "entry_id": 0,
-    "language_id": "plain-text"
+    "language_id": "markdown"
   },
   "1": {
     "filepath": "/hubconf.py",
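The hunk above corrects the `language_id` for `/README.md` from `plain-text` to `markdown`. A minimal sketch of extension-based language detection that would produce the corrected value (the mapping table and function name here are assumptions for illustration, not the project's actual code):

```python
from pathlib import Path

# Hypothetical extension-to-language table; only the entries needed
# to illustrate the corrected README.md entry are shown.
EXT_TO_LANGUAGE = {
    ".md": "markdown",
    ".py": "python",
}

def language_id(filepath: str) -> str:
    # Fall back to "plain-text" for unrecognized extensions,
    # which is the value the old entry incorrectly kept for README.md.
    return EXT_TO_LANGUAGE.get(Path(filepath).suffix, "plain-text")
```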

View File

@@ -74,31 +74,31 @@
 <h2>Project Structure<span hierarchy="0" class="partial-repository-url"> of: openai/CLIP</span><div style="float: right;"><a href="index.html"><i class="bi bi-search"></i></a></div></h2>
 <ul>
 <li><span hierarchy="0" class="expanded" onclick="toggleVisibility(this)" ><strong class="directory" id="/"><code>CLIP</code></strong> <em>CLIP model experiments, data, and utilities</em></span><ul>
-<li><a href="index.html?q=/README.md" id="/README.md"><code>README.md</code></a> <em>CLIP feature extraction for CIFAR100 and logistic regression</em></li>
+<li><a href="index.html?q=/README.md" id="/README.md"><code>README&period;md</code></a> <em>CLIP feature extraction for CIFAR100 and logistic regression</em></li>
 <li><span hierarchy="1" class="expanded" onclick="toggleVisibility(this)" ><strong class="directory" id="/clip/"><code>clip</code></strong> <em>CLIP model and tokenizer downloader</em></span><ul>
-<li><a href="index.html?q=/clip/__init__.py" id="/clip/__init__.py"><code><strong>init</strong>.py</code></a> <em>Imports "clip" module functions and classes.</em></li>
-<li><a href="index.html?q=/clip/clip.py" id="/clip/clip.py"><code>clip.py</code></a> <em>CLIP model downloader and tokenizer.</em></li>
-<li><a href="index.html?q=/clip/model.py" id="/clip/model.py"><code>model.py</code></a> <em>CLIP models, deep learning, attention mechanisms, ConvNeuralNetworks, VisionTransformers.</em></li>
-<li><a href="index.html?q=/clip/simple_tokenizer.py" id="/clip/simple_tokenizer.py"><code>simple_tokenizer.py</code></a> <em>SimpleTokenizer: BPE-based text tokenization, cleaning, encoding, and decoding.</em></li>
+<li><a href="index.html?q=/clip/__init__.py" id="/clip/__init__.py"><code>&UnderBar;&UnderBar;init&UnderBar;&UnderBar;&period;py</code></a> <em>Imports "clip" module functions and classes.</em></li>
+<li><a href="index.html?q=/clip/clip.py" id="/clip/clip.py"><code>clip&period;py</code></a> <em>CLIP model downloader and tokenizer.</em></li>
+<li><a href="index.html?q=/clip/model.py" id="/clip/model.py"><code>model&period;py</code></a> <em>CLIP models, deep learning, attention mechanisms, ConvNeuralNetworks, VisionTransformers.</em></li>
+<li><a href="index.html?q=/clip/simple_tokenizer.py" id="/clip/simple_tokenizer.py"><code>simple&UnderBar;tokenizer&period;py</code></a> <em>SimpleTokenizer: BPE-based text tokenization, cleaning, encoding, and decoding.</em></li>
 </ul>
 </li>
 <li><span hierarchy="1" class="expanded" onclick="toggleVisibility(this)" ><strong class="directory" id="/data/"><code>data</code></strong> <em>Geo-Tagged, Rendered SST2, YFCC100M Dataset Directory</em></span><ul>
-<li><a href="index.html?q=/data/country211.md" id="/data/country211.md"><code>country211.md</code></a> <em>Download, Extract &amp; Classify Country211 Geo-Tagged Images</em></li>
-<li><a href="index.html?q=/data/rendered-sst2.md" id="/data/rendered-sst2.md"><code>rendered-sst2.md</code></a> <em>Rendered SST2 dataset: Image Classification.</em></li>
-<li><a href="index.html?q=/data/yfcc100m.md" id="/data/yfcc100m.md"><code>yfcc100m.md</code></a> <em>YFCC100M dataset: 14M+ images, Creative Commons licenses.</em></li>
+<li><a href="index.html?q=/data/country211.md" id="/data/country211.md"><code>country211&period;md</code></a> <em>Download, Extract &amp; Classify Country211 Geo-Tagged Images</em></li>
+<li><a href="index.html?q=/data/rendered-sst2.md" id="/data/rendered-sst2.md"><code>rendered-sst2&period;md</code></a> <em>Rendered SST2 dataset: Image Classification.</em></li>
+<li><a href="index.html?q=/data/yfcc100m.md" id="/data/yfcc100m.md"><code>yfcc100m&period;md</code></a> <em>YFCC100M dataset: 14M+ images, Creative Commons licenses.</em></li>
 </ul>
 </li>
-<li><a href="index.html?q=/hubconf.py" id="/hubconf.py"><code>hubconf.py</code></a> <em>Create CLIP model entry points, convert PIL images to tensors.</em></li>
-<li><a href="index.html?q=/model-card.md" id="/model-card.md"><code>model-card.md</code></a> <em>Multimodal AI for vision, classification, biases.</em></li>
+<li><a href="index.html?q=/hubconf.py" id="/hubconf.py"><code>hubconf&period;py</code></a> <em>Create CLIP model entry points, convert PIL images to tensors.</em></li>
+<li><a href="index.html?q=/model-card.md" id="/model-card.md"><code>model-card&period;md</code></a> <em>Multimodal AI for vision, classification, biases.</em></li>
 <li><span hierarchy="1" class="expanded" onclick="toggleVisibility(this)" ><strong class="directory" id="/notebooks/"><code>notebooks</code></strong> <em>Notebooks: Machine Learning Experiments</em></span><ul>
-<li><a href="index.html?q=/notebooks/Interacting_with_CLIP.py" id="/notebooks/Interacting_with_CLIP.py"><code>Interacting_with_CLIP.py</code></a> <em>Interacting with CLIP: Image-Text Similarity Analysis</em></li>
-<li><a href="index.html?q=/notebooks/Prompt_Engineering_for_ImageNet.py" id="/notebooks/Prompt_Engineering_for_ImageNet.py"><code>Prompt_Engineering_for_ImageNet.py</code></a> <em>Zero-shot ImageNet classification with CLIP model.</em></li>
+<li><a href="index.html?q=/notebooks/Interacting_with_CLIP.py" id="/notebooks/Interacting_with_CLIP.py"><code>Interacting&UnderBar;with&UnderBar;CLIP&period;py</code></a> <em>Interacting with CLIP: Image-Text Similarity Analysis</em></li>
+<li><a href="index.html?q=/notebooks/Prompt_Engineering_for_ImageNet.py" id="/notebooks/Prompt_Engineering_for_ImageNet.py"><code>Prompt&UnderBar;Engineering&UnderBar;for&UnderBar;ImageNet&period;py</code></a> <em>Zero-shot ImageNet classification with CLIP model.</em></li>
 </ul>
 </li>
-<li><a href="index.html?q=/requirements.txt" id="/requirements.txt"><code>requirements.txt</code></a> <em>Install necessary packages for project.</em></li>
-<li><a href="index.html?q=/setup.py" id="/setup.py"><code>setup.py</code></a> <em>Set up Python package 'clip' with setuptools.</em></li>
+<li><a href="index.html?q=/requirements.txt" id="/requirements.txt"><code>requirements&period;txt</code></a> <em>Install necessary packages for project.</em></li>
+<li><a href="index.html?q=/setup.py" id="/setup.py"><code>setup&period;py</code></a> <em>Set up Python package 'clip' with setuptools.</em></li>
 <li><span hierarchy="1" class="expanded" onclick="toggleVisibility(this)" ><strong class="directory" id="/tests/"><code>tests</code></strong></span><ul>
-<li><a href="index.html?q=/tests/test_consistency.py" id="/tests/test_consistency.py"><code>test_consistency.py</code></a> <em>CLIP model consistency test.</em></li>
+<li><a href="index.html?q=/tests/test_consistency.py" id="/tests/test_consistency.py"><code>test&UnderBar;consistency&period;py</code></a> <em>CLIP model consistency test.</em></li>
 </ul>
 </li>
 </ul>
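The hunk above replaces literal `.` and `_` in displayed filenames with the HTML named character references `&period;` and `&UnderBar;`, so that a markdown-aware renderer no longer mangles names like `__init__.py` into bold `<strong>init</strong>.py` (visible in the removed line). A minimal sketch of that escaping, assuming a simple character-for-entity substitution (function and table names are illustrative, not the project's code):

```python
# Map of characters markdown renderers treat specially to HTML
# named character references, as used in the added lines above.
ENTITY_MAP = {".": "&period;", "_": "&UnderBar;"}

def escape_filename(name: str) -> str:
    # Substitute each mapped character; leave all other characters intact.
    return "".join(ENTITY_MAP.get(ch, ch) for ch in name)
```

With this, `escape_filename("__init__.py")` yields `&UnderBar;&UnderBar;init&UnderBar;&UnderBar;&period;py`, matching the replacement text in the diff, while the `href` and `id` attributes keep the raw paths so lookups still work.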