Commit bc486ad

[Examples] add Pytorch image demo (second-state#9)

* [Examples] add Pytorch image demo
* fix: add fixtures generation
* chore: upload fixtures
* chore: update readme

Signed-off-by: Jianbai Ye <jianbaiye@outlook.com>

1 parent 96e5118 commit bc486ad

15 files changed: +2321 −0 lines

.gitignore

Lines changed: 3 additions & 0 deletions
```diff
@@ -9,5 +9,8 @@ openvino-mobilenet-raw/mobilenet.bin
 openvino-mobilenet-raw/mobilenet.xml
 openvino-mobilenet-raw/tensor-1x224x224x3-f32.bgr
 
+pytorch-mobilenet-image/fixtures/input.jpg
+# pytorch-mobilenet-image/mobilenet.pt
+
 .DS_Store
 Cargo.lock
```
pytorch-mobilenet-image/README.md

Lines changed: 67 additions & 0 deletions
# Mobilenet example for WASI-NN

This package is a high-level Rust binding example for [wasi-nn] using Mobilenet.

[wasi-nn]: https://github.com/WebAssembly/wasi-nn

## Dependencies

This crate depends on the `wasi-nn` crate in `Cargo.toml`:

```toml
[dependencies]
wasi-nn = "0.1.0"
```

## Build

Compile the application to WebAssembly:

```bash
cargo build --target=wasm32-wasi --release
```

The output WASM file will be at `target/wasm32-wasi/release/wasmedge-wasinn-example-mobilenet-image.wasm`. To speed up the image processing, enable AOT mode in WasmEdge with:

```bash
wasmedgec rust/target/wasm32-wasi/release/wasmedge-wasinn-example-mobilenet-image.wasm wasmedge-wasinn-example-mobilenet-image.wasm
```

## Run

First, generate the fixture of the pre-trained mobilenet with the script:

```bash
./download_data.sh fixtures && cd fixtures
python -m pip install -r requirements.txt
# generate the model fixture
python generate_mobilenet.py
```

(Alternatively, you can use the pre-generated fixture in `fixtures/mobilenet.pt`.)

The above will download a testing image `input.jpg`

![](https://github.com/bytecodealliance/wasi-nn/raw/main/rust/images/1.jpg)

as well as a pre-trained mobilenet model, then convert the model into a TorchScript model for C++.

Then execute the WASM with `wasmedge` with PyTorch support:

```bash
wasmedge --dir .:. wasmedge-wasinn-example-mobilenet-image.wasm fixtures/mobilenet.pt input.jpg
```

You will get the output:

```console
Read torchscript binaries, size in bytes: 14376924
Loaded graph into wasi-nn with ID: 0
Created wasi-nn execution context with ID: 0
Read input tensor, size in bytes: 602112
Executed graph inference
   1.) [954](20.6681)banana
   2.) [940](12.1483)spaghetti squash
   3.) [951](11.5748)lemon
   4.) [950](10.4899)orange
   5.) [953](9.4834)pineapple, ananas
```
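The input tensor size in the log can be cross-checked from the model's expected input shape: a 1x3x224x224 tensor of f32 values occupies 3 × 224 × 224 × 4 = 602112 bytes. A minimal standard-library Python sketch of that arithmetic, along with the per-element `struct.pack('f', ...)` encoding the fixture script uses:

```python
import struct

# Shape of the input tensor mobilenet expects: NCHW, f32.
shape = (1, 3, 224, 224)
num_elems = 1
for dim in shape:
    num_elems *= dim
size_bytes = num_elems * 4  # 4 bytes per f32

# Matches "Read input tensor, size in bytes: 602112" in the log above.
print(size_bytes)  # → 602112

# The fixture script serializes each element with struct.pack('f', ...);
# round-trip a few sample values to show the encoding.
values = [0.485, 0.456, 0.406]
blob = b"".join(struct.pack("f", v) for v in values)
restored = struct.unpack("3f", blob)
```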
pytorch-mobilenet-image/download_data.sh

Lines changed: 6 additions & 0 deletions

```bash
FIXTURE=https://github.com/intel/openvino-rs/raw/v0.3.3/crates/openvino/tests/fixtures/mobilenet
TODIR=$1

if [ ! -f "$TODIR/input.jpg" ]; then
  wget --no-clobber https://github.com/bytecodealliance/wasi-nn/raw/main/rust/images/1.jpg -O "$TODIR/input.jpg"
fi
```
pytorch-mobilenet-image/fixtures/README.md

Whitespace-only changes.
pytorch-mobilenet-image/fixtures/generate_mobilenet.py

Lines changed: 57 additions & 0 deletions

```python
# adapted from https://pytorch.org/hub/pytorch_vision_mobilenet_v2/

import torch
import struct

# Download an example image from the pytorch website
url, filename = (
    "https://github.com/bytecodealliance/wasi-nn/raw/main/rust/images/1.jpg", "../input.jpg")
# import urllib
# try:
#     urllib.URLopener().retrieve(url, filename)
# except:
#     urllib.request.urlretrieve(url, filename)

# sample execution (requires torchvision)
model = torch.hub.load('pytorch/vision:v0.10.0',
                       'mobilenet_v2', pretrained=True)
model.eval()

from PIL import Image
from torchvision import transforms
input_image = Image.open(filename)
print(input_image.mode)
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(input_image)
# create a mini-batch as expected by the model
input_batch = input_tensor.unsqueeze(0)
with open("image-1-3-244-244.rgb", 'wb') as f:
    order_data = input_batch.reshape(-1)
    for d in order_data:
        d = d.item()
        f.write(struct.pack('f', d))

# move the input and model to GPU for speed if available
if torch.cuda.is_available():
    input_batch = input_batch.to('cuda')
    model.to('cuda')

with torch.no_grad():
    output = model(input_batch)
# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes.
# The output has unnormalized scores. To get probabilities, run a softmax on it.
probabilities = torch.nn.functional.softmax(output[0], dim=0)
# print(probabilities)

with open("imagenet_classes.txt", "r") as f:
    categories = [s.strip() for s in f.readlines()]
# Show top categories per image
top5_prob, top5_catid = torch.topk(probabilities, 5)
for i in range(top5_prob.size(0)):
    print(top5_catid[i], categories[top5_catid[i]], top5_prob[i].item())
```
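The final step above turns the model's unnormalized scores into probabilities with a softmax before taking the top 5. That step can be sketched with the standard library alone; the score list here reuses the logits from the demo output purely for illustration:

```python
import math

# Unnormalized scores, e.g. the top logits from the demo run above.
logits = [20.6681, 12.1483, 11.5748, 10.4899, 9.4834]

# Numerically stable softmax: subtract the max before exponentiating.
m = max(logits)
exps = [math.exp(x - m) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Softmax is monotone, so it preserves the ranking of the logits:
# the top class (954 "banana" in the demo) stays on top.
top = max(range(len(probs)), key=probs.__getitem__)
```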
Binary file (588 KB) not shown.
