Conversation

@sid-alluri (author):

Converts an ONNX model into a .msgpack file. Current functionality: converts a simple MNIST ONNX model.

To run:

  • cargo build --release
  • ./target/release/time_circuit python/onnx_converter/first_transformed_onnx.msgpack examples/mnist/inp.msgpack kzg

@@ -0,0 +1,45 @@
use std::{collections::HashMap, rc::Rc, vec};
Owner:

Why do you need this as a separate layer? Is it not fused?

Author (@sid-alluri, Aug 20, 2023):

[screenshot attached]
It is not fused.
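
For reference, one way to confirm this from the serialized graph, using the standard onnx Python API (the model path below is an assumption; the PR only names the converted .msgpack output):

```python
import onnx

# Hypothetical path: the source .onnx file is not named in this PR.
model = onnx.load("python/onnx_converter/mnist.onnx")

# If Relu were fused into Conv/Gemm, it would not appear as its own node here.
print([node.op_type for node in model.graph.node])
```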

  for tensor in inp {
-   model.tensors.push(tensor);
+   model.tensors.push(tensor.clone());
+   // println!("tensor: {:?}", tensor);
Owner:

There are tons of extraneous changes. Clean them up.

@@ -0,0 +1,302 @@
# Oggn
Owner:

Turn this into the way the other converter is written, with a class and a main function.
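
A rough sketch of that structure, with hypothetical names (this is not the repo's actual API, just the class-plus-main shape being asked for):

```python
import msgpack  # assumes the msgpack-python package
import onnx


class OnnxConverter:
    """Hypothetical skeleton mirroring the other converter's structure."""

    def __init__(self, model_path: str):
        self.model = onnx.load(model_path)

    def convert(self) -> dict:
        # Walk the graph and build the layer dicts shown later in this diff.
        layers = []
        for node in self.model.graph.node:
            layers.append({"layer_type": node.op_type, "params": []})
        return {"layers": layers}


def main():
    converter = OnnxConverter("mnist.onnx")  # hypothetical input path
    with open("mnist.msgpack", "wb") as f:
        f.write(msgpack.packb(converter.convert()))


if __name__ == "__main__":
    main()
```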

struct.pack_into('q', shape, 0, -1)
init.raw_data = bytes(shape)

onnx.save(model, "/Users/siddharthaalluri/Desktop/sid-alluri/zkml/python/onnx_converter/new_mnist.onnx")
Owner:

Fix this
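
The hardcoded absolute path is presumably what's being flagged; a minimal sketch of a fix, assuming the paths are taken from the command line instead:

```python
import argparse

import onnx

# Sketch: pass paths in rather than hardcoding a machine-specific one.
parser = argparse.ArgumentParser()
parser.add_argument("input_path", help="source .onnx model")
parser.add_argument("output_path", help="where to write the transformed model")
args = parser.parse_args()

model = onnx.load(args.input_path)
# ... apply the shape rewrite shown above ...
onnx.save(model, args.output_path)
```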

params = [kernel[0], kernel[1], stride[0], stride[1]]

elif node.op_type == "Relu":
layer_type = "ReLUONNX"
Owner:

Why is there a separate ReLUONNX layer?

output_dim = get_output_dim(node_id, model_graph)
params = []

elif node.op_type == "Gemm":
Owner:

Does this work for all forms of Gemm?
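
For context: ONNX's Gemm computes Y = alpha * A' * B' + beta * C, where A' and B' are optionally transposed (defaults: alpha = 1.0, beta = 1.0, transA = 0, transB = 0), so a converter that ignores the attributes only covers the default form. A sketch of reading them with the standard onnx helper (`node` is the NodeProto from the surrounding loop):

```python
from onnx import helper

# Collect Gemm attributes, falling back to the spec's defaults.
attrs = {a.name: helper.get_attribute_value(a) for a in node.attribute}
alpha = attrs.get("alpha", 1.0)
beta = attrs.get("beta", 1.0)
trans_a = attrs.get("transA", 0)
trans_b = attrs.get("transB", 0)

# Only alpha == beta == 1 with no transposes is a plain matmul-plus-bias;
# exporters often emit transB=1 for fully connected layers, so at least
# that case likely needs handling.
```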

"layer_type": layer_type,
"params": params, ## Change params HELP HELP HELP HELP
"inp_shapes": inputs_dim,
"inp_idxes": inp_idxes, ### RANDOM COME BACK HERE HELP HELP HELP HELP
Owner:

What's going on here?

src/layers.rs (outdated)
pub mod squared_diff;
pub mod tanh;
pub mod update;
pub mod relu;
Owner:

Use the rust extension to sort imports.
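
Sorted alphabetically, that block would read:

```rust
pub mod relu;
pub mod squared_diff;
pub mod tanh;
pub mod update;
```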

  tanh::TanhChip,
  update::UpdateChip,
+ relu::ReluLayerChip,
Owner:

Move ReLU to the non-linear directory.

) -> Array<Value<F>, IxDyn> {
assert_eq!(input.ndim(), 2);
assert_eq!(weight.ndim(), 2);
println!("input shape: {:?}", input.shape());
Owner:

Extraneous

Tanh,
Transpose,
Update,

Owner:

Why is there an extra space?

let (sx, sy) = (sx as usize, sy as usize);

// Only support batch size 1 for now

Owner:

Extraneous


model
}
}
Owner:

Extraneous
