
Pass around a DataView instead of individual kind/payload fields #24


Closed
j-f1 wants to merge 4 commits

Conversation

@j-f1 (Member) commented Aug 2, 2020

Also change memory layout for numbers:

There are now only two payload slots. Numbers use them as a single Double, while every other type just uses them as two UInt32s

Originally part of #22.

The use of a DataView reduces function parameter count.


Co-Authored-By: Manuel <[email protected]>
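
As a rough illustration of the two-slot layout described above, a decoder on the JS side might look like the sketch below. This is a minimal sketch, not the PR's actual code: the byte offsets, the kind tag value, and the little-endian flag are assumptions for illustration.

// Hypothetical decoder for the two-slot layout described above. The byte
// offsets, the kind tag value, and the little-endian flag are assumptions
// for illustration, not values taken from the PR.
const KIND_NUMBER = 0; // placeholder tag; the real kind values live in the library

function decodeJSValue(view: DataView, offset: number) {
  const kind = view.getUint32(offset, true); // Wasm linear memory is little-endian
  if (kind === KIND_NUMBER) {
    // Numbers use the two payload slots as one Double (Float64).
    return { kind, value: view.getFloat64(offset + 4, true) };
  }
  // Every other type reads the same 8 bytes as two UInt32 payloads.
  const payload1 = view.getUint32(offset + 4, true);
  const payload2 = view.getUint32(offset + 8, true);
  return { kind, payload1, payload2 };
}

Under this layout each value is 4 bytes of kind plus 8 bytes of payload, which matches the 12-byte size mentioned in the first comment below.
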
j-f1 mentioned this pull request Aug 2, 2020
@j-f1 (Member, Author) commented Aug 2, 2020

Note: I dropped the size of each object in memory from 24 bytes down to 12. I’m not exactly sure why they were 24 bytes long before, since kind + payload1 + payload2 = 12 bytes and payload3 = 8 bytes, which only add up to 20. If I’m missing something about the memory layout, I’d be happy to correct it.

@kateinoigakukun (Member)

Thanks for your cooperation.

Did you check the performance benchmark suite? You can run the benchmarks with make benchmark in the IntegrationTests directory.

I found some performance regressions in number interoperation.

Before

Running 'Serialization/Write JavaScript number directly' ...
done 118.31193709373474 ms
Running 'Serialization/Write JavaScript string directly' ...
done 119.49532198905945 ms
Running 'Serialization/Swift Int to JavaScript' ...
done 3457.1316990852356 ms
Running 'Serialization/Swift String to JavaScript' ...
done 5258.867609977722 ms
Running 'Object heap/Increment and decrement RC' ...
done 3061.0713909864426 ms

After

Running 'Serialization/Write JavaScript number directly' ...
done 118.58276605606079 ms
Running 'Serialization/Write JavaScript string directly' ...
done 120.5656269788742 ms
Running 'Serialization/Swift Int to JavaScript' ...
done 4948.468513011932 ms
Running 'Serialization/Swift String to JavaScript' ...
done 5090.137539982796 ms
Running 'Object heap/Increment and decrement RC' ...
done 2963.7367210388184 ms

I'm still reviewing the changes, so please wait for a while.

@kateinoigakukun (Member)

> I’m not exactly sure why they were 24 bytes long before, since kind + payload1 + payload2 = 12 bytes and payload3 = 8 bytes, which only add up to 20.

The additional 4 bytes are for the double, which is aligned to 8 bytes by C convention.
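
For illustration, a reader for the old 24-byte layout might look like the sketch below; the byte offsets are inferred from the alignment explanation above rather than taken from the original source.

// Hypothetical reader for the old layout, showing where the padding goes.
function readOldLayout(view: DataView, base: number) {
  const kind     = view.getUint32(base + 0, true);   // 4 bytes
  const payload1 = view.getUint32(base + 4, true);   // 4 bytes
  const payload2 = view.getUint32(base + 8, true);   // 4 bytes
  // Bytes 12..15 are padding so the Double below starts on an 8-byte boundary.
  const payload3 = view.getFloat64(base + 16, true); // 8 bytes, for a total stride of 24
  return { kind, payload1, payload2, payload3 };
}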

@@ -124,10 +124,12 @@ export class SwiftRuntime {
const exports = this.instance.exports as any as SwiftRuntimeExportedFunctions;
const argc = args.length
const argv = exports.swjs_prepare_host_function_call(argc)
const uint32Memory = new Uint32Array(memory().buffer, argv, args.length * 3)

Review comment: This uint32Memory is not used.

@kateinoigakukun (Member)

I think the regression is due to the indirection of payload access.
Why don't you pass the two payloads directly as arguments?
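
To make the suggestion concrete, the two calling styles might compare roughly like this. The function and parameter names here are invented for the example, not JavaScriptKit's actual exports.

// Indirect style: the callee has to read the payloads back out of linear memory.
function hostFuncViaPointer(view: DataView, valuePtr: number) {
  const kind = view.getUint32(valuePtr, true);
  const payload1 = view.getUint32(valuePtr + 4, true);
  const payload2 = view.getUint32(valuePtr + 8, true);
  return { kind, payload1, payload2 };
}

// Direct style: the payloads arrive as plain Wasm i32 arguments,
// so no extra memory reads are needed on the JS side.
function hostFuncDirect(kind: number, payload1: number, payload2: number) {
  return { kind, payload1, payload2 };
}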

j-f1 mentioned this pull request Aug 2, 2020
@j-f1 (Member, Author) commented Aug 2, 2020

> I think the regression is due to the indirection of payload access.

That definitely seems to be the case! I’m not sure why they switched to passing in the DataViews, but I’d be happy to revert that part.

> The additional 4 bytes are for the double, which is aligned to 8 bytes by C convention.

Ah, that makes sense! Since we’re not using C doubles anymore (we’re just encoding them as two ints), this shouldn’t be an issue, right?

@kateinoigakukun (Member)

> That definitely seems to be the case! I’m not sure why they switched to passing in the DataViews, but I’d be happy to revert that part.

Thanks ❤️

> Ah, that makes sense! Since we’re not using C doubles anymore (we’re just encoding them as two ints), this shouldn’t be an issue, right?

Yes, I think it's ok.

@j-f1 (Member, Author) commented Aug 2, 2020

OK, so I did a test where I removed all the DataView usage (except when decoding numbers, where we need to specify endianness, I think?) while keeping the new function style, where we pass a single pointer representing the entire JSValue rather than kind/payload1/payload2, and it’s somehow slower than all the other styles?

| Test (times in ms) | master | single pointer | both |
| --- | --- | --- | --- |
| Serialization/Write JavaScript number directly | 123 | 126 | 122 |
| Serialization/Write JavaScript string directly | 130 | 134 | 126 |
| Serialization/Swift Int to JavaScript | 3150 | 5075 | 5083 |
| Serialization/Swift String to JavaScript | 4842 | 4925 | 4838 |
| Object heap/Increment and decrement RC | 2898 | 2817 | 2904 |
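
The number decoding mentioned above is the part where a DataView (or something like it) is hard to avoid, because the byte order has to be explicit. Below is a minimal sketch of round-tripping a Double through two UInt32 payloads, assuming little-endian order as in Wasm linear memory; the helper names are made up for the example.

// Split a JS number (Float64) into two UInt32 payloads and back, using a
// scratch DataView so the byte order is explicit rather than platform-defined.
const scratch = new DataView(new ArrayBuffer(8));

function encodeNumber(value: number): [number, number] {
  scratch.setFloat64(0, value, true); // little-endian, matching Wasm memory
  return [scratch.getUint32(0, true), scratch.getUint32(4, true)];
}

function decodeNumber(payload1: number, payload2: number): number {
  scratch.setUint32(0, payload1, true);
  scratch.setUint32(4, payload2, true);
  return scratch.getFloat64(0, true);
}

// Round trip: decodeNumber(...encodeNumber(3.14)) === 3.14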

@j-f1 (Member, Author) commented Aug 2, 2020

Ultimately, I’d be fine with just closing this if it has no real benefit.

@kateinoigakukun (Member)

Hmm, the benchmark results show that decoding a number from two 32-bit payloads is slower than passing the 20 bytes of payload directly.

I think we need to optimize the payload handling, but this approach seems a little difficult.

@j-f1 (Member, Author) commented Aug 3, 2020

This might be a good thing to explore in the future, when there’s better tooling to analyze performance, but I don’t have a good rationale for this PR right now, and I’m not really sure why I opened it in the first place. Thanks for sticking with me as I figure out exactly which things need to be brought over!

j-f1 closed this Aug 3, 2020