Add generic process chunks functions #46
Conversation
Summary of Changes
Hello @chcmedeiros, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a new set of generic functions for managing data chunking and APDU (Application Protocol Data Unit) communication with Ledger hardware devices. The primary goal is to centralize and standardize the common patterns for preparing and processing data chunks, which were previously duplicated across various Ledger Go applications. This change improves code reusability and maintainability by providing a unified API for interacting with Ledger devices when sending large transactions or data.
Highlights
- Standardized Chunking Utilities: Introduces PrepareChunks to split transaction data into fixed-size chunks, with the BIP44 path as the first chunk, mirroring existing patterns in other Ledger Go apps (a combined usage sketch follows this list).
- Generic Chunk Processing: Adds ProcessChunks and ProcessChunksSimple functions to send these data chunks to a Ledger device, handling the APDU P1 parameter (ChunkInit, ChunkAdd, ChunkLast) automatically.
- Customizable Error Handling: ProcessChunks now supports an optional ErrorHandler callback, allowing specific Ledger applications (like Avalanche or Filecoin) to implement their unique error handling logic.
- APDU Command Building: Provides BuildChunkedAPDU for constructing APDU commands for chunked data transmission.
- Comprehensive Testing: Includes a dedicated test file (chunking_test.go) with mock devices to ensure the correctness and robustness of the new chunking and processing functions, covering various scenarios including error handling.
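For orientation, here is a hedged sketch of how these utilities could be combined. Only PrepareChunks' signature is taken from the diff below; the Exchanger interface, the CLA/INS bytes, the concrete ChunkInit/ChunkAdd/ChunkLast values, and the hand-rolled APDU layout are illustrative assumptions, not the PR's actual API. In the PR itself those steps would be covered by BuildChunkedAPDU and ProcessChunks/ProcessChunksSimple; the sketch only makes the P1 sequencing explicit.

// Sketch only: assumes it lives in the same package as chunking.go so that
// PrepareChunks is in scope. Exchanger, the CLA/INS bytes, and the P1 values
// below are assumptions for illustration.

import "fmt"

// Exchanger is the minimal transport capability this sketch relies on.
type Exchanger interface {
	Exchange(apdu []byte) ([]byte, error)
}

func signWithChunks(dev Exchanger, bip44PathBytes, transaction []byte) ([]byte, error) {
	// Chunk 0 carries the BIP44 path; the remaining chunks are fixed-size
	// slices of the transaction.
	chunks, err := PrepareChunks(bip44PathBytes, transaction)
	if err != nil {
		return nil, err
	}

	var response []byte
	for i, chunk := range chunks {
		// P1 encodes the chunk's position in the stream, mirroring the
		// ChunkInit/ChunkAdd/ChunkLast scheme described above (values assumed).
		p1 := byte(1) // ChunkAdd
		switch {
		case i == 0:
			p1 = 0 // ChunkInit
		case i == len(chunks)-1:
			p1 = 2 // ChunkLast
		}

		// A plausible APDU layout: CLA | INS | P1 | P2 | Lc | payload.
		apdu := append([]byte{0xE0 /* CLA, assumed */, 0x02 /* INS, assumed */, p1, 0x00, byte(len(chunk))}, chunk...)
		response, err = dev.Exchange(apdu)
		if err != nil {
			// A custom ErrorHandler would translate device status words here.
			return nil, fmt.Errorf("chunk %d of %d failed: %w", i+1, len(chunks), err)
		}
	}
	return response, nil
}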
Code Review
This pull request introduces generic functions for chunking data for Ledger devices, along with corresponding tests. The implementation is clean and well-tested. I've provided a couple of suggestions for improvement: one to align a function signature with Go best practices by removing an unnecessary error return, and another to optimize a slice-building operation in the tests for better performance and idiomatic code. Overall, this is a great addition.
chunking.go
Outdated
func PrepareChunks(bip44PathBytes []byte, transaction []byte) ([][]byte, error) {
	var packetIndex = 0
	// first chunk + number of chunks needed for transaction
	var packetCount = 1 + int(math.Ceil(float64(len(transaction))/float64(DefaultChunkSize)))

	chunks := make([][]byte, packetCount)

	// First chunk is path
	chunks[0] = bip44PathBytes
	packetIndex++

	for packetIndex < packetCount {
		var start = (packetIndex - 1) * DefaultChunkSize
		var end = packetIndex * DefaultChunkSize

		if end >= len(transaction) {
			chunks[packetIndex] = transaction[start:]
		} else {
			chunks[packetIndex] = transaction[start:end]
		}
		packetIndex++
	}

	return chunks, nil
}
The function PrepareChunks is declared to return an error, but there is no path in its implementation that can lead to an error as it always returns nil. According to Go best practices, a function that cannot fail should not include error in its return signature.
Please consider changing the function signature and the return statement. This will also require updating the call sites in the tests.
func PrepareChunks(bip44PathBytes []byte, transaction []byte) [][]byte {
	var packetIndex = 0
	// first chunk + number of chunks needed for transaction
	var packetCount = 1 + int(math.Ceil(float64(len(transaction))/float64(DefaultChunkSize)))
	chunks := make([][]byte, packetCount)
	// First chunk is path
	chunks[0] = bip44PathBytes
	packetIndex++
	for packetIndex < packetCount {
		var start = (packetIndex - 1) * DefaultChunkSize
		var end = packetIndex * DefaultChunkSize
		if end >= len(transaction) {
			chunks[packetIndex] = transaction[start:]
		} else {
			chunks[packetIndex] = transaction[start:end]
		}
		packetIndex++
	}
	return chunks
}
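The call sites would then drop the second return value; for instance, a test site like the one below (the names bip44PathBytes and tt.transaction are taken from the surrounding snippets and used here purely for illustration):

chunks := PrepareChunks(bip44PathBytes, tt.transaction)
// no error to check; assertions on chunks proceed as before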
chunking_test.go
Outdated
reconstructed := []byte{}
for i := 1; i < len(chunks); i++ {
	reconstructed = append(reconstructed, chunks[i]...)
}
In this test, the reconstructed slice is built by appending in a loop. This can be inefficient due to repeated reallocations. You can improve performance by pre-allocating the slice with the final capacity. This is a good practice even in tests for efficiency and to demonstrate idiomatic Go.
Suggested change:

reconstructed := make([]byte, 0, len(tt.transaction))
for i := 1; i < len(chunks); i++ {
	reconstructed = append(reconstructed, chunks[i]...)
}
c3fdc6a to a909ce4
cosmos/ledger-cosmos-go#82 Zondax#46 Signed-off-by: Artur Troian <[email protected]>
Adding common, generic chunk-processing functions.
These can be used across the Go packages and let us control chunk processing in a single place, so a fix there applies to all the apps.