Change token usds #12
base: main
Conversation
StableUSD has been renamed to Stably USD Classic (USDSC) and is no longer supported by the issuer. For more information, please refer to https://stably.io/usdsc
Walkthrough
Updates the Husky pre-commit hook to delegate to a helper script. Extends tokens/token-list.json with new ERC-20 tokens (GHO, USDS across multiple networks) and adds a MetaMask USD entry. Modifies the existing mUSD-mainnet entry (name and address/hash). No exported/public API declarations changed beyond token list data.
Changes
Sequence Diagram(s)
sequenceDiagram
autonumber
actor Dev as Developer
participant Git as Git
participant Hook as Husky pre-commit
participant H as Helper script "h"
Dev->>Git: git commit
Git->>Hook: Invoke pre-commit
Hook->>H: source "$(dirname "$0")/h"
rect rgba(200,230,255,0.25)
note right of H: Delegated hook logic
H->>H: Run format/check/validate
end
H-->>Hook: Exit status
alt Success
Hook-->>Git: Allow commit
else Failure
Hook-->>Git: Block commit
end
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches
✅ Passed checks (3 passed)
Hello @pclaesen, thank you for submitting your first pull request to the request-token-list repository. We value your contribution and encourage you to review our contribution guidelines to ensure your submission meets our standards. Please note that every merged PR is automatically enrolled in our Best PR Initiative, offering a chance to win $500 each quarter. Our team is available via GitHub Discussions or Discord if you have any questions. Welcome aboard!
Thank you for your submission! As you prepare for the review process, please ensure that your PR title, description, and any linked issues fully comply with our contribution guidelines. A clear explanation of your changes and their context will help expedite the review process. Every merged PR is automatically entered into our Best PR Initiative, offering a chance to win $500 every quarter. We appreciate your attention to detail and look forward to reviewing your contribution!
All tests are passing, but there are 2 tokens with the same ID in the list.
I don't think the tests should pass, especially not the …
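(The repository's validate script isn't shown in this thread, so the following is only a sketch of the kind of uniqueness gate being asked for, using the same jq approach as the review scripts further down:)
#!/bin/sh
# Hypothetical guard: fail the check when any token id appears more than once
dups=$(jq -r '.tokens[].id' tokens/token-list.json | sort | uniq -d)
if [ -n "$dups" ]; then
  echo "Duplicate token ids found:"
  echo "$dups"
  exit 1
fi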
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tokens/token-list.json (1)
3-8: Version and timestamp not updated
Tokens were added/modified; bump the list version (per token-list semver: add = minor, metadata tweak = patch) and refresh the ISO timestamp.
Example:
- "timestamp": "2025-01-09T11:39:10.425Z", + "timestamp": "2025-09-17T00:00:00.000Z", "version": { - "major": 1, - "minor": 0, - "patch": 0 + "major": 1, + "minor": 1, + "patch": 0 },
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
.husky/_/pre-commit (1 hunks)
tokens/token-list.json (5 hunks)
🔇 Additional comments (4)
tokens/token-list.json (4)
4333-4343: GHO mainnet entry looks correct; please double-check against issuer docs
Address/decimals/network match expectations for GHO; just confirm against the official registry before merge.
5633-5641: USDS (mainnet): confirm issuer and address to avoid symbol collisions
The USDS ticker is ambiguous across ecosystems. Please confirm the issuer and contract at 0xdC035D...384F in the PR description and commit message.
7799-7808: USDS (Optimism): verify chain address and decimals
Validate the Optimism contract and decimals (18) against issuer docs or chain explorers.
7975-7984: USDS (Arbitrum One): verify chain address and decimals
Same as above; confirm correctness before publishing.
-npm run check
-git add -A
-npm run validate
+. "$(dirname "$0")/h"
🧩 Analysis chain
Sourcing helper script: verify existence and execution flow
Ensure .husky/_/h exists, is committed, and sets strict flags (e.g., set -eu) so the hook fails fast. Also consider adding a trailing newline to this file.
Run:
🏁 Script executed:
#!/bin/bash
set -eu
# Verify helper presence and permissions
test -f .husky/_/h && echo "found .husky/_/h" || (echo "missing .husky/_/h" && exit 1)
# Show first lines for sanity (should include set -e or similar)
head -n 5 .husky/_/h || true
Add the missing husky helper (.husky/_/h): pre-commit will fail
- .husky/_/h is missing; .husky/_/pre-commit (line 2) sources it: '. "$(dirname "$0")/h"'.
- Fix: add & commit .husky/_/h, ensure it begins with strict flags (e.g., set -eu), is readable, and ends with a trailing newline.
🤖 Prompt for AI Agents
In .husky/_/pre-commit around line 2, the script sources .husky/_/h which is
missing; add and commit the .husky/_/h helper file containing the required
helper functions, make it executable/readable, start it with strict shell flags
(for example include set -eu near the top), ensure it ends with a trailing
newline, and verify the pre-commit script can source it without errors.
"id": "mUSD-mainnet", | ||
"name": "MetaMask USD", | ||
"symbol": "mUSD", | ||
"decimals": 6, | ||
"address": "0xA4Bdb11dc0a2bEC88d24A3aa1E6Bb17201112eBe", | ||
"address": "0xacA92E438df0B2401fF60dA7E4337B687a2435DA", | ||
"network": "mainnet", | ||
"type": "ERC20", | ||
"hash": "0xA4Bdb11dc0a2bEC88d24A3aa1E6Bb17201112eBe", | ||
"hash": "0xacA92E438df0B2401fF60dA7E4337B687a2435DA", | ||
"chainId": 1 | ||
}, |
Duplicate id "mUSD-mainnet" for two different tokens (MetaMask USD vs mStable USD)
This is a breaking data error: ids must be unique. The new MetaMask USD entry reuses the existing mUSD-mainnet
id used by mStable’s mUSD.
Apply one of:
- Keep the legacy mUSD-mainnet id for mStable (back-compat) and give MetaMask USD a new id (e.g., mUSD-metamask-mainnet).
- Or, if you intend to migrate the id, do a major version bump and provide a migration note while renaming mStable's entry (e.g., mUSD-mstable-mainnet).
Example fix (Option 1):
- "id": "mUSD-mainnet",
+ "id": "mUSD-metamask-mainnet",
Also applies to: 6446-6455
🤖 Prompt for AI Agents
In tokens/token-list.json around lines 5643-5652 (and also the duplicate
occurrence around lines 6446-6455), the id "mUSD-mainnet" is duplicated for two
different tokens (MetaMask USD vs mStable USD); ensure ids are unique by
renaming the new MetaMask entry to a distinct id (e.g., "mUSD-metamask-mainnet")
while keeping the legacy "mUSD-mainnet" for mStable for back-compatibility, or
alternatively rename the legacy entry and document a breaking migration if you
intend to change the canonical id; apply the chosen renaming consistently in
both occurrences and update any references accordingly.
🧩 Analysis chain
Automated sanity checks for duplicates and structural issues
Run these to catch dup ids, dup (network,address) pairs, and ensure fields are well-formed.
Also applies to: 5633-5641, 7799-7808, 7975-7984, 4333-4343
🏁 Script executed:
#!/bin/bash
set -eu
f=tokens/token-list.json
# 1) Duplicate ids
echo "Duplicate ids:"
jq -r '.tokens[].id' "$f" | sort | uniq -d || true
# 2) Duplicate (network,address) combos
echo "Duplicate (network,address) pairs:"
jq -r '.tokens[] | "\(.network)|\(.address|ascii_downcase)"' "$f" | sort | uniq -d || true
# 3) Check that every token has required fields and correct types
echo "Schema spot-check (count missing):"
jq '[.tokens[] | select((.id|type!="string") or (.name|type!="string") or (.symbol|type!="string") or (.decimals|type!="number") or (.address|type!="string") or (.network|type!="string") or (.type|type!="string") or (.hash|type!="string") or (.chainId|type!="number"))] | length' "$f"
# 4) Show any symbol/id collisions (same id with different address)
echo "Same id with multiple addresses:"
jq -r '.tokens[] | [.id, (.address|ascii_downcase)] | @tsv' "$f" | sort -u | cut -f1 | sort | uniq -d || true
🏁 Script executed:
#!/bin/bash
set -euo pipefail
f=tokens/token-list.json
echo "File exists:" [ -f "$f" ] && echo "$f"
echo
echo "Total tokens:"
jq '.tokens | length' "$f"
echo
echo "Duplicate ids (counts >1):"
jq -r '.tokens[].id' "$f" | sort | uniq -c | awk '$1>1{print $2" ("$1" occurrences)"}' || true
echo
echo "Full entries for id == 'mUSD-mainnet':"
jq -c '.tokens | to_entries[] | select(.value.id=="mUSD-mainnet") | {index:.key, token:.value}' "$f" || true
echo
echo "Duplicate (network,address) pairs (case-insensitive):"
jq -r '.tokens[] | "\(.network)|\(.address|ascii_downcase)"' "$f" | sort | uniq -c | awk '$1>1{print $2" ("$1" entries)"}' || true
echo
echo "Show any ids that map to multiple distinct addresses:"
jq -r '.tokens[] | [.id, (.address|ascii_downcase)] | @tsv' "$f" | \
python3 - <<'PY'
import sys
from collections import defaultdict
d=defaultdict(set)
for line in sys.stdin:
line=line.rstrip("\n")
if not line: continue
id_, addr = line.split("\t")
d[id_].add(addr)
for id_, addrs in d.items():
if len(addrs)>1:
print(id_, list(addrs))
PY
echo
echo "Schema/type spot-check using Python (lists tokens with missing/invalid fields):"
python3 - <<'PY'
import json,sys
f='tokens/token-list.json'
with open(f) as fh:
data=json.load(f)
tokens=data.get('tokens',[])
req_str_fields=['id','name','symbol','address','network','type','hash']
req_num_fields=['decimals','chainId']
issues=[]
for idx,t in enumerate(tokens):
cur=[]
for k in req_str_fields:
if k not in t:
cur.append(f"missing:{k}")
elif not isinstance(t[k], str):
cur.append(f"{k}:not-str")
for k in req_num_fields:
if k not in t:
cur.append(f"missing:{k}")
elif not isinstance(t[k], (int,float)):
cur.append(f"{k}:not-number")
if cur:
issues.append((idx, t.get('id'), cur))
print("Total tokens:", len(tokens))
print("Tokens with schema/type issues:", len(issues))
for i, id_, cur in issues[:200]:
print(i, id_, ";".join(cur))
PY
🏁 Script executed:
#!/bin/bash
set -euo pipefail
f=tokens/token-list.json
echo "File:" "$f"
python3 - <<'PY'
import json
from collections import defaultdict
f='tokens/token-list.json'
with open(f) as fh:
data=json.load(f)
tokens=data.get('tokens',[])
print("Total tokens:", len(tokens))
# duplicate ids
id_map=defaultdict(list)
for i,t in enumerate(tokens):
id_map.get(t.get('id')) # ensure key exists even if None
id_map[t.get('id')].append((i,t))
dups=[(id_, len(v)) for id_,v in id_map.items() if id_ is not None and len(v)>1]
if dups:
print("\nDuplicate ids:")
for id_,cnt in sorted(dups, key=lambda x: x[0] or ""):
print(f"{id_} ({cnt} occurrences)")
else:
print("\nDuplicate ids: none")
# details for mUSD-mainnet
print()
if 'mUSD-mainnet' in id_map:
print("Entries for id 'mUSD-mainnet':")
for idx,t in id_map['mUSD-mainnet']:
addr = t.get('address')
network = t.get('network')
name = t.get('name')
symbol = t.get('symbol')
chainId = t.get('chainId')
print(f" index={idx} network={network} address={addr} name={name} symbol={symbol} chainId={chainId}")
else:
print("No entries for id 'mUSD-mainnet'")
# ids with multiple distinct addresses
multi_addr=[]
for id_, lst in id_map.items():
addrs=set()
for idx,t in lst:
a=(t.get('address') or '').lower()
addrs.add(a)
if len(addrs)>1:
multi_addr.append((id_, list(addrs)))
print()
if multi_addr:
print("Ids mapping to multiple distinct addresses:")
for id_, addrs in multi_addr:
print(id_, addrs)
else:
print("No ids mapping to multiple distinct addresses")
# duplicate (network,address) pairs
pair_map=defaultdict(list)
for i,t in enumerate(tokens):
net=t.get('network')
addr=(t.get('address') or '').lower()
pair_map[(net,addr)].append((i,t))
dups_pairs=[(k,len(v)) for k,v in pair_map.items() if k[0] and k[1] and len(v)>1]
print()
if dups_pairs:
print("Duplicate (network,address) pairs:")
for (net,addr),cnt in dups_pairs:
print(f"{net}|{addr} ({cnt} entries)")
else:
print("Duplicate (network,address) pairs: none")
# schema/type checks
req_str=['id','name','symbol','address','network','type','hash']
req_num=['decimals','chainId']
issues=[]
for i,t in enumerate(tokens):
cur=[]
for k in req_str:
if k not in t:
cur.append(f"missing:{k}")
elif not isinstance(t[k],str):
cur.append(f"{k}:not-str")
for k in req_num:
if k not in t:
cur.append(f"missing:{k}")
elif not isinstance(t[k], (int,float)):
cur.append(f"{k}:not-number")
if cur:
issues.append((i,t.get('id'),cur))
print()
print("Tokens with schema/type issues:", len(issues))
for idx,id_,cur in issues[:100]:
print(idx, id_, ";".join(cur))
PY
🏁 Script executed:
#!/bin/bash
set -euo pipefail
f=tokens/token-list.json
echo "Checking file:" "$f"
echo
python3 - <<'PY'
import json
from collections import defaultdict, Counter
f='tokens/token-list.json'
with open(f) as fh:
data=json.load(f)
tokens = data.get('tokens') if isinstance(data, dict) else None
if tokens is None:
print("ERROR: top-level 'tokens' array not found or JSON root is not an object")
raise SystemExit(1)
print("Total tokens:", len(tokens))
# Build maps
id_map = defaultdict(list)
pair_map = defaultdict(list)
for idx, t in enumerate(tokens):
if not isinstance(t, dict):
id_map[None].append((idx, t))
continue
id_map[t.get('id')].append((idx, t))
net = t.get('network')
addr = (t.get('address') or '').lower()
pair_map[(net, addr)].append((idx, t))
# Duplicate ids
dups = [(id_, len(v)) for id_, v in id_map.items() if id_ is not None and len(v) > 1]
print("\nDuplicate ids (id, count):")
if dups:
for id_,cnt in sorted(dups):
print(id_, cnt)
else:
print("none")
# Show full entries for 'mUSD-mainnet' if present
if 'mUSD-mainnet' in id_map:
print("\nEntries for id 'mUSD-mainnet':")
for idx, t in id_map['mUSD-mainnet']:
addr = t.get('address')
net = t.get('network')
name = t.get('name')
symbol = t.get('symbol')
chainId = t.get('chainId')
print(f" index={idx} network={net} address={addr} name={name} symbol={symbol} chainId={chainId}")
# print raw JSON snippet (compact)
import json as _j
print(" raw:", _j.dumps(t, separators=(',',':')))
else:
print("\nNo entries with id 'mUSD-mainnet' found")
# Duplicate (network,address) pairs
dup_pairs = [ (k,len(v)) for k,v in pair_map.items() if k[0] and k[1] and len(v)>1 ]
print("\nDuplicate (network|address) pairs:")
if dup_pairs:
for (net,addr),cnt in dup_pairs:
print(f"{net}|{addr} ({cnt} entries)")
else:
print("none")
# IDs mapping to multiple distinct addresses
multi_addr=[]
for id_, entries in id_map.items():
if id_ is None: continue
addrs = set()
for _,t in entries:
if isinstance(t, dict):
addrs.add((t.get('address') or '').lower())
addrs = {a for a in addrs if a}
if len(addrs) > 1:
multi_addr.append((id_, list(addrs)))
print("\nIDs mapping to multiple distinct addresses:")
if multi_addr:
for id_, addrs in multi_addr:
print(id_, addrs)
else:
print("none")
# Schema/type spot-check
req_str=['id','name','symbol','address','network','type','hash']
req_num=['decimals','chainId']
issues=[]
for i,t in enumerate(tokens):
cur=[]
if not isinstance(t, dict):
cur.append('not-object')
issues.append((i, None, cur)); continue
for k in req_str:
if k not in t:
cur.append(f"missing:{k}")
elif not isinstance(t[k], str):
cur.append(f"{k}:not-str")
for k in req_num:
if k not in t:
cur.append(f"missing:{k}")
elif not isinstance(t[k], (int,float)):
cur.append(f"{k}:not-number")
if cur:
issues.append((i, t.get('id'), cur))
print("\nTokens with schema/type issues count:", len(issues))
for idx, id_, cur in issues[:200]:
print(idx, id_, ";".join(cur))
PY
Remove duplicate token id "mUSD-mainnet" in tokens/token-list.json
Automated checks found id "mUSD-mainnet" occurs 2× in tokens/token-list.json (739 tokens). Remove or merge the duplicate entry, ensure (network,address) pairs are unique and required fields are present, then re-run the sanity checks.
🤖 Prompt for AI Agents
In tokens/token-list.json around lines 5643 to 5652, there is a duplicate token
entry with id "mUSD-mainnet"; remove or merge this duplicate so each id and
(network,address) pair is unique. Locate the other "mUSD-mainnet" entry, compare
fields (name, symbol, decimals, address, network, type, hash, chainId), keep the
correct/most complete record, delete the redundant object, and ensure the JSON
remains valid (no trailing commas) before re-running the sanity checks.
Change USDS token to the MakerDAO (Sky) token:
StableUSD has been renamed to Stably USD Classic (USDSC) and is no longer supported by the issuer. For more information, please refer to https://stably.io/usdsc.
Add GHO
Add MetaMask USD (mUSD)