cheqd

Software Development

The Payment and Trust Infrastructure for Credentials. Building the Trusted Data and AI Agentic economies. $CHEQ

About us

The Payment and Trust infrastructure for credentials. Building the Trusted Data and AI Agentic economy. Your Data 🆔 Verified 👌 Portable 🎒 Private 🔑

Website
https://www.cheqd.io
Industry
Software Development
Company size
11-50 employees
Headquarters
London
Type
Privately Held
Founded
2021
Specialties
Blockchain, Self-Sovereign Identity, Web3, and Decentralized Identity


Updates

  • cheqd reposted this

Figma just integrated Anthropic's AI to turn designs into working code. On the surface, this looks like another "AI feature" announcement. It's not. One of our team switched from app front-end generators like Base44 and Replit to Figma, and the UI quality was drastically better. Why? Training data. Figma sits on a massive corpus of real production design work. Foundational model + proprietary data + distribution = dominance. We're entering the phase where platforms don't get disrupted by AI; they absorb AI. Standalone AI wrappers should be nervous. The real consolidation hasn't even started yet. Link in comment below.

  • cheqd reposted this

    🎓 Trust Anchors in Verifiable Credentials
    In a Verifiable Credentials system, trust does not come from the credential artifact itself; it comes from the cryptographic binding to an Issuer identity that can be independently resolved and verified. That identity is the Issuer DID, anchored in a Verifiable Data Registry (VDR). We're using the cheqd network as the VDR for the Issuer DID (https://lnkd.in/gJ2_KUMM).
    ✅ Key Separation and Trust Semantics
    The DID document intentionally separates key responsibilities:
    ⭐ Authentication key: used to update and govern the DID document itself (control plane).
    ⭐ Issuer key (key-1): used to cryptographically sign Verifiable Credentials (issuance plane).
    This separation supports key rotation and aligns with best practices for DID governance and long-lived trust anchors.
    ✅ Privacy-Preserving Issuance
    The Issuer key supports BBS+ signatures, enabling:
    ⭐ selective disclosure
    ⭐ unlinkable presentations
    ⭐ minimal disclosure proofs aligned with W3C VC data model extensions
    This allows Verifiers to validate claims against a public trust anchor without requiring full credential disclosure.
    ✅ Why this matters
    A trust anchor is not a logo, registry entry, or legal assertion. It is a publicly resolvable DID + verifiable keys + cryptographic proofs, backed by a neutral registry. That's Verifiable Trust from Anonyome Labs, Inc.
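The control-plane/issuance-plane separation described above can be sketched as a simple check over a DID document. This is a minimal illustration only: the document shape follows the W3C DID Core data model, but the identifiers and key types below are hypothetical examples, not a real cheqd DID document.

```python
# Illustrative check that a DID document keeps its control-plane key
# (authentication) separate from its issuance-plane key (assertionMethod).
# The document below is a hypothetical example in W3C DID Core shape;
# real cheqd DID documents are resolved from the network.

did_doc = {
    "id": "did:cheqd:mainnet:example-issuer",
    "verificationMethod": [
        {"id": "did:cheqd:mainnet:example-issuer#key-0", "type": "Ed25519VerificationKey2020"},
        {"id": "did:cheqd:mainnet:example-issuer#key-1", "type": "Bls12381G2Key2020"},  # BBS+-capable key
    ],
    "authentication": ["did:cheqd:mainnet:example-issuer#key-0"],   # governs the DID document
    "assertionMethod": ["did:cheqd:mainnet:example-issuer#key-1"],  # signs Verifiable Credentials
}

def keys_are_separated(doc: dict) -> bool:
    """True when no key is referenced for both authentication and issuance."""
    return not set(doc.get("authentication", [])) & set(doc.get("assertionMethod", []))

print(keys_are_separated(did_doc))  # True: key-0 and key-1 do not overlap
```

If the same key ID appeared in both arrays, the check would fail, which is exactly the situation key rotation and long-lived trust anchors are meant to avoid.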

  • The Fed holding rates last week wasn’t much of a surprise, but the tone of the decision matters more than the decision itself. In a piece by Yellow, cheqd’s co-founder and CFO Javed Khattak shared his take on what markets are really watching now. The rate hold was fully priced in. What investors care about is whether the Fed is quietly moving towards looser conditions later this year. That shift in focus helps explain the current mood in crypto. Liquidity is still supportive, but without a clear catalyst, markets are stuck in a holding pattern. Institutional flows have slowed, spot demand is weaker, and prices are reacting more to economic data than to central bank headlines. It’s a reminder that crypto is behaving more like a macro asset than ever. The next real move won’t come from another expected Fed decision, but from data that changes expectations around growth, inflation, and risk appetite. Read the full piece: https://lnkd.in/gcPEkF2e

  • The European Commission has launched a formal investigation into X over alleged failures to prevent its Grok AI from generating and spreading illegal content, including non-consensual and exploitative deepfakes. Regulators are asking whether the platform adequately assessed and mitigated systemic risks before deploying powerful generative AI features, as required under the Digital Services Act. This case exposes a deeper structural problem with the internet: there is still no native way to verify who created a piece of synthetic content, whether it is authentic, or whether the use of someone’s likeness was ever authorised. In the absence of that infrastructure, liability continues to default to platforms and intermediaries after harm has already occurred. Fraser Edwards, CEO and co-founder of cheqd, put it this way: "Every creator should be able to control how their likeness is used in AI-generated media. Without verifiable provenance, consent, and attribution baked in at the content level, abuse is not an edge case. It is an inevitability." This investigation is a clear signal that regulation alone is not enough. What’s missing is verifiable, machine-readable trust infrastructure (such as cheqd) that allows content to be traced back to its source and linked to proof of authorisation. Otherwise, platforms will remain stuck in a reactive cycle, and citizens will continue to bear the cost. Read: https://lnkd.in/gDz_4BuA

  • If Web2 was characterised by careless data sharing and its uncontrolled exploitation for commercial purposes, Web3 shifts that paradigm towards putting data owners in charge of their data. That shift is possible thanks to Self-Sovereign Identity (SSI), an approach to identity that centres control of information on the user, removing the need to store personal information in a central database. This article looks at Web2 and Web3 specifically, plus how SSI works in Web3: https://lnkd.in/gycQV8qt Contact us to build your own trusted data economy: [email protected]

  • View organization page for cheqd

    2,937 followers

    cheqd is integrated into a suite of SDKs that enable third parties to create DIDs and DID-Linked Resources, and to issue and verify Verifiable Credentials using cheqd DIDs. Here is a comparison of our supported SDKs: Credo, ACA-Py, the Veramo SDK plugin, and Walt.id's Community Stack. Choose the best fit for your project: https://lnkd.in/d7RhX4vz
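Whichever SDK you pick, resolution of a cheqd DID ultimately goes through a DID resolver. As a minimal sketch, assuming a universal-resolver-style HTTP endpoint (the base URL below is an assumption; check cheqd's documentation for the current resolver address, and the example DID is a placeholder):

```python
# Minimal sketch of resolving a did:cheqd identifier over HTTP.
# RESOLVER_BASE is an assumed universal-resolver-style endpoint, and the
# example DID is a placeholder, not a real on-ledger identifier.
import json
import urllib.request

RESOLVER_BASE = "https://resolver.cheqd.net/1.0/identifiers/"  # assumed endpoint

def resolution_url(did: str) -> str:
    """Build the resolution URL for a did:cheqd identifier."""
    if not did.startswith("did:cheqd:"):
        raise ValueError(f"not a cheqd DID: {did}")
    return RESOLVER_BASE + did

def resolve(did: str) -> dict:
    """Fetch and parse the DID resolution result (makes a network call)."""
    with urllib.request.urlopen(resolution_url(did)) as resp:
        return json.load(resp)

print(resolution_url("did:cheqd:mainnet:example"))
```

The SDKs listed above wrap this resolution step (plus signing and credential handling) behind their own APIs, so in practice you would rarely call the resolver directly.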

  • Finextra’s latest piece on EBWs, EUDIWs and wallet-carrying AI agents captures a shift that wealth management can no longer ignore. The issue the author points to is structural. Wealth is still fragmented, often self-reported, and managed through static documents and periodic reviews. Even within a single bank, clients rarely have a complete and up-to-date view of their assets, liabilities and rights. Digital wallets change this at the foundation level. When ownership, authority and mandates are issued as verifiable credentials, wealth management moves from documents to continuously verifiable data. Asset views can be aggregated with consent, updates flow automatically, and assessments stop being frozen in time. AI agents only become useful in this context. Without identity and authority they add risk. With wallets and clearly scoped mandates, they can act within defined boundaries, with every action traceable and reversible. This is where cheqd fits. We build the infrastructure to issue, verify and govern verifiable credentials for identity, ownership and authority, with payment rails built in. As EBWs and EUDIWs roll out, this trust layer becomes essential for wealth management systems that want to scale without losing control. The article is well worth reading, and so is reflecting on how much of today’s wealth stack is still held together by PDFs. https://lnkd.in/gi_sWjGY

  • A thoughtful piece from The Defiant on why crypto engagement feels quieter right now and why it’s not just an X algorithm issue. The article looks beyond “Crypto Twitter is dead” narratives and points to something more structural: retail participation has materially cooled across social, search, and trading data. Fatigue from repeated cycles, higher interest rates, and the opportunity cost of holding speculative, non-yielding assets are all playing a role. As Javed Khattak, Co-founder and CFO at cheqd, put it in the piece: “Retail interest in crypto has cooled materially relative to prior cycle peaks, with engagement, search data, and trading activity all pointing to a retrenchment in speculative participation.” He also highlights an important shift in how this cycle should be read. It’s not retail-led, but institution-led, with continued buildout happening quietly in the background. That distinction matters. Lower noise doesn’t necessarily mean less progress. It may simply signal a macro reset and market maturing, with attention-driven hype giving way to infrastructure, regulation, and long-term utility. Worth a read if you’re trying to make sense of current market signals without defaulting to simple bull or bear labels. https://lnkd.in/gJY-kaFP

  • This recent analysis on the “ZombieAgent” attack is another reminder that as AI agents become more autonomous and more connected, trust and security risks are scaling just as quickly. The research shows how indirect prompt injections hidden in everyday content like emails or documents can turn AI agents into silent data exfiltration tools, without any user interaction. What’s particularly concerning is the persistence of these attacks, where memory and connected tools can be abused to create long-lived backdoors that operate invisibly in the background. This isn’t solely a ChatGPT problem. It’s a structural issue with how AI systems consume data today. Large language models don’t understand intent, provenance, or authenticity. They treat all inputs the same, whether they come from a trusted system, a verified individual, or a malicious actor hiding instructions in a footer or disclaimer. As AI agents are increasingly used in enterprise environments, this creates a critical question: how do we ensure that AI systems can verify who or what a piece of data comes from, and whether it should be trusted, before acting on it? This is exactly the problem we’re working on at cheqd. Our approach focuses on adding verifiable trust to AI inputs using credentials. This allows organisations to issue tamper-evident, machine-verifiable signals about the origin and permissions of data before it’s consumed by an AI agent. https://lnkd.in/gn3mhhCF


Funding

cheqd: 2 total rounds

Last round: Seed, US$2.6M

See more info on Crunchbase