Key Management: Revolutionizing Blockchain & Web3 Security

Klenance
77 Min Read

Episode 140 of the Public Key podcast is here! “Once my web3 wallet is as easy to use as Gmail, now things will be actually really really interesting.” This is a great quote from Riad Wahby, the Co-founder & CEO of Cubist, who sums up the challenges of mass adoption in the web3 space. In this episode, Riad goes deep into the intricacies of cryptographic security, explains how Cubist enhances key protection with hardware-based policies, and makes a complex set of concepts easy to understand and digest.

You can listen or subscribe now on Spotify, Apple, or Audible. Keep reading for a full preview of episode 140.

 

Public Key Episode 140: Revolutionizing Key Management for Blockchain Security

In this episode, Ian Andrews (host) speaks to Riad Wahby, the Co-founder & CEO of Cubist, a company that stands at the forefront of private key management, promising robust security solutions that both protect cryptocurrency assets and enhance usability for developers and enterprises.

Riad discusses the company’s commitment to optimizing the secure handling of cryptographic keys, an often overlooked yet critical aspect of blockchain technology, and explains how existing key storage solutions like MetaMask, hardware wallets, and MPC setups remain vulnerable to exploitation.

The technical discussion provides easy-to-understand explanations of concepts like threshold signature schemes, multi-party computation, and complete mediation, and even touches on secure hardware solutions and user-centric designs that can transform web3 technology into an everyday tool for the average user.

Quote of the episode

“We all get to decide among ourselves, do we want this to be sort of a place where only the hardcore bleeding edge fanatics go, or do we want it to be something that everyone uses the way that the internet is?”  – Riad Wahby (Co-founder & CEO, Cubist)

Minute-by-minute episode breakdown

2 | Cubist’s role in secure key management for blockchain organizations

4 | From chip design to blockchain security and entrepreneurship

8 | Balancing security and convenience in cryptocurrency key management

15 | Complete Mediation: Embedding policies in secure hardware for key management 

20 | Amazon’s AWS dominates secure hardware for cloud services

24 | Exploring threshold signature schemes and multi-signature approaches

27 | Understanding what went wrong with Axie Infinity’s Ronin Bridge and how the industry can learn from this hack

32 | Exploring secure web3 policies without Solidity and enhancing smart contracts with secure hardware for secret computation

39 | Making web3 usable for everyone with improved wallet usability 

Check out more resources provided by Chainalysis that perfectly complement this episode of Public Key.

Speakers on today’s episode

This website may contain links to third-party sites that are not under the control of Chainalysis, Inc. or its affiliates (collectively “Chainalysis”). Access to such information does not imply association with, endorsement of, approval of, or recommendation by Chainalysis of the site or its operators, and Chainalysis is not responsible for the products, services, or other content hosted therein.

Our podcasts are for informational purposes only, and are not intended to provide legal, tax, financial, or investment advice. Listeners should consult their own advisors before making these types of decisions. Chainalysis has no responsibility or liability for any decision made or any other acts or omissions in connection with your use of this material.

Chainalysis does not guarantee or warrant the accuracy, completeness, timeliness, suitability or validity of the information in any particular podcast and will not be responsible for any claim attributable to errors, omissions, or other inaccuracies of any part of such material. 

Unless stated otherwise, reference to any specific product or entity does not constitute an endorsement or recommendation by Chainalysis. The views expressed by guests are their own and their appearance on the program does not imply an endorsement of them or any entity they represent. Views and opinions expressed by Chainalysis employees are those of the employees and do not necessarily reflect the views of the company. 

Transcript

Ian:

Hey everyone. Welcome to another episode of Public Key. This is your host, Ian Andrews. Today I’m joined by Riad Wahby, who’s the co-founder and CEO of a company called Cubist, which I’m super excited to learn about today. They’re focused on security, specifically on protecting private keys, which as we know on this podcast is the source of so much trouble when it comes to the crypto ecosystem. Riad, welcome to the show.

Riad:

Thanks for having me. And yeah, I couldn’t have said the intro better myself. I’m glad that your audience is already keyed in as it were. Sorry, I won’t make another one of those.

Ian:

There’s so many ways we could take the joke. Maybe let’s start with just a quick high level, what is Cubist? Why should people care? And then obviously we’re going to dive deep into the details over the course of the show.

Riad:

Yeah, absolutely. So we build key management infrastructure. When you interact with any blockchain, as you know, if you want to make a transaction, if you want to deploy a smart contract, basically everything comes back to somewhere, someone has a private key, they generate a signature, they post that signature to the chain, and then something happens as a result. Basically, if you are a developer, you’re sitting around a kitchen table with your two best friends and you’re starting to make your world ending or world building app, then that’s great. You can do that quite simply with whatever, a Ledger, or probably at first … You should never do this [inaudible 00:01:16] first, you keep the keys on your laptop because your app isn’t worth anything yet. But eventually things get more complicated, and in exactly the same way that your company eventually will have some org structure and you’re going to have different people with different responsibilities, what we’ve seen in the industry is that big companies’ key management infrastructure basically mirrors their org structure.

So for example, you have a finance team that team has access to certain keys for sending and receiving payments, but that team is not touching the keys that, for example, are deploying upgrades to the smart contracts. So that’s for the engineering team to do. And then the security team is kind of watching everything. So as your company grows, what we think of as a Web3 native company is really a company where most of the operations of the company, even in non-technical parts of the org, somehow interact with the chain. And so now you’ve got this massive complexity. I am also on the side, an academic cryptographer, and so I’m allowed to make fun of cryptographers who like to say things like, “Well, we’re going to solve this problem just by we’re going to introduce a secret key, and then this simplifies the problem.” And basically that’s always false.

And introducing a secret key makes the problem more complicated because now you have to have people keeping a secret key secret, and it’s very non-intuitive because it’s not a thing, it’s not physical, there’s nothing there. It’s just like, well, protect some data. And so you have these two complexities. One, just fundamentally keeping cryptographic key material safe is hard. And then the other, doing it in a way that maps to the structure of your organization is just as hard, if not harder. I’m using an organization as an example here, but if you’re building wallet infrastructure for a million users, or if you’re building validator machinery for a bunch of chains, in all of these cases, you have comparable complexity, even though it’s maybe somewhat different in character. And so basically we exist to manage that complexity.

Ian:

So for anyone listening who’s heard about some of the recent very large hacks where private key compromises come into play, that’s where potentially Cubist plays a role in protecting all that private key material, ideally, so you’re not storing it on your laptop or even in some of maybe the popular cold wallet storage that we hear people talk about, an offline hardware device. You’ve developed a more useful, more usable and more secure solution, fair to say?

Riad:

Yeah, I think that’s a good summary.

Ian:

Yeah, amazing. You mentioned that you’re an academic as well as founder here at Cubist. Tell us how you kind of came into the world of cryptocurrency and blockchain. I’m always fascinated by people’s origin story.

Riad:

Completely weird story. So I started out a long time ago as an electrical engineer. I was designing integrated circuits, and I worked for a company in Austin called Silicon Labs, building the chips that operate old school landline telephones, really ancient technology, really actually ancient. You could take a phone built in 1895 and plug it into one of our chips, and it would still work because telephones are backwards compatible forever, basically. I was working on that. I worked at that company for almost 10 years. We built a bunch of really, really cool stuff. Building chips is awesome. It’s super fun. It’s a terrible way to make money, but it’s a super fantastically fun thing to do.

And at some point, I became friends with a professor at UT Austin, his name is Mike Walfish, who was working on this really cool new area of probabilistic proof systems and zero knowledge proofs. And new in the sense of building it practically. Theoretical constructs have been around since the 80s. They really sort of were this super important piece of CS theory, especially in the early 90s and early 2000s. This was sort of the amazing cool stuff that everyone in the theory world was like, “Oh my gosh, these results are unbelievable.” There’s famously a paper with the title, Anything That Can Be Proved Can Be Proved In Zero Knowledge, which is just kind of this mind-blowing-

Ian:

It’s mind-bending.

Riad:

Yeah. But up until late 2000s basically, everyone kind of assumed these things can’t be built at all. This is theory only, and nobody will ever really run one of these things on a computer with some very, very small exceptions, very special case ones for sure. But these super general purpose things where we take any program, turn it into a zero knowledge proof, people basically assumed, “Eh, that’s never going to be practical.” And so Mike was working on this, and he started to tell me about this, but me, I’m an electrical engineer. So we were talking about this over ping pong or something, and he was like, “Dude, this is super cool. You should just read these papers.” And I was like, “Okay. Yeah.” And I read some papers and then we talked again, played some more ping pong. He was like, “Okay, so now you should quit your job and go back to grad school.” And I said, “What? I own a house. No, there’s no way.”

But he worked on me for a little while, and eventually I did. “Okay, Mike, I’m selling my house. I’m going to grad school.” And so I did, and I worked with him. He eventually moved to NYU. I worked for him at NYU for almost two years, and in the meantime was applying to grad school, ended up at Stanford working on this and working on proof systems. And this was actually long enough ago that they were useless still. Okay, I joke. But basically for the most part, people looked at them and said, “Eh, yeah, this is interesting, but is anyone ever really going to use them? Sure, you’re making them practical.” And then a few people had these ideas around using them to build cryptocurrencies. So my colleague now, Bryan Parno, had this little paper called Pinocchio Coin, and then there was Zerocoin and Zerocash, and Zcash eventually came out of that. And then people started to think about rollups. And so all of this was pushing towards anyone who’s working in ZK proofs is kind of thinking about blockchains and cryptocurrencies.

And then as a minor aside, in parallel with that, I was, for a while, I was the administrator of the Cypherpunks mailing list. And so I was sort of interested in all of these things already. I had seen the Bitcoin paper go across the wire and people were really talking about this stuff really interestingly. And so this confluence, I was like, well, I’ve got to start looking into this stuff now. I remember walking around the Lower East Side reading the Bitcoin paper, and it was a nice day. I was out for a stroll and thinking like, “Oh, this is super cool. Okay, yeah, I can totally get into this.”

And eventually, so I started to work on some research stuff and we built some kind of interesting stuff. We worked on some aspects of rollup efficiency. This was with my PhD advisor at Stanford, Dan Boneh, who’s involved in everything blockchain-related. He’s fantastic. We made these anonymous airdrops, again out of zero-knowledge proofs, and eventually we actually wrote the standard for the signature scheme that’s now used in ETH2, BLS signatures. And so kind of just working in all of this sort of the low level bits and bytes of blockchain security. And so I was totally hooked. It’s kind of everything that I wanted. It’s like networking and security and optimization and writing code. And yeah, doesn’t get-

Ian:

It feels like you went from maybe one of the hardest fields in electrical engineering and chip design, and you maybe found one of the few even harder fields by getting this deep into cryptography.

Riad:

Well, it’s very kind of you to say. I’m not sure if I was in a particularly good corner of either field, but yeah, no, I mean really they’re so different, but both of them have this kind of really interesting character of just sit down and think for a while and then make something.

Ian:

Yeah. So you finished your PhD at Stanford and then you’re now on the faculty at Carnegie Mellon. What led you to want to found a company and ultimately create Cubist?

Riad:

I mean, as I said, we were working on all these really low level parts of the [inaudible 00:08:34] ecosystem. And at some point I was talking with a couple of colleagues, or rather people who were students with me at Stanford and then became faculty, Deian Stefan, who was at UC San Diego, and Fraser Brown, who’s here at CMU. And we were talking about some very developer-focused problems that are Web3-specific. You get into these really unique security questions when you’re building applications. Of course, intuitively, everyone’s like, “Well, sure you’re building an application that processes money as a first class object. So yeah, sure. There’s always something hard and interesting about it.” So we started to think, “Oh, some of our research is actually really closely related to this.” And so then we really started to think, “Okay, how do we go from doing really researchy stuff to actually building a product?”

And we were really fortunate that our fourth co-founder, Ann, who’s our COO and sort of really, I call her the Business Savior, actually told us, “Okay, well you guys have fun ideas and all that, but let’s actually turn this into a product. You’ve got to talk to customers.” And so we did, and we had all these questions of, okay, when you face these security problems, are you doing this? Are you doing this? Where are the problems? And basically what we heard from everyone was this kind of story of, “Well, my company started out really simple. Everything worked on a Ledger or we had a laptop and then things just spiraled out of control, and now I just need to be able to manage keys. I need to be able to build infrastructure that maps to the way that I use my keys.”

And so we thought this is actually something that we really, really, we’ve done a lot of research around exactly these kinds of things. So Deian worked a lot on this interesting part of computer security called information flow control. Fraser has worked on all kinds of formal verification and automatically finding bugs in software, and I like networking and cryptography. So we basically, that’s the nexus of stuff that you need to build key management infrastructure. And so it was like, okay, this is kind of perfect. All of us get to play to our strengths.

Ian:

What an amazing team you’ve assembled. So we can baseline knowledge here: I think probably everyone listening has interacted with MetaMask, either as a mobile app on their phone or as a browser extension in something like Chrome. And they’ve probably signed a transaction or transferred some money between two wallets. So they’ve probably got that experience. And then I would think that quite a few of our listeners have familiarity with some of the MPC key management systems. We’re partnered with Fireblocks. We’ve had folks from Fireblocks on the podcast previously, who are, I would say, generally considered state-of-the-art in terms of this enterprise case of key management. If anybody’s running an exchange or a large trading operation, you’ve probably got Fireblocks or maybe something like that that does key protection. And it can apply some security policies, where if you want multiple signers on certain types of transactions, I think it supports hardware extensions. So if you want the key material to actually be stored in a physical device but have a software management layer around it, it can do that sort of thing.

So those seem to be the two ways that key material gets stored today: an app like MetaMask or a browser plugin, or you graduate up to this kind of enterprise requirement. And then there’s this third element, which people are probably familiar with, which is like a Ledger or a Nano, which is a hardware device. Usually you plug it into a laptop via USB, although there are some varying configurations there, but it basically restricts movement of that key material and often requires biometric verification in order to use the key material to do anything. And it seems like, from that kind of baseline understanding of the universe, where people usually get into trouble is they say, “Well, if I ever lose this key material, I lose everything. I lose access to my assets, I lose the ability to publish a new version of my dApp. It’s very important I never lose this, so I need to back it up because hardware fails and people forget passwords.” And so then they back it up, or maybe they’ll back up the seed phrase into Dropbox.

Riad:

Yup. Or the classic Google Drive or-

Ian:

Yeah, exactly.

Riad:

Put it in your secret spreadsheet or whatever.

Ian:

Yes. Totally. And then that gets compromised because you get malware on your laptop or you’ve reused passwords and somebody hacks that account, and then you wake up one day and everything’s gone. And so that tends to be the compromise pattern. We’ve seen some more exotic versions of that with the North Koreans impersonating a legitimate employee, gaining trust and access inside an organization, eventually getting access to key material and then being able to run off with all of the funds locked in a particular dApp. But it’s kind of the same compromise pattern. It’s like someone is able to compromise the underlying hardware or an application where the key material exists, and then there’s really no protection at that point. Maybe from that baseline level of understanding, talk a little bit about what Cubist is doing that perhaps solves for that or prevents that condition set from ever existing in the first place.

Riad:

Yeah, this is a great question. I thank you for the fantastic setup. So I think there are a few different versions of this that we tend to see, and they depend on who is being compromised. So one type of compromise that you tend to see is one that you mentioned malware that gets, I mean, usually it’s like somebody puts something on an ad network, because frankly, they’re fantastic malware distribution networks too, and compromises a bunch of browsers and steals a bunch of keys from MetaMask and now, oops. And so this one, fundamentally, if we’re talking about attacking end users, it’s really about getting as much reach as possible and attacking the weak point, which is basically I’m keeping my secret key in my browser, and unfortunately my browser is this horrifically complex piece of code that also does everything in my whole life. And unfortunately, browsers are just sort of crazy.

And so this tends to be something where it fundamentally comes down to a numbers game. For an attacker, they just want to hit as many machines as possible; they’re going to have a low success rate, but they’re going to manage somehow to get enough hits to make the thing worth it.

Ian:

$100 stolen 10,000 times turns out to be a lot of money.

Riad:

Yeah, exactly. It’s plenty of money. So basically to us, the really fundamental issue there is that people are looking for convenience because of course they want to be able to use MetaMask, use whatever it is without, I mean, frankly, even something like a Ledger, yeah, that’s a big upgrade in terms of security, but it’s also a big downgrade in terms of convenience. I’ve got to plug the thing in. I’ve got to use the little keypad. It’s a much worse experience, even if it is much, much more secure.

Ian:

And do I travel with it? Is it on my key chain? Do I keep it in a safe in my house? And then what if I’m 3,000 miles away and I need to make a transaction somewhere? There’s all sorts of then downstream cascading complexity.

Riad:

Absolutely. Absolutely. I think basically everyone thinks, well, this is a low probability, so I’m not going to worry about it so much. I’m going to just use MetaMask or whatever and take the risk. Okay. That’s how it goes. So basically for folks like that, I think the fundamental thing is you want the same kind of security you get from a Ledger. You want sort of hardware backing. You want to make sure that the key material can’t be extracted even if somebody gets access to your machine. And yet you want the sort of convenience of using something like MetaMask where it’s just an application. Ideally, I just log in with Google or something like this where I also don’t have to worry about things like a 24 word seed phrase because I mean, frankly, most people are just turned off by that. They hit that 24 word seed phrase, they’re like, you know what? Let’s not bother.

So I think for folks like that, it’s about finding that balance of getting the high security and getting really good convenience. Basically the way that you can think about CubeSigner in a situation like that, so for example, we work with Avalanche, the Avalanche Core Wallet uses our infrastructure. If you use Core Wallet and you say, “Create me a wallet using sign in with Google,” what you get essentially is a remote private Ledger that lives in the cloud that you can access from anywhere. And once you’ve authenticated to it, now you control it. So this is super cool because it means you’re getting that level of security and you’re getting the convenience of something that’s basically sign in with Google and maybe use a U2F key or something like this if you want sort of an additional layer of security. So that’s kind of one aspect.

There are other aspects. If I’m a company and I’m running some kind of large application and moving around a lot of funds, now this is a completely different threat model because now I am worth targeting as an individual. The company is worth targeting, where it’s like if I have 100 bucks in my MetaMask wallet, nobody’s going to, I don’t know, Mission Impossible me to get the funds. So it’s a completely different threat model. And so what you need there is you need the ability to set very tight policies around what a key is able to do, around who is able to do those things with the keys and around where the backups are stored and how they can be restored. Because of course, the most secure way to have a key is to delete the key. Now nobody has it. It can’t be used, but that’s not so useful.

So you got to somehow find a balance where I have the key, it can be restored in case of some kind of emergency. And when it’s being used, basically the abstraction that I want as a big company who’s using 100 keys is I want to be able to say for this key, these are the kinds of behaviors that key should exhibit and everything else should be disallowed. So if I’m running some kind of trading strategy, I should be able to encode essentially, this is the trading strategy that I am using, and anything that looks like it’s too far away from that trading strategy just shouldn’t be allowed at all. And now even if somebody breaks into the machine that’s running the trading strategy and steals an authentication token and whatever, still the key just says, “No, I’m sorry. I’m not going to sign that because it’s outside of my parameters.”

So the ability to do that level of lockdown, and then on top of that, having governance policies and all of this sort of thing, basically you now have the ability to do something that looks very much like what would appear in a SOC 2 report. Here are the people who are responsible, here is the way that they sign off on changes, et cetera. So building in that kind of machinery essentially directly on the key kind of gives you the strongest guarantee. So I promise I’ll shut up, 10 more seconds of thought here. So I’m currently teaching the undergraduate security class at CMU, and one of the things that we stress throughout the whole class is there are these security principles that basically always apply that whenever you’re considering a system, you need to consider these principles. One of those principles is called complete mediation. So what that means is anytime that some sensitive resource is being accessed, every access to the resource needs to be examined for compliance with a policy or for correctness or for whatever it is.

At Cubist, when we talk about policies, those policies are essentially directly on the key. It’s not like, well, you can apply a policy to certain requests, but bypass it for others. No, the idea is the abstraction you get is a key that is only allowed to do certain things, and this is exactly complete mediation, and this is the kind of thing that you need. If you have instead keys stored over here and policy stored over here and like, well, I happen to make a request to the policy engine, and then the policy engine happens to make a request to the keys, if you can bypass that policy engine, that means an attacker can bypass it, which means that it’s not helping you. So okay, many, many words there, but basically different people have different needs, and the idea is that you can basically serve those needs, or our opinion is you can basically serve those needs with a few primitives, which are exactly the primitives that CubeSigner provides.
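The "policies directly on the key" idea and the complete mediation principle Riad describes can be sketched in a few lines. Everything below (`TradingPolicy`, `PolicyGatedKey`, the hash-based stand-in for a real signature) is invented purely for illustration and is not Cubist's actual API; the point is only that the secret is unreachable except through `sign()`, so every request is mediated by the policy:

```python
import hashlib
from dataclasses import dataclass

# Hypothetical sketch, not Cubist's real interface: the private key is only
# reachable through sign(), so every signature request passes the policy
# check. There is no separate "policy engine" a caller could route around.

@dataclass(frozen=True)
class TradingPolicy:
    allowed_recipients: frozenset
    max_amount: int

    def permits(self, tx: dict) -> bool:
        return (tx["to"] in self.allowed_recipients
                and tx["amount"] <= self.max_amount)

class PolicyGatedKey:
    def __init__(self, secret: bytes, policy: TradingPolicy):
        self._secret = secret      # never exposed directly
        self._policy = policy

    def sign(self, tx: dict) -> bytes:
        # Complete mediation: EVERY access to the key goes through this check.
        if not self._policy.permits(tx):
            raise PermissionError("transaction violates key policy")
        msg = f"{tx['to']}:{tx['amount']}".encode()
        # Hash as a stand-in for a real ECDSA/EdDSA signature.
        return hashlib.sha256(self._secret + msg).digest()

key = PolicyGatedKey(
    secret=b"demo-secret",
    policy=TradingPolicy(allowed_recipients=frozenset({"0xExchange"}),
                         max_amount=1_000),
)

ok = key.sign({"to": "0xExchange", "amount": 500})   # within policy: signed
try:
    key.sign({"to": "0xAttacker", "amount": 500})    # refused by the key itself
except PermissionError as e:
    refused = str(e)
```

Even an attacker who steals an authentication token for the machine running this code gets only the mediated `sign()` interface, which is exactly the "key that says no" behavior described above.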

Ian:

Yeah, I’m fascinated about the last part that you described there, where the key itself has the policies embedded in them. Can you talk a little bit more about what does that actually look like? Because I’m thinking about a key, like the representation that I see is the long string of letters and numbers when the key is generated. How do you actually go about encoding policy into that key? Is this a new design that is different than what a normal key that I might create with a MetaMask operation would look like?

Riad:

We generate the same kinds of ECDSA or EdDSA or whatever keys.

Ian:

Yeah, like that.

Riad:

So essentially you have secure hardware that is used to enforce all of these mechanisms. So the idea is that the abstraction that the secure hardware presents to the world is: here is a key with policies on it. And so by design, the only way to get a signature under a key is to ask the hardware for that signature. And the only way the hardware will release the signature is if it complies with the policy. So we’re using these kinds of hardware security modules and secure enclaves to build this abstraction of a key with policies directly on it. And basically this is something that secure hardware gives you that is actually really, really hard to get using some kind of protocol like a multi-party computation protocol or something like this. In fact, what we tend to see in the industry is that when somebody is using MPC for key management, the threshold signature scheme really is already this high level of complexity.

And so the protocol itself essentially only encodes this set of players communicate in this particular way in order to produce a signature. And the assumption when you design one of these protocols is well, the players sort of collectively decide whether or not they want to generate a signature. On the one hand, you’ve got the threshold signature scheme, and then on the other hand, you have sort of each player individually is responsible for enforcing policies, which is fine, by the way. It’s possible to build such a thing. You basically have to build an even more complex piece of software. And the assumption that’s necessary in order to believe that it’s secure is that all the players actually have the correct software, that the software doesn’t have any bugs, and that somehow it’s not possible to request a signature from the scheme unless you also satisfy the policies.

And because we avoid threshold signature schemes, because we’re using secure hardware, basically we get rid of the vast majority of that complexity and replace it with this very simple assumption, namely that the secure hardware attests correctly to the software that it runs, and that the hardware security module behaves correctly with respect to the attestations.

Ian:

Wow, I have a ton of questions. I will try and formulate them in an order where we have a path through the woods here. Are you all building your own hardware?

Riad:

No, we’re almost entirely running in AWS.

Ian:

Okay.

Riad:

There’s actually a reason for that. If folks ask us, “Well, when will you have support for Google or for Azure?” Nobody’s ever asked me, “When will you have support for Oracle?” But someday somebody is going to ask, and the answer is: when they get around to adding the right functionality. It turns out right now, if you want to go out and build one of these systems where you have secure enclaves where only a particular piece of software can access a particular key, and even an administrator can’t change that (that’s the super important part), the only one that has that functionality right now is Amazon.

Ian:

Okay.

Riad:

With Google and others, we have conversations going on with them, we’re encouraging them to do it. But basically the assumption there is always, well, if you’re an administrator, maybe you want to be able to change that. Whereas with Amazon, you really can lock yourself out. You can tear the steering wheel off, which is super, super important because as the folks running the infrastructure, we of course don’t want any access to anyone’s keys. That’s crazy. Of course, we don’t want that. And if we were running a comparable system, but in Google, it would be basically impossible to provide that guarantee. Whereas with AWS, this is what we get. So for now, we’re entirely in AWS. I think down the road, other clouds for sure, and eventually running on-prem with people’s own hardware. Absolutely. So I think the future is bright in terms of more hardware support, and maybe someday we’ll even get to build our own hardware. Don’t tell my co-founders that. Yeah, but right now, AWS.

Ian:

All the new startups are building their own hardware from defense tech to AI companies building their own inference chips. You’ve got to get on the bandwagon.

Riad:

That’s a tough road. I’ve seen it from the inside. That is a tough road.

Ian:

You’ve got the experience to do it. That EE degree could come in handy.

Riad:

We’ll see.

Ian:

We’ll see. Okay, so you’re building on, we’ll call it, off-the-shelf Amazon key management infrastructure, which gives you a bunch of guarantees. It also means that you can deliver a secure cloud service to your customers: even though you’re providing the management and setup layer, there’s low complexity, and it doesn’t give you any mechanism to interfere with or touch the keys. Customers can have confidence. They own all the infrastructure, so that’s cool.

Riad:

Exactly.

Ian:

When you were talking about threshold signing a second ago, when I think about that, it’s like one of these M of N schemes. So we’ve issued six keys, and in order to complete a transaction, we need four of six or five of seven keys to sign the transaction in order for it to process. Is that what you meant when you said threshold?

Riad:

So there are a couple different things that have that character. There’s a true threshold signature scheme. If you go up to a cryptographer and say, “Tell me about threshold signature schemes,” what they’ll tell you is, “All right, here’s a protocol for M of N or K of N.” So we have N players, let’s say seven players, and at least four of those players have to agree in order to generate a signature. Once they’ve generated the signature, it looks exactly like any other, let’s say, ECDSA signature or EdDSA signature or whatever it is. So you build a threshold signature scheme for a particular normal, non-threshold signature scheme: here’s threshold ECDSA, or something like that. Now that is sort of cool in that basically nobody can tell what you’re doing. You just use your threshold scheme, and then what comes out is a signature that could have come out of MetaMask. And the nice thing about that is that it’s compatible with everything automatically.

The downside is those are very, very complicated protocols, especially for ECDSA, which is this sort of horrific signature scheme that we never should have used, but came about because of patents. Anyway, ECDSA is a disaster; threshold ECDSA is more of a disaster, and so we’re all kind of suffering for it. But that’s one thing.
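To make the K-of-N idea concrete, here is a toy Python sketch of Shamir secret sharing. This is not a threshold signature scheme and certainly not what Cubist ships; it only illustrates the reconstruction principle Riad describes, where any K of N shares recover a secret.

```python
import random

# Toy K-of-N secret sharing over a prime field (illustration only).
P = 2**127 - 1  # a Mersenne prime, used here as the field modulus

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total
```

With `make_shares(secret, 4, 7)`, any four of the seven shares reconstruct the secret; three or fewer reveal nothing about it. Real threshold signing protocols never reconstruct the key in one place, which is exactly what makes them so much harder.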

The other approach we tend to see is something like Gnosis Safe, which is a multi-sig scheme. There’s some machinery. Sometimes it’s on chain. Sometimes it’s off chain. But there’s some machinery that says, “Okay, here’s a list of seven signers. If I see four signatures out of these seven, then I am willing to release some transaction.” So with Gnosis Safe, you can tell it to do that. You can also build multi-sigs on Bitcoin and all of this. There, it’s an unmodified signing protocol. You use Ledger or MetaMask or whatever it is to generate the signature, but then there’s a special processing step to check that I’ve got four of seven signatures and then to release something else, some signature or some transaction or something. And usually that one is not compatible. With something like Gnosis Safe, I can tell that this is a smart contract wallet and not an EOA, an externally owned account, not something that’s maybe in MetaMask or Ledger or something.

So our approach for this kind of thing in CubeSigner is actually to kind of give you the merging of those two worlds. So what we have is you can actually do kind of threshold signing of certain kinds, and there are certain instances where you really do want that. For example, if I have two parties, two different companies that both have to participate, that’s a case where you truly could use something like threshold signing and nothing else will get the job done. Because really threshold signing has this adversarial nature to it. The players don’t want to cooperate with one another, and yet they still do. So that’s actually a slightly different setting from the way that we see most threshold signature schemes used today. So sometimes you want that.

But most of the time what you really want is, well, Bob and Carol and Elaine, all in finance, two of those three need to agree in order to send a payment to our supplier. And for a case like that, what you really want is, well, they can carry around YubiKeys or something or push a button on their phone or whatever it is. And once they’ve done that, once they’ve given that approval, which under the hood really is just sending a signature, then the secure hardware is the thing that actually checks and says, okay, now I have the two of three signatures and now I will produce a normal ECDSA signature or a normal EdDSA signature or whatever it is. So now you kind of get the best of both worlds, because you have on the one hand the simplicity of a multi-signature scheme and the flexibility to use something like a YubiKey or Google Authenticator to be the approval mechanism, but you also get the compatibility of, at the end, generating a normal standard signature.
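The "approvals in, one ordinary signature out" pattern can be sketched in a few lines. Everything here is a stand-in: HMACs play the role of both the YubiKey approvals and the final ECDSA/EdDSA signature, and the names and keys are invented for illustration; this is not CubeSigner's API.

```python
import hmac
import hashlib

# Toy stand-ins: each approver's device key, and the key the secure
# hardware holds (which, in the real system, never leaves the enclave).
APPROVERS = {"bob": b"k1", "carol": b"k2", "elaine": b"k3"}
ENCLAVE_KEY = b"secret-that-never-leaves-the-enclave"
THRESHOLD = 2  # two of the three finance folks must agree

def approve(name, tx):
    """An approver's device 'signs off' on the transaction."""
    return hmac.new(APPROVERS[name], tx, hashlib.sha256).hexdigest()

def enclave_sign(tx, approvals):
    """Verify a 2-of-3 quorum, then emit one ordinary signature."""
    valid = {n for n, sig in approvals
             if n in APPROVERS and hmac.compare_digest(sig, approve(n, tx))}
    if len(valid) < THRESHOLD:
        raise PermissionError("quorum not met")
    return hmac.new(ENCLAVE_KEY, tx, hashlib.sha256).hexdigest()
```

The key property: whatever collected the approvals, the output is a single standard signature, so anything downstream sees a perfectly ordinary account.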

Ian:

That seems really powerful. The case that I was thinking about, did you follow the Axie Infinity hack?

Riad:

Absolutely.

Ian:

So for people that don’t remember, Axie operated this bridge that would allow you to move assets from the Ethereum blockchain into the game and vice versa. At the peak of popularity, this bridge called the Ronin Bridge had in excess of half a billion dollars of assets on the bridge, and they operated one of these M of N schemes to allow funds to process to transfer in and out of the game across the bridge. But it turned out that of the six or seven node validators that comprised the bridge where you needed five of seven, let’s say, I don’t remember the exact number, six of them were in the same zone of Google Cloud, same availability zone, same VPC all managed and controlled with the same administrator credentials by the same admin.

And so it turned out Lazarus Group only needed to compromise one individual and then had the ability to validate any transaction across that bridge. And of course, that’s exactly what they did. They made off with 3 or 400 million worth of funds initially. I think that what you’re describing here would potentially fix that problem even if they still had the conditions of effectively no decentralization of the validators. But I want to confirm that that might be true, or maybe I’m overreaching a little bit here.

Riad:

So basically, if you misconfigure it badly enough, if you’re willing to just hand over the keys to the attacker, which is essentially … That’s not exactly what happened, but that’s close to what happened, then there’s very little that anything could really do. But yeah, I think there are a couple of things that you can say are improvements, are ways to make improvements and enforce those improvements. So one of them is enforcing that this approval signature must come from a YubiKey. That’s a piece of hardware, you can buy these things for 50 bucks, they’re fantastic, log into Google with them and whatever. To the listeners: if you don’t already own one, please get one. They’re fantastic. So you can enforce, you can say, well, this must be a YubiKey, and the YubiKey can actually produce a little signature that says, “I’m really a real YubiKey. You can check.”

And so now you have this very strong statement. It’s not just three random keys lying around on some machine in AWS or wherever that are going to approve the transactions over the bridge. It’s actually three keys that are in three physical devices, and we can get very strong proof that those keys are in the physical devices and only exist there. And so that gives you one level of improvement already, because now you know, well, it’s not just break into one machine. It’s go to my house and steal my YubiKey and get the PIN for it and all of this. So that’s one thing that you can do.

And then the other thing that you can do beyond that is, and actually this is kind of a nice collaboration with Chainalysis here. One thing that I think we know went wrong with the Axie Infinity hack was there was basically no monitoring, no looking around and saying, “Well, does this look right? Is this anomalous? All the money’s gone?” And apparently they didn’t realize for multiple days that all the money was gone.

Ian:

Days, yeah.

Riad:

Which is just like, okay, that’s a more fundamental failure, because my bridge should behave in a certain way. The money going out should equal the money coming in minus fees or whatever it is.

Ian:

Yeah.

Riad:

If that kind of thing is broken, then we know something is really wrong and we should shut the whole thing down. Even if we don’t know exactly what’s wrong, if we detect something is wrong, we should be careful and shut it down. And so with a system like ours where you have these very flexible policies, you can actually encode things like before signing, please check that some condition holds, and if not, no, go to some fail-safe state. And so in the case of something like Axie, and actually we’ve worked with the folks at Lombard who are building a Bitcoin LST, we’ve worked with them on exactly this kind of thing where they have very careful controls around, here are the funds on one side of the universe, here are the funds on the other side of the universe, and those better equal one another. And if they don’t, then everything shuts off automatically.

And that kind of thing is super, super, super important. Whenever you’re building a bridge, make sure that you enforce the balance sheet invariant automatically because no matter what, if somebody’s making off with the money, they’re going to violate that invariant and you should be able to detect it before all the money is gone, we hope.
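The balance-sheet invariant Riad describes can be sketched as a pre-signing check with a fail-safe. The class, field names, and halt behavior below are invented for illustration; this is not Cubist's or Lombard's actual policy code.

```python
# Toy sketch of a "balance-sheet invariant" pre-signing policy: the
# bridge may never release more than was locked in, minus fees.
class BridgePolicy:
    def __init__(self):
        self.halted = False

    def check(self, locked_in, released, fees, amount):
        """Run before every signature; trip the fail-safe on violation."""
        if self.halted or released + amount > locked_in - fees:
            self.halted = True  # stop signing until a human intervenes
            return False
        return True
```

Because any theft necessarily violates the invariant, even this crude check converts "all the money is gone and nobody noticed for days" into "the bridge refused to sign and shut itself off."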

Ian:

Yeah, you’re reminding me that I think that’s exactly the transaction they carried out. They were able to sign an invalid transaction that said money had come in that didn’t exist. No money came in, and so therefore I’m allowed to withdraw all the money to the other side. I did want to clarify: the YubiKeys and a human logging in seems like it works in the case of a human-monitored system, but obviously the bridge is, at least in theory, an autonomous system, as a lot of dapp protocols are designed to be. It’s like, “Oh yeah, we wrote the code, we deployed it, but we no longer intermediate it. It’s now a fully autonomous kind of self-feeding system.” Does CubeSigner still work in that model?

Riad:

Yeah, absolutely.

Ian:

Okay.

Riad:

I called out YubiKeys specifically, but there are sort of industrial-strength versions of these things. I mean, Yubico makes devices called YubiHSMs that have a similar character, but they’re for sort of automated workloads. You can use something like KMS in AWS or the equivalents elsewhere. I think GCP also calls it KMS, and Azure I’m sure calls it something similar.

Ian:

Yeah.

Riad:

These all give you this kind of character where the keys are stored in secure hardware and there’s some automation around their use in the case of KMS or whatever. But you still get this very strong guarantee that the keys, the physical keys are kept … Rather, the keys are a physical object, essentially. They’re inside of a piece of hardware. And so with a YubiKey, and if you’re willing to forego automation, then somebody literally has to push a button or maybe there’s a biometric or something like that, that’s super hard to get around. But even in the case of something like using an HSM, using a KMS, one of these services, you can put in place a lot of these same kinds of restrictions.

So you can be sure, for example, that the Lazarus Group can’t just take the key material away, which is basically what they were able to do in this case, in the case of the Axie Infinity bridge was they were able to break into the machines that were generating the signatures, steal the keys, and then go off and generate signatures on their own that let them sort of take the money away. So at the very least, you can go from this kind of offline attack to forcing them to be an online attack, which gives you more opportunities for defense.

And then on top of that, you can impose additional requirements. So for example, we have other policies that you could also apply in parallel. You could say, for example, the bridge is allowed to send this much per day, and anything beyond that is going to require some kind of administrative intervention. And I think if we looked at the amount of money flowing through that bridge on a daily basis, it was far less, even for a really popular bridge, far less than the amount that Lazarus Group was able to make off with in five minutes. And so these kinds of really almost too-simple-to-be-useful … It’s like intuitively, oh my gosh, is it really that simple? The answer is yes. If you can enforce even those very simple properties, that’s already a huge improvement to the security posture.
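The daily-limit policy is as simple as it sounds. A toy sketch, with an interface invented for illustration (real deployments would persist state and handle clock edge cases):

```python
from collections import defaultdict
from datetime import date

# Toy per-day spend cap: "the bridge is allowed to send this much per
# day, and anything beyond that requires administrative intervention."
class DailyLimitPolicy:
    def __init__(self, limit):
        self.limit = limit
        self.spent = defaultdict(int)  # running total per day

    def allow(self, amount, today=None):
        today = today or date.today()
        if self.spent[today] + amount > self.limit:
            return False  # over the cap: escalate to an admin
        self.spent[today] += amount
        return True
```

An attacker who steals the signing machinery but not the admin path is now rate-limited to one day's normal flow instead of the whole treasury.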

Ian:

Yeah, we talked earlier about the policies that I can create around the keys. One of the things that seems to trip us up on security in Web3 in general is that most of the code is written in Solidity. That’s the default language for the EVM, and Solidity, from my non-professional software developer perspective, is basically JavaScript, so it’s not type safe, and you end up with all these attacks that kind of boil down to code that was written poorly. Re-entrancy vulnerabilities, for example, seem almost entirely down to just the construction of the code itself. How am I writing policies in CubeSigner? What’s the developer or the user experience to build this? And hopefully we’re not writing it in Solidity. That’s all I have to say.

Riad:

Yes. For better or for worse, we do not support Solidity as a policy language.

Ian:

Great.

Riad:

So there are a few different options. The most basic one is we have a set of predefined policies that you can choose from a menu, and those are basically the standard things: can only send a transaction of this amount, can only send to these receivers. And then we actually have some really, really complicated ones. So for example, we have support for Babylon, which is Bitcoin staking, and there we have this entire policy around how the Babylon staking works, and you can specify things like, well, only these providers are usable and only this size of … Anyway, those policies, even the menu ones, the sort of push-the-button ones, actually can get-

Ian:

We actually just had David from Babylon Chain on the podcast, so we’ve gone deep on their staking platform and what they’re doing there, which was mind-bending for me to think about staking Bitcoin to secure other networks.

Riad:

Babylon is so cool. It’s such cool technology. I was fortunate to be able to really deep dive on the way that they implement all of their contracts and all of this, and it’s just super cool. I’m going to stop myself because I could nerd out for days about it. So that’s one option.

The other option, and this one is kind of coming online now, is writing policies in an arbitrary programming language. Internally we run them as Wasm, but the idea here is basically: if you can express it and compile it to Wasm, you can make it a policy. You can use Rust, or if you want to, you could use TypeScript or whatever. Internally, we’re fully a Rust shop, and so of course we would always tell people, “Sure, use Rust, it’s great,” but we recognize that that’s not for everyone, and we want to be language agnostic here.

The idea is really, you should be able to define policies in a way that makes sense for you, and probably you have a code base that you are already referencing. It’s like if you have functionality in your code base for whatever it is, computing balances or checking on various invariants that should hold for your system, then you should be able to reuse that code. Because if you don’t, replicating code is just another way to make bugs. So from our perspective, really giving that flexibility is super, super important and gives you the most that you could ever hope for, which is really like you get an arbitrary program that you can run essentially on the key before it ever generates a signature.
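Conceptually, a policy is just a pure function: signing request in, allow or deny out. In a CubeSigner-style system it would be compiled to Wasm and run next to the key before any signature is produced; the sketch below uses plain Python, and every field name and value is invented for illustration.

```python
import json

# Hypothetical allow-list and cap; nothing here reflects a real API.
ALLOWED_RECEIVERS = {"0xa11ce", "0xb0b"}
MAX_VALUE = 10_000

def policy(request_json):
    """Return True only if the signing request passes every rule."""
    req = json.loads(request_json)
    return (req.get("to") in ALLOWED_RECEIVERS
            and req.get("value", 0) <= MAX_VALUE)
```

Because the policy is ordinary code, it can call the same balance or invariant checks the rest of your code base already uses, which is exactly the reuse argument Riad makes.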

Ian:

That’s awesome. I’m so glad the answer wasn’t, “Well, it’s actually Solidity.”

Riad:

I don’t actually know if you can compile Solidity to Wasm, but if you can, no, no, don’t quote me on that.

Ian:

I’m not familiar actually if there is a Wasm runtime for Solidity. I don’t know the answer to that either. I’m going to have to go look.

Riad:

I bet that somebody has built one just because there’s some chain somewhere that’s using Wasm under the hood and wants to have some Solidity emulation layer on top.

Ian:

Yeah, it’s possible.

Riad:

Yeah, I’d be interested to see. It sounds slightly scary. Hey, that’s cool.

Ian:

Yeah. So as we’re winding down the conversation, give us a sense for where the product is today. Are you guys in general availability? If I’m building or even just if I wanted to secure my pile of crypto assets here, can I start using the product today? Is it generally available?

Riad:

Yeah, yeah, yeah. Get in touch with us. We’ve been in production for a long time now. Oh my gosh, a couple of years almost.

Ian:

Awesome.

Riad:

And we have, I don’t know at this point, lots of assets under management. I shouldn’t say AUM because we’re not managing anything, but I think people think about it in those terms, but our customers-

Ian:

Lots of dollars protected by the technology.

Riad:

Exactly. Right. Yeah. We have individual customers with hundreds of millions or even billions of dollars protected. So yeah, absolutely. And we always love to talk to people about interesting, weird use cases. So if you’re like, “Oh, probably doesn’t work for me, I have some special …” Please do get in touch in that case, I want to hear the weird stuff. This is cool.

Ian:

Yeah, amazing. And then my customary closing question is always about the future. So what’s on the roadmap over the next year that you’re really excited about that we should all keep an eye out for?

Riad:

The most important one from my perspective is going from … So if you think about the way that we build smart contracts today, you write some code, it runs on chain and it can generate transactions, et cetera. But basically a smart contract can’t work over secrets. This is only partially true. With enough heavy-duty cryptography, you can make things work, but basically any smart contract that a mere mortal like me writes is going to be processing transactions and assets, but is not going to be working over secrets. And so this actually has some limitations in terms of what you can build. One example here: when we talked about multi-sig versus threshold signature schemes, the reason that an on-chain multi-sig can’t in the end produce a signature that looks like any other ECDSA signature or whatever is that the on-chain multi-sig, by its nature, can’t store a secret and generate a signature using a secret key.

So as we move towards these policies that are essentially arbitrary computation, you can run anything you want, of course up to some time bound or whatever it might be, now you actually have something that goes beyond what a smart contract can do, because it can actually work with secrets. Your policy is fundamentally controlling a secret. So basically now you have this sort of hardware-enshrined functionality where you can actually prove to somebody else: this ran on some secure hardware, here’s the attestation that says it did, and this was the result of the computation, some result that needed a secret in order to be computed, like a digital signature or whatever it might be. This to me is a really cool future direction, because now you’re sort of supercharging smart contracts: you’ve got a smart contract that’s hardware enshrined and therefore can compute over secrets.

And so I think this is where we’re going to start seeing really, really cool functionality that is able to extend chains, especially chains like Bitcoin where the on-chain functionality is pretty limited. But really for any chain, you get a little bit of extra, or a lot of extra, power in that now your smart contracts can include secrets that only your “smart contract” can compute over. And so when you combine that with the on-chain smart contract functionality, now you’re able to build things that you weren’t able to before. To me, that’s one really, really important direction that I think everyone’s going to start going in. We’ve already seen the folks over at Flashbots, for example, are very interested in exactly this kind of stuff, where they’re running certain computations inside of secure hardware in order to get these kinds of attestations. I think we’re seeing this across the industry more and more: we’re able to get this really, really interesting functionality by adding secure hardware and hardware-enshrined functionality to the functionality that we already have from blockchains. That’s one direction.

The other one very, very briefly is I think there’s still, as everyone knows, still a lot of room in terms of making Web3 more usable for my dad, for the average user. And to me, the whole area, we all get to decide among ourselves, do we want this to be sort of a place where only the hardcore bleeding edge fanatics go, or do we want it to be something that everyone uses the way that the internet is? And so to me, the answer is very obvious, and part of the way to get from here to there, to the point where my dad is sending transactions like it’s nothing is we actually have to give up a little bit on our egos here. We have to be able to say, “Yeah, it’s all complicated, but we need to be able to break it down so that mere mortals can use it,” and that doesn’t make us less. I think there’s a little bit of, I don’t know what it is, like machismo or something like, “Well, I had to suffer in order to send my first transaction, so you should too.”

I think we have to give up on that and say, “Yeah, I suffered, but you don’t have to,” and we’re going to make it easier for you. And once we do that, I think the area just becomes much more interesting, because now there are applications that today we can describe. For example, I buy my tickets and I get them as an NFT. That works for, I don’t know, 150,000 people in the whole world. And the rest of them are like, “Yeah, that sounds like future technology that I will never use.” Once everyone is getting their tickets as NFTs, then we will know that we have succeeded. And in order to get from here to there, we basically have to make it usable for everyone.

And so to me, this is the other aspect of CubeSigner that’s super exciting is the way that we are building this infrastructure makes it much, much, much easier to build those kinds of experiences for users. And we’re working with a bunch of folks, I mentioned Avalanche already with their Core Wallet, but we’re working with a bunch of other folks on exactly this kind of experience. And to me, from an end user perspective, once my Web3 wallet is as easy to use as Gmail, now, things will be actually really, really interesting.

Ian:

I am so for that. You just made my day hearing that you’re working on that area. I think it’s the biggest problem in our industry right now is it’s just too hard. Every time I do anything in MetaMask, a bead of sweat trickles down my forehead in terror that I’m sending to the wrong address or on the wrong network or some other crazy thing is happening. So if we can make wallet usability at a level that the average smartphone user can handle, it’ll be an amazing day. Riad, this conversation was fantastic. We could go on for another few hours, I feel like. We’re going to have to have you back on the podcast at some point, but thank you so much for taking us through what you all are building. It’s really exciting.

Riad:

Thank you so much. Normally when people say you could go on for a few more hours, they feel threatened by that, which is just like, oh my God, can you shut up already? Me? Can I shut up? No, I can’t is obviously the answer. But Ian, thank you. This has been so fun. Thank you for letting me go on at length about interesting topics because it’s like, yeah, it’s always interesting and cool to talk with people who really think about these questions from an industry-wide perspective on a day-to-day basis. And so I really, really appreciate your perspective on this.
