📅 Original date posted:2012-12-05 📝 Original message:

Our divergence is on two points (personal opinions):

(1) I don't think there is any real risk of centralizing the network by promoting an SPV (purely-consuming) node to brand-new users. In my opinion (but I'm not as familiar with the networking as you), as long as all full nodes are full-validation, the bottleneck will be computation and bandwidth, long before a constant 10k nodes would be insufficient to support propagating data through the network. In fact, I was under the impression that "connectedness" was the real metric of concern (and the resilience of that connectedness to a large percentage of users disappearing suddenly). If that's true, above a certain number of nodes the connectedness isn't really going to get any better (I know it's not really that simple, but I feel like it is up to 10x the current network size).

(2) I think the current experience *is* really poor. You seem to suggest that the question for these new users is whether they will use a full node or a lite node, but I believe it will be a decision between a lite node or nothing at all (losing interest altogether). Waiting a day for the full node to synchronize, then running into issues like blkindex.dat corruption when their system crashes for some unrelated reason and having to resync for another day... they'll be gone in a heartbeat. Users need to experience, as quickly and easily as possible, that they can move money across the world without signing up for anything or paying any fees. After they understand the value of the system and want to use it, they are much more likely to become educated and willing to support the network with a full node.

-Alan

On 12/04/2012 07:27 PM, Gregory Maxwell wrote:
> On Tue, Dec 4, 2012 at 5:44 PM, Alan Reiner <etotheipi at gmail.com> wrote:
>> Greg's point looks like it's veering towards "we don't want to grow
>> the network unless we're going to get more full nodes out of it."
>
> No…
>
> There is no fundamental competition between taking what actions we can
> to maximize the decentralization of the network and making the
> software maximally friendly and painless to get started with and use.
> It's possible— not even deep rocket science— to create software that
> accommodates both.
>
> And because of this, I don't think it's acceptable to promote
> solutions which may endanger the decentralization that makes the
> system worthwhile in the first place. If the current experience is so
> poor that you'd even consider talking about promoting directions that
> reduce its robustness, then that's evidence that it would be worth
> finding more resources to make the experience better without doing
> anything that weakens the model, even if you've got an argument that
> maybe we can get away with it. If there isn't interest in putting in
> more resources to make these improvements, then maybe the issue isn't
> as bad as we think it is?
>
>> I think it is very much in everyone's interest here to encourage new
>> users to start "using" Bitcoin, even if they don't "support" it.
>
> Absolutely— and yet that has nothing to do with promoting software to
> users which only consumes without directly contributing, and which
> doesn't even have the capability to do so even if the user wants to
> (or much less, is indifferent).
📅 Original date posted:2012-12-05 📝 Original message:

On Tue, Dec 4, 2012 at 9:08 PM, Alan Reiner <etotheipi at gmail.com> wrote:
> Our divergence is on two points (personal opinions):
>
> (1) I don't think there is any real risk of centralizing the network
> by promoting an SPV (purely-consuming) node to brand-new users. In my
> opinion (but I'm not as familiar with the networking as you), as long
> as all full nodes are full-validation, the bottleneck will be
> computation and bandwidth, long before a constant 10k nodes would be
> insufficient to support propagating data through the network.

Not so— a moderately fast multicore desktop machine can keep up with the maximum possible validation rate of the Bitcoin network, and the bandwidth has a long-term maximum rate of about 14 kbit/sec— though you'll want at least ten times that for convergence stability and the ability to feed multiple peers.

Here are the worst blocks on testnet3 (which has some intentionally constructed maximum-sized blocks), on an E31230 with the new parallel validation code:

- Verify 2166 txins: 250.29ms (0.116ms/txin)
- Verify 3386 txins: 1454.25ms (0.429ms/txin)
- Verify 5801 txins: 575.46ms (0.099ms/txin)
- Verify 6314 txins: 625.05ms (0.099ms/txin)

Even the slowest one _validates_ at 400x realtime. (These measurements are probably a bit noisy— but the point is that it's fast.) (The connecting is fast too, but that's obvious with such a small database.)
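Both headline figures are easy to sanity-check. A minimal sketch of the arithmetic (assuming Python 3; the 1 MB maximum block size and the 600-second target interval are the protocol constants, and the 1454.25 ms figure is the worst measurement above):

    MAX_BLOCK_BYTES = 1000000   # protocol maximum block size
    BLOCK_INTERVAL_S = 600      # target seconds between blocks

    # Long-term bandwidth ceiling: one maximum-size block per interval.
    kbit_per_sec = MAX_BLOCK_BYTES * 8 / BLOCK_INTERVAL_S / 1000
    print("max sustained rate: %.1f kbit/sec" % kbit_per_sec)    # ~13.3

    # Validation headroom: the worst measured block verified in 1454.25 ms.
    worst_verify_s = 1.45425
    speedup = BLOCK_INTERVAL_S / worst_verify_s
    print("validation speed: %.0fx realtime" % speedup)          # ~413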
Although I haven't tested leveldb+ultraprune with a really enormous txout set, or generally with sustained maximum load, there may be other gaffes in the software that get exposed under sustained load— but they'd all be correctable. Sounds like some interesting stuff to test on a testnet fork that has the POW test disabled.

Syncing up a node that has fallen behind can take a while, but keep in mind that you're expecting to sync up weeks of network work in hours. Even 'slow' is quite fast.

> In fact, I was under the impression that "connectedness" was the real
> metric of concern (and the resilience of that connectedness to a large
> percentage of users disappearing suddenly). If that's true, above a
> certain number of nodes the connectedness isn't really going to get
> any better (I know it's not really that simple, but I feel like it is
> up to 10x the current network size).

That's not generally a concern for me. There are a number of DOS-attack risks, but attacker-linear DOS attacks aren't generally avoidable and they don't persist.

One of the connectedness concerns I do have is that a sybil attacker could spin up enormous numbers of nodes and then use them to partition large miners. So, e.g., find BitTaco's node(s) and the nodes of miners covering 25% of the hashpower, and get them into a partition separate from the rest of the network. The attacker then feeds double-spends to that partition and uses them to purchase an unlimited supply of digitally delivered tacos— allowing the captured miners to build an ill-fated fork— and drops the partition once the goods are delivered.

But there is no number of full nodes that removes this concern, especially if you allow for attackers who have compromised ISPs. It can be adequately addressed by a healthy darknet of private authenticated peerings between miners and other likely targets. I've also thrown out some ideas on using merged-mined node IDs to make some kinds of sybil attacks harder... but it'll be interesting to see how the deployment of ASICs influences the concentration of hashpower— it seems like there has already been a substantial move away from the largest pools. Less hashpower consolidation makes attacks like this less worrisome.

> (2) I think the current experience *is* really poor.

Yes, I said so specifically. But the fact that people are flapping their lips here instead of testing the bitcoin-qt git master— which is a 1-2 order-of-magnitude improvement— suggests that perhaps I'm wrong about that. Certainly the dearth of people testing and making bug reports suggests people don't actually care that much.

> You seem to suggest that the question for these new users is whether
> they will use a full node or a lite node, but I believe it will be a
> decision between a lite node or nothing at all (losing interest
> altogether).

No. The "question" that I'm concerned with is: do we promote lite nodes as an equally good option— even for high-end systems— remove the incentive for people to create, improve, and adopt more useful full-node software, and forever degrade the security of the system?

> Waiting a day for the full node to synchronize, and then run into
> issues like blkindex.dat corruption when their system crashes for some
> unrelated reason and they have to resync for another day... they'll be
> gone in a heartbeat.

The current software patches plus parallelism can sync on a fast system with good network access (or a local copy of the data) in under an hour. This is no replacement for starting as SPV, but nor are handicapped client programs a replacement for making fully capable ones perform acceptably. (A sketch of the header check an SPV client performs appears at the end of this message.)

> Users need to experience, as quickly and easily as possible, that they
> can move money across the world, without signing up for anything or
> paying any fees.

Making all the software painless for users is a great goal— and one I share. I still maintain that it has nothing to do with promoting less capable and less secure software to users.
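As a footnote on what "start as SPV" buys: an SPV client skips transaction validation but still checks proof of work on the 80-byte header chain. A minimal sketch of that check (an illustration assuming Python 3, not code from any particular client; real clients also verify that headers link together and that nBits follows the difficulty rules):

    import hashlib
    import struct

    def header_meets_target(header80):
        """True if an 80-byte block header satisfies its own nBits target."""
        assert len(header80) == 80
        # nBits is at byte offset 72: version(4) + prev(32) + merkle(32) + time(4).
        (nbits,) = struct.unpack_from("<I", header80, 72)
        # Compact encoding: top byte is a base-256 exponent, low 3 bytes the mantissa.
        exponent = nbits >> 24
        mantissa = nbits & 0x007fffff
        target = mantissa << (8 * (exponent - 3))   # assumes exponent >= 3
        # Proof of work: double SHA-256 of the header, read as a little-endian integer.
        digest = hashlib.sha256(hashlib.sha256(header80).digest()).digest()
        return int.from_bytes(digest, "little") <= target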