A new whitepaper analysing the performance and scalability of the Streamr pub/sub messaging Network is now available. Take a look at some of the fascinating key results in this introductory blog

Streamr Network: Performance and Scalability Whitepaper


https://preview.redd.it/bstqyn43x4j51.png?width=2600&format=png&auto=webp&s=81683ca6303ab84ab898c096345464111d674ee5
The Corea milestone of the Streamr Network went live in late 2019. Since then a few people in the team have been working on an academic whitepaper to describe its design principles, position it with respect to prior art, and prove certain properties it has. The paper is now ready, and it has been submitted to the IEEE Access journal for peer review. It is also now published on the new Papers section on the project website. In this blog, I’ll introduce the paper and explain its key results. All the figures presented in this post are from the paper.
The reasons for doing this research and writing this paper were simple: many prospective users of the Network, especially more serious ones such as enterprises, ask questions like ‘how does it scale?’, ‘why does it scale?’, ‘what is the latency in the network?’, and ‘how much bandwidth is consumed?’. While some answers could be provided before, the Network in its currently deployed form is still small-scale and can’t yet show a track record of scalability, for example, so there was clearly a need to produce some in-depth material about the structure of the Network and its performance at a large, global scale. The paper answers these questions.
Another reason is that decentralized peer-to-peer networks have experienced a new renaissance due to the rise of blockchain networks. Peer-to-peer pub/sub networks were a hot research topic in the early 2000s, but not many real-world implementations were ever created. Today, most blockchain networks use methods from that era under the hood to disseminate block headers, transactions, and other events important for them to function. Other megatrends like IoT and social media are also creating demand for new kinds of scalable message transport layers.

The latency vs. bandwidth tradeoff

The current Streamr Network uses regular random graphs as stream topologies. ‘Regular’ here means that nodes connect to a fixed number of other nodes that publish or subscribe to the same stream, and ‘random’ means that those nodes are selected randomly.
Random connections can of course mean that absurd routes get formed occasionally, for example a data point might travel from Germany to France via the US. But random graphs have been studied extensively in the academic literature, and their properties are not nearly as bad as the above example sounds — such graphs are actually quite good! Data always takes multiple routes in the network, and only the fastest route counts. The less-than-optimal routes are there for redundancy, and redundancy is good, because it improves security and churn tolerance.
There is an important parameter called node degree, which is the fixed number of nodes to which each node in a topology connects. A higher node degree means more duplication and thus more bandwidth consumption for each node, but it also means that fast routes are more likely to form. It’s a tradeoff; better latency can be traded for worse bandwidth consumption. In the following section, we’ll go deeper into analyzing this relationship.

Network diameter scales logarithmically

One useful metric to estimate the behavior of latency is the network diameter, which is the number of hops on the shortest path between the most distant pair of nodes in the network (i.e. the “longest shortest path”). The below plot shows how the network diameter behaves depending on node degree and number of nodes.

Network diameter
We can see that the network diameter increases logarithmically (very slowly), and a higher node degree ‘flattens the curve’. This is a property of random regular graphs, and this is very good — growing from 10,000 nodes to 100,000 nodes only increases the diameter by a few hops! To analyse the effect of the node degree further, we can plot the maximum network diameter using various node degrees:
Network diameter in network of 100 000 nodes
We can see that there are diminishing returns for increasing the node degree. On the other hand, the penalty (number of duplicates, i.e. bandwidth consumption) increases linearly with node degree:

Number of duplicates received by the non-publisher nodes
In the Streamr Network, each stream forms its own separate overlay network and can even have a custom node degree. This allows the owner of the stream to configure their preferred latency/bandwidth balance (imagine such a slider control in the Streamr Core UI). However, finding a good default value is important. From this analysis, we can conclude that:
  • The logarithmic behavior of network diameter leads us to hope that latency might behave logarithmically too, but since the number of hops is not the same as latency (in milliseconds), the scalability needs to be confirmed in the real world (see next section).
  • A node degree of 4 yields good latency/bandwidth balance, and we have selected this as the default value in the Streamr Network. This value is also used in all the real-world experiments described in the next section.
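To make the logarithmic-diameter behaviour above concrete, here is a small sketch (not from the paper; it assumes the networkx library and uses illustrative network sizes) that builds random regular graphs and prints their diameters:

```python
# Sketch: diameter of random d-regular graphs vs. network size.
# Assumes networkx; the sizes and degrees below are illustrative only.
import networkx as nx

for degree in (4, 8):
    for n in (256, 1024, 4096):
        g = nx.random_regular_graph(degree, n, seed=42)
        # nx.diameter() requires a connected graph; random regular graphs
        # with degree >= 3 are connected with high probability.
        print(f"degree={degree} nodes={n} diameter={nx.diameter(g)}")
```

Growing the node count by an order of magnitude should add only a hop or two, matching the curves above.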
It’s worth noting that in such a network, the bandwidth requirement for publishers is determined by the node degree and not the number of subscribers. With a node degree of 4 and a million subscribers, the publisher only uploads 4 copies of a data point, and the million subscribing nodes share the work of distributing the message among themselves. In contrast, a centralized data broker would need to push out a million copies.

Latency scales logarithmically

To see if actual latency scales logarithmically in real-world conditions, we ran large numbers of nodes in 16 different Amazon AWS data centers around the world. We ran experiments with network sizes from 32 to 2048 nodes. Each node published messages to the network, and we measured how long it took for the other nodes to get the message. The experiment was repeated 10 times for each network size.
The below image displays one of the key results of the paper. It shows a CDF (cumulative distribution function) of the measured latencies across all experiments. The y-axis runs from 0 to 1, i.e. 0% to 100%.
CDF of message propagation delay
From this graph we can easily read things like: in a 32-node network (blue line), 50% of message deliveries happened within 150 ms globally, and all messages were delivered in around 250 ms. In the largest network of 2048 nodes (pink line), 99% of deliveries happened within 362 ms globally.
To put these results in context, PubNub, a centralized message brokering service, promises to deliver messages within 250 ms — and that’s a centralized service! Decentralization comes with unquestionable benefits (no vendor lock-in, no trust required, network effects, etc.), but if such protocols are inferior in terms of performance or cost, they won’t get adopted. It’s pretty safe to say that the Streamr Network is on par with centralized services even when it comes to latency, which is usually the Achilles’ heel of P2P networks (think of how slow blockchains are!). And the Network will only get better with time.
Then we tackled the big question: does the latency behave logarithmically?
Mean message propagation delay in Amazon experiments
Above, the thick line is the average latency for each network size. From the graph, we can see that the latency grows logarithmically as the network size increases, which means excellent scalability.
The shaded area shows the difference between the best and worst average latencies in each repeat. Here we can see the element of chance at play; due to the randomness in which nodes become neighbours, some topologies are faster than others. Given enough repeats, some near-optimal topologies can be found. The difference between average topologies and the best topologies gives us a glimpse of how much room for optimisation there is, i.e. with a smarter-than-random topology construction, how much improvement is possible (while still staying in the realm of regular graphs)? Out of the observed topologies, the difference between the average and the best observed topology is between 5–13%, so not that much. Other subclasses of graphs, such as irregular graphs, trees, and so on, can of course unlock more room for improvement, but they are different beasts and come with their own disadvantages too.
It’s also worth asking: how much worse is the measured latency compared to the fastest possible latency, i.e. that of a direct connection? While having direct connections between a publisher and subscribers is definitely not scalable, secure, or often even feasible due to firewalls, NATs and such, it’s still worth asking what the latency penalty of peer-to-peer is.

Relative delay penalty in Amazon experiments
As you can see, this plot has the same shape as the previous one, but the y-axis is different. Here, we are showing the relative delay penalty (RDP). It’s the latency in the peer-to-peer network (shown in the previous plot), divided by the latency of a direct connection measured with the ping tool. So a direct connection equals an RDP value of 1, and the measured RDP in the peer-to-peer network is roughly between 2 and 3 in the observed topologies. It increases logarithmically with network size, just like absolute latency.
Again, given that latency is the Achilles’ heel of decentralized systems, that’s not bad at all. It shows that such a network delivers acceptable performance for the vast majority of use cases, only excluding the most latency-sensitive ones, such as online gaming or arbitrage trading. For most other use cases, it doesn’t matter whether it takes 25 or 75 milliseconds to deliver a data point.

Latency is predictable

It’s useful for a messaging system to have consistent and predictable latency. Imagine for example a smart traffic system, where cars can alert each other about dangers on the road. It would be pretty bad if, even minutes after publishing it, some cars still haven’t received the warning. However, such delays easily occur in peer-to-peer networks. Everyone in the crypto space has seen first-hand how plenty of Bitcoin or Ethereum nodes lag even minutes behind the latest chain state.
So we wanted to see whether it would be possible to estimate the latencies in the peer-to-peer network if the topology and the latencies between connected pairs of nodes are known. We applied Dijkstra’s algorithm to compute estimates for average latencies from the input topology data, and compared the estimates to the actual measured average latencies:
Mean message propagation delay in Amazon experiments
We can see that, at least in these experiments, the estimates seemed to provide a lower bound for the actual values, and the average estimation error was 3.5%. The measured value is higher than the estimated one because the estimation only considers network delays, while in reality there is also a little bit of a processing delay at each node.
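As a rough illustration of this estimation approach (an assumption-laden sketch, not the code used in the paper), shortest-path delays over a weighted topology can be computed with Dijkstra's algorithm, for example via networkx; the topology and link latencies below are made up:

```python
# Sketch: estimate propagation delays from a known topology whose edges are
# weighted by measured link latencies (ms). Hypothetical nodes and weights.
import networkx as nx

g = nx.Graph()
g.add_weighted_edges_from([
    ("helsinki", "frankfurt", 25.0),
    ("frankfurt", "virginia", 45.0),
    ("virginia", "tokyo", 75.0),
    ("helsinki", "virginia", 55.0),
    ("frankfurt", "tokyo", 120.0),
])

publisher = "helsinki"
# Fastest-route delay from the publisher to every other node.
delays = nx.single_source_dijkstra_path_length(g, publisher, weight="weight")
others = {n: d for n, d in delays.items() if n != publisher}
print(others)
print(f"estimated mean delay: {sum(others.values()) / len(others):.1f} ms")
```

In the paper's setting this kind of estimate acts as a lower bound, since per-node processing time is not modelled.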

Conclusion

The research has shown that the Streamr Network can be expected to deliver messages in roughly 150–350 milliseconds worldwide, even at a large scale with thousands of nodes subscribing to a stream. This is on par with centralized message brokers today, showing that the decentralized and peer-to-peer approach is a viable alternative for all but the most latency-sensitive applications.
It’s thrilling to think that by accepting a latency only 2–3 times longer than the latency of an unscalable and insecure direct connection, applications can interconnect over an open fabric with global scalability, no single point of failure, no vendor lock-in, and no need to trust anyone — all that becomes available out of the box.
In the real-time data space, there are plenty of other aspects to explore, which we didn’t cover in this paper. For example, we did not measure throughput characteristics of network topologies. Different streams are independent, so clearly there’s scalability in the number of streams, and heavy streams can be partitioned, allowing each stream to scale too. Throughput is mainly limited, therefore, by the hardware and network connection used by the network nodes involved in a topology. Measuring the maximum throughput would basically be measuring the hardware as well as the performance of our implemented code. While interesting, this is not a high priority research target at this point in time. And thanks to the redundancy in the network, individual slow nodes do not slow down the whole topology; the data will arrive via faster nodes instead.
Also out of scope for this paper is analysing the costs of running such a network, including the OPEX for publishers and node operators. This is a topic of ongoing research, which we’re currently doing as part of designing the token incentive mechanisms of the Streamr Network, due to be implemented in a later milestone.
I hope that this blog has provided some insight into the fascinating results the team uncovered during this research. For a more in-depth look at the context of this work, and more detail about the research, we invite you to read the full paper.
If you have an interest in network performance and scalability from a developer or enterprise perspective, we will be hosting a talk about this research in the coming weeks, so keep an eye out for more details on the Streamr social media channels. In the meantime, feedback and comments are welcome. Please add a comment to this Reddit thread or email [[email protected]](mailto:[email protected]).
Originally published by Henri at blog.streamr.network on August 24, 2020.
submitted by thamilton5 to streamr

Understanding Tether: Why it accounts for a substantial part of the crypto market cap and why it's the #1 outstanding issue in crypto markets today

In this post I will go in-depth on:
  1. How Tether got to be what it is today
  2. Why Tether's influence is a lot more than the 0.5% of total crypto market cap you see on CoinMarketCap
  3. Tether printing timing
  4. Tether reserves
  5. What could happen to the market if Tether is found to not be backed by reserves
Tether is incredibly important to the cryptocurrency market ecosystem and I've noticed far too few people understand what is going on.
Very little actual discussion of the 2nd biggest crypto by volume happens here, and whenever someone starts a discussion they most often get slapped for "FUD". Tether themselves recently hired the major New York based PR firm 5W to spread positive information online and take down critics; I'm sure some of their operatives are probably on Reddit.
But it's absolutely critical that you understand the risks behind Tether, especially now with the explosion in reserve liability, the breakdown in their relationship with banks and their auditor, and the recently announced subpoena.

What exactly is Tether and what happened so far?

Tether is a cryptocurrency asset issued by Tether Limited (incorporated in the British Virgin Islands and a sister company of Bitfinex), on top of the Bitcoin blockchain through the Omni Protocol Layer. It is meant to give people a "stablecoin", for example a merchant who accepts bitcoin but fears its volatility could shift bitcoin into tether, which can be easier to do than exchanging bitcoin for dollars. Recently they've also added an Ethereum-based ERC20 token. Tether Ltd claims that each one of the tokens issued is backed by actual US dollar (and more recently Euro) reserves. The idea is that when a business partner deposits US dollars in Tether’s bank account, Tether creates a matching amount of tokens and transfers them to that partner, it is NOT a fractional reserve system.
Tether makes the two following key promises in its whitepaper, on which the entire premise is built:
Each tether issued will be backed by the equivalent amount of currency unit (one USDTether equals one dollar).
Professional auditors will regularly verify, sign, and publish our underlying bank balance and financial transfer statement.
Tether is centralized and dependent on your trust in Bitfinex/Tether Limited and in the honesty of the people behind it. For new entrants to this market it will be greatly beneficial to understand the timeline of Tether and their connection to Bitfinex.
A brief timeline:

Most common misconception: Tether is only a small part of the total market cap

One of the most common misconceptions people have about cryptocurrencies is that the "market cap" amount they see on CoinMarketCap.com is actually the amount of money that is invested in each coin.
I often hear people online dismiss any issue with tether by simply claiming it's not big enough to cause any effect, saying "Well Tether is only $2.2 billion on CoinMarketCap and the market is 400 billion, it's only 0.5% of the market".
But this misunderstands what market capitalization for cryptocurrency is, and just how different the market cap for Tether is to every other token. The market cap is simply the last trade price times the circulating supply. It doesn't take into account the order book depth at all. The majority of Bitcoin (and most coins) are held by those who either mined or purchased it for a very low price early on and simply held on as very small portions of the total supply were rapidly bid up to their current price.
An increase in market cap of X does NOT represent an inflow of X dollars invested, not even close. A 400 billion dollar market cap for crypto does NOT mean that there is 400 billion dollars underwriting the assets. Meanwhile a 2 billion dollar Tether market cap means there should be exactly $2 billion backing up the asset.
Nobody can tell for sure exactly how much money has been invested in cryptocurrency market, but analysts from JPMorgan found that there was only net inflow of $6 billion fiat that resulted in $300 billion market cap at the time. This gives us a roughly 50:1 ratio of market cap to fiat inflow. Prominent crypto evangelist Julian Hosp gives the following estimate: "For a cryptocurrency to have a market cap of $1 billion, maybe only $50 million actually moved into the cryptocurrency."
For Tether however the market cap is simply the outstanding supply, 2.2 billion USDT is actually equal to 2.2 billion USD. In order to get $50 USDT you have to deposit $50 real U.S. dollars and then 50 completely new tokens will be issued, which never existed before on the market.
What is also often ignored is that Bitfinex allows margin trading, at a 3.3x leverage. Bitfinexed did an excellent analysis on how tether is entering Bitfinex to fund margin positions
There are $2.2 billion in Tether outstanding and the current market cap of the entire market is $400 billion according to CoinMarketCap. You can actually calculate Tether as a % of the total fiat invested in the market according to the JP Morgan estimate; the following table outlines the scenarios of no margin lending and of 15%/25% of tether sitting in 3.3x leveraged margin accounts:
| Fiat Inflow/Market Cap Ratio | Tether as % of total market (no margin) | Tether as % of total market (15% on margin) | Tether as % of total market (25% on margin) |
|---|---|---|---|
| JP Morgan estimate (50:1) | 27.5 % | 36.9 % | 43.3 % |
Even without any margin lending Tether is underwriting the worth of about 27.5% of the cryptocurrency market, and if we assume only 25% was leveraged out at 3.3x on margin we have a whole 43% of the market cap being driven by Tether inflow.
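For transparency, here is the arithmetic behind those percentages as a small sketch (the 50:1 ratio and the 3.3x leverage come from the post; the rest is straightforward algebra):

```python
# Reproduce the table above: Tether's share of the estimated fiat inflow.
tether_supply = 2.2e9        # USDT outstanding, claimed to be backed 1:1 by USD
total_market_cap = 400e9     # total crypto market cap (CoinMarketCap, at the time)
inflow_ratio = 50            # JP Morgan-style market-cap-to-fiat-inflow estimate
estimated_fiat_inflow = total_market_cap / inflow_ratio   # ~$8 billion

for margined_share in (0.0, 0.15, 0.25):
    # Tether sitting in 3.3x leveraged margin accounts is counted 3.3 times.
    effective_tether = tether_supply * ((1 - margined_share) + margined_share * 3.3)
    print(f"{margined_share:.0%} on margin -> "
          f"{effective_tether / estimated_fiat_inflow:.1%} of estimated fiat inflow")
```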
A much better indicator on CoinMarketCap of just how influential Tether is, is actually its volume: it's currently the 2nd biggest cryptocurrency by volume, and there are even days when its volume exceeds its market cap.
What this all means is that not only is the market cap for cryptocurrencies drastically overestimating the amount of actual fiat capital that is underwriting those assets, but a substantial portion of the entire market cap is being derived from the value of Tether's market cap rather than real money.
It's incredibly important that more new investors realize that Tether isn't a side issue or a minor cog in the machine, but one of the core underlying mechanisms on which the entire market worth is built. Ensuring that whoever controls this stablecoin is honest and transparent is absolutely critical to the health of the market.

Two main concerns with Tether

The primary concerns with Tether can be split into two categories:
  1. Tether issuance timing - Does Tether Ltd issue USDT organically or is it timed to stop downward selling pressure?
  2. Reserves - Does Tether Ltd actually have the fiat reserves at a 1:1 ratio, and why is there still no audit or third party guarantee of this?

Does Tether print USDT to prop up Bitcoin and other cryptocurrencies?

In the last 3 months the amount of USDT has nearly quadrupled, with nearly a billion being printed in January alone. Some people have found the timing of the most recent batch of Tether as highly suspect because it seemed to coincide with Bitcoin's price being propped up.
https://www.nytimes.com/2018/01/31/technology/bitfinex-bitcoin-price.html
This was recently analyzed statistically:
Author’s opinion - it is highly unlikely that Tether is growing through any organic business process, rather that they are printing in response to market conditions.
Tether printing moves the market appreciably; 48.8% of BTC’s price rise in the period studied occurred in the two-hour periods following the arrival of 91 different Tether grants to the Bitfinex wallet.
Bitfinex withdrawal/deposit statistics are unusual and would give rise to further scrutiny in a typical accounting environment.
https://www.tetherreport.com
I'm still undecided on this and I would love to see more statistical analysis done, because the price of Bitcoin is so volatile while Tether printing only happens in large batches. Simply looking at the Bitcoin price graph over the last 3 months and then at the Tether printing, it's pretty clear there is a relationship, but it doesn't seem to hold over longer periods.
Ultimately to me this timing isn't that much of an issue, as long as Tether is backed by US dollars. If Bitfinex was timing the prints, then it amounts to not much more than an organized pumping scheme, which isn't a fundamental problem. The much more serious concern is whether those buy orders are being conducted on the faith of fictitious dollars that don't exist, regardless of when those buy orders occur.

Didn't Tether release an audit in September?

Some online posters have recently tried to spread the notion that Tether has actually been audited by Friedman LLP and that a report was released in September 2017. That was actually just a consulting engagement, which you can read here:
https://tether.to/wp-content/uploads/2017/09/Final-Tether-Consulting-Report-9-15-17_Redacted.pdf
They clearly state that:
This engagement does not contemplate tests of accounting records or the performance of other procedures performed in an audit or attest engagement. Our procedures performed are not for the purpose of providing assurance...In addition, our services do not include determination of compliance with laws and regulations in any jurisdiction.
They state right from the beginning that this is a consultancy job (not an audit), and that it's not meant to be assurance to third parties. Doing a consultancy job is just doing a task asked by your customer. In a consultancy job you take information as true from the client, and you have no mandate to verify whether your customer's claims are true or not. The way they checked was simply to ask Tether to provide them with the information:
All inquiries made through the consulting process have been directed towards, and the data obtained from, the Client and personnel responsible for maintaining such information.
Tether provided screenshots of two bank balances. One of these is in the name of Tether Limited, while the other is a personal account of an individual who Tether Limited claims has a trust agreement with them:
As of September 15, 2017, the bank held $60,919,810 in an account in the name of an individual for the benefit of Tether Limited. FLLP obtained an engagement letter for an interim settlement plan between that individual and Tether Limited and that, according to Tether Limited, is the relevant agreement with the trustee. FLLP did not evaluate the substance of the letter and makes no representation about its legality.
Even worse is that later on in Note 1, they clearly claim that there is no actual evidence that this engagement letter or trust has any legal merit:
Note 1: FLLP makes no representations about sufficiency or enforceability of any trust agreement between the trustee and the Client
Essentially what this is saying is that the trust agreement may not even be worth the paper it’s printed on.
And most importantly… Note 2:
FLLP did not evaluate the terms of the above bank accounts and makes no representations about the client's ability to access funds from the accounts or whether the funds are committed for purposes other than Tether token redemptions
Basically, Tether gave them the name of an individual with $60 million in their account according to a screenshot; Tether then gave them a letter saying that there is a trust agreement between this individual and Tether Limited. They also have an account with $382 million, but no guarantee that this account is free of liens or other commitments, or that it can be accessed.
Currently Tether has 2.2 billion USDT outstanding and we have absolutely no idea whether this is actually backed by anything, and the long promised audit is still outstanding.

What happens if it's revealed that Tether doesn't have its US dollar reserves?

According to Thomas Glucksmann, head of business development at Gatecoin: "If a tether debacle unfolds, it will likely cause quite a devastating ripple effect across many of the exchanges that see most of their volumes traded against the supposedly USD-backed cryptocurrency."
According to Nicholas Weaver, a senior researcher at the International Computer Science Institute at Berkeley: "You could see a spike in prices in tether-only bitcoin exchanges. So, on those exchanges only you will see a run up in price compared to the bitcoin exchanges that actually work with actual money. So you would see a huge price divergence as people see that the only way they can turn tether into real money is to buy other cryptocurrency then move to another exchange. That is a bank run."
I definitely see the crypto equivalent of a bank run, as people actually try to secure their gains and realize that this money doesn't actually exist within the system:
If traders lose confidence in it and its value starts to drop, “people will run for the door,” says Carlson, the former Wall Street trader. If Tether can’t meet all its customers’ demand for dollars (and its Terms of Service suggest that in many cases it won’t even try), tether holders will try to snap up other cryptocurrencies instead, temporarily causing prices for those currencies to soar. With tether’s role as an inter-exchange facilitator compromised, investors might lose faith in cryptocurrencies more generally. “At the end of the day, people would be losing substantial sums, and in the long term this would be very bad for cryptocurrencies,” says Emin Gun Sirer, a Cornell professor and co-director of its Initiative for Cryptocurrencies and Smart Contracts.
Another concern is that Bitfinex might simply shut down, pocketing the bitcoins it has allegedly been stockpiling. Because people who trade on Bitfinex allow the exchange to hold their money while they speculate, these traders could face substantial losses. “The exchanges are like unregulated banks and could run off with everyone’s money,” says Tony Arcieri, a former Square employee turned entrepreneur trying to build a legally regulated exchange.
https://www.wired.com/story/why-tethers-collapse-would-be-bad-for-cryptocurrencies/
The way I see it, this would be how it plays out if Tether collapses:
  1. Tether-enabled exchanges will see a massive spike in Bitcoin and cryptocurrency prices as everyone leaves Tether. Noobs in these exchanges will think they are now millionaires until they realize they are rich in tethers but poor in dollars.
  2. Exchanges that have not integrated Tether will experience large drops in Bitcoin and alts as experienced investors flee crypto into USD.
  3. There will be a flight of Bitcoin from Tether-integrated exchanges to non-Tether exchanges with fiat off-ramps. Exchanges running small fractional reserves will be exposed, further increasing calls for greater reserve requirements.
  4. The exchanges might slam the doors shut on withdrawals.
  5. Many exchanges that own large balances of Tether, especially Bitfinex, will likely become insolvent.
  6. There will be lawsuits flying everywhere, and with Tether Limited being incorporated on a Caribbean island, the local solvency and bankruptcy laws will likely ensure they don't ever get much back. This could take years and potentially push away new investors from entering the space.

Conclusion

We can't be 100% completely sure that Tether is a scam, but it's so laden with red flags that at this point I would call it the biggest systemic risk in the crypto space. It's bigger than any nation's potential regulatory steps because it cuts right into the issue of trust across the entire ecosystem.
Ultimately Tether is centralizing one of the very core mechanics of the cryptocurrency markets and asking you to trust one party to be the safekeeper, and I really see very little reason to trust Bitfinex given their history of lying and screwing over their own customers. I think that Tether initially started as a legit business to facilitate the ease of moving money and avoiding regulations, but somewhere along the line greed and/or incompetence took over (something that seems common with Bitfinex's previous actions). Right now we're playing proverbial hot potato, and as long as people believe that Tether is worth a dollar everything is fine, but at some point the Emperor will have to step out from hiding and somebody will point out they have no clothes.
In the long term I really hope once Tether collapses we can move on and get the following two implemented which would greatly improve the market for all investors:
  1. Actual USD fiat pairings on the major exchanges for the major currencies
  2. Regulatory rules on exchange reserve requirements
I had watched the Bitconnect people insist for the last 2 years that everything about Bitconnect made perfect sense because they were getting paid daily. The scam works until one day it suddenly doesn't.
Tether could still come clean and avoid all of this "FUD" by simply getting a simple review of their banking; they don't even need a full audit. If everything was legit with Tether, it would be incredibly easy to have a segregated bank account with the funds used solely to back up Tether, then have a third party accounting firm simply review the account and a bank reconciliation statement, then spend a few hours in contact with the bank to ensure no outstanding liabilities are held against that balance. This is extremely basic stuff, it would take a few hours to set up and wouldn't take a lot of man-hours for a qualified accountant to do, and yet they don't do it. Why? Why hire a major PR firm and spend god knows how much money to pay professional PR representatives to attack "FUD" online instead?
I think I know why.
submitted by arsonbunny to CryptoCurrency

Pitfalls of Granger Causality

Original post from blog.projectpiglet.com. However, because there is promotional activity, a text post was more appropriate; thank you, Mods, for working with me.

Pitfalls of Granger Causality

One of the most common forms of analysis on the stock market is Granger Causality, which is a method for indicating one signal possibly causes another signal. This type of causality is often called “predictive causality”, as it does not for certain determine causality – it simply determines correlations at various time intervals.
Why Granger Causality? If you search “causality in the stock market“, you’ll be greeted with a list of links all mentioning “granger causality”:
Search on DuckDuckGo
In other words, it’s popular and Clive Granger won a Nobel on the matter[1]. That being said, there are quite a few limitations. In this article, we’ll be covering a brief example of Granger Causality, as well as some of the common pitfalls and how brittle it can be.

What is Granger Causality?

Granger Causality (from Wikipedia) is defined as:
A time series X is said to Granger-cause Y if it can be shown, usually through a series of t-tests and F-tests on lagged values of X (and with lagged values of Y also included), that those X values provide statistically significant information about future values of Y.
In other words, Granger Causality is the analysis of trying to find out if one signal impacts another signal (such that it’s statistically significant). Pretty straightforward, and is even clearer with an image:
From Wikipedia
In a sense, it's just one spike in a graph causing another spike at a later time. The real challenge is that this needs to be consistent: it has to repeat over the course of the entire dataset. This brings us to the next part: one of the fragile aspects of this method is that it often doesn't account for seasonality.
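For readers who want to try this, a linear Granger test can be run with the statsmodels package (a minimal sketch, not code from the original post; the random-walk data below is purely illustrative):

```python
# Sketch: does the series in column 2 Granger-cause the series in column 1?
# Uses statsmodels' linear Granger test on synthetic, illustrative data.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
x = rng.normal(size=500).cumsum()           # random walk: the candidate "cause"
y = np.roll(x, 2) + rng.normal(size=500)    # x shifted by 2 steps, plus noise

data = np.column_stack([y, x])              # column order: [effect, cause]
results = grangercausalitytests(data, maxlag=4)
for lag, (tests, _) in results.items():
    f_stat, p_value, _, _ = tests["ssr_ftest"]
    print(f"lag={lag}: F={f_stat:.1f}, p={p_value:.4f}")
```

A p-value below 0.05 at some lag is what the rest of this post calls "probable causality"; as discussed below, that verdict is only as good as the data fed into the test.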

Granger Causality and Seasonality

One common aspect of markets is that they are seasonal. Commodities (as it relates to the futures market) related to food are extremely impacted by seasonality[2]. For instance, if there is a drought across Illinois and Indiana during the summer (killing the corn crop), then corn prices from Iowa would likely rise (i.e. the corn from Iowa would be worth more).
From Wikipedia
In the example, there may be decades where some pattern in the market holds and Granger Causality is relevant. For instance, during summer heat waves in Illinois, corn prices in Iowa increase. On the other hand, with the advent of irrigation methods that deliver water underground, heat waves may no longer impact crops[3]. Thus, the causality of heat waves in Illinois may no longer impact the corn prices in Iowa.
If we then attempt to search for Granger Causality on the entire time range (a) pre-irrigation and (b) post irrigation, we will find there is no causality!
However, during the pre-irrigation time range we will find probable causality, and for the post-irrigation time range we likely won't find probable causality. Any time you combine two timeframes like this, the default is no Granger Causality (unless one of the timeframes is a very small portion of the dataset). This brings us to the conclusion that:
Granger Causality is very sensitive to timeframe(s)
Just a few data points in either direction can break the analysis. This makes sense, as it is a way to evaluate if two time series are related. However, it does lead one to note how brittle this method can be.

Granger Causality and Sparse Datasets

Yet another potential issue with Granger Causality is sparse datasets. Let's say we have dataset X and dataset Y: if dataset X has 200 data points and dataset Y has 150 data points, how do you merge them? Assuming they are in (datetime, value) format, if we do an inner join on "datetime", we get something that looks like the following:
From W3School
Then we will have 150 data points in a combined X and Y dataset, i.e.: (datetime, x, y). Unfortunately, this also means that if the data is continuous (as most time series data is), then we have completely broken our Granger Causality analysis. In other words, we are just skipping over days, which would break any causality analysis.
In contrast, we could do an outer join:
From W3School
We will have 200 data points in a combined X and Y dataset. Again, there’s an issue – it means we probably have empty values (Null, NULL, None, NaN, etc. ) where the Y data set didn’t have data (recall Y only had 150 data points). The dataset would then have various entries that look as such: (datetime, x, NULL).
To fix the empty values, we can attempt to use a forward or back fill technique. A forward/back fill technique is where you fill all the empty values with the previous or following location's real value.
This code could look like the following:
From blog.projectpiglet.com
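The original post showed this step as an image; a minimal pandas sketch of the same idea (outer join, then forward/back fill, on made-up data) looks roughly like this:

```python
# Sketch: outer join two time series of different lengths, then ffill/bfill
# the gaps. The frames below are illustrative stand-ins for X and Y.
import pandas as pd

dates = pd.date_range("2018-01-01", periods=10, freq="D")
df_x = pd.DataFrame({"datetime": dates, "x": range(10)})
df_y = pd.DataFrame({"datetime": dates[::2], "y": range(0, 10, 2)})  # sparser

merged = pd.merge(df_x, df_y, on="datetime", how="outer").sort_values("datetime")
# Fill holes with the previous real value, then back fill any leading NaNs.
merged["y"] = merged["y"].ffill().bfill()
print(merged)
```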
From the sound of it, this method sounds promising! You’ll end up with something that’s continuous with all real values. You’ll actually get a graph like this:
Change in BCH price vs Random Walk (with NaNs)
As you can see, there are large sections of time where the data is flat. Recall the seasonality issue with Granger Causality? This method of outer joins + forward / back filling will definitely cause issues, and lead to minimal to no meaningful correlations.
Sparse datasets make it very difficult (or impossible) to identify probable causality.

Granger Causality and Resampling

There is another option for us, and that is "resampling", where instead of just filling the empty values (Nulls / NaNs) with the previous or following real values, we actually resample the whole series. Resampling is a technique where we fill the holes in the data with what amounts to a guess of what we think the data could be.
Although there are quite a few techniques, in this example we’ll use the python package Scipy, with the Signal module.
From blog.projectpiglet.com
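Again the original code was shown as an image; the gist, assuming scipy.signal's FFT-based resample function and illustrative data, is something like:

```python
# Sketch: stretch a 150-point series to 200 points with scipy.signal.resample.
# FFT-based resampling assumes the signal is periodic, so it can "ring"
# (oscillate) near the edges: the artifact discussed below.
import numpy as np
from scipy import signal

y_sparse = np.sin(np.linspace(0, 4 * np.pi, 150))   # 150 observed points
y_resampled = signal.resample(y_sparse, 200)         # resampled to 200 points
print(y_resampled.shape)                             # (200,)
```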
At first glance, this appears to have solved some of the issues:
Change in Bitcoin Price vs Random Walk
However, in reality it does not work, especially if the dataset starts or ends with NaNs (at least when using the Scipy package):
Change in BCH price vs Random Walk (with NaNs)
If you notice, prior to roughly the 110th data point, the values are just oscillating up and down. The resampling method Scipy is using does not appear to be functional or practical with so few data points. This is because I selected the data set for Bitcoin Cash (BCH) and the date range starts prior to Bitcoin Cash (BCH) becoming a currency (i.e. there is no price information).
In a sense, this indicates it’s not possible (at least given the data provided) to attempt Granger Causality on the given date ranges. Small gaps in time can have dramatic impacts on whether or not “probable causality” exists.
When determining Granger Causality it is extremely important to have two complete overlapping datasets.
Without two complete datasets, it’s impossible to identify whether or not there are correlations over various time ranges.
Resampling can cause artifacts that impact the Granger Causality method(s).
In fact, the most recent example was actually positive for Granger Causality (p-value < 0.05)… That is the worst scenario, as it is a false positive. In the example, the false positive occurs because when both datasets were resampled they ended up with a matching oscillation… it wouldn't have even been noticed if the raw data sets weren't being reviewed.
This is probably the largest issue with Granger Causality: every dataset needs to be reviewed to see if it makes sense. Sometimes what at first appears to make sense, in reality the underlying data has been altered in some way (such as resampling).

Granger Causality and Non-Linear Regression

Changing gears a bit (before we get to a real-world ProjectPiglet.com example), it’s important to note that most Granger Causality uses linear regression. In other words, the method is searching for linear correlations between datasets:
From austingwalters.com
However, in many cases – especially in the case of markets – correlations are highly likely to be non-linear. This is because markets are anti-inductive[5]. In other words, every pattern discovered in a market creates a new pattern as people exploit that inefficiency. This is called the Efficient Market Hypothesis.
Ultimately, this means most implementations of Granger Causality are overly simplistic, as most correlations are certainly non-linear in nature. There are a large number of non-linear regression models; below is an example of Gaussian Process Regression:
From Wikipedia
Similarly, non-linear regression techniques do appear to improve Granger Causality[6]. This is probably due to most linear correlations already being priced into the market, and the non-linear correlations will be where the potential profits are. It remains to be seen how effective this can be, as most research in this area is kept private (increasing profits of trading firms). What we can say is that non-linear methods do improve predictions on ProjectPiglet.com. They also require a larger dataset than their linear regression counterparts.
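As a concrete illustration of the kind of non-linear regression pictured above (a sketch assuming scikit-learn, which the post itself does not name), Gaussian Process Regression can be fit in a few lines:

```python
# Sketch: Gaussian Process Regression on noisy, non-linear, illustrative data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = np.sort(rng.uniform(0, 10, 40)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=40)

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)

X_test = np.linspace(0, 10, 100).reshape(-1, 1)
mean, std = gpr.predict(X_test, return_std=True)  # non-linear fit + uncertainty
```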

Conclusion

Overall, Granger Causality has quite a few potential pitfalls. It is useful for suggesting a potential correlation, but it only ever establishes a probable correlation. It can help to identify market inefficiencies and open the opportunity to make money, but will probably require more finesse than simple linear regression.
All that being said, hope you’ve found some of the insights useful!
submitted by austingwalters to algotrading

Universal Oikos

I admit this reads a bit as a fiction but the ideas I am sketching below seem so clear to me that as I work out the intricate warps and woofs I quickly cobbled together, I don’t expect the basic conclusion to be shown erroneous. So read it at your own risk. The rewards however as the words reveal are already (t)here. Perhaps I am bat shit crazy but I just don’t think so. The advances that blockchains avalanche have already started to flake off. Others just play tether ball around the tree that might snow the next 100 years of evolutionary theory.
Joan Roughgarden has propounded an evolutionary theory of social selection to replace sexual selection and has advocated, advised, and added instances of her bottom up modeling procedure. This development in evolution studies realizes objectively equal gender classifications formerly relegated and reduced to incidentally derived dimorphic status, latterly founding reproductions of natural selection through underdeterminations of offspring first rather than overdeterminations of parental investments and divestments. Her proposal met with profound disdain and dejection from those supposedly in the know. Blockchain technology appears to be evolving along the lines of a new algorithmically instantiated platform by AVALabs from increasingly familiar consensus protocols first sketched in 2018 by an invisible crew named Team Rocket. Roughgarden’s social selection as recognized and applied largely moved from and through animal species even-the-while plants remained in its rear view purview. A physical property that sports the model appears to be definitively recoverable from nature in the presumption of a potentially mutual cross gender pleasure via an unknown chemical mediator. Joan has suggested one such discoverable option but it turns out supplementally that by applying a version of the avalanche protocol towards achieving consensus within plant ecologies under social selection in analogy with human economies of blockchain at scale, new insights into empirically testable scenarios for evolutionary theory can be designed which obviates the need for a specific chemical in the sustainment of theoretical trajectories the model supports. There is a sustainable cross over through Nash’s idea of parallel machine control, his notion of a bargaining equilibrium, Roughgardian social selection, and programmatic avalanche metastability. I am only going to sketch — here and now — the communicabilities within.
Suzanne Simard tested and proved that plants can send carbon through their roots to other trees. The relation of plants (and animals) in this network of relations provided by communication of chemicals through the mycelium has been called, perhaps flippantly and humorously, the Wood Wide Web, but as I shall show below the manifestable narrow waist of the metastability as designed by AVA Labs in its production engine provides an architecture which, when applied to Roughgarden’s use of game theory, can oscillate theoretical plant sexuality (big vs small gamete) within and between plants in such a way that implies that plants have genders, a prediction that can be empirically confirmed. There is more to blockchain evolution than meets the atomic-swapping eye. I suspect that there will be more and more applications of the snow family of protocols to science just as there are increasing instantiations in the blockchain (AVAlabs, BCH, Perlin) space.
The basic idea underlying social selection is that reproduction is not about the mating process temporally per se but rather is about cooperating to raise the largest number of offspring. This cooperation may occur between parents without respect to sex but interestingly may also occur between species and subspecies. That is the contribution that blockchain technology provides to evolutionary theory. It is quite remarkable. Hermaphroditic trees may court each other by choosing not to revert to global competitive Nash selfish threat points but instead ‘opt-in’ to continue to choose cooperative joint bargaining and side payments strategically when a plant team fitness function is constructed by chemically agnostic (concentration gradient driven) transmission through a stable main mycelial network. Simard has shown that “mother trees” can direct carbon differentially to their own offspring and thus, as these parent individuals in some families may be either male or female both within and between the organisms themselves (multiple genders), it is possible for team work to arise ecologically (in the space the distribution of trees on the ground landscapes) such that other species mother and father trees receive chemicals including carbon by differential inbreeding that draws other subspecific variants within the network being provisioned underground to their offspring by excluding non-familial relatives that have opted out of helping to raise offspring and decided to compete rather than cooperate and thus bifurcate in evolutionary time the genes fungi select when evolving the proximately extant networkable connections. If the parents use an avalanche like metastability format to distribute carbon through such a growing network (sampling courted partners both within themselves and between individuals and adopting their carbon release kinematic) and the offspring have traits passed down by grown ancestors similar to begging in baby birds utilizing such, then trees using self-DNA ‘to pay’ (from the pay-off matrix operation in game theory) (which inhibits self growth and thus expands the places on the ground available for growth and reproduction) during the transmission, those so strategically cooperating can move up trophic levels the network builds out purely geographically. There is no group pleasure chemical involved in this model, instead only each individual’s DNA is incorporated which can be as narrow a margin as the heritable interpretation of that supramolecular chemical tolerates as a template biophysically. This will be explained in the sequel. That is the basic idea and thus while it may take some years before this idea is networked out, the basic idea is available for those who look beyond the negatively competitive aspects of oikos information and towards the cooperation we all need both as a species and as a humanity with others.
Unfortunately for our better-selves, there has been a value judgement marshaled against at least some of those sold on bitcoin among us. Commentators have challenged up-coming POS governed blockchains as being too complex and that when making a guess at where to place one’s $R&D, the promoted projection has been into POW tech not because it might be inherently a better platform to launch a distributed ledger in, but because the threshold to user adoption appears to them as literally a no-brainer. Some have made the bet that it is easier to develop POW functions etc. than POS ones, since one does not have to assume any cognitive interest in the user-validator beyond the required instructions (1 — plug in computer, 2 — go hash). While Kevin Sekniqi of AVALabs has said he has no universal composable theorem/argument of POS and POW, he has made the point multiple times that POW networks can be embedded into POS systems. This means to me that any value judgement being applied against POS support equally applies to POW manifestations (when the entire universe of future design possibilities is included in reflection on those interests that regulate the decision of how to constitute the afforded applications). Now Microsoft has recently published a patent to use body activity as proof-of-work, saying that this will help reduce energy expenditures. Microsoft is trying to patent in on the decision bitcoiners made, that it has value — that they have been convinced of the bitcoin narrative and gone the last mile to adopt it as something they choose to do and be a part of. The POW operation proposed in the Microsoft patent potentially includes “a brain wave or body heat emitted from the user when the user performs the task provided by an information or service provider, such as viewing advertisement or using certain internet services, can be used in the mining process” while it is determining if work was done. We do not need these companies using our interest and decisions in agreeing to a narrative of what money, whether ideal or not, is, to force and coerce our behavior based on a prior knowledge about our ideas, decisions and preferences we may have expanded on socially and communicated with others publicly. Microsoft may think this is not what they are doing but the application is clear in the example of the musicians who have already had their brain waves used to select notes. It is the artist, when creatively thinking of the needed note, who produces the wave the machine records; it is not the machine that creates the image of which the user’s brain produces a wave. We do not need new tech companies or new tech products deciding how we use and view social media, we need them to build tech that reflects how we like to use it, how we are pleased to use it independently of how some sovereign wishes it to be regardless of how free, how much money they have or are. If they had such a device then it seems that sooner rather than later some will start to create advertisements that manipulate not only our pleasures and pains but also our understandings. This would be much worse than bad. It is something I would resist. Humanity made clear the distinction between the physical actions of organic bodies and bodies made of physically active materials a few hundred years ago, and yet the Microsoft patent in the name of creating something new slashes and hashes right through this distinction as if it was nothing but a virtual simulation of the large scale data synthesized from a prior analysis.
Seems to me that this kind of POW-centric thinking and planning on control over our user interaction with machines is just not the way to go into Web 3.0. With Web 3 we will, among many other things, accumulate smart assets, and we will need a way to sort and use our own personal portfolio of them, especially if one obtains them through non-fungible tokens.
The production, wilding, collection, and reuse of these valuable digitizations is going to be an increasingly demanded functionality on Web 3. With AVA these powerful processes individuated by different businesses will thus have a programmed utility under an action — reaction horizon of superfluid network changeabilities previously invisible to intelligent creators but ones we can understand. The details of such a lightweight scalable tech remain for me to provide to you but it is clear the motivation behind the Microsoft patent is not sound. I hope to show that one on the AVA network is. Here is a quick guide to my idea: it is possible to produce a body activity proof-of-work such that there is absolutely no forced cognitive decision making that is required of the user. One does not have to force/coerce the user into performing new and additional cognition beyond what is already being done.
Sounds like I am saying you can eat your cake and have it too. But in fact the example I am suggesting is one in which the user simply adapts to technology rather than adopts it, and this can be done with a body activity POW aspect within a POS horizon.
In the case of using a hand gyro for digital asset search and retrieval (it rotates in two independent degrees of freedom that provide manual overrides) the user simply is doing something that is independent of the hashing. Electricity is generated as a side effect of the searching activity. Muscle energy rather than visual/brain energy powers the device but by being on the periphery of the nervous system provides minimal interference with physiological function. From John Nash's perspective of the worth of a machine, it makes no sense to build one that takes more time unless there is a need to multiply the kinds of tasks we want to compute and use the computer thus for. We do not need to make a technology that forces one to compute and do tasks just because this is easier for the computer to instruct us to do — rather we should, I feel, build a machine that does the computations that we 'ask' it to do. That's my ask for digital asset creation devices. We need devices that interact with us from the outside-in not the inside-out. The hand gyro, when parallelizing the inputs and scaling to many users, may be designed to speed up the rate at which machines take instructions. It looks at least initially to be able to make division into a decision-requiring process since it can exist at the extremity of both locomotion and computation. This device is not a world computer — it will not compute anything but it might be made to sort digital assets. This is not something that Nash considered. New decentralized blockchain tech requires new ways to parallelize digital logic for it to correspond with our social and economic activities all the while attending to our personal actions similarly. Further it is helpful when evaluating what Nash said about bitcoin to understand how he thought about computers and mentality. He wrote a paper in 1954 called "Parallel Control" and he expressed the hope that computer part separations would result in self-programmable machines. While we are now able somewhat to create programs that program themselves there is no such thing as Von Neumann's idea of computers making themselves that is in homology with biological evolution — there are no workable disciplines of applied metabiology here. There does appear to be such a thing as the evolution of social selection by avalanche protocol applications however. The idea of dividing currency into two coins that are bound dynamically to each other and separate formerly united capital in the system, as in POS, comes out of this general idea of Nash, however it does not lead to the extreme form that he had considered where he took the analogy quite literally and thought that the communication system of the computer and the mind's parallels were organonically (a term from the history of logic) and materially one and the same. This may have led to some of the symptoms he claims to have apperceived but it also gave him insight into the ideas of money before others followed on. Again, POW proponents may think that this is all just too complicated and that the gains are not worth the effort and that it is better and easier to demean past decisions but the point is that POW in POS makes Nash's ideal not into something directly tied to the entire global financial system nor into what Bitcoin is trying to do but rather into something that does all of that in a much more restricted way.
We can directly map our human economics to animal and plant eco-evolutions and we can have a new future that is positive both for us and our interests as well as with those possessed by different species if we learn how to apply evolution rather than just discuss if it exists or not.
The POW proposal of Microsoft has an analogy in the social selection of the wood wide web that further draws out the intricacy we are entering in on as a society of the 21 century — in explaining how rusts — which are fungal parasites of trees genotypically evolved into their strange and weirdly acting genetic cell types. These parasites may have taken advantage of the behavior of the mycelial network to game the social selection system of already cooperating individuals and produce throughout its geographic spread, a new kind of production of chemically fit individuals, from the outside — as sovereigns — by attaching themselves to multiple species evo-ecologically. Thus while it is possible that the Microsoft proposal can be built, it will add the kind of complication that might be suggested rusts have already inserted into the ecosystem of life itself already here on earth. The value of new blockchain techs will not only come from those who have adopted it but from those who adapt DAGs( directed acyclic graphs) to many different activities that might be homologized in tree-wise topologies of time in space — otherwise known as phylogenies. So while this still reads as a fiction and I jumped to the end before I really began — I suggest you try it again, and again…while you gain away the pain the cooperation will appear — it is a joy to realize that the past is is just that — past. Or just ask me a question directly.
submitted by Brad_McFall to u/Brad_McFall [link] [comments]

The Case for an Extreme ETH Mispricing

The format of this post has been modified to be more reddit friendly. Apologies for any momentum lost.
This piece was written in collaboration with u/beerchicken8. He deserves a massive amount of credit and our thought experiment could not have been generated without him.
We wrote this piece to remind the community and new investors that we are incredibly early to this investment, and also to demonstrate that ETH is massively undervalued even if viewed as a network utility token. We meant for this to be as simple, yet impactful as possible. We are not in the practice of writing academic papers, but the narrative is clearly demonstrated.
All data is accurate as of May 22, 2017.
A Crude Valuation of ETH
Pundits and the media will look at the recent price graph and will likely tell you that cryptocurrencies are in a bubble. Sure the recent price action looks aggressive and may appear unsustainable, but it is hardly a bubble. In fact, it is likely that ETH is significantly undervalued.
ETH Price Graph
Crypto skeptics attempt to value bitcoin or ETH using conventional stock market metrics like P/E ratio or by comparing market capitalizations of crypto versus blue chip companies. These metrics do not fairly translate to cryptocurrencies. We can improve on that.
Metcalfe's Law Image Description
A close friend of mine stumbled across Metcalfe’s Law in an effort to properly value the market price of ETH, the cryptocurrency of ethereum. We can think of ETH as a demand-driven digital asset, since it is converted to gas to execute the smart contracts on the blockchain. It provides a vital network function: incentivizing miners to secure the blockchain. Therefore we should attempt to value ETH by attempting to value the ethereum network itself. We can use the daily transactions as our tool.
Metcalfe’s Law aims to value the network effects of communication technologies like the Internet or social networking. The premise is that the value of a telecommunications network is proportional to the square of the number of connected users of the system.
To determine a fair market price of ETH, we can compare the ethereum network transactions squared (or the network value) versus the market cap of ethereum.
In the following chart, we chose to graph the log of our inputs for a better visualization of the correlation.
Log graph of Transactions^2 and Marketcap
The scale is misleading, but looking back we can see that the ETH market cap fell below the network valuation around the time of the DAO hack. The market cap languished as the ETH price suffered from a lack of investor confidence. But as investors licked their wounds and Bitcoin maximalists cheered, the ethereum transactions have steadily increased; they even outpaced the price correction. Yet, that was just the log graph. This is the actual Metcalfe’s Law graph demonstrating the network value of ethereum vs the market cap:
Metcalfe's Law for Ethereum
We can see clearly that the market cap is significantly lagging the network effect. Theoretically, the network valuation calculated by transactions squared should equal the market cap.
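For anyone who wants to reproduce this comparison on their own data, here is a minimal sketch. It assumes you have daily transaction counts and market caps as plain lists (for example, exported from a block explorer); the scaling constant k is fitted to the history rather than given by Metcalfe's Law itself, and the numbers below are purely illustrative.

```python
# Minimal sketch of the transactions^2 vs. market cap comparison.
# Assumes daily transaction counts and market caps as plain lists
# (e.g. exported from a block explorer); the constant k is fitted, not given.

def metcalfe_value(daily_txs, k):
    """Network value proxy: k * (daily transactions)^2."""
    return k * daily_txs ** 2

def fit_k(tx_history, marketcap_history):
    """Least-squares fit of k so that k * txs^2 tracks the market cap."""
    num = sum(m * t ** 2 for m, t in zip(marketcap_history, tx_history))
    den = sum(t ** 4 for t in tx_history)
    return num / den

# Hypothetical numbers purely for illustration:
tx_history = [40_000, 60_000, 90_000, 160_000]      # daily transactions
marketcap_history = [0.7e9, 0.9e9, 1.5e9, 4.0e9]    # USD

k = fit_k(tx_history, marketcap_history)
today_txs = 200_000
print("implied network value: %.2e USD" % metcalfe_value(today_txs, k))
```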
So here we are. We can conclude ETH appears cheap. But this is probably far from the truth: If the current network value equals the current market cap, we are completely discounting the future growth of the network.
Stock investors will buy stocks on their future earnings and growth potential years in advance. The Tesla stock has outperformed every incumbent metric due to tantalizing growth projections. But Tesla will likely not generate profits for years. In the case of ETH, this growth discount is significant. Not only does it not appear to exist in the price, but we can make 3 safe assumptions to forecast the opportunity for incredible growth:
Also, there are additional factors accelerating the scarcity of ETH:
Further Reading: u/mr_yukon_c touched on some other metrics signalling the strength of Ethereum Network in an excellent post the other day:
https://np.reddit.com/ethtradecomments/6cr75s/current_state_of_the_ethereum_network_extremely/
submitted by pittinout7 to ethtrader [link] [comments]

The Great NiceHash Profit Explanation - for Sellers (the guys with the GPUs & CPUs)

Let's make a couple of things crystal clear about what you are not doing here:
But hey, I'm running MINING software!
What the hell am I doing then?!?
Who makes Profit, and how?
How is it possible everyone is making a profit?
Why do profits skyrocket, and will it last (and will this happen again)?
But my profits are decreasing all the time >:[
But why?!? I’m supposed to make lotsa money out of this!!!
But WHY!!!
  1. Interest hype -> Influx of Fiat money -> Coins quotes skyrocket -> Influx of miners -> Difficulty skyrockets -> Most of the price uptrend is choked within weeks, since it’s now harder to mine new blocks.
  2. Interest hype drains out -> Fiat money influx declines -> Coins quotes halt or even fall -> Miners still hold on to their dream -> Difficulty stays up high, even rises -> Earnings decrease, maybe even sharply, as it's still harder to mine new blocks, that may be even paid less.
So, how to judge what’s going on with my profits?
Simple breakdown of the relationship of BTC payouts by NiceHash, BTC/ALT Coins rates, and Fiat value:
BTC quote | ALTs quotes | BTC payout | Fiat value
----------|-------------|------------|-----------
UP        | UP          | stable*)   | UP
stable    | UP          | UP         | UP
UP        | stable      | DOWN       | stable*)
stable    | stable      | stable     | stable
DOWN      | stable      | UP         | stable*)
stable    | DOWN        | DOWN       | DOWN
DOWN      | DOWN        | stable*)   | DOWN
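To see why the rows above combine the way they do, here is a toy model of a seller's earnings. It assumes, simplifying heavily, that your BTC payout tracks the ALT/BTC rates of the coins the buyers are mining, and that the fiat value is just that payout times the BTC/fiat quote; the function and numbers are illustrative, not NiceHash's actual payout formula.

```python
# Toy model of a seller's earnings (illustrative, not NiceHash's actual formula).
# Buyers ultimately pay for hashpower with what the mined alt coins fetch in BTC,
# so the BTC payout roughly follows the ALT/BTC rate, and the fiat value of that
# payout follows the BTC/fiat quote.

def daily_fiat_value(alt_mined_per_day, alt_btc_rate, btc_fiat_quote):
    btc_payout = alt_mined_per_day * alt_btc_rate   # what your hashpower earns in BTC
    return btc_payout, btc_payout * btc_fiat_quote  # and what that is worth in fiat

# ALT/BTC falls while BTC/fiat stays flat -> payout and fiat value both drop
print(daily_fiat_value(10, 0.0010, 8000))   # about 0.01 BTC, about 80 in fiat
print(daily_fiat_value(10, 0.0007, 8000))   # about 0.007 BTC, about 56 in fiat
```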
Some rather obvious remarks:
More help:
Disclaimer: I'm a user - Seller like you - not in any way associated with NiceHash; this is my personal view & conclusion about some more or less obvious basics in Crypto mining and particularly using NiceHash.
Comments & critics welcome...
submitted by t_3 to NiceHash [link] [comments]

Bitcoin Original: Reinstate Satoshi's original 32MB max blocksize. If actual blocks grow 54% per year (and price grows 1.54^2 = 2.37x per year - Metcalfe's Law), then in 8 years we'd have 32MB blocks, 100 txns/sec, 1 BTC = 1 million USD - 100% on-chain P2P cash, without SegWit/Lightning or Unlimited

TL;DR
Details
(1) The current observed rates of increase in available network bandwidth (which went up 70% last year) should easily be able to support actual blocksizes increasing at the modest, slightly lower rate of only 54% per year.
Recent data shows that the "provisioned bandwidth" actually available on the Bitcoin network increased 70% in the past year.
If this 70% yearly increase in available bandwidth continues for the next 8 years, then actual blocksizes could easily increase at the slightly lower rate of 54% per year.
This would mean that in 8 years, actual blocksizes would be quite reasonable at about 1.54^8 ≈ 32MB:
Hacking, Distributed/State of the Bitcoin Network: "In other words, the provisioned bandwidth of a typical full node is now 1.7X of what it was in 2016. The network overall is 70% faster compared to last year."
https://np.reddit.com/btc/comments/5u85im/hacking_distributedstate_of_the_bitcoin_network/
http://hackingdistributed.com/2017/02/15/state-of-the-bitcoin-network/
Reinstating Satoshi's original 32MB "max blocksize" for the next 8 years or so would effectively be similar to the 1MB "max blocksize" which Bitcoin used for the previous 8 years: simply a "ceiling" which doesn't really get in the way, while preventing any "unreasonably" large blocks from being produced.
As we know, for most of the past 8 years, actual blocksizes have always been far below the "max blocksize" of 1MB. This is because miners have always set their own blocksize (below the official "max blocksize") - in order to maximize their profits, while avoiding "orphan" blocks.
This setting of blocksizes on the part of miners would simply continue "as-is" if we reinstated Satoshi's original 32MB "max blocksize" - with actual blocksizes continuing to grow gradually (still far below the 32MB "max blocksize" ceiling), and without introducing any new (risky, untested) "game theory" or economics - avoiding lots of worries and controversies, and bringing the community together around "Bitcoin Original".
So, simply reinstating Satoshi's original 32MB "max blocksize" would have many advantages:
  • It would keep fees low (so users would be happy);
  • It would support much higher prices (so miners would be happy) - as explained in section (2) below;
  • It would avoid the need for any possibly controversial changes such as:
    • SegWit/Lightning (the hack of making all UTXOs "anyone-can-spend" necessitated by Blockstream's insistence on using a selfish and dangerous "soft fork", the centrally planned and questionable, arbitrary discount of 1-versus-4 for certain transactions); and
    • Bitcoin Unlimited (the newly introduced parameters for Excessive Block "EB" / Acceptance Depth "AD").
(2) Bitcoin blocksize growth of 54% per year would correlate (under Metcalfe's Law) to Bitcoin price growth of around 1.54^2 ≈ 2.37x per year - or 2.37^8 ≈ 1000x higher price - ie 1 BTC = 1 million USDollars after 8 years.
The observed, empirical data suggests that Bitcoin does indeed obey "Metcalfe's Law" - which states that the value of a network is roughly proportional to the square of the number of transactions.
In other words, Bitcoin price has corresponded to the square of Bitcoin transactions (which is basically the same thing as the blocksize) for most of the past 8 years.
Historical footnote:
Bitcoin price started to dip slightly below Metcalfe's Law in late 2014 - when the privately held, central-banker-funded off-chain scaling company Blockstream was founded by (now) CEO Adam Back u/adam3us and CTO Greg Maxwell - two people who have historically demonstrated an extremely poor understanding of the economics of Bitcoin, leading to a very polarizing effect on the community.
Since that time, Blockstream launched a massive propaganda campaign, funded by $76 million in fiat from central bankers who would go bankrupt if Bitcoin succeeded, and exploiting censorship on r\bitcoin, attacking the on-chain scaling which Satoshi originally planned for Bitcoin.
Legend states that Einstein once said that the tragedy of humanity is that we don't understand exponential growth.
A lot of people might think that it's crazy to claim that 1 bitcoin could actually be worth 1 million dollars in just 8 years.
But a Bitcoin price of 1 million dollars would actually require "only" a 1000x increase in 8 years. Of course, that still might sound crazy to some people.
But let's break it down by year.
What we want to calculate is the "8th root" of 1000 - or 1000^(1/8). That will give us the desired "annual growth rate" that we need, in order for the price to increase by 1000x after a total of 8 years.
If "you do the math" - which you can easily perform with a calculator or with Excel - you'll see that:
  • 54% annual actual blocksize growth for 8 years would give 1.54^8 = 1.54 * 1.54 * 1.54 * 1.54 * 1.54 * 1.54 * 1.54 * 1.54 ≈ 32MB blocksize after 8 years
  • Metcalfe's Law (where Bitcoin price corresponds to the square of Bitcoin transactions or volume / blocksize) would give 1.54^2 ≈ 2.37 - ie, 54% bigger blocks (higher volume or more transactions) each year could support about a 2.37x higher price each year.
  • 2.37x annual price growth for 8 years would be 2.37^8 = 2.37 * 2.37 * 2.37 * 2.37 * 2.37 * 2.37 * 2.37 * 2.37 ≈ 1000 - giving a price of 1 BTC = 1 million USDollars if the price increases an average of 2.37x per year for 8 years, starting from 1 BTC = 1000 USD now. (The short calculation after this list double-checks these numbers.)
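A quick check of the arithmetic, in a few lines of Python:

```python
# Quick check of the compound-growth arithmetic above.
growth_per_year = 1000 ** (1 / 8)       # the "8th root" of 1000
print(growth_per_year)                  # ~2.37x price growth per year

blocksize_growth = growth_per_year ** 0.5
print(blocksize_growth)                 # ~1.54x blocksize growth per year (Metcalfe: price ~ txs^2)

print(blocksize_growth ** 8)            # ~32   -> 1 MB blocks grow to ~32 MB in 8 years
print(growth_per_year ** 8)             # ~1000 -> $1,000 per BTC grows to ~$1,000,000
```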
So, even though initially it might seem crazy to think that we could get to 1 BTC = 1 million USDollars in 8 years, it's actually not that far-fetched at all - based on:
  • some simple math,
  • the observed available bandwidth (already increasing at 70% per year), and
  • the increasing fragility and failures of many "legacy" debt-backed national fiat currencies and payment systems.
Does Metcalfe's Law hold for Bitcoin?
The past 8 years of data suggest that Metcalfe's Law really does hold for Bitcoin - you can check out some of the graphs here:
https://imgur.com/jLnrOuK
https://i.redd.it/kvjwzcuce3ay.png
https://cdn-images-1.medium.com/max/800/1*22ix0l4oBDJ3agoLzVtUgQ.gif
(3) Satoshi's original 32MB "max blocksize" would provide an ultra-simple, ultra-safe, non-controversial approach which perhaps everyone could agree on: Bitcoin's original promise of "p2p electronic cash", 100% on-chain, eventually worth 1 BTC = 1 million dollars.
This could all be done using only the whitepaper - eg, no need for possibly "controversial" changes like SegWit/Lightning, Bitcoin Unlimited, etc.
As we know, the Bitcoin community has been fighting a lot lately - mainly about various controversial scaling proposals.
Some people are worried about SegWit, because:
  • It's actually not much of a scaling proposal - it would only give 1.7MB blocks, and only if everyone adopts it, and based on some fancy, questionable blocksize or new "block weight" accounting;
  • It would be implemented as an overly complicated and anti-democratic "soft" fork - depriving people of their right to vote via a much simpler and safer "hard" fork, and adding massive and unnecessary "technical debt" to Bitcoin's codebase (for example, dangerously making all UTXOs "anyone-can-spend", making future upgrades much more difficult - but giving long-term "job security" to Core/Blockstream devs);
  • It would require rewriting (and testing!) thousands of lines of code for existing wallets, exchanges and businesses;
  • It would introduce an arbitrary 1-to-4 "discount" favoring some kinds of transactions over others.
And some people are worried about Lightning, because:
  • There is no decentralized (p2p) routing in Lightning, so Lightning would be a terrible step backwards to the "bad old days" of centralized, censorable hubs or "crypto banks";
  • Your funds "locked" in a Lightning channel could be stolen if you don't constantly monitor them;
  • Lightning would steal fees from miners, and make on-chain p2p transactions prohibitively expensive, basically destroying Satoshi's p2p network, and turning it into SWIFT.
And some people are worried about Bitcoin Unlimited, because:
  • Bitcoin Unlimited extends the notion of Nakamoto Consensus to the blocksize itself, introducing the new parameters EB (Excess Blocksize) and AD (Acceptance Depth);
  • Bitcoin Unlimited has a new, smaller dev team.
(Note: Out of all the current scaling proposals available, I support Bitcoin Unlimited - because its extension of Nakamoto Consensus to include the blocksize has been shown to work, and because Bitcoin Unlimited is actually already coded and running on about 25% of the network.)
It is normal for reasonable people to have the above "concerns"!
But what if we could get to 1 BTC = 1 million USDollars - without introducing any controversial new changes or discounts or consensus rules or game theory?
What if we could get to 1 BTC = 1 million USDollars using just the whitepaper itself - by simply reinstating Satoshi's original 32MB "max blocksize"?
(4) We can easily reach "million-dollar bitcoin" by gradually and safely growing blocks to 32MB - Satoshi's original "max blocksize" - without changing anything else in the system!
If we simply reinstate "Bitcoin Original" (Satoshi's original 32MB blocksize), then we could avoid all the above "controversial" changes to Bitcoin - and the following 8-year scenario would be quite realistic:
  • Actual blocksizes growing modestly at 54% per year - well within the 70% increase in available "provisioned bandwidth" which actually happened last year
  • This would give us a reasonable, totally feasible blocksize of 1.54^8 ≈ 32MB ... after 8 years.
  • Bitcoin price growing at 2.37x per year, or a total increase of 2.37^8 ≈ 1000x over the next 8 years - which is similar to what happened during the previous 8 years, when the price went from under 1 USDollar to over 1000 USDollars.
  • This would give us a possible Bitcoin price of 1 BTC = 1 million USDollars after 8 years.
  • There would still be plenty of decentralization (plenty of fully-validating nodes and mining nodes), because:
    • The Cornell study showed that 90% of nodes could already handle 4MB blocks - and that was several years ago (so we could already handle blocks even bigger than 4MB now).
    • 70% yearly increase in available bandwidth, combined with a mere 54% yearly increase in used bandwidth (plus new "block compression" technologies such as XThin and Compact Blocks) mean that nearly all existing nodes could easily handle 32MB blocks after 8 years; and
    • The "economic incentives" to run a node would be strong if the price were steadily rising to 1 BTC = 1 million USDollars
    • This would give a total market cap of 20 trillion USDollars after about 8 years - comparable to the total "money" in the world which some estimates put at around 82 trillion USDollars.
So maybe we should consider the idea of reinstating Satoshi's Original Bitcoin with its 32MB blocksize - using just the whitepaper and avoiding controversial changes - so we could re-unite the community to get to "million-dollar bitcoin" (and 20 trillion dollar market cap) in as little as 8 years.
submitted by ydtm to btc [link] [comments]

The Astounding Incompetence, Negligence, and Dishonesty of the Bitcoin Unlimited Developers

On August 26, 2016 someone noticed that their Classic node had been forked off of the "Big Blocks Testnet" that Bitcoin Classic and Bitcoin Unlimited were running. Neither implementation was testing their consensus code on any other testnets; this was effectively the only testnet being used to test either codebase. The issue was due to a block on the testnet that was mined on July 30, almost a full month prior to anyone noticing the fork at all, which was in violation of the BIP109 specification that Classic miners were purportedly adhering to at the time. Gregory Maxwell observed:
That was a month ago, but it's only being noticed now. I guess this is demonstrating that you are releasing Bitcoin Classic without much testing and that almost no one else is either? :-/
The transaction in question doesn't look at all unusual, other than being large. It was, incidentally, mined by pool.bitcoin.com, which was signaling support for BIP109 in the same block it mined that BIP 109 violating transaction.
Later that day, Maxwell asked Roger Ver to clarify whether he was actually running Bitcoin Classic on the bitcoin.com mining pool, who dodged the question and responded with a vacuous reply that attempted to inexplicably change the subject to "censorship" instead.
Andrew Stone (the lead developer of Bitcoin Unlimited) voiced confusion about BIP109 and how Bitcoin Unlimited violated the specification for it (while falsely signaling support for it). He later argued that Bitcoin Unlimited didn't need to bother adhering to specifications that it signaled support for, and that doing so would violate the philosophy of the implementation. Peter Rizun shared this view. Neither developer was able to answer Maxwell's direct question about the violation of BIP109 §4/5, which had resulted in the consensus divergence (fork).
Despite Maxwell having provided a direct link to the transaction violating BIP109 that caused the chain split, and explaining in detail what the results of this were, later Andrew Stone said:
I haven't even bothered to find out the exact cause. We have had BUIP016 passed to adhere to strict BIP109 compatibility (at least in what we generate) by merging Classic code, but BIP109 is DOA -- so no-one bothered to do it.
I think that the only value to be had from this episode is to realise that consensus rules should be kept to an absolute, money-function-protecting minimum. If this was on mainnet, I'll be the Classic users would be unhappy to be forked onto a minority branch because of some arbitrary limit that is yet another thing would have needed to be fought over as machine performance improves but the limit stays the same.
Incredibly, when a confused user expressed disbelief regarding the fork, Andrew Stone responded:
Really? There was no classic fork? As i said i didnt bother to investigate. Can you give me a link to more info? Its important to combat this fud.
Of course, the proof of the fork (and the BIP109-violating block/transaction) had already been provided to Stone by Maxwell. Andrew Stone was willing to believe that the entire fork was imaginary, in the face of verifiable proof of the incident. He admits that he didn't investigate the subject at all, even though that was the only testnet that Unlimited could have possibly been performing any meaningful tests on at the time, and even though this fork forced Classic to abandon BIP109 entirely, leaving it vulnerable to the types of attacks that Gavin Andresen described in his Guided Tour of the 2mb Fork:
“Accurate sigop/sighash accounting and limits” is important, because without it, increasing the block size limit might be dangerous... It is set to 1.3 gigabytes, which is big enough so none of the blocks currently in the block chain would hit it, but small enough to make it impossible to create poison blocks that take minutes to validate.
As a result of this fork (which Stone was clueless enough to doubt had even happened), Bitcoin Classic and Bitcoin Unlimited were both left vulnerable to such attacks. Fascinatingly, this fact did not seem to bother the developers of Bitcoin Unlimited at all.
On November 17, 2016 Andrew Stone decided to post an article titled A Short Tour of Bitcoin Core wherein he claimed:
Bitcoin Unlimited is building the highest quality, most stable, Bitcoin client available. We have a strong commitment to quality and testing as you will see in the rest of this document.
The irony of this claim should soon become very apparent.
In the rest of the article, Stone wrote with venomous and overtly hostile rhetoric:
As we mine the garbage in the Bitcoin Core code together... I want you to realise that these issues are systemic to Core
He went on to describe what he believed to be multiple bugs that had gone unnoticed by the Core developers, and concluded his article with the following paragraph:
I hope when reading these issues, you will realise that the Bitcoin Unlimited team might actually be the most careful committers and testers, with a very broad and dedicated test infrastructure. And I hope that you will see these Bitcoin Core commits— bugs that are not tricky and esoteric, but simple issues that well known to average software engineers —and commits of “Very Ugly Hack” code that do not reflect the care required for an important financial network. I hope that you will realise that, contrary to statements from Adam Back and others, the Core team does not have unique skills and abilities that qualify them to administer this network.
As soon as the article was published, it was immediately and thoroughly debunked. The "bugs" didn't exist in the current Core codebase; some were results of how Andrew had "mucked with wallet code enough to break" it, and "many of issues were actually caused by changes they made to code they didn't understand", or had been fixed years ago in Core, and thus only affected obsolete clients (ironically including Bitcoin Unlimited itself).
As Gregory Maxwell said:
Perhaps the biggest and most concerning danger here isn't that they don't know what they're doing-- but that they don't know what they don't know... to the point where this is their best attempt at criticism.
Amusingly enough, in the "Let's Lose Some Money" section of the article, Stone disparages an unnamed developer for leaving poor comments in a portion of the code, unwittingly making fun of Satoshi himself in the process.
To summarize: Stone set out to criticize the Core developer team, and in the process revealed that he did not understand the codebase he was working on, had in fact personally introduced the majority of the bugs that he was criticizing, and was actually completely unable to identify any bugs that existed in current versions of Core. Worst of all, even after receiving feedback on his article, he did not appear to comprehend (much less appreciate) any of these facts.
On January 27, 2017, Bitcoin Unlimited excitedly released v1.0 of their software, announcing:
The third official BU client release reflects our opinion that Bitcoin full-node software has reached a milestone of functionality, stability and scalability. Hence, completion of the alpha/beta phase throughout 2009-16 can be marked in our release version.
A mere 2 days later, on January 29, their code accidentally attempted to hard-fork the network. Despite there being a very clear and straightforward comment in Bitcoin Core explaining the space reservation for coinbase transactions in the code, Bitcoin Unlimited obliviously merged a bug into their client which resulted in an invalid block (23 bytes larger than 1MB) being mined by Roger Ver's Bitcoin.com mining pool on January 29, 2017, costing the pool a minimum of 13.2 bitcoins. A large portion of Bitcoin Unlimited nodes and miners (which naively accepted this block as valid) were temporarily banned from the network as a result, as well.
The code change in question revealed that the Bitcoin Unlimited developers were not only "commenting out and replacing code without understanding what it's for" as well as bypassing multiple safety-checks that should have prevented such issues from occurring, but that they were not performing any peer review or testing whatsoever of many of the code changes they were making. This particular bug was pushed directly to the master branch of Bitcoin Unlimited (by Andrew Stone), without any associated pull requests to handle the merge or any reviewers involved to double-check the update. This once again exposed the unprofessionalism and negligence of the development team and process of Bitcoin Unlimited, and in this case, irrefutably had a negative effect in the real world by costing Bitcoin.com thousands of dollars worth of coins.
In effect, this was the first public mainnet fork attempt by Bitcoin Unlimited. Unsurprisingly, the attempt failed, costing the would-be forkers real bitcoins as a result. It is possible that the costs of this bug are much larger than the lost rewards and fees from this block alone, as other Bitcoin Unlimited miners may have been expending hash power in the effort to mine slightly-oversized (invalid) blocks prior to this incident, inadvertently wasting resources in the doomed pursuit of invalid coins.
On March 14, 2017, a remote exploit vulnerability discovered in Bitcoin Unlimited crashed 75% of the BU nodes on the network in a matter of minutes.
In order to downplay the incident, Andrew Stone rapidly published an article which attempted to imply that the remote-exploit bug also affected Core nodes by claiming that:
approximately 5% of the “Satoshi” Bitcoin clients (Core, Unlimited, XT) temporarily dropped off of the network
In reddit comments, he lied even more explicitly, describing it as "a bug whose effects you can see as approximate 5% drop in Core node counts" as well as a "network-wide Bitcoin client failure". He went so far as to claim:
the Bitcoin Unlimited team found the issue, identified it as an attack and fixed the problem before the Core team chose to ignore it
The vulnerability in question was in thinblock.cpp, which has never been part of Bitcoin Core; in other words, this vulnerability only affected Bitcoin Classic and Bitcoin Unlimited nodes.
In the same Medium article, Andrew Stone appears to have doctored images to further deceive readers. In the reddit thread discussing this deception, Andrew Stone denied that he had maliciously edited the images in question, but when questioned in-depth on the subject, he resorted to citing his own doctored images as sources and refused to respond to further requests for clarification or replication steps.
Beyond that, the same incident report (and images) conspicuously omitted the fact that the alleged "5% drop" on the screenshotted (and photoshopped) node-graph was actually due to the node crawler having been rebooted, rather than any problems with Core nodes. This fact was plainly displayed on the 21 website that the graph originated from, but no mention of it was made in Stone's article or report, even after he was made aware of it and asked to revise or retract his deceptive statements.
There were actually 3 (fundamentally identical) Xthin-assert exploits that Unlimited developers unwittingly publicized during this episode, which caused problems for Bitcoin Classic, which was also vulnerable.
On top of all of the above, the vulnerable code in question had gone unnoticed for 10 months, and despite the Unlimited developers (including Andrew Stone) claiming to have (eventually) discovered the bug themselves, it later came out that this was another lie; an external security researcher had actually discovered it and disclosed it privately to them. This researcher provided the following quotes regarding Bitcoin Unlimited:
I am quite beside myself at how a project that aims to power a $20 billion network can make beginner’s mistakes like this.
I am rather dismayed at the poor level of code quality in Bitcoin Unlimited and I suspect there [is] a raft of other issues
The problem is, the bugs are so glaringly obvious that when fixing it, it will be easy to notice for anyone watching their development process,
it doesn’t help if the software project is not discreet about fixing critical issues like this.
In this case, the vulnerabilities are so glaringly obvious, it is clear no one has audited their code because these stick out like a sore thumb
In what appeared to be a desperate attempt to distract from the fundamental ineptitude that this vulnerability exposed, Bitcoin Unlimited supporters (including Andrew Stone himself) attempted to change the focus to a tweet that Peter Todd made about the vulnerability, blaming him for exposing it and prompting attackers to exploit it... but other Unlimited developers revealed that the attacks had actually begun well before Todd had tweeted about the vulnerability. This was pointed out many times, even by Todd himself, but Stone ignored these facts a week later, and shamelessly lied about the timeline in a propagandistic effort at distraction and misdirection.
submitted by sound8bits to Bitcoin [link] [comments]

The core concepts of DTube's new blockchain

Dear Reddit community,
Following our announcement for DTube v0.9, I have received countless questions about the new blockchain part, avalon. First I want to make it clear, that it would have been utterly impossible to build this on STEEM, even with the centralized SCOT/Tribes that weren't available when I started working on this. This will become much clearer as you read through the whole wall of text and understand the novelties.
SteemPeak says this is a 25 minutes read, but if you are truly interested in the concept of a social blockchain, and you believe in its power, I think it will be worth the time!

MOVING FORWARD

I'm a long-time member of STEEM, with tens of thousands of staked STEEM for 2+ years. I understand the instinctive fear from the other members of the community when they see a new crypto project coming out. We've had two recent examples with the VOICE and LIBRA announcements, which were either hated or ignored. When you are invested morally and financially, and you see competitors popping up, it's normal to be afraid.
But we should remember competition is healthy, and learn from what these projects are doing and how it will influence us. Instead, by reacting the way STEEM reacts, we are putting our heads in the sand and failing to adapt. I currently see STEEM like the "North Korea of blockchains", trying to do everything better than other blockchains, while being #80 on coinmarketcap and slowly but surely losing positions over the months.
When DLive left and revealed their own blockchain, it really got me thinking about why they did it. The way they did it was really scummy and flawed, but I concluded that in the end it was a good choice for them to try to develop their activity, while others waited for SMTs. Sadly, when I tried their new product, I was disappointed, they had botched it. It's purely a donation system, no proof of brain... And the ultra-majority of the existing supply is controlled by them, alongside many other 'anti-decentralization' features. It's like they had learnt nothing from their STEEM experience at all...
STEEM was still the only blockchain able to distribute crypto-currency via social interactions (and no, 'donations' are not social interactions, they are monetary transfers; bitcoin can do it too). It is the killer feature we need. Years of negligence or greed from the witnesses/developers about the economic balance of STEEM is what broke this killer feature. Even when proposing economical changes (which are actually getting through finally in HF21), the discussions have always been centered around modifying the existing model (changing the curve, changing the split, etc), instead of developing a new one.
You never change things by fighting the existing reality.
To change something, build a new model that makes the existing model obsolete.
What if I built a new model for proof of brain distribution from the ground up? I first tried playing with STEEM clones, I played with EOS contracts too. Both systems couldn't do the concepts I wanted to integrate for DTube, unless I did a major refactor of tens of thousands of lines of code I had never worked with before. Making a new blockchain felt like a lighter task, and more fun too.
Before even starting, I had a good idea of the concepts I'd love to implement. Most of these bullet points stemmed from observations of what happened here on STEEM in the past, and what I considered weaknesses for d.tube's growth.

NO POWER-UP

The first concept I wanted to implement deep down in the core of how a DPOS chain works is that I didn't want the token to be staked at all (i.e. no 'powering up'). The cons of staking for a decentralized social platform are obvious:
  • complexity for the users with the double token system;
  • difficulty onboarding people, as they need to freeze their money, akin to a pyramid scheme.
The only good thing about staking is how it can fill your bandwidth and your voting power when you power up, so you don't need to wait for them to grow to start transacting. In a fully-liquid system, your account resources start at 0% and new users will need to wait for them to grow before they can start transacting. I don't think that's a big issue.
That meant that witness elections had to be run out of the liquid stake. Could it be done? Was it safe for the network? Can we update the cumulative votes for witnesses without rounding issues? Even when the money flows between accounts freely?
Well I now believe it is entirely possible and safe, under certain conditions. The incentive for top witnesses to keep on running the chain is still present even if the stake is liquid. With a bit of discrete mathematics, it's easy to have a perfectly deterministic algorithm to run a decentralized election based off liquid stake, it's just going to be more dynamic as the funds and the witness votes can move around much faster.
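As a rough illustration of the idea (this is my own simplified sketch, not avalon's actual code), keeping a liquid-stake election deterministic mostly comes down to updating each approved leader's cumulative vote total whenever a voter's balance changes:

```python
# Rough illustration of liquid-stake witness voting (not avalon's actual code).
# Every account's full liquid balance counts toward the witnesses it approves,
# so a transfer just subtracts stake from the sender's approved witnesses and
# adds it to the receiver's, keeping totals deterministic with integer math.

balances = {"alice": 1_000, "bob": 250}
approvals = {"alice": ["w1", "w2"], "bob": ["w2"]}
witness_votes = {"w1": 1_000, "w2": 1_250}

def transfer(sender, receiver, amount):
    for w in approvals.get(sender, []):
        witness_votes[w] -= amount
    for w in approvals.get(receiver, []):
        witness_votes[w] += amount
    balances[sender] -= amount
    balances[receiver] += amount

transfer("alice", "bob", 100)
print(witness_votes)   # {'w1': 900, 'w2': 1250}
```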

NO EARLY USER ADVANTAGE

STEEM has had multiple events that influenced the distribution in a bad way. The most obvious one is the inflation settings. One day it was hella-inflationary, then suddently hard fork 16 it wasn't anymore. Another major one, is the non-linear rewards that ran for a long time, which created a huge early-user advantage that we can still feel today.
I liked linear rewards; they're what gives minnows their best chance while staying sybil-resistant. I just needed Avalon's inflation to be smart, not hyper-inflationary. The key metric to consider for this issue is the number of tokens distributed per user per day. If this metric goes down, then the incentive for staying on the network and playing the game goes down every day; you feel like you're making less and less from your efforts. If this metric goes up, the number of printed tokens goes up, the token is hyper-inflationary, and holding it feels really bad if you aren't actively earning from the inflation by playing the game.
Avalon ensures that the number of printed tokens is proportional to the number of users with active stake. If more users come in, avalon prints more tokens, if users cash-out and stop transacting, the inflation goes down. This ensures that earning 1 DTC will be about as hard today, tomorrow, next month or next year, no matter how many people have registered or left d.tube, and no matter what happens on the markets.
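A minimal sketch of that emission rule, assuming for illustration a target of one token per active staked user per day (the real per-user rate in avalon is a chain parameter, not this number):

```python
# Minimal sketch of user-proportional inflation (the per-user rate is hypothetical).
TOKENS_PER_ACTIVE_USER_PER_DAY = 1.0

def daily_emission(active_staked_users):
    # More active users -> more tokens printed; fewer users -> less inflation.
    return TOKENS_PER_ACTIVE_USER_PER_DAY * active_staked_users

print(daily_emission(6_000))    # 6000.0 tokens/day
print(daily_emission(60_000))   # 60000.0 tokens/day, per-user earning difficulty unchanged
```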

NO LIMIT TO MY VOTING POWER

Another big issue that most steemians don't really know about, but that is really detrimental to STEEM, is how the voting power mana bar works. I guess having to manage a 2M SP delegation for @dtube really convinced me of this one.
When your mana bar is full at 100%, you lose out on the potential power generation and rewards coming from it. And it only takes 5 days to go from 0% to 100%. A lot of people have very valid reasons to be offline for 5+ days; they shouldn't be punished so hard. This is why almost all big stake holders make sure to always spend some of their voting power on a daily basis. And this is why minnows or smaller holders miss out on tons of curation rewards, unless they delegate to a bidbot or join some curation guild... meh. I guess a lot of people would rather just cash out than go to the trouble of optimizing their stake.
So why is it even a mana bar? Why can't it grow forever? Well, everything in a computer has to have a limit, but why is this limit proportional to my stake? While I totally understand the purpose of making the bandwidth limited and forcing big stake holders to waste it, I think it's totally unneeded and ill-suited for the voting power. As long as the growth of the VP is proportional to the stake, the system stays sybil-resistant, and there could technically be no limit at all if it weren't for the fact that this is run in a computer where numbers have a limited number of bits.
On Avalon, I made it so that your voting power grows virtually indefinitely, or at least I don't think anyone will ever reach the current limit of Number.MAX_SAFE_INTEGER: 9007199254740991 or about 9 Peta VP. If you go inactive for 6 months on an account with some DTCs, when you come back you will have 6 months worth of power generation to spend, turning you into a whale, at least for a few votes.
Another awkward limit on STEEM is how a 100% vote spends only 2% of your power. Not only does STEEM force you to be active on a daily basis, you also need to cast a minimum of 10 votes a day to optimize your earnings. On Avalon, you can use 100% of your stored voting power in a single mega-vote if you wish; it's up to you.
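Here is a sketch of what stake-proportional, effectively unbounded voting power regeneration looks like; the regeneration rate per token per hour is an invented constant, and only the integer ceiling comes from the paragraph above:

```python
# Sketch of stake-proportional voting power with no practical ceiling
# (the regeneration rate is a made-up constant, not avalon's).
MAX_SAFE_INTEGER = 2**53 - 1          # 9007199254740991, JS Number.MAX_SAFE_INTEGER
VP_PER_TOKEN_PER_HOUR = 1

def regenerate_vp(current_vp, stake, hours_elapsed):
    vp = current_vp + stake * VP_PER_TOKEN_PER_HOUR * hours_elapsed
    return min(vp, MAX_SAFE_INTEGER)  # only the integer limit caps it

vp = regenerate_vp(0, stake=500, hours_elapsed=24 * 30 * 6)   # 6 months offline
print(vp)   # 2,160,000 VP banked -- spendable in one mega-vote or many small ones
```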

A NEW PROOF-OF-BRAIN

No Author rewards

People should vote with the intent of getting a reward from it. If 75% of the value forcibly goes to the author, it's hard to expect a good return from curation. Steem is currently basically a complex donation platform. No one wants to donate when they vote, no matter what they will say, and no matter how much vote-trading, self-voting or bid-botting happens.
So in order to keep a system where money is printed when votes happen, if we cannot use the username of the author to distribute rewards, the only possibility left is to use the list of previous voters, aka "curation rewards": the interesting 25% of STEEM, which has been totally overshadowed by the author rewards for too long.

Downvote rewards

STEEM has always suffered from the issue that the downvote button is unused, or when it's used, it's mostly for evil. This comes from the fact that in STEEM's model, downvotes are not eligible for any rewards. Even if they were, your downvote would be lowering the final payout of the content, and your own curation rewards...
I wanted Avalon's downvotes to be completely symmetric to the upvotes. That means if we revert all the votes (upvotes become downvotes and vice versa), the content should still distribute the same amount of tokens to the same people, at the same time.

No payment windows

Steem has a system of payment windows. When you publish content, it opens a payment window where people can freely upvote or downvote to influence the payout happening 7 days later. This is convenient when you want a system where downvotes lower rewards. Waiting 7 days to collect rewards is also another friction point for new users; some of them might never come back 7 days later to convince themselves that 'it works'. On avalon, when you are part of the winners of curation after a vote, you earn it instantly in your account, 100% liquid and transferable.

Unlimited monetization in time

Indeed, the 7-day monetization limit has been our biggest issue for our video platform since day 8. This incentivized our users to create more frequent but lower-quality content, as they know that they aren't going to earn anything over the 'long haul'. Monetization had to be unlimited on DTube, so that even a 2-year-old video could be dug up and generate rewards in the far future.
Infinite monetization is possible, but as removing tokens from a balance is impossible, downvotes cannot remove money from the payout like they do on STEEM. Instead, downvotes print money the same way upvotes do; they still lower the popularity in the hot and trending rankings and should only reward other people who downvoted the same content earlier.

New curation rewards algorithm

STEEM's curation algorithm isn't stupid, but I believe it lacks some elegance. The 15-minute 'band-aid' they added to prevent curation bots (bots that auto-vote as fast as possible on content from popular authors) proves it. The way it distributes the reward also feels very flat and boring. The rewards for my votes are very predictable, especially if I'm the biggest voter / stake holder for the content. My own vote is paying for my own curation rewards; how stupid is that? If no one else votes after my big vote despite a popularity boost, it probably means I deserve 0 rewards, no?
I had to try different attempts to find an algorithm yielding interesting results, with infinite monetization, and without obvious ways to exploit it. The final distribution algorithm is more complex than STEEM's curation, but it's still pretty simple. When a vote is cast, we calculate the 'popularity' at the time of the vote. The first vote is given a popularity of 0; the next votes are defined by (total_vp_upvotes - total_vp_downvotes) / time_since_1st_vote. Then we look into the list of previous votes and remove all votes in the opposite direction (up/down). Then we remove all the votes with a higher popularity if it's an upvote, or the ones with a lower popularity if it's a downvote. The remaining votes in the list are the 'winners'. Finally, akin to STEEM, the amount of tokens generated by the vote is split between the winners proportionally to the voting power spent by each (linear rewards - no advantage for whales) and distributed instantly. Instead of purely using the order of the votes, Avalon's distribution is based on when the votes are cast, and each second that passes reduces the popularity of a content, potentially increasing the long-term ROI of the next vote cast on it.
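Here is a condensed sketch of that winner-selection logic as described above (my simplified reading, not the actual avalon source; the reward pool scaling is a placeholder):

```python
# Condensed sketch of the curation winner selection described above
# (simplified reading of the rules, not the actual avalon source).

def popularity(up_vp, down_vp, seconds_since_first_vote):
    if seconds_since_first_vote == 0:
        return 0                      # the first vote is always popularity 0
    return (up_vp - down_vp) / seconds_since_first_vote

def curation_winners(previous_votes, new_vote):
    """previous_votes / new_vote: dicts with 'voter', 'direction' (+1/-1), 'vp', 'popularity'."""
    same_direction = [v for v in previous_votes
                      if v["direction"] == new_vote["direction"]]
    if new_vote["direction"] > 0:     # an upvote rewards earlier, *less* popular upvotes
        winners = [v for v in same_direction if v["popularity"] < new_vote["popularity"]]
    else:                             # a downvote rewards earlier, *more* popular downvotes
        winners = [v for v in same_direction if v["popularity"] > new_vote["popularity"]]
    total_vp = sum(v["vp"] for v in winners)
    reward_pool = new_vote["vp"]      # tokens printed by this vote (placeholder scale)
    return {v["voter"]: reward_pool * v["vp"] / total_vp for v in winners} if winners else {}

votes = [{"voter": "author", "direction": +1, "vp": 100, "popularity": 0},
         {"voter": "early",  "direction": +1, "vp": 50,  "popularity": 2.0}]
print(curation_winners(votes, {"voter": "late", "direction": +1, "vp": 200, "popularity": 5.0}))
```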
Graph: the popularity score that influences the DTC monetary distribution can be charted directly in the d.tube UI.
This algorithm ensures there are always losers. The last upvoter never earns anything, also the person who upvoted at the highest popularity, and the one who downvoted at the lowest popularity would never receive any rewards for their vote. Just like the last upvoter and last downvoter wouldn't either. All the other ones in the middle may or may not receive anything, depending on how the voting and popularity evolved in time. The one with an obvious advantage, is the first voter who is always counted as 0 popularity. As long as the content stays at a positive popularity, every upvote will earn him rewards. Similarly, being the first downvoter on an overly-popular content could easily earn you 100% rewards on the next downvote that could be from a whale, earning you a fat bonus.
While Avalon doesn't technically have author rewards, the first-voter advantage is strong, and the author has the advantage of always being the first voter, so the author can still earn from his potentially original creations, he just needs to commit some voting power on his own contents to be able to publish.

ONE CHAIN <==> ONE APP

More scalable than shared blockchains

Another issue with generalistic blockchains like ETH/STEEM/EOS/TRX, which are currently hosting dozens of semi-popular web/mobile apps, is the reduced scalability of such shared models. Again, everything in a computer has a limit. For DPOS blockchains, 99%+ of the CPU load of a producing node will be to verify the signatures of the many transactions coming in every 3 seconds. And sadly this fact will not change with time. Even if we had a huge breakthrough on CPU speeds today, we would need to update the cryptographic standards for blockchains to keep them secure. This means it would NOT become easier to scale up the number of verifiable transactions per seconds.
Oh, but we are not there yet you're thinking? Or maybe you think that we'll all be rich if we reach the scalability limits so it doesn't really matter? WRONG
The limit is the number of signature verifications the most expensive CPU on the planet can do. Most blockchains use the secp256k1 curve, including Bitcoin, Ethereum, Steem and now Avalon. It was originally chosen for Bitcoin by Satoshi Nakamoto probably because it's decently quick at verifying signatures, and seems to be backdoor-proof (or else someone is playing a very patient game). Maybe some other curves exist with faster signature verification speed, but it won't be improved many-fold, and will likely require much research, auditing, and time to get adopted considering the security implications.
In 2015 Graphene was created, and Bitshares was completely rewritten. This was able to achieve 100,000 transaction per second on a single machine, and decentralized global stress testing achieved 18,000 transactions per second on a distributed network.
So BitShares/STEEM and other DPOS graphene chains in production can validate at most 18000 txs/sec, so about 1.5 billion transactions per day. EOS, Tendermint, Avalon, LIBRA or any other DPOS blockchain can achieve similar speeds, because there's no planet-killing proof-of-work, and thanks to the leader-based/democratic system that reduces the number of nodes taking part in the consensus.
As a comparison, there are about 4 billion likes per day on instagram, so you can probably double that with the actual uploads, stories and comments, password changes, etc. The load is also likely unstable through the day, probably some hours will go twice as fast as the average. You wouldn't be able to fit Instagram in a blockchain, ever, even with the most scalable blockchain tech on the world's best hardware. You'd need like a dozen of those chains. And instagram is still a growing platform, not as big as Facebook, or YouTube.
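The back-of-the-envelope arithmetic behind that claim, using the rough figures quoted above:

```python
# Back-of-the-envelope capacity check (the Instagram numbers are rough estimates).
tx_per_sec = 18_000                              # stress-tested DPOS throughput
chain_tx_per_day = tx_per_sec * 86_400
print(chain_tx_per_day)                          # ~1.56 billion tx/day

instagram_likes_per_day = 4_000_000_000
instagram_actions_per_day = instagram_likes_per_day * 2   # likes + uploads, comments, etc.
print(instagram_actions_per_day / chain_tx_per_day)       # ~5x a single chain's capacity
```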
So, splitting this limit between many popular apps? Madness! Maybe it's still working right now, but when many different apps reach millions of daily active users plus bots, it won't fit anymore.
Serious projects with a big user base will need to rethink the shared blockchain models like Ethereum, EOS, TRX, etc because the fees in gas or necessary stake required to transact will skyrocket, and the victims will be the hordes of minnows at the bottom of the distribution spectrum.
If we can't run a full instagram on a DPOS blockchain, there is absolutely no point trying to run medium+reddit+insta+fb+yt+wechat+vk+tinder on one. Being able to run half an instagram is already pretty good and probably enough to actually onboard a fair share of the planet. But if we multiply the load by the number of different app concepts available, then it's never gonna scale.
DTube chain is meant for the DTube UI only. Please do not build something unrelated to video connecting to our chain, we would actively do what we can to prevent you from growing. We want this chain to be for video contents only, and the JSON format of the contents should always follow the one used by d.tube.
If you are interested in avalon tech but your project isn't about video, it's strongly suggested to fork the blockchain code and run your own avalon chain with a different origin id, instead of trying to connect your project to dtube's mainnet. If you still want to do it, chain leaders would be forced to actively combat your project, as we would consider it useless noise inside our dedicated blockchain.

Focused governance

Another issue of sharing a blockchain, is the issues coming up with the governance of it. Tons of features enabled by avalon would be controversial to develop on STEEM, because they'd only benefit DTube, and maybe even hurt/break some other projects. At best they'd be put at the bottom of a todo list somewhere. Having a blockchain dedicated to a single project enables it to quickly push updates that are focused on a single product, not dozens of totally different projects.
Many blockchain projects are trying to make decentralized governance true, but this is absolutely not what I am interested in for DTube. Instead, in avalon the 'init' account, or 'master' account, has very strong permissions. In the DTC case, @dtube:
  • will earn 10% fees from all the inflation
  • will not have to burn DTCs to create accounts
  • will be able to do certain types of transactions when others can't:
    • account creation (during the steem exclusivity period)
    • transfers (during the IEO period)
    • transferring voting power and bandwidth resources (used for easier onboarding)
For example, for our IEO we will setup a mainnet where only @dtube is allowed to transfer funds or vote until the IEO completes and the airdrop happens. This is also what enabled us to create a 'steem-only' registration period on the public testnet for the first month. Only @dtube can create accounts, this way we can enforce a 1 month period where users can port their username for free, without imposters having a chance to steal usernames. Through the hard-forking mechanism, we can enable/disable these limitations and easily evolve the rules and permissions of the blockchain, for example opening monetary transfers at the end of our IEO, or opening account creation once the steem exclusivity ends.
Luckily, avalon is decentralized, and all these parameters (like the @dtube fees, and @dtube permissions) are easily hardforkable by the leaders. @dtube will however be a very strong leader in the chain, as we plan to use our vote to at least keep the #1 producing node for as long as we can.
We reserve the right to 'not follow' a hardfork. For example, it's obvious we wouldn't follow something like reducing our fees to 0%, as it would financially endanger the project; we would rather just continue our official fork on our own and plug the d.tube domain and mobile app into it.
On the other end of the spectrum, if other leaders think @dtube is being tyrannical one way or another, they will always have the option of declining the new hardforks and putting the system on hold; then @dtube will have an issue and will need to compromise or betray the trust of 1/3 of the stake holders, which could prove costly.
The goal is to have harmonious, enterprise-level decision making among the top leaders. We expect these leaders to be financially and emotionally connected with the project and to act for good. @dtube is expected to be the main good actor for the chain, and any permission given to it should be granted with the goal of increasing the DTC marketcap, and nothing else. Leaders and @dtube should be able to keep cooperation high enough to keep the hard-forks focused on the actual issues, and flowing faster than other blockchain projects striving for totally decentralized governance, a goal they are unlikely to ever achieve.

PERFECT IMBALANCE

A lot of hard-forking

Avalon is easily hard-forkable, and will get hard-forked often, on purpose. No replays will be needed for leaders/exchanges during these hard-forks, just pull the new hardfork code, and restart the node before the hard-fork planned time to stay on the main fork. Why is this so crucial? It's something about game theory.
I have no formal proof for this, but I assume a social and financial game akin to the one played on steem since 2016 is impossible to perfectly balance, even with a thorough dichotomic process. It's probably because of some psychological reason, or maybe just the fact that humans are naturally greedy. Or maybe it's just because of the sheer number of players. They can gang up together, try to counter each other, and find all sorts of creative ideas to earn more and exploit each other. In the end, the slightest change in the rules can cause drastic gameplay changes. It's a real problem; luckily it's been faced by other people in the past.
Similarly to what popular and successful massively multiplayer games have achieved, I plan to patch or suggest hard-forks for avalon's mainnet on a bi-monthly basis. The goal of this perfect imbalance concept is to force players to re-discover their best strategy often. By introducing regular, small, and semi-controlled changes into this chaos, we can fake balance. This will require players to be more adaptive and aware of the changes. This prevents the game from becoming stale and boring for players, while staying fair.

Death to bots

Automators, on the other side, will need to re-think their bots and go through the development and testing phase again on every new hard-fork. It will be an unfair cat-and-mouse game. Making small and semi-random changes in frequent hard-forks will be an easy task for the dtube leaders, compared to the workload generated to maintain the bots. In the end, I hope their return on investment will be much lower than that of the bid-bots, to the point where there will be no automation.
Imagine how different things would have been if SteemIt Inc acted strongly against bid-bots or other forms of automation when they started appearing? Imagine if hard-forks were frequent and they promised to fight bid-bots and their ilk? Who would be crazy enough to make a bid-bot apart from @berniesanders then?
I don't want you to earn DTCs unless you are human. The way you are going to prove you are human, is not by sending a selfie of you with your passport to a 3rd party private company located on the other side of the world. You will just need to adapt to the new rules published every two weeks, and your human brain will do it subconsciously by just playing the voting game and seeing the rewards coming.
All these concepts are aimed at directly improving d.tube, making it more resilient, and scale both technologically and economically. Having control over the full tech stack required to power our dapp will prevent issues like the one we had with the search engine, where we relied too heavily on a 3rd party tool, and that created a 6-months long bug that basically broke 1/3 of the UI.
While d.tube's UI can now run totally independently from any other entity, we kept everything we could working with STEEM, and the user is now able to transparently publish/vote/comment on videos on 2 different chains with one click. This way we can keep on leveraging the good generalistic features of STEEM that our new chain doesn't focus on, such as the dollar-pegged token, the author rewards/donation mechanism, the tribes/communities tokens, and simply the extra exposure d.tube users can get from other websites (steemit.com, busy.org, partiko, steempeak, etc), which is larger than the number of people using d.tube directly.
The public testnet has been running pretty well for 3 weeks now, with 6000+ accounts registered and already a dozen independent nodes popping up and running as leaders. The majority of the videos are cross-posted on both chains, and the daily video volume has slightly increased since the update, despite the added friction of the new 'double login' system and several UI bugs.
If you've read this article, I'm hoping to get some reactions from you in the comments section!
Some even more focused articles about avalon are going to pop up on my blog in the following weeks, such as how to get a node running and how to run for leader/witness, so feel free to follow me to get more news and help me reach 10K followers ;)
submitted by nannal to dtube

CELT update

https://steemit.com/coss/@spielley/celt-coss-exchange-liquidity-token

CELT - COSS Exchange Liquidity Token

What is CELT, and why was it created?

CELT is an ERC20 token that can be bought and sold at its contract. It was created to fund a bot that operates on the COSS exchange. The bot acts as a market maker: it measures how wide the gap is between the buy and sell orders on a pair's order book and narrows it, depending on the reserves it has, by placing a buy limit order and a sell limit order inside that gap. When both positions get filled, the bot realises a small profit. If you want to trade, you need a counterparty; without one, you're forced to trade at whatever price someone else happens to be asking. The CELTbot tries to offer users better prices to trade at, instead of leaving them to take a large loss due to the lack of standing orders on the order book.
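Purely as an illustration (the actual CELTbot code isn't public), a minimal spread-capture market maker could look something like the sketch below. The `get_order_book` and `place_limit_order` calls stand in for a hypothetical exchange client, not the real COSS API.

```python
# Minimal sketch of a spread-capture market maker, assuming a hypothetical
# exchange client; the real CELTbot's logic and API are not public.

def quote_inside_spread(client, pair, order_size, min_spread_pct=0.5):
    """Place one buy and one sell limit order inside the current spread."""
    book = client.get_order_book(pair)            # hypothetical call: best bid/ask
    best_bid = book["bids"][0]["price"]
    best_ask = book["asks"][0]["price"]

    spread_pct = (best_ask - best_bid) / best_bid * 100
    if spread_pct < min_spread_pct:
        return None  # spread too tight to quote profitably after fees

    # Quote a quarter of the way inside the gap on each side.
    step = (best_ask - best_bid) * 0.25
    buy_price = best_bid + step
    sell_price = best_ask - step

    client.place_limit_order(pair, "buy", price=buy_price, size=order_size)
    client.place_limit_order(pair, "sell", price=sell_price, size=order_size)

    # If both orders fill, the bot captures roughly
    # (sell_price - buy_price) * order_size, minus exchange fees.
    return buy_price, sell_price
```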

Expanding functionality with arbitrage

Because of the recent partnership with thaodehx, who is running arbitrage between COSS and Binance, I asked the CELT holders on Reddit whether I should add my own Huobi-COSS arbitrage bot to the mix using CELT funds. After getting their opinions, I finally booted it up yesterday with 0.5 ETH of my own and 30 OMG from the CELT wallet. Huobi's trade history shows 130+ trades since startup. Everybody wins with increased volume and profitable trades. The bot is active on all pairs COSS and Huobi have in common.

Wallet performance

Last week @aume27 created an improved spreadsheet for me to keep track of the wallet's funds and performance. Now that it's easier and less time-consuming to do so, I've started keeping proper track of the wallet's performance in ether equivalent. I've added the Huobi wallet's bitcoin equivalent to today's calculation and will keep doing it that way; it isn't in the previous figures because it was only set up yesterday.

Wallet holdings buildup:

![](https://cdn.steemitimages.com/DQma5SvnbNDuHD5fW1Dw6MYaf4YbSrwMwMDre1wr5y6Z91g/image.png) ![](https://cdn.steemitimages.com/DQmVGQwUVSMqDUfL9715cqu9UorxihLS3YJzVZfdvioVBvY/image.png)

Wallet ether-equivalent evolution:

![](https://cdn.steemitimages.com/DQmQ6oLC1Q5bCGbW7U1kc5wcjLq8sbMzjrFTixFjKZjj9cx/image.png) Keep in mind that the first two columns establish a new baseline with the new reporting system. Don't be alarmed that we are down in ether-equivalent value: we have been accumulating crypto during the downtrend, and we'll be back in profit when the limit sell positions at the top get filled again. We accumulated all the way down and are now back in an uptrend. CELTbot works best in ranging markets, where people simply trade against the bot and it gets more turnover. You can see that the ether equivalent has been trending up over the past week. Today's report is a bit off because of the Huobi wallet's conversion to BTC; I expect it to flatten out if I keep reporting its contents the same way each day.

The overview graph:

![](https://cdn.steemitimages.com/DQmZFJsiEnr8HK9LryPUm6mvRpA2KC8NDP5HxWCQU2PDkhY/image.png)

Buy and sell CELT at its contract.

The easiest way to buy and sell CELT, if you have MetaMask, is through this site: https://celt.dvx.me/ Otherwise, refer to the CELT launch post to buy with MyEtherWallet: https://steemit.com/celt/@spielley/the-cossening-celt-launch-and-how-to-get-and-sell-them

Use of new funds:

If you are buying CELT, let me know where you want your funds to be used:
- Increasing the order books on COSS and improving the spread of pairs
- Increasing the arbitrage part
- Only increasing the ETH/BTC order book, since that pair is the heart of the COSS exchange
- Letting me decide the best use of the new funds

Example of the Bots new arbitrage trading:

![](https://cdn.steemitimages.com/DQmW92RVzBEV5q5boUjnLwndjzfK8V77HtBwydN75yL2kfZ/image.png) The first trade happens on COSS and the second on Huobi. That's a 0.3146% difference in price. Deducting Huobi's 0.2% fee and COSS's 0.04% fee leaves roughly a 0.0746% gain on a 0.1 LTC arbitrage trade on the LTC/BTC pair.
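A quick back-of-the-envelope check of that arithmetic, using only the figures quoted above:

```python
# Worked example of the arbitrage margin quoted above.
# Fee rates and the gross price difference are the figures from the post.

gross_diff_pct = 0.3146   # price difference between COSS and Huobi, in percent
huobi_fee_pct = 0.20      # Huobi fee
coss_fee_pct = 0.04       # COSS fee

net_pct = gross_diff_pct - huobi_fee_pct - coss_fee_pct
print(f"Net margin: {net_pct:.4f}%")    # -> Net margin: 0.0746%

# On a 0.1 LTC trade, the profit in LTC terms is roughly:
trade_size_ltc = 0.1
profit_ltc = trade_size_ltc * net_pct / 100
print(f"Profit: {profit_ltc:.8f} LTC")  # -> about 0.00007460 LTC
```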

Expanding arbitrage to other exchanges

If people are willing to fund this, and if it is within my botting power to do so, I will add other exchanges to our dear CELTbot. First on my list would be KuCoin, as I imagine it has a lot of pairs in common with COSS; I would need to check to be sure how many. Since thaodehx already has Binance set up, I'll leave that one to him, as the last I heard he was planning to move forward with the funding.

COSSbot group

Some people keep thinking that CELTbot is the same as the COSSbot from the community COSSbot group. I did join their coding effort and provided an automated trading strategy, which is now available for alpha access. More info here: https://medium.com/@jimmydeal/cossbot-alpha-testing-commenced-85af5824f50b

accumulationbot.com

I am also working on a freemium COSSbot spinoff at https://accumulationbot.com/ When COSS releases its API, everybody will have access to an accumulation bot and will be able to auto-trade their COSS account and accumulate their favourite cryptos. I would still encourage everyone to join the alpha COSSbot group, as people in there will get a free subscription to accumulationbot.com for a period of time. The site is still under construction, and I can't give an ETA for the actual launch.

decentralgear.com

The site owner is a COSS supporter and a backer of the COSS coding effort; he is offering a 10% discount if you use the code SP10 at his store. So if you're looking for crypto-related merch, make sure to stop by and check out https://www.decentralgear.com/

Roadmap

I hope you guys enjoyed this week's monster Steemit update. Help me out by upvoting and spreading the word.
submitted by Spielley to CossIO

What Bitcoin’s Valuation Says About Its Volatility

Article by Coindesk: Noelle Acheson
Most of us think we understand the term “volatility.”
We digest headlines about tense political situations around the world; we are wary of explosive chemical compounds; some of us have had relationships with their fair share of ups and downs.
“Volatility” implies sharp and unpredictable changes, and usually has negative connotations. Even when it comes to financial markets, we intuitively shy away from investments that would produce wild swings in our wealth.
But volatility, in finance, is usually misunderstood. Even the most commonly accepted calculation is often incorrectly applied.
Its desirability is also confusing. Investors hate it unless it makes them money. Traders love it unless it means too high a risk premium.
And few of us understand where it comes from. Many think that it’s the result of low liquidity*. This intuitively makes sense: with thin trading volume, a large order can push prices sharply up or down. But empirical studies show that it’s actually the other way around: volatility leads to low liquidity, through the wider spread market makers apply to compensate the additional risk of holding a volatile asset in their inventory.
(*The misconception also stems from our mistaken conflation of low liquidity and low volume — it is possible to have high volume and low liquidity, but that’s for another post.)
This confusion matters in the crypto sector.
Bitcoin’s volatility has often been cited as the reason why it will never make a good store of value, a reliable payment token or a solid portfolio hedge. Many of us fall into the trap of assuming that as the market matures, volatility will decrease. This leads us to believe in use cases that may not ever be appropriate; it can also lead us to apply incorrect crypto asset valuation methods, portfolio weightings and derivative strategies that could have a material impact on our bottom line.
So it’s worth picking apart some of the assumptions and looking at why bitcoin’s unique characteristics can help us better understand market fundamentals more broadly.

Changing uncertainty

First, there are different types of market volatility. Academic literature provides an array of variations, each with its distinct formula and limitations. Jump-diffusion models used to value assets hint at a helpful differentiation. “Jump” volatility results, as its name implies, from a sudden event. “Diffuse” volatility, however, is part of the standard trading patterns of an asset, its “usual” variation.
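To make the distinction concrete, here is a generic Merton-style jump-diffusion simulation (not taken from the article, with purely illustrative parameters): the Brownian `sigma` term produces the "diffuse" component, while the Poisson jump term produces the "jump" component.

```python
import numpy as np

# Generic Merton-style jump-diffusion price path (illustrative parameters only).
# The Brownian term (sigma) drives "diffuse" volatility; the Poisson jump term
# (lam, jump_sigma) drives "jump" volatility.

def simulate_jump_diffusion(s0=10_000, mu=0.0, sigma=0.6, lam=5.0,
                            jump_mu=0.0, jump_sigma=0.15, days=365, seed=0):
    rng = np.random.default_rng(seed)
    dt = 1 / 365
    prices = [s0]
    for _ in range(days):
        diffuse = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.normal()
        n_jumps = rng.poisson(lam * dt)
        jump = rng.normal(jump_mu, jump_sigma, n_jumps).sum() if n_jumps else 0.0
        prices.append(prices[-1] * np.exp(diffuse + jump))
    return np.array(prices)

path = simulate_jump_diffusion()
print(path[:5])
```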
With this we can start to see that, when we assume that greater liquidity will dampen price swings, we’re talking about “jump” volatility.
“Diffuse” volatility, however, is a more intrinsic concept.
The standard deviation calculation — the most commonly applied measure of volatility — incorporates the destabilizing effect of sharp moves by using the square of large deviations (otherwise they could be offset and masked by small ones). But this exaggerates the effect of outliers, which are often the result of “jump” volatility. These are likely to diminish as transaction volume grows, leading to a misleadingly downward-sloping volatility graph.
JP Koning proposes an alternative calculation that uses the deviation from the middle value rather than the average, which reduces the effect of outliers and shows a more intrinsic volatility measure. As the below chart shows, this has not noticeably decreased over the years.
(chart from Moneyness blog)
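A rough sketch of the two measures applied to daily returns; the median-based version is one reading of the "deviation from the middle value" idea, and Koning's exact methodology may differ:

```python
import numpy as np

# Two volatility measures over a window of daily returns.
# The median-based one is an interpretation of the "deviation from the middle
# value" idea described above, not a reproduction of Koning's calculation.

def stdev_volatility(returns):
    """Classic standard deviation: squaring large deviations lets outliers dominate."""
    returns = np.asarray(returns)
    return np.sqrt(np.mean((returns - returns.mean()) ** 2))

def median_abs_deviation_volatility(returns):
    """Median absolute deviation from the median: far less sensitive to outliers."""
    returns = np.asarray(returns)
    return np.median(np.abs(returns - np.median(returns)))

# A calm series with one "jump" day:
daily_returns = [0.01, -0.005, 0.002, 0.008, -0.003, 0.25]
print(stdev_volatility(daily_returns))                 # dominated by the 25% jump
print(median_abs_deviation_volatility(daily_returns))  # barely affected by it
```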
Now let’s look at why this might be. A clue lies in the methods used to value bitcoin.

Fundamental value

Bitcoin is one of the few “real assets” traded in markets today, in that it does not derive its value from another asset.
What’s more, it is a “real asset” with no discernible income stream. This makes it very difficult to value. Even junior analysts can calculate the “fair value” of an asset that spins off cash flows or that returns a certain amount at the end of its life. Bitcoin has no cash flows, and there is no “end of life,” let alone an identifiable value.
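For contrast, this is the kind of textbook discounted-cash-flow calculation the paragraph alludes to (illustrative numbers only); bitcoin simply has no cash flows to feed into it:

```python
# Textbook discounted-cash-flow "fair value" (illustrative numbers only).
# Bitcoin offers no cash flows to plug into a calculation like this.

def present_value(cash_flows, discount_rate):
    """Sum of each future cash flow discounted back to today."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# An asset paying 100 a year for five years, then returning 1,000 at maturity:
flows = [100, 100, 100, 100, 1100]
print(round(present_value(flows, discount_rate=0.05), 2))  # ~1216.47
```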
So, what drives the value of bitcoin?
Many theories have been put forward, some of which we describe in our report “Crypto’s New Fundamentals.” And as the market evolves, some may rise in favor while others get forgotten or superseded.
For now, though, the main driver of bitcoin’s value is sentiment: it’s worth what the market thinks it’s worth. In the absence of fundamentals, investors try to figure out what other investors are going to think. Keynes likened this to a contest in which “we devote our intelligences to anticipating what average opinion expects the average opinion to be.”
Gold is in a similar situation, in that it is also a “real asset” with no income stream and a market value largely driven by sentiment.
So, why is its volatility so much lower?

(chart from Woobull)
Because of “radical uncertainty.”

Changing narratives

In his book “The End of Alchemy”, Mervyn King explains that under “radical uncertainty,” market prices are determined, not by fundamentals, but by narratives about fundamentals.
Bitcoin is a new technology, and as such, we don’t yet know what its end use will be. Everyone has their theory, but as with all new technologies, no-one can be certain, which makes its narrative changeable.
Gold, on the other hand, is neither new nor a technology. It has been around for millennia, and its narrative is not uncertain. Sentiment plays an important part in its valuation, and scientists may yet uncover an innovative use for the metal that affects both demand and price. But its “story” is well established, which gives it a lower volatility profile.
For now, bitcoin’s fundamentals are its narrative, and the uncertainty about bitcoin’s “story” means that its volatility is unlikely to diminish any time soon.

A more prominent role

This matters for its eventual use case: will it always be too volatile to be used as a payment token, store of value, etc.? This in turn impacts its narrative, which affects its valuation and volatility, which affects its eventual use case. The self-perpetuating loop will eventually be broken as the sector matures and bitcoin’s role as an alternative asset class becomes more firmly consolidated — when uncertainty diminishes and its “intrinsic value” becomes easier to quantify.
But until then, its price will continue to be driven by market sentiment, which is susceptible to changeable narratives that in turn are formed by global developments and also by market sentiment.
Until then, market shifts will continue to be amplified in either direction, whatever the trading volume.
Rather than fret about this, we should accept and even embrace it. Increasingly sophisticated providers are working on improving access to, and interpretation of, sentiment data, which strengthens our analytical tools. Crypto Twitter provides an engrossing platform for gauging the sector's mood. And identifying the impact of narrative and sentiment on an asset class will open up new avenues of investigation that are likely to spill over into other areas of investing.
What’s more, volatility may be inconvenient for some and uncomfortable for many. But it is also an important component of superior returns. Perhaps the tools and skills we develop to hone our bitcoin valuation techniques will enable a more masterful handling of volatility’s inherent uncertainty, and allow for a deeper appreciation of what it has to offer.
Roller coaster image via Shutterstock
submitted by GTE_IO to u/GTE_IO
