Bitcoin now outguns all of Earth's supercomputers

Today, we start to fight back. [We want bitcoin miners converted to scientific distributed computing projects.] At 111 petaflops, the world's largest supercomputer is currently controlled by the bitcoiners.

submitted by pointmanzero to EnoughLibertarianSpam

Bitcoin mentioned around Reddit: EU $1.2 supercomputer project to several 10-100 PetaFLOP computers by 2020 and exaFLOP by 2022 /r/europe

submitted by BitcoinAllBot to BitcoinAll

New Japanese "K" computer hits 8.2 Petaflops! (Bitcoin network is currently 104.1 Petaflops.)

submitted by masonlee to Bitcoin

[elfa82] Top 10 Supercomputers in 2015. Distributed systems like Folding@home go beyond 40 PetaFLOPs. The bitcoin network is at 4.6 ZettaFlops today. Computational Singularity or AI can happen in a distributed system first.

submitted by raddit-bot to FuturologyRemovals

If Bitcoin 'is the future' yet mining is such a lengthy, expensive & laborious process, why don't large corps with supercomputers do it in a fraction of the time and at minimal cost?

submitted by I_Always_Talk_Shite to CryptoCurrencies

Fun fact: The bitcoin network now has about 50,000x more computing power than the top 500 supercomputers combined, while using only half the energy.

Computing power of top 500 supercomputers combined: 672 Petaflops using 475 Megawatts
Bitcoin network: 2662 PH/s = 33,807,400 Petaflops using about 266 Megawatts (at 13 TH/s per 1.3 kW)
Throw this in the face of people who say bitcoin is backed by nothing!
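If you want to sanity-check those figures, here's a rough Python sketch. The ~12,700 FLOPS-per-hash conversion factor is the assumption implied by the numbers above; it's a loose equivalence, not a real benchmark:
    # Back-of-the-envelope check of the figures quoted above.
    hashrate_phs = 2662                      # Bitcoin network hashrate, PH/s (from the post)
    flops_per_hash = 12_700                  # implied conversion factor (assumption, not a benchmark)
    btc_petaflops = hashrate_phs * flops_per_hash
    top500_petaflops = 672                   # combined Top500 (from the post)
    print(btc_petaflops)                     # ~33.8 million "petaflops"
    print(btc_petaflops / top500_petaflops)  # ~50,000x

    # Power draw, assuming 13 TH/s per 1.3 kW miner (from the post):
    miners = hashrate_phs * 1000 / 13        # ~205,000 S9-class machines
    print(miners * 1.3 / 1000)               # ~266 MW, vs 475 MW for the Top500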
submitted by DizzySquid to Bitcoin

My dad just started his new job, thought you guys might enjoy the specs of the computer he'll be working with.

submitted by al3x3691 to pcgaming

Preventing double-spends is an "embarrassingly parallel" massive search problem - like Google, Folding@home, SETI@home, or PrimeGrid. BUIP024 "address sharding" is similar to Google's MapReduce & Berkeley's BOINC grid computing - "divide-and-conquer" providing unlimited on-chain scaling for Bitcoin.

TL;DR: Like all other successful projects involving "embarrassingly parallel" search problems in massive search spaces, Bitcoin can and should - and inevitably will - move to a distributed computing paradigm based on successful "sharding" architectures such as Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture) - which use simple mathematical "decompose" and "recompose" operations to break big problems into tiny pieces, providing virtually unlimited scaling (plus fault tolerance) at the logical / software level, on top of possibly severely limited (and faulty) resources at the physical / hardware level.
The discredited "heavy" (and over-complicated) design philosophy of centralized "legacy" dev teams such as Core / Blockstream (requiring every single node to download, store and verify the massively growing blockchain, and pinning their hopes on non-existent off-chain vaporware such as the so-called "Lightning Network" which has no mathematical definition and is missing crucial components such as decentralized routing) is doomed to failure, and will be out-competed by simpler on-chain "lightweight" distributed approaches such as distributed trustless Merkle trees or BUIP024's "Address Sharding" emerging from independent devs such as u/thezerg1 (involved with Bitcoin Unlimited).
No one in their right mind would expect Google's vast search engine to fit entirely on a Raspberry Pi behind a crappy Internet connection - and no one in their right mind should expect Bitcoin's vast financial network to fit entirely on a Raspberry Pi behind a crappy Internet connection either.
Any "normal" (ie, competent) company with $76 million to spend could provide virtually unlimited on-chain scaling for Bitcoin in a matter of months - simply by working with devs who would just go ahead and apply the existing obvious mature successful tried-and-true "recipes" for solving "embarrassingly parallel" search problems in massive search spaces, based on standard DISTRIBUTED COMPUTING approaches like Google Search (based on Google's MapReduce algorithm), or [email protected], [email protected], or PrimeGrid (based on Berkeley's BOINC grid computing architecture). The fact that Blockstream / Core devs refuse to consider any standard DISTRIBUTED COMPUTING approaches just proves that they're "embarrassingly stupid" - and the only way Bitcoin will succeed is by routing around their damage.
Proven, mature sharding architectures like the ones powering Google Search, Folding@home, SETI@home, or PrimeGrid will allow Bitcoin to achieve virtually unlimited on-chain scaling, with minimal disruption to the existing Bitcoin network topology and mining and wallet software.
Longer Summary:
People who argue that "Bitcoin can't scale" - because it involves major physical / hardware requirements (lots of processing power, upload bandwidth, storage space) - are at best simply misinformed or incompetent - or at worst outright lying to you.
Bitcoin mainly involves searching the blockchain to prevent double-spends - and so it is similar to many other projects involving "embarrassingly parallel" searching in massive search spaces - like Google Search, Folding@home, SETI@home, or PrimeGrid.
But there's a big difference between those long-running wildly successful massively distributed infinitely scalable parallel computing projects, and Bitcoin.
Those other projects do their data storage and processing across a distributed network. But Bitcoin (under the misguided "leadership" of Core / Blockstream devs) insists on a fatally flawed design philosophy where every individual node must be able to download, store and verify the system's entire data structure. And it's even worse than that - they want to let the least powerful nodes in the system dictate the resource requirements for everyone else.
Meanwhile, those other projects are all based on some kind of "distributed computing" involving "sharding". They achieve massive scaling by adding a virtually unlimited (and fault-tolerant) logical / software layer on top of the underlying resource-constrained / limited physical / hardware layer - using approaches like Google's MapReduce algorithm or Berkeley's Open Infrastructure for Network Computing (BOINC) grid computing architecture.
This shows that it is a fundamental error to continue insisting on viewing an individual Bitcoin "node" as the fundamental "unit" of the Bitcoin network. Coordinated distributed pools already exist for mining the blockchain - and eventually coordinated distributed trustless architectures will also exist for verifying and querying it. Any architecture or design philosophy where a single "node" is expected to be forever responsible for storing or verifying the entire blockchain is the wrong approach, and is doomed to failure.
The most well-known example of this doomed approach is Blockstream / Core's "roadmap" - which is based on two disastrously erroneous design requirements:
  • Core / Blockstream erroneously insist that the entire blockchain must always be downloadable, storable and verifiable on a single node, as dictated by the least powerful nodes in the system (eg, u/bitusher in Costa Rica, or u/Luke-Jr in the underserved backwoods of Florida); and
  • Core / Blockstream support convoluted, incomplete off-chain scaling approaches such as the so-called "Lightning Network" - which lacks a mathematical foundation, and also has some serious gaps (eg, no solution for decentralized routing).
Instead, the future of Bitcoin will inevitably be based on unlimited on-chain scaling, where all of Bitcoin's existing algorithms and data structures and networking are essentially preserved unchanged / as-is - but they are distributed at the logical / software level using sharding approaches such as u/thezerg1's BUIP024 or distributed trustless Merkle trees.
These kinds of sharding architectures will allow individual nodes to use a minimum of physical resources to access a maximum of logical storage and processing resources across a distributed network with virtually unlimited on-chain scaling - where every node will be able to use and verify the entire blockchain without having to download and store the whole thing - just like Google Search, Folding@home, SETI@home, or PrimeGrid and other successful distributed sharding-based projects have already been successfully doing for years.
Details:
Sharding, which has been so successful in many other areas, is a topic that keeps resurfacing in various shapes and forms among independent Bitcoin developers.
The highly successful track record of sharding architectures on other projects involving "embarrassingly parallel" massive search problems (harnessing resource-constrained machines at the physical level into a distributed network at the logical level, in order to provide fault tolerance and virtually unlimited scaling searching for web pages, interstellar radio signals, protein sequences, or prime numbers in massive search spaces up to hundreds of terabytes in size) provides convincing evidence that sharding architectures will also work for Bitcoin (which also requires virtually unlimited on-chain scaling, searching the ever-expanding blockchain for previous "spends" from an existing address, before appending a new transaction from this address to the blockchain).
Below are some links involving proposals for sharding Bitcoin, plus more discussion and related examples.
BUIP024: Extension Blocks with Address Sharding
https://np.reddit.com/btc/comments/54afm7/buip024_extension_blocks_with_address_sharding/
Why aren't we as a community talking about Sharding as a scaling solution?
https://np.reddit.com/Bitcoin/comments/3u1m36/why_arent_we_as_a_community_talking_about/
(There are some detailed, partially encouraging comments from u/petertodd in that thread.)
[Brainstorming] Could Bitcoin ever scale like BitTorrent, using something like "mempool sharding"?
https://np.reddit.com/btc/comments/3v070a/brainstorming_could_bitcoin_ever_scale_like/
[Brainstorming] "Let's Fork Smarter, Not Harder"? Can we find some natural way(s) of making the scaling problem "embarrassingly parallel", perhaps introducing some hierarchical (tree) structures or some natural "sharding" at the level of the network and/or the mempool and/or the blockchain?
https://np.reddit.com/btc/comments/3wtwa7/brainstorming_lets_fork_smarter_not_harder_can_we/
"Braiding the Blockchain" (32 min + Q&A): We can't remove all sources of latency. We can redesign the "chain" to tolerate multiple simultaneous writers. Let miners mine and validate at the same time. Ideal block time / size / difficulty can become emergent per-node properties of the network topology
https://np.reddit.com/btc/comments/4su1gf/braiding_the_blockchain_32_min_qa_we_cant_remove/
Some kind of sharding - perhaps based on address sharding as in BUIP024, or based on distributed trustless Merkle trees as proposed earlier by u/thezerg1 - is very likely to turn out to be the simplest and safest approach towards massive on-chain scaling.
A thought experiment showing that we already have most of the ingredients for a kind of simplistic "instant sharding"
A simplistic thought experiment can be used to illustrate how easy it could be to do sharding - with almost no changes to the existing Bitcoin system.
Recall that Bitcoin addresses and keys are composed from an alphabet of 58 characters. So, in this simplified thought experiment, we will outline a way to add a kind of "instant sharding" within the existing system - by using the last character of each address in order to assign that address to one of 58 shards.
(Maybe you can already see where this is going...)
Similar to vanity address generation, a user who wants to receive Bitcoins would be required to generate 58 different receiving addresses (each ending with a different character) - and, similarly, miners could be required to pick one of the 58 shards to mine on.
Then, when a user wanted to send money, they would have to look at the last character of their "send from" address - and also select a "send to" address ending in the same character - and presto! we already have a kind of simplistic "instant sharding". (And note that this part of the thought experiment would require only the "softest" kind of soft fork: indeed, we haven't changed any of the code at all, but instead we simply adopted a new convention by agreement, while using the existing code.)
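Here is a rough Python sketch of that convention (purely illustrative - the function names are made up for this thought experiment, not taken from any real client):
    # "Instant sharding" thought experiment: assign an address to one of 58
    # shards based on its final Base58 character. Illustrative only.
    BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def shard_for_address(address: str) -> int:
        """Return a shard index 0..57 from the address's last character."""
        return BASE58.index(address[-1])

    def same_shard(send_addr: str, recv_addr: str) -> bool:
        """The convention: send and receive addresses must end in the same
        character, i.e. map to the same shard."""
        return shard_for_address(send_addr) == shard_for_address(recv_addr)

    # A miner following the convention would simply ignore any transaction
    # whose shard index differs from the one shard it has chosen to mine.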
Of course, this simplistic "instant sharding" example would still need a few more features in order to be complete - but they'd all be fairly straightforward to provide:
  • A transaction can actually send from multiple addresses, to multiple addresses - so the approach of simply looking at the final character of a single (receive) address would not be enough to instantly assign a transaction to a particular shard. But a slightly more sophisticated decision criterion could easily be developed - and computed using code - to assign every transaction to a particular shard, based on the "from" and "to" addresses in the transaction. The basic concept from the "simplistic" example would remain the same, sharding the network based on some characteristic of transactions.
  • If we had 58 shards, then the mining reward would have to be decreased to 1/58 of what it currently is - and also the mining hash power on each of the shards would end up being roughly 1/58 of what it is now. In general, many people might agree that decreased mining rewards would actually be a good thing (spreading out mining rewards among more people, instead of the current problems where mining is done by about 8 entities). Also, network hashing power has been growing insanely for years, so we probably have way more than enough needed to secure the network - after all, Bitcoin was secure back when network hash power was 1/58 of what it is now.
  • This simplistic example does not handle cases where you need to do "cross-shard" transactions. But it should be feasible to implement such a thing. The various proposals from u/thezerg1 such as BUIP024 do deal with "cross-shard" transactions.
(Also, the fact that a simplified address-based sharding mechanics can be outlined in just a few paragraphs as shown here suggests that this might be "simple and understandable enough to actually work" - unlike something such as the so-called "Lightning Network", which is actually just a catchy-sounding name with no clearly defined mechanics or mathematics behind it.)
Addresses are plentiful, and can be generated locally, and you can generate addresses satisfying a certain pattern (eg ending in a certain character) the same way people can already generate vanity addresses. So imposing a "convention" where the "send" and "receive" address would have to end in the same character (and where the miner has to only mine transactions in that shard) - would be easy to understand and do.
Similarly, the earlier solution proposed by u/thezerg1, involving distributed trustless Merkle trees, is easy to understand: you'd just be distributing the Merkle tree across multiple nodes, while still preserving its immutability guarantees.
Such approaches don't really change much about the actual system itself. They preserve the existing system, and just split its data structures into multiple pieces, distributed across the network. As long as we have the appropriate operators for decomposing and recomposing the pieces, then everything should work the same - but more efficiently, with unlimited on-chain scaling, and much lower resource requirements.
The examples below show how these kinds of "sharding" approaches have already been implemented successfully in many other systems.
Massive search is already efficiently performed with virtually unlimited scaling using divide-and-conquer / decompose-and-recompose approaches such as MapReduce and BOINC.
Every time you do a Google search, you're using Google's MapReduce algorithm to solve an embarrassingly parallel problem.
And distributed computing grids using the Berkeley Open Infrastructure for Network Computing (BOINC) are constantly setting new records searching for protein combinations, prime numbers, or radio signals from possible intelligent life in the universe.
We all use Google to search hundreds of terabytes of data on the web and get results in a fraction of a second - using cheap "commodity boxes" on the server side, and possibly using limited bandwidth on the client side - with fault tolerance to handle crashing servers and dropped connections.
Other examples are Folding@home, SETI@home and PrimeGrid - involving searching massive search spaces for protein sequences, interstellar radio signals, or prime numbers hundreds of thousands of digits long. Each of these examples uses sharding to decompose a giant search space into smaller sub-spaces which are searched separately in parallel and then the resulting (sub-)solutions are recomposed to provide the overall search results.
It seems obvious to apply this tactic to Bitcoin - searching the blockchain for existing transactions involving a "send" from an address, before appending a new "send" transaction from that address to the blockchain.
Some people might object that those systems are different from Bitcoin.
But we should remember that preventing double-spends (the main thing that Bitcoin does) is, after all, an embarrassingly parallel massive search problem - and all of these other systems also involve embarrassingly parallel massive search problems.
The mathematics of Google's MapReduce and Berkeley's BOINC is simple, elegant, powerful - and provably correct.
Google's MapReduce and Berkeley's BOINC have demonstrated that in order to provide massive scaling for efficient searching of massive search spaces, all you need is...
  • an appropriate "decompose" operation,
  • an appropriate "recompose" operation,
  • the necessary coordination mechanisms
...in order to distribute a single problem across multiple, cheap, fault-tolerant processors.
This allows you to decompose the problem into tiny sub-problems, solving each sub-problem to provide a sub-solution, and then recompose the sub-solutions into the overall solution - gaining virtually unlimited scaling and massive efficiency.
The only "hard" part involves analyzing the search space in order to select the appropriate DECOMPOSE and RECOMPOSE operations which guarantee that recomposing the "sub-solutions" obtained by decomposing the original problem is equivalent to the solving the original problem. This essential property could be expressed in "pseudo-code" as follows:
  • (DECOMPOSE ; SUB-SOLVE ; RECOMPOSE) = (SOLVE)
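As a toy illustration of that property (in Python, with made-up names - a search over a partitioned data set, not actual Bitcoin code):
    # (DECOMPOSE ; SUB-SOLVE ; RECOMPOSE) = (SOLVE), illustrated on a toy
    # search problem: find every item in a big collection matching a predicate.
    def decompose(data, n_shards):
        """Split the search space into n_shards roughly equal pieces."""
        return [data[i::n_shards] for i in range(n_shards)]

    def sub_solve(shard, predicate):
        """Solve the same search problem on one small piece."""
        return [x for x in shard if predicate(x)]

    def recompose(sub_solutions):
        """Combine the partial answers into the overall answer."""
        return [x for sub in sub_solutions for x in sub]

    def solve(data, predicate):
        """Reference single-machine solution, for comparison."""
        return [x for x in data if predicate(x)]

    data = list(range(1_000_000))
    def is_hit(x):                      # stand-in for "does this record match the query?"
        return x % 99_991 == 0
    sub_solutions = [sub_solve(s, is_hit) for s in decompose(data, 8)]
    assert sorted(recompose(sub_solutions)) == sorted(solve(data, is_hit))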
Selecting the appropriate DECOMPOSE and RECOMPOSE operations (and implementing the inter-machine communication coordination) can be somewhat challenging, but it's certainly doable.
In fact, as mentioned already, these things have already been done in many distributed computing systems. So there's hardly any original work to be done in this case. All we need to focus on now is translating the existing single-processor architecture of Bitcoin to a distributed architecture, adopting the mature, proven, efficient "recipes" provided by the many examples of successful distributed systems already up and running, such as Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture).
That's what any "competent" company with $76 million to spend would have done already - simply work with some devs who know how to implement open-source distributed systems, and focus on adapting Bitcoin's particular data structures (merkle trees, hashed chains) to a distributed environment. That's a realistic roadmap that any team of decent programmers with distributed computing experience could easily implement in a few months, and any decent managers could easily manage and roll out on a pre-determined schedule - instead of all these broken promises and missed deadlines and non-existent vaporware and pathetic excuses we've been getting from the incompetent losers and frauds involved with Core / Blockstream.
ASIDE: MapReduce and BOINC are based on math - but the so-called "Lightning Network" is based on wishful thinking involving kludges on top of workarounds on top of hacks - which is how you can tell that LN will never work.
Once you have succeeded in selecting the appropriate mathematical DECOMPOSE and RECOMPOSE operations, you get simple massive scaling - and it's also simple for anyone to verify that these operations are correct - often in about a half-page of math and code.
An example of this kind of elegance and brevity (and provable correctness) involving compositionality can be seen in this YouTube clip by the accomplished mathematician Lucius Greg Meredith presenting some operators for scaling Ethereum - in just a half page of code:
https://youtu.be/uzahKc_ukfM?t=1101
Conversely, if you fail to select the appropriate mathematical DECOMPOSE and RECOMPOSE operations, then you end up with a convoluted mess of wishful thinking - like the "whitepaper" for the so-called "Lightning Network", which is just a cool-sounding name with no actual mathematics behind it.
The LN "whitepaper" is an amateurish, non-mathematical meandering mishmash of 60 pages of "Alice sends Bob" examples involving hacks on top of workarounds on top of kludges - also containing a fatal flaw (a lack of any proposed solution for doing decentralized routing).
The disaster of the so-called "Lightning Network" - involving adding never-ending kludges on top of hacks on top of workarounds (plus all kinds of "timing" dependencies) - is reminiscent of the "epicycles" which were desperately added in a last-ditch attempt to make Ptolemy's "geocentric" system work - based on the incorrect assumption that the Sun revolved around the Earth.
This is how you can tell that the approach of the so-called "Lightning Network" is simply wrong, and it would never work - because it fails to provide appropriate (and simple, and provably correct) mathematical DECOMPOSE and RECOMPOSE operations in less than a single page of math and code.
Meanwhile, sharding approaches based on a DECOMPOSE and RECOMPOSE operation are simple and elegant - and "functional" (ie, they don't involve "procedural" timing dependencies like keeping your node running all the time, or closing out your channel before a certain deadline).
Bitcoin only has 6,000 nodes - but the leading sharding-based projects have over 100,000 nodes, with no financial incentives.
Many of these sharding-based projects have many more nodes than the Bitcoin network.
The Bitcoin network currently has about 6,000 nodes - even though there are financial incentives for running a node (ie, verifying your own Bitcoin balance).
Folding@home and SETI@home each have over 100,000 active users - even though these projects don't provide any financial incentives. This higher number of users might be due in part to the low resource demands required in these BOINC-based projects, which are all based on sharding the data set.
Folding@home
As part of the client-server network architecture, the volunteered machines each receive pieces of a simulation (work units), complete them, and return them to the project's database servers, where the units are compiled into an overall simulation.
In 2007, Guinness World Records recognized Folding@home as the most powerful distributed computing network. As of September 30, 2014, the project has 107,708 active CPU cores and 63,977 active GPUs for a total of 40.190 x86 petaFLOPS (19.282 native petaFLOPS). At the same time, the combined efforts of all distributed computing projects under BOINC totals 7.924 petaFLOPS.
SETI@home
Using distributed computing, SETI@home sends the millions of chunks of data to be analyzed off-site by home computers, and then has those computers report the results. Thus what appears an onerous problem in data analysis is reduced to a reasonable one by aid from a large, Internet-based community of borrowed computer resources.
Observational data are recorded on 2-terabyte SATA hard disk drives at the Arecibo Observatory in Puerto Rico, each holding about 2.5 days of observations, which are then sent to Berkeley. Arecibo does not have a broadband Internet connection, so data must go by postal mail to Berkeley. Once there, it is divided in both time and frequency domains into work units of 107 seconds of data, or approximately 0.35 megabytes (350 kilobytes or 350,000 bytes), which overlap in time but not in frequency. These work units are then sent from the SETI@home server over the Internet to personal computers around the world to analyze.
Data is merged into a database using SETI@home computers in Berkeley.
The SETI@home distributed computing software runs either as a screensaver or continuously while a user works, making use of processor time that would otherwise be unused.
Active users: 121,780 (January 2015)
PrimeGrid
PrimeGrid is a distributed computing project for searching for prime numbers of world-record size. It makes use of the Berkeley Open Infrastructure for Network Computing (BOINC) platform.
Active users: 8,382 (March 2016)
MapReduce
A MapReduce program is composed of a Map() procedure (method) that performs filtering and sorting (such as sorting students by first name into queues, one queue for each name) and a Reduce() method that performs a summary operation (such as counting the number of students in each queue, yielding name frequencies).
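A minimal single-process sketch of that Map()/Reduce() description (a real MapReduce framework runs the same phases distributed across many machines):
    # The student/name-frequency example above, in miniature.
    from collections import defaultdict

    def map_phase(students):
        """Map(): emit (key, value) pairs - here, (first_name, 1)."""
        for name in students:
            yield name, 1

    def shuffle(pairs):
        """Group values by key - the sorting/queueing the framework does between phases."""
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(groups):
        """Reduce(): summarize each queue - here, count the students per name."""
        return {name: sum(ones) for name, ones in groups.items()}

    students = ["Ada", "Bob", "Ada", "Carol", "Bob", "Ada"]
    print(reduce_phase(shuffle(map_phase(students))))
    # {'Ada': 3, 'Bob': 2, 'Carol': 1}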
How can we go about developing sharding approaches for Bitcoin?
We have to identify a part of the problem which is in some sense "invariant" or "unchanged" under the operations of DECOMPOSE and RECOMPOSE - and we also have to develop a coordination mechanism which orchestrates the DECOMPOSE and RECOMPOSE operations among the machines.
The simplistic thought experiment above outlined an "instant sharding" approach where we would agree upon a convention where the "send" and "receive" address would have to end in the same character - instantly providing a starting point illustrating some of the mechanics of an actual sharding solution.
BUIP024 involves address sharding and deals with the additional features needed for a complete solution - such as cross-shard transactions.
And distributed trustless Merkle trees would involve storing Merkle trees across a distributed network - which would provide the same guarantees of immutability, while drastically reducing storage requirements.
So how can we apply ideas like MapReduce and BOINC to providing massive on-chain scaling for Bitcoin?
First we have to examine the structure of the problem that we're trying to solve - and we have to try to identify how the problem involves a massive search space which can be decomposed and recomposed.
In the case of Bitcoin, the problem involves:
  • sequentializing (serializing) APPEND operations to a blockchain data structure
  • in such a way as to avoid double-spends
Can we view "preventing Bitcoin double-spends" as a "massive search space problem"?
Yes we can!
Just like Google efficiently searches hundreds of terabytes of web pages for a particular phrase (and Folding@home, SETI@home, PrimeGrid etc. efficiently search massive search spaces for other patterns), in the case of "preventing Bitcoin double-spends", all we're actually doing is searching a massive search space (the blockchain) in order to detect a previous "spend" of the same coin(s).
So, let's imagine how a possible future sharding-based architecture of Bitcoin might look.
We can observe that, in all cases of successful sharding solutions involving searching massive search spaces, the entire data structure is never stored / searched on a single machine.
Instead, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) form a "virtual" layer or grid across multiple machines - allowing the data structure to be distributed across all of them, and allowing users to search across all of them.
This suggests that requiring everyone to store 80 Gigabytes (and growing) of blockchain on their own individual machine should no longer be a long-term design goal for Bitcoin.
Instead, in a sharding environment, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) should allow everyone to only store a portion of the blockchain on their machine - while also allowing anyone to search the entire blockchain across everyone's machines.
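A hedged sketch of what that query path could look like (everything here - the shard rule, the node interface - is hypothetical, just to make the decompose/recompose shape concrete):
    # No node holds the whole chain; a double-spend check fans out to the
    # nodes holding the relevant shard, and their answers are OR-ed together.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Outpoint:
        txid: str
        index: int

    class ShardNode:
        """Toy node storing only the spends belonging to its shard."""
        def __init__(self):
            self.spent = set()
        def has_spend(self, outpoint):
            return outpoint in self.spent

    def shard_of(outpoint, n_shards):
        """Toy rule deciding which shard is responsible for an output."""
        return hash(outpoint.txid) % n_shards

    def already_spent(outpoint, shards):
        """Ask only the nodes in the responsible shard; recompose with OR."""
        return any(n.has_spend(outpoint) for n in shards[shard_of(outpoint, len(shards))])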
This might involve something like BUIP024's "address sharding" - or it could involve something like distributed trustless Merkle trees.
In either case, it's easy to see that the basic data structures of the system would remain conceptually unaltered - but in the sharding approaches, these structures would be logically distributed across multiple physical devices, in order to provide virtually unlimited scaling while dramatically reducing resource requirements.
This would be the most "conservative" approach to scaling Bitcoin: leaving the data structures of the system conceptually the same - and just spreading them out more, by adding the appropriately defined mathematical DECOMPOSE and RECOMPOSE operators (used in successful sharding approaches), which can be easily proven to preserve the same properties as the original system.
Conclusion
Bitcoin isn't the only project in the world which is permissionless and distributed.
Other projects (the BOINC-based, permissionless, decentralized Folding@home, SETI@home, and PrimeGrid - as well as Google's permissioned, centralized MapReduce-based search engine) have already achieved unlimited scaling by providing simple mathematical DECOMPOSE and RECOMPOSE operations (and coordination mechanisms) to break big problems into smaller pieces - without changing the properties of the problems or solutions. This provides massive scaling while dramatically reducing resource requirements - with several projects attracting over 100,000 nodes, far more than Bitcoin's mere 6,000 nodes - without even offering any of Bitcoin's financial incentives.
Although certain "legacy" Bitcoin development teams such as Blockstream / Core have been neglecting sharding-based scaling approaches to massive on-chain scaling (perhaps because their business models are based on misguided off-chain scaling approaches involving radical changes to Bitcoin's current successful network architecture, or even perhaps because their owners such as AXA and PwC don't want a counterparty-free new asset class to succeed and destroy their debt-based fiat wealth), emerging proposals from independent developers suggest that on-chain scaling for Bitcoin will be based on proven sharding architectures such as MapReduce and BOINC - and so we should pay more attention to these innovative, independent developers who are pursuing this important and promising line of research into providing sharding solutions for virtually unlimited on-chain Bitcoin scaling.
submitted by ydtm to btc

What if you run your uploaded self as a cryptocurrency POW algorithm?

This would reduce the cost of running a petaflop simulation of your brain (potentially thousands of dollars per hour) as people would pay to run you just to make fake money.
Of course you would not be running in real time, but at a distributed, distorted speed linked to the market value and uptake of the currency.
Pros: If your crypto gains popularity your simulation speed and size can be boosted.
Cons: You will never be real time as your 'frame rate' will be limited to the transaction speed of the crypto which your POW will inherently limit.
Also you will need to improve cryptocurrencies as their own growth ensures they become monolithic over time. Therefore your freedom as an AI will be limited when the size of your blockchain grows beyond desktop processing and storage limitations.
You might also want to spend some time solving climate change as power plants have to be turned off when their cooling systems (often external water supplies) are too hot to do the job.
PS For reference, Bitcoin uses 256 times as much processing power as the world's top supercomputers (source)
submitted by Arowx to transhumanism

Let's talk about the company Silicon Graphics (SGI) & BTC.

The Economist has a recent article about Wright that references the company Silicon Graphics (SGI) four times. SGI was formerly named Silicon Graphics Computer Systems. Here are two consecutive paragraphs as a sample:
As proof of existence for his supercomputer, another element of the December story that was disputed, Mr Wright offers a letter signed by a local SGI director, which states, among other things, that the firm is pleased to work with Cloudcroft, one of Mr Wright’s firms, “in assisting the development of their hyper-density machines and supercomputers.” Asked about this, SGI, which is based in Silicon Valley, has replied that its Australian director “acted as an individual and was not authorised.” The firm has no record of selling a supercomputer to Cloudcroft, but says that it “could have been purchased on the grey market.”
The reason why SGI has distanced itself from him, explains Mr Wright, is because he has combined the firm’s gear with that of a competitor, Supermicro. But he doesn’t want to say much more, mainly because of security: “It’s a big expensive machine, and we don’t want people to know where it is.” If C01N, as the machine is called, exists, it would indeed be big: with a claimed number-crunching capacity of 3.5 petaflops (a petaflop is a thousand billion floating-point operations per second), it would rank 17th on the list of the world’s fastest supercomputers. All this computing power, says Mr Wright, is used to test his ideas about how to improve bitcoin.
Where have I seen SGI, or its predecessor, before? How about here: https://en.wikipedia.org/wiki/Gavin_Andresen
Backup in case edited: http://imgur.com/jYKq8xY
Interesting . . . Plot thickens maybe. You're welcome "journalists."
submitted by v2Felt to Bitcoin

That's Cute

submitted by laxisusous to Bitcoin

China’s investment in GPU supercomputing begins to pay off!

submitted by trendzetter to technology

At SXSW- CEO of Bitgo claims Bitcoin has computational network which is 38,000 times as powerful as all the world's supercomputers

submitted by Enterpriseminer to Bitcoin

Level of Difficulty attack?

Would it be possible for an entity in control of a powerful network of computers to purposely drive up the difficulty of mining, and then suddenly withdraw from mining?
I imagine certain government entities have computers fine-tuned to certain hash algorithms for brute forcing encryption. I'm talking to you NSA.
(Excuse me, just a moment. My tinfoil hat is getting itchy and needs some reshaping. Ah, better)
So, let's say instead of trying to brute force a crypto breach in the Bitcoin network, they just decide to start mining. The blocks get confirmed much faster, and the level of difficulty increases dramatically. At some point in the future, the NSA (or other entity) strategically stops mining, leaving the rest of the network trying to cope with an insane level of difficulty.
Is there any chance that would significantly delay new block confirmation, and what would the consequences be?
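A rough simulation of that scenario, using Bitcoin's actual retarget rule (difficulty adjusts every 2016 blocks, and each adjustment is clamped to a factor of 4 either way); the hashrate numbers are made up for illustration:
    # Attacker mines through one retarget window, then withdraws, leaving
    # the honest network stuck with the inflated difficulty.
    TARGET_SECONDS = 10 * 60

    def retarget(difficulty, actual_seconds_per_block):
        """New difficulty after a 2016-block window, clamped to 4x either way."""
        factor = TARGET_SECONDS / actual_seconds_per_block
        return difficulty * max(0.25, min(4.0, factor))

    honest_hash = 1.0        # honest hashrate (arbitrary units)
    attacker_hash = 9.0      # attacker adds 9x the honest hashrate
    difficulty = 1.0         # calibrated so honest_hash alone = one block per 10 min

    secs_with_attacker = TARGET_SECONDS * difficulty / (honest_hash + attacker_hash)
    difficulty = retarget(difficulty, secs_with_attacker)   # clamps at 4x

    secs_after_exit = TARGET_SECONDS * difficulty / honest_hash
    print(secs_after_exit / 60, "minutes per block after the attacker leaves")
On those made-up numbers, blocks slow to roughly 40 minutes each once the attacker leaves, and the next downward retarget (2016 blocks away) would take on the order of eight weeks to arrive - so yes, confirmations would be significantly delayed until the next adjustment.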
submitted by may214 to Bitcoin

Using national supercomputers to mine Bitcoin

Might a motivated country, seeing the adoption of Bitcoin as a universal currency, be tempted to press national computing resources into service to make a grab for currency? Would petaflop supercomputers be any good at mining?
submitted by russellreddit to BitcoinMining

12-10 23:33 - 'Lets have a discussion about energy consumption in bitcoin mining and what that means towards the carbon footprint today.' (self.Bitcoin) by /u/Cryptolution removed from /r/Bitcoin within 1-11min

'''
There was a [very good coindesk article in July 2014]1 that broke down the carbon footprint of the bitcoin mining network. At the date of the article, our hashrate was 146,505 TH/s. Now that we are above 13 exahashes/s, this represents a 94-fold increase in hashing power.
[Here is the cost breakdown chart from the coindesk article]2 .
As you can see from this image, the carbon footprint of bitcoin in 2014 is a tiny fraction compared to the carbon footprint of the traditional banking system. Yet at a 0.78 billion per year cost in 2014, a 94-fold increase in power would now be 73.32 billion, which would make bitcoin about 9.52 billion more in electricity costs than the banking system.
But this is trying to extrapolate data in a non-accurate way. In order to understand why this is inaccurate, we must look at how all of this technology works and how technology has scaled upwards while decreasing electricity consumption.
The bitcoin network at 13 exahashes is roughly 130 times greater than the largest super computer (Sunway TaihuLight, 93 petaflops, in China, see [top500.org]3)
So when you make that statement, you think "wow, bitcoin must use a lot of energy to be 130 times more powerful than the largest super computer network!"
But it's not an apples-to-apples comparison. These super computer networks are non-specialized hardware (compared to bitcoin's) in that they have generalized computing capabilities. This means that these systems require more standardized hardware so that they can perform a large amount of different computing functions.
So, for example, the largest Sunway supercomputer @ 93 petaflops (roughly 1/130th the power of the bitcoin network) performs its calculations at 93,014.6 teraflops @ 15,371 kW = 93,014,600 GFLOPS @ 15,371,000 watts. Doing the maths, this comes out to about 0.1652 W/GFLOP.
The AntMiner S9 currently operates at 0.098 W/GH ... so nearly double the energy efficiency of what the most powerful super computer network in the world operates at.
You have the Dragonmint miner coming out Q1-Q2 in 2018 which uses 0.075J/GHs ....a 30% efficiency increase over the Antminer S9.
And next year Japanese giant GMO is launching into the bitcoin mining business, stating they will be releasing a 7nm ASIC design, which is more than double the efficiency of the current 16nm design the Antminer S9 uses. This will mean a more than doubling of energy efficiency. They said they have plans after the release of the first product to research "5nm, and 3.5nm mining chips"
So, what is the point of understanding all of this? Well, you have to understand how technology scales (think Moore's law) to understand how we can achieve faster computational speeds (more exahashes per second) without increasing the carbon footprint.
So if you look at a proof of work chart, you'll see it has scaled linearly upwards since the birth of bitcoin. And it would be logical to assume that the more hashes per sec thrown into the network, that it would equate to more power being spent. Yet this is not true due to advancements in ASIC chip design, power efficiency, and basic economic fundamentals.
You see, as new miners come out, because they are more efficient, people can run much faster mining rigs at much lower cost. This immediately adds much more hashing power to the network, which decreases the profitability of old miners. And to give you an idea of how much more cost efficient these are, lets look at Antminers products.
S9 - 0.098 W/GH
S7 - 0.25 W/GH
Avalon6 - 0.29 W/GH
You can see the S9 is 3 times more power efficient than the Avalon6. That translates to "It costs 3 times more to operate this equipment". That aint no small difference.
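A quick script reproducing those ratios (the Sunway W/GFLOP number follows the post's own flop-for-hash comparison above; it's not a claim that a supercomputer could actually mine competitively):
    # Quick check of the efficiency figures quoted in this post.
    sunway_w_per_gflop = 15_371_000 / 93_014_600      # ~0.165 W per GFLOP
    rigs_w_per_gh = {"S9": 0.098, "S7": 0.25, "Avalon6": 0.29, "Dragonmint (claimed)": 0.075}

    for name, w_per_gh in rigs_w_per_gh.items():
        print(f"{name:22s} {w_per_gh:.3f} W/GH  "
              f"({w_per_gh / rigs_w_per_gh['S9']:.1f}x the S9's power per hash)")

    print(f"Sunway TaihuLight      {sunway_w_per_gflop:.3f} W/GFLOP")
    # Avalon6 vs S9: 0.29 / 0.098 is about 3.0 -- the "3 times" figure above.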
These differences, combined with energy costs are what forces miners to stop running old hardware and to upgrade to newer models or exit mining completely. So as new mining equipment hits the market, old less efficient mining rigs go offline. The amount of hashes per sec continues to climb, yet the actual power usage of the entire network does not scale at the same rate that the hashes per sec scale at, due to increased energy efficiency.
The question that I would like to see answered by the community is this -
What has changed between now and 2014 in terms of total watts consumed? How can we calculate the real carbon footprint of todays bitcoin mining network compared to this data from 2014?
What equipment was running in 2013-2014, what were their W/GH, and how many of these machines do we speculate are still running vs more efficient mining rigs powering the network today? What are the TH/s differences between these mining rigs, and how much more power do we contribute towards the network today because of these optimized rigs?
Mining is not my specialty and there are going to be many people here who are better suited to tackling these problems.
I think these questions need to be answered and articulated because these are questions that I'm starting to see a lot from the mainstream as criticism towards bitcoin. I know the generic answer, aka "Bitcoin mining still uses a fraction of the cost that the entire global banking system does", but we really need to do better than that. We need to examine the different power types used in bitcoin mining -
How much of bitcoin mining is from hydroelectric? Nuclear? Wind? Solar? Coal? Natural Gas? What regions contribute the largest hashing power and can we evaluate whether these regions are Hydroelectric, Coal, Nuclear etc dependent?
If we are to articulate effective arguments against those who naysay bitcoin over its carbon footprint, then we must do so with good data to backup our positions.
Hopefully the numbers above are accurate/correct. Honestly only spent a few minutes doing napkin math, so I expect there to be mistakes, please let me know and thank you very much all.
'''
Lets have a discussion about energy consumption in bitcoin mining and what that means towards the carbon footprint today.
Go1dfish undelete link
unreddit undelete link
Author: Cryptolution
1: https://www.coindesk.com/microscope-conclusions-costs-bitcoin/ 2: https://imgur.com/a/eKipC 3: ww**top500*org/*ists*2*17/11*
Unknown links are censored to prevent spreading illicit content.
submitted by removalbot to removalbot

How to Destroy Bitcoin

It's quite easy to ballpark how big a Computer you would need to smash Bitcoin.
Miners are as a group currently paid $1000 * 25 per day to encrypt the block chain ($1000 = market price of Bitcoin and 25 Bitcoins are released to miners each day).
Average miner's PC uses 500 Watts of power for say 4 hours @ 12.5c per kWh = 25c per day
Thus in the long run (we are all dead) the market can support 25,000 / 0.25 miners at equilibrium = 100,000 miners
Each miner's PC runs 8 cores at say 5 GHz = 40 GHz
To smash Bitcoin you need to beat most of these miners to the 25 bitcoins, so you probably need about 5 times their computing power = 5 x 40 x 10,000 GHz = 2 petaflops
Which means that any of the world's top 10 computers could do it. http://en.wikipedia.org/wiki/TOP500
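Reproducing that napkin math in a few lines of Python (keeping the post's own loose units, where clock GHz stand in for "flops" and hashes aren't really flops at all):
    # The post's arithmetic, as written.
    btc_released_per_day = 25
    btc_price_usd = 1000
    daily_reward_usd = btc_released_per_day * btc_price_usd            # $25,000/day

    watts, hours, cents_per_kwh = 500, 4, 12.5
    cost_per_miner_per_day = watts / 1000 * hours * cents_per_kwh / 100  # $0.25

    miners_at_equilibrium = daily_reward_usd / cost_per_miner_per_day    # 100,000
    ghz_per_miner = 8 * 5                                                # 8 cores x 5 GHz

    # The post's final step uses 10,000 miners rather than the 100,000 above:
    attack_ghz = 5 * ghz_per_miner * 10_000
    print(miners_at_equilibrium, attack_ghz / 1e6, "petaflops, by the post's reckoning")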
submitted by moistvonlipwig99 to Bitcoin

The processing power used to power the Bitcoin network is more than all other processing power in the world!

Quote from this article: http://www.theregister.co.uk/2014/01/17/ten_bitcoin_miners/
"The processing is immense. While you can’t directly link mining hashes with FLOPS, it has been a couple of months since Bitcoin mining passed 1019 petaflops, or roughly the computing power of all the other computing tasks in the word - not allowing for what might go on in the NSA and GCHQ."
submitted by mczarnek to Bitcoin

The power of the network - are there potential real world applications?

So I own some bitcoins, I am passionate and have faith in the currency, and I am confident the speed of adoption is only going to increase with time. But I recently found out the bitcoin network is the largest distributed computer network in the world. It runs at over 700 petaflops, which in clearer terms is 700 quadrillion floating point operations per second. To put this into even more context, the US Department of Energy recently built a supercomputer called Triton which cost $1.2 billion and runs at only 15 petaflops.
I started to wonder whether we could utilise this massive amount of computing power in more applications (alongside the hashing of the blockchain) that could be beneficial to all of us. Many of you will have heard of protein folding puzzles that utilise our minds and computers to solve protein folding problems, one of these being Foldit. This is just one example, but obviously there would be thousands of possible applications, computer modelling, simulations, whatever.
My question is, is this possible? Or is the network only designed for and capable of hashing the algorithm of the currency? Would more load on the network slow the blockchain down? Could we take just 10% of the network and use it for other means during less congested periods?
Thanks in advance if anyone can help answer this.
submitted by Kirby999 to Bitcoin

Izumi3682 Archives

Chinese Smartphone Maker Promises to Outdo Apple With "The Real AI Phone" by izumi3682 in Futurology


[–]izumi3682[S] 1 point 6 months ago
The proof is in the pudding. We'll see what they come up with. I don't discount what China has to say out of hand. The absolutely staggering amount of technological progress that China has achieved within the last 10 years alone gives me pause. And yes I know they did that by hook or by crook, but the fact remains that China is rapidly equaling and/or exceeding pretty much any technological advance being made by the USA. I also see evidence that China's AI efforts for general consumer use exceed that of the West as well.
But we shall also see what the Apple IPhone X has to offer as well. In any event I see that human civilization is going to make a substantial leap forward in AI and mobile computer processing power going forward from here. A much higher bar has been set. Also while I have this forum, I'd like to pass on this message... PLEASE turn your phone sideways to record stuff--Thank you! --"The rest of the world".
Superpower India to Replace China as Growth Engine by izumi3682 in Futurology


[–]izumi3682[S] 1 point 6 months ago
Yes, I said that in my second paragraph.
Move Over Millennials, Here Comes 'iGen' ... Or Maybe Not by izumi3682 in Futurology


[–]izumi3682[S] 3 points 6 months ago
If you think the so-called "iGen" is crazy, just wait until the children that are now 3 and 4 grow up in a world where they have always known mobiles, VAR and AI. AI in everything, from no more driving lessons to AI assistants to boss around with loud impatient voices.
Virtual reality breathes new life into African fossils, art and artefacts by izumi3682 in Futurology


[–]izumi3682[S] 2 points 6 months ago
Oh! Try this out! I can't get it to work on my work pc, but that's because it's a work pc. On my home pc and my iphone 7 it works just fine.
https://sketchfab.com/models/1e03509704a3490e99a173e53b93e282
This is just 2D on a screen. The future of VR is going to be absolutely insane. Beyond anything we can imagine.
(I recommend just downloading "Sketchfab" on any pc or mobile you have. An incredible combination of computer processing power and narrow AI.)
In the Future, Pop Hits Will Be Made by Machines by izumi3682 in Futurology


[–]izumi3682[S] 2 points 6 months ago
For the person, that sounds like a pretty good deal.
Deep Learning Could Finally Make Robots Useful by izumi3682 in Futurology


[–]izumi3682[S] 2 points 6 months ago
In what year will I see a humanoid robot like say, "Sophia" from Hanson Robotics mixed with a deep learning ability. Oh, and then that "Sophia" robot could have these new-fangled "soft" muscles that are all the news today as well. That would be quite an accomplishment.
But why stop there. How soon until we can use narrow AI and new robotics technology to allow humanoid robots to walk the streets with humans. Would that be OK with everybody? What year will I see that I wonder.
I bet it's all gonna happen in about 20 years or less. So I'll be about 77 years chronologically. I wonder if I'll have a little age-reversing on me by then.
Make a 3D model of your face from a single photo with this AI tool by izumi3682 in Futurology


[–]izumi3682[S] 1 point 6 months ago
Could we make motion pictures of anybody in history that was photographed? Imagine, genuine motion picture images of Abraham Lincoln or, ...well I'm sure there are other famous people from after photography's invention, but before motion pictures. But mainly it was Lincoln who popped into my imagination.
Massive demand will see 5G phones arrive in 2019 says Qualcomm by izumi3682 in Futurology


[–]izumi3682[S] 1 point 6 months ago
Yeah, that's my point. Apple "steals" everyone else's ideas and perfects them to the point that they are irresistible somehow. That's what Apple does. I'm not saying it's good or bad. I'm saying that's why I trust Apple for my mobile.
In the Future, Pop Hits Will Be Made by Machines by izumi3682 in Futurology


[–]izumi3682[S] 2 points 6 months ago
What I am about to state was 10 years ago, wildest fantasy. How times have changed.
It's quite simple actually. Our computers can now process stupendous, colossal amounts of "big data". Among that data are our likes and dislikes. Also what all of our songs sound like, what all of our art looks like, how all of our motion pictures and plays and performances are accomplished. Pretty much everything we have written down. And probably tons of other data I can't recall offhand. Also screamingly funny cat videos.
Now the computers that we have use a multitude of clever human ideas like machine learning, convolutional neural networks, and I'm certain some other marvelous methods of collating, analyzing and deconstructing all that data for actionable information. This involves things like identifying edges and light and shadow, word patterns and lots of confusing criss-crossy lines in the diagrams I look at. But I'm pretty sure it knows what it is doing.
Then our computers can use "predictive analysis" to develop models of varying degrees of confidence that are constantly tested against a sort of intrinsic "critic" that says thumbs up or thumbs down based on all that big data and that collating and whatnot.
Then it spits out the "highest confidence" result. Humans experience whatever it is and send their own feedback into the computer, which assimilates any novel data from that human feedback and tweaks its models to eventually precisely push the emotional buttons that make a song "haunting" an art piece "compelling" or videos "screamingly funny".
The AI is not going to get worse at "creativity". It's going to surpass human efforts in short order. Humans will come to prefer AI art to that of inferior "human" art. What kind of world will that be? And that is just in art and stuff. The AI will dominate everything else as well. And believe it or not that is still "narrow" AI. Just wait until we successfully develop artificial general intelligence (AGI). Then we either adapt or die. And in that meantime...
"Humans Need Not Apply".
Here is a computer algorithm using narrow AI-big data-CNN-predictive analysis to model human faces that don't exist in real life. They don't look too bad today. Yes, they need work. But in about 5 years--wow! OMG!
http://alteredqualia.com/xg/examples/eyes_gaze3.html
(Run your mouse cursor over the face to really get creeped out. Click the black space on either side to see others.)
True, Bitcoin May Become Corrupt. But Banks Already Are. by izumi3682 in Futurology


[–]izumi3682[S] 1 point 6 months ago
I thought we were trying to move away from this mess. I thought the goal was the dream of Peter Diamandis' "Post-Scarcity" society. But instead we just seem to be adding more confusing layers of crap that enables those in the know to make massive amounts of money on the backs of those not as clever. The 1% persists, the 99% persists to fail.
I certainly look forward to the day when we can put AI inside of our minds and no one can be fooled or tricked or deceived any longer. I bet a lot of people think that making all humans super intelligent would not be such a good idea. It would definitely "upset the apple cart" of business as usual.
But who am I kidding. The 1% will get that AI inside of their minds and the rest of us will be their willing slaves or simply exterminated to get rid of the "surplus population" and make the Earth a nicer place to live for the 1%.
A billion new low-cost employees from china didn't cause unemployment. Why should some puny robots scare us ? by furyfairy in Futurology


[–]izumi3682 1 point 6 months ago
I think I can sum it up fairly succinctly. The industrial revolution replaced human (and horse) (and oxen) muscle. The AI revolution will replace the human mind. Watch this space in 10 years.
Massive demand will see 5G phones arrive in 2019 says Qualcomm by izumi3682 in Futurology


[–]izumi3682[S] 1 point 6 months ago
Wow! 7 years ago was a technological lifetime ago. The convergence of VR, AI and consumer level peta-scale processing power will bring about devices that are as different from the playstation console of today as the motor vehicle is from the horse in that next 7 years!
I got my IPhone 7+ in Sep of 2016. And it was totally slam awesome. But this year's iteration of the IPhone is so fantastically advanced over that of my IPhone 7+, that forgive me if I show no control and leap for the next gen with its enlarged OLED HDR screen, in-built machine learning chip and processing power and capability. Even that creepy perfected facial recognition technology. Not undependable and quirky like the current gen Samsung.
Here is IPhone 7: http://bgr.com/2016/10/21/iphone-7-specs-a10-fusion-processo
Here is IPhone X: https://www.theverge.com/2017/9/13/16300464/apple-iphone-x-ai-neural-engine
I suspect I'm not the only one who can't control myself. And of course in the fall of the year 2018 I shall hopefully be able to get a bit of a discount on my IPhone X trade-in for the next gen of the IPhone.
Is virtual reality bad for our health? The risks and opportunities of a technology revolution by izumi3682 in Futurology


[–]izumi3682[S] 3 points 6 months ago
Here is what I have to say about VR and its impacts and future.
https://www.reddit.com/Futurology/comments/6itqu4/escape_to_the_future_with_virtual_reality/dj93x8y/
Massive demand will see 5G phones arrive in 2019 says Qualcomm by izumi3682 in Futurology


[–]izumi3682[S] 3 points 6 months ago
No way man, I'll pay thru the nose for yearly exponential technological advancement. To heck with that 4 year "Playstation" console business model.
Is virtual reality bad for our health? The risks and opportunities of a technology revolution by izumi3682 in Futurology


[–]izumi3682[S] 5 points 6 months ago
Just wait until we nail resolution and FOV.
https://www.forbes.com/sites/stevenkotle2014/01/15/legal-heroin-is-virtual-reality-our-next-hard-drug/#56dd35961a01
Personally, I'm way looking forward to it. Imma Oculus Rift early adopter. I see what the future is gonna be. I often discuss VR and its impact and future in my overview.
Why China Is So Confident by izumi3682 in Futurology


[–]izumi3682[S] 1 point 6 months ago
This is me about China and why the 21st century will belong to China until the AGI takes over about mid-century.
https://www.reddit.com/Futurology/comments/5pwnyj/china_reminds_trump_that_supercomputing_is_a_race/dcw3qyq/
Superpower India to Replace China as Growth Engine by izumi3682 in Futurology


[–]izumi3682[S] 9 points 6 months ago
No. India is far far too scattershot politically, technologically, culturally and socio-economically to ever be able to compete meaningfully with China. Aside from a few technological "city-state islands", the majority of India remains a 3rd world backwater. Filthy, ignorant and dangerous.
Having said that, I observe with sincere amazement and admiration when Indians leave India and become powerful intellects in their adopted new countries. The potential absolutely exists and if it is ever tapped would be transforming. Unfortunately the 21st century is likely to belong to China/USA until the AGI actually takes over completely mid-century or so.
How long should a $999 iPhone last? (Me: or any mobile for that matter.) by izumi3682 in Futurology


[–]izumi3682[S] 1 point 6 months ago
One year, tops. After that point I expect exponential technological advances to render my one-year-old 999-dollar mobile nearly obsolete. Hopefully I can trade in my one-year-old mobile for a bit of a discount on next year's 1500-dollar model.
I absolutely delight in the fact it becomes necessary for me to have to update my mobile once each year. It just goes to show how incredibly fast our technology is advancing.
By the way, if you can't use the website "Sketchfab" on your mobile because it won't support it, you need to upgrade your mobile. "Sketchfab" is a miracle of AI and processing power. You have to see it to believe it. Here is the pc version.
https://sketchfab.com/
Skin Patch Dissolves “Love Handles” in Mice: Researchers devised a medicated skin patch that can turn energy-storing white fat into energy-burning brown fat locally while raising the body’s overall metabolism, to burn off pockets of unwanted fat such as “love handles” and treat obesity. by mvea in Futurology


[–]izumi3682 1 point 6 months ago
Not a moment too soon for the likes of me.
I Saw Her Face, Now I’m a Believer—Facial Recognition Tech Goes Mainstream by dwaxe in Futurology


[–]izumi3682 0 points 6 months ago
I was reading that Samsung had developed the facial recognition technology for their mobiles already, but that it was so undependable and quirky that most people chose not to use it. Apple has nailed the technology. That is what Apple does.
PAL-V Just Announced Plans to Travel Around the World in a Flying Car by skoalbrother in Futurology


[–]izumi3682 1 point 6 months ago
Granted, it's a tiny helicopter, but it's still a helicopter. If your flying car requires tons of licensing, training and carrying on, we haven't arrived. A flying car should be electric and level-5 AI autonomous: the humans get in like a regular car, tell the flying car where to go, and it takes off like a king-sized drone to deliver them to their destination in safety and comfort.
We'll get there. I see good signs. But this is just a little helicopter.
Median wealth of black Americans 'will fall to zero by 2053', warns new report | Inequality by izumi3682 in Futurology


[–]izumi3682[S] 2 points 6 months ago
Just as the transistor replaced the vacuum tube, the quantum logic gate will replace the transistor-based chip. Quantum supremacy is possible by 2018. Moore's law will be transcended. In the meantime, 15-petaflop computers are sprouting like dandelions. This will trickle down to the consumer as well. Just imagine the resulting VR!
Median wealth of black Americans 'will fall to zero by 2053', warns new report | Inequality by izumi3682 in Futurology


[–]izumi3682[S] 1 point 6 months ago
That's a valid consideration and I concede it to you as a possibility!
Smartphones Could Be Leading To A Mental-Health Crisis by izumi3682 in Futurology


[–]izumi3682[S] -1 points 6 months ago
Smartphones demonstrate the increasingly sharp and still-growing disconnect between machines like mobiles and biological human minds. This is only going to get worse, and it really is worrisome. It is almost as if humans are striving to become one with their devices. You can see it so easily now in everyday life; just go out where people are and you'll see it. But I also believe that help is on the way.
Unfortunately this help, the BMI (Brain-Machine Interface), while it will restore equilibrium, will also leave a far different "human" on the other side. Hopefully I'll really, really like being "Human 2.0".
How Tinder Exposed Our Reliance on Racist Stereotypes by izumi3682 in Futurology


[–]izumi3682[S] 1 point 6 months ago
This is a perfect example of parsing "big data" to gain useful and sometimes distressing insights. The demographics of the USA are now in flux. The next 20 years, in the absence of overwhelming AI, will see vast changes in the way that the USA thinks. Me, I just want a comfortable, cool tech VR future, but I worry about increasing societal discord in the USA from a variety of "growing pains" that are more than just exposed racism and demographic shift.
submitted by izumi3682 to u/izumi3682 [link] [comments]

Time and energy required to brute-force a AES-256 encryption key.

I did a report on encryption a while ago, and I thought I'd post a bit of it here as it's quite mind-boggling.
AES-256 is the standardized encryption specification. It's used worldwide by everyone from corporations to the US government. Its largest key size is 256 bits. This means that the key, the thing that turns encrypted data back into unencrypted data, is a string of 256 1s and 0s.
With each bit having two possibilities (1 or 0), there are 2^256 possible combinations. Typically, only 50% of the keyspace needs to be exhausted to yield the correct key, so on average only 2^255 keys need to be guessed. How long would it take to flip through each of the possible keys?
When doing mundane, repetitive calculations (such as brute-forcing or bitcoin mining), the GPU is better suited than the CPU. A high-end GPU can typically do about 2 billion calculations per second (2 gigaflops), and for this estimate we'll treat one calculation as one key tested. So, we'll use GPUs.
Say you had a billion of these, all hooked together in a massively parallel computer system. Together, they could perform at 2e18 flops:
    1 billion GPUs @ 2 gigaflops each (2e9 flops)
    = 2 000 000 000 000 000 000 keys per second (2 quintillion, 2e18)
Since there are 31 556 952 seconds in a year, we can multiply by that to get the keys per year:
    2e18 * 31 556 952 = 6.3113904e25 keys per year (~63 septillion)
Now we divide the 2^255 combinations by 6.3113904e25 keys per year:
    2^255 / 6.3113904e25 = 9.1732631e50 years
The universe itself has only existed for about 14 billion (1.4e10) years. It would take roughly 6.5e40 times the age of the universe to exhaust half of the keyspace of an AES-256 key.
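As a quick sanity check on those figures, here is a minimal Python sketch of the same arithmetic (the one-billion-GPU fleet and the 2e9 keys-per-second-per-GPU throughput are the assumptions carried over from above):

    # Back-of-the-envelope check of the brute-force estimate above.
    SECONDS_PER_YEAR = 31_556_952

    gpus = 1_000_000_000           # one billion GPUs (assumption from the text)
    keys_per_gpu_per_sec = 2e9     # ~2 billion key guesses per second per GPU (assumption)

    keys_per_year = gpus * keys_per_gpu_per_sec * SECONDS_PER_YEAR
    half_keyspace = 2**255         # on average, half of the 2^256 keyspace must be searched

    years = half_keyspace / keys_per_year
    print(f"{keys_per_year:.4e} keys per year")            # ~6.31e25
    print(f"{years:.4e} years")                            # ~9.17e50
    print(f"{years / 1.4e10:.2e} x age of the universe")   # ~6.5e40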
On top of this, there is an energy limitation. The Landauer limit is a theoretical lower bound on the energy consumption of computation. It holds that in a system that is logically irreversible (bits do not reset themselves back to 0 from 1), a change in the value of a bit requires an entropy increase of kT ln 2, where k is the Boltzmann constant, T is the temperature of the circuit in kelvins, and ln 2 is the natural logarithm of 2.
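For a sense of scale, here is a rough sketch (my own illustration, not part of the original post) of what the Landauer limit alone implies for trying 2^255 keys, assuming at least one irreversible bit flip per key tried and a circuit at room temperature (300 K):

    from math import log

    k = 1.380649e-23      # Boltzmann constant, in joules per kelvin
    T = 300               # assumed circuit temperature, in kelvins
    bit_flips = 2**255    # lower bound: one irreversible bit flip per key tried

    min_energy = bit_flips * k * T * log(2)   # Landauer limit: kT*ln(2) per bit flip
    print(f"{min_energy:.2e} joules")         # ~1.66e56 J

Even at this theoretical minimum, that is around 1.7e56 joules, vastly more than the roughly 1e44 joules the Sun is expected to radiate over its entire lifetime.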
Let's try our experiment again while considering power.
Most high-end GPUs draw around 150 watts at full load. This doesn't include cooling systems.
    1 billion GPUs @ 150 watts each
    = 150 000 000 000 watts (150 gigawatts, 1.5e11 watts)
This is enough power to supply roughly 50 million American households.
The largest nuclear reactors (such as those at the Kashiwazaki-Kariwa plant) each generate about 1 gigawatt of power.
    1.5e11 watts / 1 gigawatt = 150 reactors
Therefore, 1 billion GPUs would require 150 nuclear reactors running constantly to power them, and it would still take far longer than the age of the universe to exhaust half of an AES-256 keyspace.
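The same power arithmetic in a couple of lines of Python (the 150 W per GPU and 1 GW per reactor figures are the assumptions from the text):

    gpus = 1_000_000_000
    watts_per_gpu = 150              # assumed draw per high-end GPU, excluding cooling
    watts_per_reactor = 1e9          # ~1 gigawatt per large reactor

    total_watts = gpus * watts_per_gpu
    print(f"{total_watts:.1e} watts")                         # 1.5e11 W (150 gigawatts)
    print(f"{total_watts / watts_per_reactor:.0f} reactors")  # 150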
1 billion GPUs is kind of unrealistic. How about a supercomputer?
The Tianhe-2 supercomputer, located at Sun Yat-sen University in Guangzhou, China, is the world's fastest supercomputer. It clocks in at around 34 petaflops.
Tianhe-2 Supercomputer @ 33.86 petaflops (quadrillion flops):
    = 33 860 000 000 000 000 keys per second (33.86 quadrillion, 3.386e16)
    3.386e16 * 31 556 952 seconds in a year
    = 1.0685184e24 keys per year (~1 septillion)
Dividing the 2^255 possible keys by that rate:
    2^255 / 1.0685184e24 = 5.4183479e52 years
That's just for 1 machine. Reducing the time by one order of magnitude would require 10 such basketball-court-sized supercomputers; reducing it by x orders of magnitude would require 10^x of them. It would take on the order of 10^42 Tianhe-2 supercomputers running for the entirety of the existence of everything to exhaust half of the keyspace of an AES-256 key.
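And the same check for the Tianhe-2 case (again treating one floating-point operation as one key tested, which is the post's simplifying assumption):

    SECONDS_PER_YEAR = 31_556_952
    tianhe2_keys_per_sec = 33.86e15       # 33.86 petaflops, one key tested per flop (assumption)

    keys_per_year = tianhe2_keys_per_sec * SECONDS_PER_YEAR
    years = 2**255 / keys_per_year
    machines = years / 1.4e10             # machines needed to finish within the age of the universe

    print(f"{keys_per_year:.4e} keys per year")   # ~1.069e24
    print(f"{years:.4e} years")                   # ~5.418e52
    print(f"{machines:.1e} Tianhe-2s needed")     # ~3.9e42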
Edit: corrections on my grade 12 math.
submitted by INCOMPLETE_USERNAM to theydidthemath [link] [comments]

HOT: “Russian nuclear engineers arrested for using state supercomputers to mine Bitcoin”

Two engineers of the Russian Nuclear Center have been caught using one of the facility’s supercomputers to mine Bitcoin.
This sad story took place in the top-secret Federal Nuclear Center in Sarov, western Russia, where Russian scientists produced the country's first nuclear bomb during the Cold War.
It should be noted that the Nuclear Center in Sarov cannot be found on any map. It is one of the most secret facilities in the country, completely separated from the rest of Sarov, and anyone who wishes to enter the area must obtain a special permit.
The Russian Federal Nuclear Center is located in an isolated town and employs about 20,000 people. In 2011 the center launched a new supercomputer with a capacity of 1 petaflop (the ability to process one thousand million million, or 10^15, operations per second). For security reasons, this supercomputer was never intended to be connected to the Internet. Nevertheless, two engineers at the center used it to mine Bitcoin, and as soon as they attempted to connect the machine to the Internet, the Federal Security Service of the Russian Federation was alerted and the engineers were caught and detained.
Coin mining is a way to obtain digital coins without buying them, though it carries costs of its own. The more powerful your computer, the better your chances of mining a coin, so the Russian supercomputer was obviously well suited to the task.
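For readers wondering what mining actually computes, here is a toy proof-of-work loop in Python. It is a simplified sketch only: real Bitcoin mining double-SHA256-hashes an 80-byte block header against a vastly harder target, but the principle, that more hashing speed means better odds, is the same.

    import hashlib

    def mine(header: bytes, difficulty_bits: int) -> int:
        """Find a nonce whose double-SHA256 hash falls below the difficulty target."""
        target = 2 ** (256 - difficulty_bits)
        nonce = 0
        while True:
            data = header + nonce.to_bytes(8, "little")
            digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce      # valid proof of work found
            nonce += 1            # the faster you can iterate, the better your odds

    # Toy difficulty: roughly 1 in 65,000 hashes succeeds (real Bitcoin difficulty is astronomically higher).
    print(mine(b"example block header", difficulty_bits=16))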

VCX

VCX coin

submitted by vcxcoin to u/vcxcoin [link] [comments]

What Bitcoin Mining Entails - D-Central
The World's Fastest Supercomputer is ARM-Based
How to mine $1,000,000 of Bitcoin using just a laptop
The Hottest Crypto in Computing - ANKR
Mira: Argonne's 10-petaflops supercomputer

One of three Russian scientists arrested in February 2018 on charges of using a classified government computer to mine cryptocurrency has been fined 450,000 rubles, equivalent to about US$7,000. Denis Baykov, a former employee of the Federal Nuclear Center in Sarov, Russia, was found guilty of violating the computer lab's policies, according to court documents.
At any given moment, Bitcoin's peer-to-peer network contains thousands of computers linked together to generate more than 1,000 petaflops of raw computing power. Cryptocurrencies are backed not by a central authority but by math, cryptography, and a public distributed network with a hash computing power currently equivalent to over 740,000 petaFLOPS; by that measure, the Bitcoin network is over 256 times faster than the top 500 supercomputers combined. The most powerful supercomputer in the world, Sequoia, can manage a mere 16 petaFLOPS, or just 1.6 percent of the power geeks around the world have brought to bear on mining Bitcoin.
Massive computing for Bitcoin mining and AI: energy, in all its forms, is money. Being energy-smart is the other way to meet the energy challenge in the crypto world and in the growing deployment of AI. (Stephane Bilodeau)



Bitcoin has been a subject of scrutiny amid concerns that it can be used for illegal activities. In October 2013 the US FBI shut down the Silk Road online black market and seized 144,000 bitcoins ...
Take a look inside BP's state-of-the-art Center for High-Performance Computing (CHPC) in Houston. It is equipped with over 3.8 petaflops of processing speed, making it the world's largest ...
Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops ...
Japan's latest supercomputer 'Fugaku' is the world's fastest for computing speed on Top500. It scored a High-Performance Linpack (HPL) score of 415.5 petaflops, which makes it 2.8 times ...
Bitcoin operates on a public ledger framework, where bitcoin nodes give their approval to prevent double-spending (one bitcoin being spent more than once). Miners create bitcoins and add them to ...
