Decentralised Network File Sharing
#1
This is not about one specific app, nor about making one, but both: a discussion of theory and practice.
Since the subject of tracker-less, site-less, storage-less, moderator-less systems and other such innovations keeps getting attention, I had an idea: let's make a thread!

As we saw in the thread "Atheists are more intelligent...", some people get vocal about a subject as if it were a personal matter, and even in academic debates there's room for fanaticism, preconception, dogma, you name it. Please bear that in mind.

May Mod help me, for a torrent of criticism will come like the Second Flood.

I believe the question involves technical, economic, and social (human) aspects, including the (imo abominable) politician/lawyer mindset.
From a purely technical point of view, it could work.

This is not a new field; it has been around for a long time. Freenet and ZeroNet are examples.
Notable options are Tribler and Fopnu. The first came out over a decade ago and the latter is almost brand new, still in development - but perfectly operational.
Others include old file-sharing systems (like LimeWire/Gnutella) or prototypes with little more than proof-of-concept abilities (like magnetico/BitTorrent).
We also have Stormcloud (which I haven't tried yet). *magnetico is spelled with a lower-case "m".


On the other side, there are BitCannon, ThePirateBay, and other services. Although TPB has proxies and gives away database dumps, it remains a central point and relies on the Internet.
A recent tool is OfflineBay, which does on your local drive what BitCannon does (or did) in the cloud. Some sites say they can search hundreds of torrent sites - but Google and other search engines can do that. *What they actually search is whatever they have indexed.

The first problem with a decentralised system is the lack of human managers. The system must either be automated - as if current A.I. could handle it - or stay in the hands of the users themselves. Some sort of ratings system must be in place to flag good and bad data, and their uploaders. Of course, ratings vary with personal taste and get abused by mindless trolls or professional hackers. Stormcloud has some interesting ideas on that, but old file-sharing systems have been doing it in some fashion already.
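
To make that a bit more concrete, here is a minimal sketch of what a peer-kept rating store could look like. Everything in it (names, structure, scoring rule) is my own invention, purely illustrative - no existing client works exactly like this:

```python
# Hypothetical sketch of a per-item, per-uploader rating store kept by peers.
# Illustrates accumulating good/bad votes without a central moderator.
from collections import defaultdict

class RatingStore:
    def __init__(self):
        self.votes = defaultdict(dict)   # item_hash -> {voter_id: +1 or -1}
        self.uploader_of = {}            # item_hash -> uploader id

    def publish(self, item_hash, uploader_id):
        self.uploader_of[item_hash] = uploader_id

    def vote(self, item_hash, voter_id, good):
        self.votes[item_hash][voter_id] = 1 if good else -1

    def item_score(self, item_hash):
        return sum(self.votes[item_hash].values())

    def uploader_score(self, uploader_id):
        # An uploader inherits the score of everything they published.
        return sum(self.item_score(h)
                   for h, up in self.uploader_of.items() if up == uploader_id)

store = RatingStore()
store.publish("abc123", uploader_id="alice")
store.vote("abc123", voter_id="bob", good=True)
print(store.item_score("abc123"), store.uploader_score("alice"))  # 1 1
```

The hard part is exactly the abuse mentioned above: one vote per voter means nothing when trolls can mint identities for free, so any real system needs some cost or web of trust behind each vote.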

But the biggest problem is that very lack of management: like noise in the network, those systems are made to run free of operators or monitors, resilient to interference, and out of anyone's control. No tyrannical, "responsible" government will tolerate such use of the Internet. They'll say it's for our own protection, citing abuse, accountability, and the battle over what is illegal and what is immoral content.

Storage is also a curious point: some keep the current model where each peer stores what they like; a few propose automatically distributing pieces of a gigantic file-set, like a distributed repository (not only the torrent files, but the data itself); and none consider the cloud, since it would demand huge servers replicated across the world and the funds to keep them running. The ones that do go that route end up in the same boat as MegaUpload.

In the past, file-sharing and anonymous networks have been shut down - some for the old copyright reasons, others over morally objectionable or even criminal content.
We have Tor (created by Navy intelligence, sometimes a.k.a. The CIA) and I2P (made by academics who may be more interested in studying the thing than in building it).

Please post your additions and comments - thank you!
#2
I use TPB. And some private torrent sites. That's it.
#3
TPB if I can't find it anywhere else, elsewhere if I can.
#4
TRIBLER: the future of sharing, or another file-sharing system with a shattered future?

Old news from January - https://torrentfreak.com/researchers-use...ng-180129/

"Researchers at Delft University of Technology have released a major update to their decentralized and pseudo-anonymous BitTorrent client. The new Tribler has its very own blockchain that tracks how much people are sharing, so users can be rewarded accordingly. This should ultimately improve the efficiency of the client's Tor-like protection."

What does pseudo-anonymous, Tor-like protection really mean? And now we have Fopnu, which is more of (kinda like) the same. Do we need them, or are they just brief steps in an evolution?
#5
I wonder how much computing power would be needed to generate a file from its hash alone by brute-forcing every possible option.
Some rough calculations for this could be made on a small test file, but I'm too lazy to do that now.

Imagine sharing only hashes generated by a collision-free algorithm, with no possibility of two different files producing the same hash, then using massive computing power to reduce millions of years of searching to a couple of hours.

I don't know if quantum computers would be enough for this task. Using a p2p network with torrent-like clients that reconstruct the file using all peers as one big supercomputer would just be a longer walk to the same output - you would still need to download the file - but it would be cool to reanimate old uploads. Dead torrents would cease to exist in this scenario: reconstructed pieces could be shared among peers, with classic seeding as a supplement, or the other way around.

That's wild and unrealistic for today, but maybe not for tomorrow.
A single microprocessor of today is faster than the fastest supercomputer of yesterday, so in the future it could be possible to reconstruct the whole container on your own, without any downloading.

It would be like DNA samples from which we could reconstruct the whole body.

The first couple of bytes of the hash could encode some important things, like the size of the file to be found; that would narrow the search greatly.
So the shared file would be just a list of hashes, one for every file in the container.

It would probably be better to cut the file into pieces, like current torrents do, and then hash every one of them; that would decrease the difficulty / increase the luck of finding and reconstructing the correct one.
Memory speed would be the biggest bottleneck here, so some solution for that, or new tech, would be needed.
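
Roughly what I mean, as a toy sketch (Python just for illustration; the piece size is tiny because anything bigger never finishes):

```python
# Toy illustration: recover a tiny "piece" knowing only its hash,
# by trying every possible byte combination. The search space is
# 256^piece_len, so this is only feasible for 2-3 bytes.
import hashlib
from itertools import product

def reconstruct_piece(target_hash, piece_len):
    for candidate in product(range(256), repeat=piece_len):
        piece = bytes(candidate)
        if hashlib.sha1(piece).hexdigest() == target_hash:
            return piece
    return None

original = b"Hi"                             # a 2-byte "piece"
target = hashlib.sha1(original).hexdigest()
print(reconstruct_piece(target, 2))          # b'Hi' after up to ~65k tries
```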

OK now I'm going too far into this.

We don't exactly have file-sharing problems, but maybe it would be more legal for site operators to host only hashes? There would be no downloading, only reconstruction...
Or are we just kicking up another dust cloud that they will dispel with some yet-unwritten laws?

We are already using only magnets... so it probably wouldn't change anything in that department.
#6
(Apr 16, 2018, 16:12 pm)Mr.Masami Wrote: I wonder how much computing power would be needed to generate a file from its hash alone by brute-forcing every possible option.
It would be like DNA samples from which we could reconstruct the whole body.

Theoretically possible, but not by means of simple calculations; I suppose recursive polynomials would be a starting point? I'm no mathematician.
What you envision is like a compression algorithm; to be efficient, such a thing should work on multiple levels - removing redundancies, interleaving bits, and so on.

FLAC is an example of a lossless algorithm; DNA can't produce the exact same result twice - it's lossy, and too intertwined in the coding sense: some parts help determine both eye colour and the number of arms, etc. Human DNA also carries a big amount of redundancy; see mitochondria for details.

I presume it will require very long hashes, but the proportion gets progressively better for bigger files.
Decades ago IBM worked on a technique capable of zooming into images almost infinitely, but it also got slower with each iteration. It was taken out of public access and I never heard of it again; maybe they thought it was too good to waste on smoothing jagged porn pictures on the Internet...

Anyway, the processing cost to achieve that in real time would be huge, energy included. My old Pentium II had trouble with big JPEGs - what to say of HD videos?
Next century, my friend, next century...
#7
(Apr 16, 2018, 16:12 pm)Mr.Masami Wrote: I wonder how much computing power would be needed to generate a file from its hash alone by brute-forcing every possible option.
[...]
Imagine sharing only hashes generated by a collision-free algorithm, with no possibility of two different files producing the same hash, then using massive computing power to reduce millions of years of searching to a couple of hours.
[...]
I don't think this is possible. You want to map infinite possibilities (because you can write a file of any composition and any length) into a string of limited length. You can, of course, brute-force every possibility and find a file with the corresponding hash, but there is no way of telling whether it was the original file.
Which leads me to the second sentence, about a collision-free algorithm. Again, you cannot map infinite possibilities into a limited space. If you have a long enough hash and a good function, you can use it safely for quite some time, but it won't hold forever. SHA-1 was considered good for a long time (it is still being used in torrents) and yet a collision has already been found.
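
To put rough numbers on it (a quick back-of-the-envelope in Python; the 1 KiB piece size is just an arbitrary example):

```python
# Counting argument: there are far more possible pieces than hash values,
# so many different pieces are guaranteed to share the same hash.
piece_bits = 8 * 1024    # one 1 KiB piece
hash_bits = 160          # SHA-1 output length

print(2 ** piece_bits > 2 ** hash_bits)                    # True
print(f"~2^{piece_bits - hash_bits} pieces per hash value on average")
```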
#8
(Apr 17, 2018, 03:46 am)Hiroven Wrote: ...
I don't think this is possible. You want to map infinite possibilities (because you can write a file of any composition and any length) into a string of limited length.
...

That's why I proposed cutting files into pieces, like current torrents do, to limit the number of possibilities and to stay within the boundaries of the algorithm used to calculate the hashes.

(Apr 16, 2018, 19:47 pm)dueda Wrote: Theoretically possible, but not by means of simple calculations; I suppose recursive polynomials would be a starting point? I'm no mathematician.

Me neither, but brute force has nothing to do with mathematical calculations, aside from computing the hash each time a new candidate file is generated.

(Apr 16, 2018, 19:47 pm)dueda Wrote: What you envision is like a compression algorithm; to be efficient, such a thing should work on multiple levels - removing redundancies, interleaving bits, and so on.

It doesn't have that much to do with compression (we're not trying to reconstruct the file from the hash, we're trying to generate a file that matches the hash). Just imagine some really small file, say 5 bytes long, and you only have its hash: you simply generate new files until you get the same hash. But yes, some compression prior to using this method would be advisable; as you said, a multi-layer solution would be needed to decrease the difficulty of reconstructing the pieces.
5 bytes = 40 bits
https://en.wikipedia.org/wiki/40-bit_encryption

Of course, on such a small file it would make no sense, because the hash alone would be bigger than the file if we used SHA-1, for example.

Either way, current performance doesn't make it viable for bigger files.

But the compression aspect is there: we could, for example, use a hashing algorithm whose output is smaller than the pieces yet (hopefully) long enough to avoid collisions - say a 5-byte hash for a 10-byte piece.
Working at these sizes, we could theoretically halve the size of the file by downloading only the hashes of the pieces, which would then be used to reconstruct the pieces on the client machine by brute-forcing all possible combinations in a 10-byte space.
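
A toy version of that encode/decode idea, shrunk down to 2-byte pieces and 1-byte hashes so it actually finishes (MD5 truncated as the "small" hash, purely for illustration):

```python
# Toy "share only the hashes" scheme at miniature scale: 2-byte pieces,
# 1-byte truncated MD5 per piece. It runs, but with hashes this short
# many different pieces share a hash, so decode() usually returns
# something other than the original data.
import hashlib
from itertools import product

PIECE_LEN, HASH_LEN = 2, 1

def encode(data):
    return [hashlib.md5(data[i:i + PIECE_LEN]).digest()[:HASH_LEN]
            for i in range(0, len(data), PIECE_LEN)]

def decode(piece_hashes):
    out = b""
    for h in piece_hashes:
        for cand in product(range(256), repeat=PIECE_LEN):
            piece = bytes(cand)
            if hashlib.md5(piece).digest()[:HASH_LEN] == h:
                out += piece
                break
    return out

print(decode(encode(b"test")))   # very likely not b"test"
```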

But yeah, I'm no mathematician.
#9
Probably a bit more feasible over the BitTorrent protocol, as each file is split into pieces, so you would only be recreating a piece.
#10
(Apr 17, 2018, 05:16 am)Kingfish Wrote: Probably a bit more feasible over the BitTorrent protocol, as each file is split into pieces, so you would only be recreating a piece.

Yes, it could be just a slight modification of some open-source BitTorrent client to do this simple yet intensive task.
Would this make sense? Idk, this is a theory thread, so there's my Wink

@edit: I wrote a little asm program that takes any file, generates an MD5 (128-bit / 16-byte) hash for every 32-byte piece, and stores them all in a binary file; it's now running on my server, trying to find the first piece.
I didn't even check the performance, so I don't know how long it could take - will post results in a couple of millennia.
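
The hashing side is roughly this (shown here as a Python sketch, not the actual asm, and the file names are made up):

```python
# Rough Python equivalent of the hashing step: read a file, MD5 every
# 32-byte piece, and dump the 16-byte digests to a binary file that a
# brute-force "reconstructor" could later work against.
import hashlib
import sys

PIECE_SIZE = 32

def hash_pieces(src_path, dst_path):
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            piece = src.read(PIECE_SIZE)
            if not piece:
                break
            dst.write(hashlib.md5(piece).digest())

hash_pieces(sys.argv[1], sys.argv[2])   # usage: hash_pieces.py input.bin hashes.bin
```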

Maybe with smaller pieces it would be possible, but 50% smaller files would be cool Big Grin
Why MD5? Coz I had it lying around Big Grin It's also running on only one thread; it would need to be multi-threaded, but I can't sacrifice more than one core on my company server anyway.

I'll try later with smaller pieces; for practical use, where we have gigabytes of data, it would have to be very fast.