/tech/ - Technology

Technology & Computing


Lately I've been interested in looking for a final solution to the imageboard problem: deplatforming and reliance on centralized authorities for hosting. P2P through Tor seems like the most logical path forward. But the software would also need to be accessible, easily installed and understood by just about anyone, and secure/private by default.

Retroshare seemed like a decent choice, but unfortunately its forum function is significantly lacking in features. I haven't investigated ZeroNet too much either, but from what I recall it was a very bloated piece of software, and I'm looking for something that's light and simple. Then there's BitChan (>>507), which fits most of the bill but, compared with Retroshare, is not simple to set up.

I know there is essentially nothing else out there, so this thread isn't necessarily asking to be spoonfed some unknown piece of software that flew under the radar of anons. But I think the concept of P2P imageboards should be further explored, even though the failure of ZeroNet soured a lot of people's perspective on the concept. Imageboards are so simple by nature that I feel this shouldn't be as difficult as it is. Retroshare comes close, but as I understand it you can't really moderate the forums that you create. Plus the media integration is basically non-existent, though media is a lesser concern. But having everything routed through Tor and being able to mail, message, and have public forums all in a single small client available on every operating system is the kind of seamlessness a program needs for widespread adoption.
>>845 (OP) 
Really I should have made the subject "Peer2Peer Imageboard/Forum solutions", because that more accurately describes what I'm interested in. But you get the point.
bump

>P2P through Tor seems like the most logical path forward. But the software would also need to be accessible, easily installed and understood by just about anyone, and secure/private by default.

People ITT can directly help by making a guide or linking to good resources on how to do this.
Replies: >>860
This thread gets made again and again, yet anons only discuss the things that don't matter. The implementation is largely unimportant, language and code bloat autism aside. If you want a decentralized imageboard architecture, here are the three problems you need to solve: how to come up with a new pseudonymous identity that doesn't reveal much about the poster, how to keep that identity on the server for as short a time as possible (preferably not to keep it at all and just use it for an authorization handshake), and what measures to put in place to limit media spam or make it possible to store large amounts of media. Nothing that exists right now fits all three criteria, so if you have something in mind, get to work.

If you think I'm the biggest faggot in the world for suggesting identities, keep in mind that right now IPs serve that role for the WWW. Tor imageboards using currently existing engines already get spammed because the only address there is localhost. The only solutions outside of that are more people to moderate content, and captchas. One is basically amplifying the jannie problem; the other is just one layer in a proper security stack and is otherwise easily defeated if you don't use JS shit. Web of trust is dogshit because of the barrier of entry, cutting down the 3.5 anons that browse imageboards outside of redditchan to 1.5, as well as creating echo chambers. One option would be to use SSH keys or any other public-private key pair algorithm. But then you need to figure out how long you would need to keep them on the server and how visible they should be. That is, something to the effect of using the public key for the handshake when posting, not tying keys to posts as IDs in the database, and keeping an activity rating for each. Newer keys get to post less or face a more difficult posting challenge, and after some number of posts or amount of time you reach a rating where it's all the same. Combine that with some non-JS captchas and you have a potential pseudo-identity to softblock immediate spam.
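A minimal sketch of that handshake idea, assuming a hashcash-style proof-of-work challenge whose difficulty eases off as a key accumulates age and activity (every function name, threshold and constant here is invented for illustration, not taken from any existing engine):

import hashlib
import os

def challenge_difficulty(key_age_days: float, post_count: int,
                         base_bits: int = 22, min_bits: int = 12) -> int:
    # Newer/quieter keys must solve a harder proof of work before posting.
    credit = int(key_age_days) + post_count // 5
    return max(min_bits, base_bits - credit)

def solve_challenge(pubkey: bytes, payload: bytes, bits: int) -> bytes:
    # Hashcash-style: find a nonce so sha256(pubkey|payload|nonce) has `bits` leading zero bits.
    target = 1 << (256 - bits)
    while True:
        nonce = os.urandom(8)
        if int.from_bytes(hashlib.sha256(pubkey + payload + nonce).digest(), "big") < target:
            return nonce

def verify_challenge(pubkey: bytes, payload: bytes, nonce: bytes, bits: int) -> bool:
    return int.from_bytes(hashlib.sha256(pubkey + payload + nonce).digest(), "big") < (1 << (256 - bits))

bits_new = challenge_difficulty(key_age_days=0, post_count=0)      # hardest challenge for a fresh key
bits_old = challenge_difficulty(key_age_days=30, post_count=200)   # eases off to the floor
nonce = solve_challenge(b"pubkey", b"post body", bits_old)
assert verify_challenge(b"pubkey", b"post body", nonce, bits_old)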

Next is media content, the inevitable image, PDF and WEBM dump threads, and where to keep them. This is why centralized or federated solutions are more practical than fully P2P ones. Fast enough access to posts with media requires some nodes in a P2P swarm to operate more or less 24/7, and not everyone has the means or desire to do that. How do you identify the nodes that do? You can label them as trusted to enable faster loading times, but at that point you've already returned to a federated model. A mixnet-style P2P could solve this, where you have what are essentially server nodes that also act as trackers. As you read a thread, you may be chosen to offload some of the media content to other peers. The trusted nodes can also act like today's webring, but with load-balancing negotiation added on top to store files attached to posts more efficiently. That being said, you have to understand that any P2P solution deanonymizes you to an extent: by design, you need to peer with others at certain intervals, and a malicious entity (using Tor as an example) can map such a web because of how interconnected it is.
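To make the "trusted nodes as trackers" part concrete, here is a rough sketch of how a client might pick peers for media offload, weighted toward long-uptime trusted nodes. The Peer structure and scoring are my own invention, not part of any existing protocol:

import random
from dataclasses import dataclass

@dataclass
class Peer:
    addr: str            # e.g. an onion address
    uptime_hours: float  # how long this peer has been observed online
    trusted: bool        # long-lived "tracker" nodes

def pick_media_hosts(peers: list[Peer], k: int = 3) -> list[Peer]:
    # Prefer trusted/long-uptime peers, but still spread some load onto ordinary ones.
    # Sampling is with replacement; a real implementation would dedupe and negotiate.
    weights = [p.uptime_hours * (3.0 if p.trusted else 1.0) + 1.0 for p in peers]
    return random.choices(peers, weights=weights, k=min(k, len(peers)))

swarm = [Peer("a.onion", 700, True), Peer("b.onion", 12, False), Peer("c.onion", 90, False)]
print([p.addr for p in pick_media_hosts(swarm, k=2)])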

>>859
>bumping a thread on a slow board
What are you doing, nigger?
Replies: >>861 >>864 >>1148
>>860
>>bumping a thread on a slow board
fucking let him, it's not that big a deal

Decentralized imageboards so far have two practical problems:
>Meshnets/P2P approach
Spam CP on it; FUD and van whoever's left
With a bad setup, it's possible to identify posters (thanks 08chan)
Can spam it to shit in general
>Tor or similar
Put your controlled node in first position to the entry point; now it's a free MitM that you don't even have to set up (makes you wonder just how much Guard nodes are gov-controlled)
Can also subvert/control/blackmail the administration with social engineering since the site is centralized

My hot take is that the P2P approach is fixable by making users subscribe to a mandatory, user-created filter list of their choice to keep them from downloading illegal things. The proxy-chain approach is kinda-sorta alright under a VPN, but not really an option, since you need multiple backflips to find a trustworthy provider that can actually tell Big Gov to fuck off, and not everyone can be expected to do this. Shitboards of today aren't populated enough to survive another migration unless the clearnet dies, though it's not a waste to prepare for when the purges start.
>>860
>Web of trust is dogshit because of the barrier of entry
A web of trust is literally the only way to make this work because otherwise you drown in spam and sybils. Decentralized moderation won't work, because it will naturally centralize once load increases; there's basically no good incentive to moderate such a board for free.

>inb4 but XYZ did it
XYZ has probably 100 users at most. I've used completely(!) unmoderated forums before: They work when nobody knows about them and you have two dozen fags there. They go up in flames as soon as user count increases.
Replies: >>888 >>2747
>TFW I can't into computers and can't help.
Replies: >>867
>>865
Nobody can do the computer. If you learn to glue together 2 pieces of shit into a barely functional big shit, you'll basically be about as good as anyone else.
I see this subject come up over and over again and what most people keep suggesting is exactly what 08chan+tor already is. The identity shit was not even as big of an issue as it was made out to be, as you could change the identity whenever you wanted. Identities were implemented to make it easier to deal with cheese spam, if I remember correctly, without considering how they could be used to track posters across threads. Blacklists, which you could even opt out of, were used to remove cheese and moderate in general; blacklisted posts/identities would never even get downloaded or seeded.
Replies: >>871 >>875 >>889
>>870
The one and only time I tried zeronet it was ridiculously sluggish and hard to run on low end hardware. Maybe it's worth another look, though.
Replies: >>872 >>875
>>871
>ridiculously sluggish and hard to run on low end hardware
Forgot about that. The GUI is some HTML5 async abomination if I remember correctly. On top of that it could also be slow because everyone stopped using it, so there are no peers; 08chan was most of ZeroNet's users for a while.
Replies: >>873
>>872
Yeah, I just got it running again now. For starters, it isn't push-button to get working over Tor. The configuration isn't all that difficult, just editing the torrc file and adding some permissions, but I'm sad to say the average capability of anons is probably below even that. Also, holy fuck, this shit has such a cancerous UI: animations flailing around my screen giving me eye cancer, shit swooping into view and swooping out.

I want to put my fucking fist through the skull of whoever the fuck thought this was acceptable. I don't care if things take a while to load because of low peers, but holy fuck, I just want static pages with static images, static text, and static media. I don't need epic zooming and wobbling and fucking shaking. JUST LIST TEXT AND IMAGES, WHY IS EVERYTHING ON MY SCREEN MOVING ALL THE TIME
Replies: >>874
>>873
Now drag the right sidebar out for even more ebin animations :^)
>>870
>you could change the identity whenever you wanted
>Identities were implemented to make it easier to deal with cheese spam
Spot the problem.

>>871
Unlikely, Zeronet devs have a history of being complete morons. Non-CSPRNGs for crypto, pulls in the entire web shitstack, claims that are completely untestable, and on top of that, the project presents itself like a scam (look at the fucking website). Ever since it was shilled on /pol/ back in 2017 or so I have no idea why anyone ever gave it the time of day.
Replies: >>878
>>875
>Spot the problem.
No shit, it did help a bit though. But they should have just implemented the blacklists without identities. Still, identities are not the end-all they are often made out to be.
>>864
I agree that it's the only way for a fully decentralized P2P network. That's why my overall message for the post was that a federated network with Bitchute-like clients as peers who temporarily help with content delivery during peak times is a better model.
>>870
>I see this subject come up over and over again and what most people keep suggesting is exactly what 08chan+tor already is. 
<REGISTER TO POST BECAUSE WE CANNOT DEAL WITH SPAM
die teen
Replies: >>890
>>889
That's not how it worked.
>>845 (OP) 
I wish there weren't so many web-based approaches. I'd rather have a general API specification and a simple protocol than a full-on system with even CSS and shit. People need to get the fuck out of their web browsers. That's why I liked the nntpchan approach, but it's not perfect. The whole thing should work as a command line application, as a TUI, as a desktop GUI, and sure, also as some kind of web implementation. But in general there seems to be too much of a focus on specific implementations. Everyone's kind of doing his own thing from start to finish, but there is a lack of generalization. It would be nice if all these solutions were able to communicate with each other in the end.
Replies: >>2916
>>845 (OP) 
Personally speaking, federated imageboards look to be a better solution than P2P. P2P services honestly suck.
Replies: >>972
>>942
Federated imageboards still depend on hosting providers and centralized administration, which has proven on multiple occasions to be a significant problem given the nature of these communities. A P2P imageboard that can be forked by anyone at any moment, the second they want to fragment away from the original BO, solves many issues. If the BO abandons the board, just fork it; if the BO chimps out, just fork it.

Too many times have sites gone down and content been lost while we depend on central authorities. Imageboards are mostly just text and small files; they're a prime candidate for P2P. I wish Retroshare had some good imageboard software in it; the rest of its features are great besides the shitty forums. It would be awesome to have mail, file sharing, microblogging, IRC, and imageboards bundled into one neat little P2P-over-Tor software suite.

Retroshare is almost the perfect solution, or at least the best thing I've found that even comes close to a solution.
>>845 (OP) 
Freenet?
Replies: >>1003
>>998
Don't use Freenet. It's very easy to be traced by police.
Replies: >>1032
I'd say one possible option is to create a stand-alone imageboard consolidator; copies of it can not only scrape their own favorite boards, but also share content with each other thereafter, P2P-style, across onion links to fill in each other's missing content.

Kind of like a standalone BitTorrent client, but for imageboards & without the raw IP issue.
Replies: >>1030
>>1029
>can not only scrape their own favorite boards
I guess I should clarify my idea here a little. So a user of the standalone would point the scraper to their favorite imageboards, whether clearnet, Tor, Freenet, I2P, w/e, then thereafter this DL'd content would be available via P2P sharing with other users of the same content.

If an important site is deplatformed, so what? All the content is still on potentially hundreds/thousands of client machines, and readily available to everyone else using this network.
Replies: >>1031
>>1030
>and readily available to everyone else using this network.
And we could provide some kind of exporter process to quickly get the deplatformed system not only back up and running elsewhere, but replicated across a large number of recovery sites thereafter.
Replies: >>1033
>>1003
proofs or is this just meaningless FUD?
>>1031
And as far as spam, D&C, derailing, goonops, etc. go, individual users of the software can 'report' the content, and if new content entering the network gets a lot of reports, then it can be shuffled into a 'do not download' listing based on each user's specified choices. So if glowniggers or other goons are trying to spread misinformation, cp, w/e, then that new content can get quickly flagged by consensus and kept from even downloading onto client machines after being marked so.

Additionally, you could have autistic, self-appointed 'gate-keepers' who could incessantly watch their favorite boards and immediately blacklist anything they didn't like. Any like-minded users could subscribe to their lists, AdBlocker-style, and content would be avoided that way. Anyone who became really adroit at blocking exactly what these groups didn't want to see would quietly (and quickly) build a positive reputation with that group.

Using this type of approach, even unwanted content that managed to arrive on a client's machine could be scrubbed off according to these lists afterwards.
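A crude sketch of that report-consensus gate (the threshold, structures and names are invented for illustration; nothing here is existing software):

from collections import Counter

class ReportGate:
    def __init__(self, trusted_reporters, threshold=3):
        self.trusted_reporters = set(trusted_reporters)  # peers whose reports this client counts
        self.threshold = threshold
        self.reports = Counter()                         # content hash -> report count
        self.do_not_download = set()

    def report(self, reporter_id, content_hash):
        if reporter_id not in self.trusted_reporters:
            return
        self.reports[content_hash] += 1
        if self.reports[content_hash] >= self.threshold:
            self.do_not_download.add(content_hash)

    def should_fetch(self, content_hash):
        # Content past the report threshold never gets downloaded, and copies that
        # already arrived can be scrubbed against the same set afterwards.
        return content_hash not in self.do_not_download

gate = ReportGate(trusted_reporters={"anon1", "anon2", "anon3"})
for reporter in ("anon1", "anon2", "anon3"):
    gate.report(reporter, "deadbeef")
print(gate.should_fetch("deadbeef"))   # False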
>>1033
-Further description about subscribing to the gatekeeper's lists
Any content a user blocks could be compared to these 'adblocker' lists and a heuristic could present that gatekeeper's list to the user with a 'likelihood of taste compatibility' score and offer to subscribe. If the matching algorithm is successful at making a good match in user tastes, then that user should have far less 'nuisance' to report about afterwards, the autist doing all the heavy lifting for them beforehand.
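One simple way a client could compute that 'likelihood of taste compatibility' score is plain Jaccard similarity over blocked content hashes; this is my own assumption of how the matching heuristic might work, not something specified above:

def taste_compatibility(user_blocked: set[str], gatekeeper_list: set[str]) -> float:
    # Jaccard similarity between what the user has blocked by hand
    # and what a gatekeeper's published blocklist contains.
    if not user_blocked or not gatekeeper_list:
        return 0.0
    return len(user_blocked & gatekeeper_list) / len(user_blocked | gatekeeper_list)

def suggest_subscriptions(user_blocked: set[str], published_lists: dict[str, set[str]],
                          min_score: float = 0.4) -> list[tuple[float, str]]:
    # published_lists: gatekeeper name -> the set of content hashes they block
    scored = [(taste_compatibility(user_blocked, lst), name) for name, lst in published_lists.items()]
    return sorted([s for s in scored if s[0] >= min_score], reverse=True)

print(suggest_subscriptions({"a", "b", "c"}, {"autist1": {"a", "b", "x"}, "autist2": {"y"}}))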
Replies: >>1064
>>1033
What would be the defense against malicious spam? I'm assuming that the Good Samaritan clause is applicable, that is, 24 hours to delete illegal content. Say an imageboard gets spammed with pizza: the scraper mirrors it, and now it's potentially mirrored on a number of clients, all of which have their IP visible to peers, even if it's the IP of a VPN or an onionshare link. And how do you propagate deletions of ordinary content that isn't spam? That's less pressing, since it can be moderated out with time, but it still needs an answer.
Replies: >>1066
>>1033
>votebanning
Welcome to reddit 2.0.
Replies: >>1065
>>1033
Any system that doesn't depend on a central moderation authority is too complex to be manageable. What you need is decentralized distribution of the content, but centralized authority over which content is distributed and removed from the network.

So you "subscribe" to a BO of /v/, but anyone could fork /v/ and become the new BO of that newly spawned /v/ fork. After that hard fork any new posts made on either board would be isolated to their own networks, and if people wanted they could choose to subscribe to the new BO's authority instead and begin seeding his network of content.

Any consensus moderation system that deviates from traditional moderation is just going to be exposed to abuse or, in the best case scenario, be really fucking shitty. Traditional centralized moderation works fine; the issue that needs solving isn't how to moderate an imageboard but how to host and distribute the content.

I wonder if this can be achieved through blockchain tech like Ethereum. Too bad I'm a no dev brainlet and couldn't even begin to comprehend that sort of shit.
Replies: >>1064 >>1068 >>1069
>>1042
>Any system that doesn't depend on a central moderation authority is too complex to be manageable
No, see >>1034
Decentralized authority is manageable with the subscription pattern, similar to git forking and merging. Users can pull moderation just like patches and diffs. They can also build on top of others' work and open their own changes to pulls. Note that there is no ownership of the board; each user only owns the node they are on. There is, and should be, no need for "management". It should also not be possible for the network to be managed, in order to protect it from (((mass censorship))).
Usage will be different, but it is still very easy to understand and handle for each individual user: just pull the lists I want and configure attachments to be downloaded only after all subscribed lists approve. To publish your work, add your own bans and deletions on top of the result and open it up for pulling.
Traditional centralized moderation doesn't work fine. BOs and mods can be compromised; they can also disappear and show up a month later. Previously, the only response to dropping quality was to leave. Now it is possible to just fork the moderation, so users won't ever be lost anymore. The network can also support any and all kinds of preferences by having different moderation over the same content.
Nothing really different from sharing your local post filters.
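A minimal sketch of "pulling moderation like patches", assuming each subscribed list is just a set of deleted post IDs that can inherit from a parent list, like stacked diffs (all names here are hypothetical):

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ModList:
    name: str
    deleted: set[str] = field(default_factory=set)    # post IDs removed by this list
    parent: Optional["ModList"] = None                # list this one builds on top of

    def effective_deleted(self) -> set[str]:
        # Pulling a list also pulls everything it inherits, like applying a patch series.
        inherited = self.parent.effective_deleted() if self.parent else set()
        return inherited | self.deleted

def visible_posts(all_posts: dict[str, str], subscriptions: list[ModList]) -> dict[str, str]:
    # A post is shown (and its attachment fetched) only if no subscribed list deleted it.
    banned = set().union(*(m.effective_deleted() for m in subscriptions)) if subscriptions else set()
    return {pid: body for pid, body in all_posts.items() if pid not in banned}

base = ModList("strict_mod", deleted={"1001"})
fork_list = ModList("my_tweaks", deleted={"1002"}, parent=base)
print(visible_posts({"1001": "spam", "1002": "offtopic", "1003": "ok"}, [fork_list]))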
Replies: >>1067
>>1040
Actually, it's up to each individual whether they want to pull the moderation or not.
>>1039
>What would be the defense against malicious spam? 
Addressed in a basic way already:
>>1033
>Using this type of approach, even unwanted content that managed to arrive on a client's machine could be scrubbed off according to these lists afterwards.
>>1064
Yes, I think you've got a workable idea, Anon. The publish/subscribe pattern seems worth exploring, both for a board's content and for moderation as well. Individuals doing something a lot of people were also interested in would naturally 'bubble to the top' simply by dint of interest in what they publish.
>>1042
>Any system that doesn't depend on a central moderation authority is too complex to be manageable.
I'm not inclined to think so. This system is intended to enable experienced Anons to share the boards they like and share the moderation they choose. The intent isn't necessarily to help newfags get on board. If someone wants to create a "babby's first imageboard" experience, then that could also be a part of it, I suppose, and the software could even default to that list as an initial setup.

Apart from that the 'management' is simply left up to the individual, and they can share their moderation with everyone else. The ones who already know their shit will both find it easy, and will immediately rise to the top on the content/moderation lists.
>>1042
>Traditional centralized moderation works fine
Fine until the next Red Flag gayop intended to destroy it. With the Bolsheviks in power in the US now, you can be sure they will do everything in their power to remove anything like imageboards from existing. 'Traditional Centralized' systems are natural targets for this kind of Commie Pogrom kikery.
You know, it occurs to me that one expedient for choosing which lists to subscribe to could be some kind of 'show additional moderation' button that would, in a very fluid, responsive way, help you discover what a board's content looks like under different publish-moderators. Anything that's objectionable to you might already be marked so by some anon, and if you saw enough of his agreeable edits/deletions/bans, then you could try out his published changes. Or, if you decide you don't like it so much after all, simply revert it, and any excluded content would simply flow to you from the publishers you still had active.

As suggested already ITT, I presume the best way to kick-start the system is simply to clone boards that already exist today across the Internet. I'm guessing this could even feature the idea of 'consolidating' different boards that are on the same topic? Maybe all /tech/-type boards, regardless of whether they were named /g/, /tech/, /gentoo/ or w/e on their parent site, could function in this system as one huge-ass /tech/ conglomerate. All shared amongst the network nodes via P2P over Tor onion services.
Replies: >>1071 >>1072 >>1076
>>1070
One other thing: instead of just having 'ghost' conversations with cloned threads from clearnet IBs (ala 4plebs), why not make the conversation two-way? Can't the client software also act as a normal client with a standard view into a normal board? So, this zzz/tech board could be cloned into the Anon-Net (lol, somebody come up with a name for this soon) system, then any nodes that wanted to could 'talk back' to the source board's posters (ie, zzz/tech anons) through it as well. And for sites like this one that already provide their own onion service, it should be a snap to keep the comms simple and orderly.
>>1070
The original plan had been laid out in Julay/tech/: https://alogs.theguntretort.com/tech/res/2743.html#q2743
It took into account the transport layer and protection against great firewall style censorship.
Pasta:
Idea: >p2p >semi-centralized moderation >anon via Tor/i2p/freenet/loki/Gnunet >hierarchial tag based "boards" Idea detailed: >all posts + data(eg:images) is seed and transfered by p2p >moderation providers can be subscribed, anyone can be a moderation provider >moderation can be inherited with some personal changes >banned posts/checksums/regexs will not be seeded >anonymity can be provided through any chosen protocol >overboard is root, eg: /tech/robowaifu, /pol/left Reasons: >no single point of failure >easy moderation, semi-decentralized, less drama >no central cost, no donation necessary (everyone keep their machine on all the time anyways) Problems & possible solutions: >cannot ban a poster <regex ban possible <whitelist style moderation provider >post latency/conflict/feds mods <blockchain? >moderation latency, problem for blacklisting moderation (eg:download CP before banned) <don't decrypt/show posts after moderation lastpost counter How is this better than the current model? >Not everyone can pay for a server, but everyone has a computer >Massive boards possible, with combined moderation and stuff >grouping boards allows wide topic discussion and stuff
Replies: >>1073 >>1074
>>1072
Thanks Anon, I didn't know about that. I'll go check it out now.
>>1072
Format fucked up.
Idea: 
>p2p 
>semi-centralized moderation 
>anon via Tor/i2p/freenet/loki/Gnunet 
>hierarchical tag based "boards" 
Idea detailed: 
>all posts + data(eg:images) is seeded and transferred by p2p 
>moderation providers can be subscribed, anyone can be a moderation provider 
>moderation can be inherited with some personal changes 
>banned posts/checksums/regexs will not be seeded 
>anonymity can be provided through any chosen protocol 
>overboard is root, eg: /tech/robowaifu, /pol/left 
Reasons: 
>no single point of failure 
>easy moderation, semi-decentralized, less drama 
>no central cost, no donation necessary (everyone keep their machine on all the time anyways) 
Problems & possible solutions: 
>cannot ban a poster 
<regex ban possible 
<whitelist style moderation provider 
>post latency/conflict/feds mods 
<blockchain? 
>moderation latency, problem for blacklisting moderation (eg:download CP before banned) 
<don't decrypt/show posts after moderation lastpost counter 
How is this better than the current model? 
>Not everyone can pay for a server, but everyone has a computer 
>Massive boards possible, with combined moderation and stuff 
>grouping boards allows wide topic discussion and stuff
Replies: >>1075 >>1077
>>1074
Looks like the original source the alogs/tech poster quoted was on the Julay/meta which is apparently gone. Fortunately, I have all of julay/meta archived personally so I can probably dig that original thread back up at some point. Glad I did.
>>1070
Some additional thinking on the 'show additional moderation' thing. We should have some kind of multi-diff function that would cover this. Just like diff, but potentially pulling from more than one source, depending on the particular history of that content. All identical edits, etc., would be consolidated into a single diff element so you weren't looking at a long list of redundant information.
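A rough sketch of that multi-diff consolidation (the edit format is invented purely for illustration):

from collections import defaultdict

def multi_diff(edits_by_source: dict[str, list[tuple[str, str]]]) -> list[dict]:
    # edits_by_source: moderator name -> list of (action, target) edits,
    # e.g. ("delete", "post:123") or ("ban", "regex:buy followers").
    grouped: dict[tuple[str, str], list[str]] = defaultdict(list)
    for source, edits in edits_by_source.items():
        for edit in edits:
            grouped[edit].append(source)
    # Identical edits from several sources collapse into one element.
    return [{"action": action, "target": target, "sources": sorted(sources)}
            for (action, target), sources in grouped.items()]

print(multi_diff({
    "modA": [("delete", "post:123"), ("ban", "regex:buy followers")],
    "modB": [("delete", "post:123")],
}))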
Replies: >>1078
>>1074
Seems like a comprehensive approach.
>Massive boards possible, with combined moderation and stuff 
I'd say boards shouldn't be "combined". Users should be able to view an overboard of all the post lists(boards) they subscribe to, but in order for each board to deploy the nuanced rules they generally need they have to be distinct entities to a certain degree.
Replies: >>1079
>>1076
BTW, I used to use some kind of addon when I was snotnosed that allowed anyone using that plugin to make comments about any webpage and they looked like sticky notes when the page was rendered using the plugin. Maybe some idea kind of like that.

Anybody remember what that thing was called? I've forgotten now.
>>1077
>but in order for each board to deploy the nuanced rules they generally need they have to be distinct entities to a certain degree.
Yeah, that seems reasonable to some extent. Personally I would like having every.single. tech-ish board available as one 'board' (with its own catalog view). No worries if anyone feels otherwise, just a personal desire from my viewpoint.

But I certainly understand what you're saying and why that would be a good thing, Anon.
Replies: >>1080
>>1079
>Personally I would like having every.single. tech-ish board available as one 'board' (with its own catalog view). No worries if anyone feels otherwise, just a personal desire from my viewpoint.
I don't necessarily take issue with it. In fact I think the ability to tailor your imageboard browsing experience so all the boards and threads contained within are easily accessible and discoverable is vital.

I don't think your ability to do that should be limited, but that things should be clearly labeled and defined as to not confuse retards who are posting in one /tech/ board then click on a /tech2/ thread, post a soyjak, and get their shit deleted then sperg out. So I guess I'm against pre-configured meta-post lists for people to subscribe to that aren't actually curated but just point to other moderated lists resulting in consistent clashes of people adhering to the wrong rule sets in the wrong threads.

I think meta-boards that combine multiple boards should have to be manually set by each individual user.
Replies: >>1081
>>1080
>I think meta-boards that combine multiple boards should have to be manually set by each individual user.
Yes that might be a workable compromise approach. Regardless of whether the set is pulled from a publisher, or home-spun by Anon, there needs to be a rational mechanism defined to delineate the sources back to their originals (and vice versa, in the case of two-way traffic).
Replies: >>1082
>>1081
>a rational mechanism defined to delineate the sources back to their originals
I guess what I have in mind here is some way for a client to get to an unedited version of the original posting, even if it was initially seeded to them by a publisher who edited it. For example, if there was a publisher whose edits I generally liked, but he was particularly squeamish about saying the word 'nigger' and put some kind of dopey word filter on it so that a post says something like "That retarded fucking double-friend ruined everything!", then I'd like the opportunity to recover the original text by some other pathway so I could read it properly: "That retarded double-nigger ruined everything!"
Replies: >>1083
>>1082
Lol it's getting late. Just put a big button along the post headers
View Original :^)

I'm off for now, cheers /tech/.
Replies: >>1100
>>1083
This isn't IRC faggot, you don't need to literally sign off.
Replies: >>1101
>>1100
Don't get your panties in a knot, bro. It's just good manners where I'm from.
So, I think BitTorrent's DHT mechanism should suffice for peer discovery across the onion service distributed system. I figure the project code repo itself will maintain a known list of trusted peers to bootstrap a new node into the system.
Can someone here point out a solid reason this wouldn't work? Or suggest a better approach than DHT?
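For reference, the heart of a BitTorrent-style DHT (Kademlia) is the XOR distance metric used to find the nodes "closest" to a content ID. A bare-bones sketch of that lookup step with made-up node data follows; it is not a full DHT, and how announces would be tunneled over onion services is exactly the open question:

import hashlib

def node_id(pubkey: bytes) -> int:
    # 160-bit ID derived from a node's key, the same width as a BitTorrent infohash.
    return int.from_bytes(hashlib.sha1(pubkey).digest(), "big")

def closest_nodes(target: int, known_nodes: dict[int, str], k: int = 8) -> list[str]:
    # Kademlia's notion of distance is XOR; smaller means "closer".
    ranked = sorted(known_nodes, key=lambda nid: nid ^ target)
    return [known_nodes[nid] for nid in ranked[:k]]

# Bootstrap set shipped with the client (hypothetical addresses), refined by iterative lookups.
bootstrap = {node_id(b"peer-%d" % i): "peer%d.onion" % i for i in range(20)}
infohash = int.from_bytes(hashlib.sha1(b"thread-archive-v1").digest(), "big")
print(closest_nodes(infohash, bootstrap, k=3))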
>>860
>how to come up with a new pseudonymous identity that doesn't reveal much about the poster
This has been on my mind for a few years now, and I think I've got a decent solution via cryptography.

Associate each user post with a unique asymmetric key pair and divide posts into "good" and "bad" sets. In order to submit a new post, a user must provide a ring signature over the set of "good" posts - anonymously proving that at least one of those posts was made by them. This scheme by itself isn't particularly compelling, since a user can simply rely on a single "good" post. However, by restricting the maximum ring size and/or invalidating posts over a certain age, correlation attacks are made possible. If the correlation is something that the user can measure, the user is forced to decide between continued "bad" posting and their anonymity; the only way to decrease correlation is to create "good" posts. A user with no valid posts or only posts with an undesirably high correlation from re-use is effectively banished without having been de-anonymised. Users who don't care about their anonymity will end up with extremely high correlation which can then be used to automatically invalidate their posts.

This scheme can be extended by requiring an additional signature over the user accounts (asymmetric keypairs, again) in order to maintain exclusivity. This is because the private key of the account can be used to de-anonymise the user's previous signatures (in some schemes). Alternatively, it should be possible to construct deniable signatures that protect users against key disclosure laws at the cost of not being able to restrict this behavior.
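To make the flow of that scheme concrete, here is a toy sketch of the post-acceptance logic only. The ring_sign/ring_verify functions below are fake stand-ins so the example runs; they are NOT real ring-signature cryptography, and the ring-size limit and post-age window are invented numbers:

import time

# Fake stand-ins: a real system needs an audited ring signature library,
# which is exactly the missing piece discussed in this post.
def ring_sign(private_key: str, ring: list[str], message: bytes) -> dict:
    return {"ring": list(ring)}

def ring_verify(signature: dict, ring: list[str], message: bytes) -> bool:
    return signature.get("ring") == list(ring)

MAX_RING_SIZE = 16            # small rings force measurable correlation
MAX_POST_AGE = 30 * 86400     # "good" posts expire after ~30 days

def accept_post(message: bytes, signature: dict, good_posts: dict[str, float]) -> bool:
    # good_posts: post public key -> creation timestamp, for posts currently rated "good"
    ring = signature.get("ring", [])
    if not ring or len(ring) > MAX_RING_SIZE:
        return False
    now = time.time()
    if any(key not in good_posts or now - good_posts[key] > MAX_POST_AGE for key in ring):
        return False              # the ring may only contain live "good" posts
    return ring_verify(signature, ring, message)

good = {"postkey1": time.time() - 86400, "postkey2": time.time() - 2 * 86400}
sig = ring_sign("my_private_key", ["postkey1", "postkey2"], b"new post body")
print(accept_post(b"new post body", sig, good))   # True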

Why haven't I implemented this yet? I don't have the cryptographic chops to prove the soundness of this scheme, nor do I have an audited ring signature implementation to build this with. There's also the concern that if you're going to take anonymity seriously, then you need to consider the following attacks:
>the set of posts the user is aware of or has downloaded serves as a fingerprint
>the server can mount a sybil attack on the user to reduce their anonymity set
>the user's client can itself be fingerprinted. I am not aware of any protocol where this has been proven not to be the case.
>>1148
All good posts, and I appreciate the effort Anon. But if you end up with "And no, no way to get there from here" and just leave it hanging at that, then it's all just a nigger-pill tarbaby.

I have a feeling you don't actually think it's impossible though (your apparent conclusion notwithstanding) so even if you don't feel qualified to, why not take a whack at it? Or at least a more detailed description of the requirements? You certainly have more to offer on the topic than most of the rest of us do Anon.
Replies: >>1239
>>1224
>just leave it hanging
>your apparent conclusion 
It appears there's been a miscommunication; I was just listing out further obstacles in the hopes of additional discussion, or for someone more knowledgeable to pick up the torch. While the Sybil attack is a serious issue in the age of GPT-3, everything else is definitely manageable for someone with formal cryptographic experience. To give some ideas:
>have an audited ring signature implementation to build this with
This is important because there are many non-obvious side-channel attacks in even the most innocuous cryptosystems. However, if post generation is performed offline, then perhaps Cryptol (https://cryptol.net) could be used as a decent compromise since it eliminates large classes of programming errors and opens the code to scrutiny.
>the set of posts the user is aware of or has downloaded serves as a fingerprint
This can potentially be mitigated by either Freenet or Private Information Retrieval techniques. I didn't mention these because the former has other issues related to the way it provides plausible deniability, and the latter because it has serious performance issues.
>the server can mount a Sybil attack on the user to reduce their anonymity set
The issue with this one is that it's essentially the same kind of catch-22 as trying to ban people while preserving anonymity. Even if everyone knew each other IRL, and the account extension to the scheme was used, a user could still flood the forum with fake posts. In principle, PoW could be applied, but given that even relatively large cryptos are "cheap" to 51% attack, I doubt anything short of 4chan or reddit-sized populations would benefit from it. The only thing working in our favor here is that GPT-3 is still discernible in conversations, so users could potentially notice such attacks in action.
>the user's client can itself be fingerprinted.
Freenet also solves this problem, since everyone's using the same client. However, I would prefer to build on a simpler protocol that allows people to write or use clients that they trust, instead of relying on a large centralized project that may be compromised. Then again, I use and rely on Tor. As for the possibility of just doing it all in the browser, the issue here is that you have to trust the server to send uncompromised code. There might be a solution in using bookmarklets to act as a trusted codebase that bootstraps the rest, but I don't know of any research that confirms it as a valid tool; at the very least, there's a tension between allowing updates to the code and keeping old, well-verified code around. There's also the fact that javascript is a trash fire.

>why not take a whack at it?
Because it feels like an overwhelming task and I'm terrified that I'm full of shit and could put a lot of people in serious danger. I am also not in a good position to provide hosting, unless people are willing to donate enough XMR for me to anonymously buy it. If people are seriously interested in the idea, I guess I could give it a stab if only to raise awareness. That is, it bears repeating:
>I don't have the cryptographic chops to prove the soundness of this scheme
I am deeply worried about my own proposal because it's deliberately playing with fire by employing correlation attacks against the users. I have no idea whether the foundational assumption that producing more "good" posts reduces the correlation is true. While there is precedent for this kind of hijink in cryptography (FHE being a recent example), it could easily not be the case here.

>Or at least a more detailed description of the requirements?
Is there anything in particular you think needs more detail? I didn't want to elaborate too much since I type like a fag.

>You certainly have more to offer on the topic than most of the rest of us do Anon.
I would hope not, otherwise "we" are fucked.
Replies: >>2049
>>1239
Is FHE applicable for user registration?
Replies: >>2419
>>845 (OP) 
This is all well and good, but is there any desire from the mods to implement something like this? It seems all the chans with users don't want to upset the status quo.
Replies: >>2419 >>2420
Wow it's been a while. Had a friend with some crypto knowledge look at the problem a while back but we fell out of contact and he's busy with his own stuff.
>>2049
In what way? What problem is there with registration that needs to be solved? FHE just allows you to perform arbitrary operations on encrypted data. You could use it to implement a bunch of other cryptographic primitives/protocols, but there are often more efficient, specialized solutions for them. Besides, the issues with registration are arguably more soft/social - how do you filter the tards and glowies while preserving their anonymity?
>>2174
Maybe not right now, but it's inevitable that chans are going to be shoah'd. Decentralization and darknets are the only way they can survive the near future.
Replies: >>2693
>>2174
gladly
>inb4 implying chans with users
Replies: >>2421 >>2423
>>2420
>#nolife
>>2420
LOOOOOOOOL
Get niggered
>>845 (OP) 
The Internet is a decentralized medium; it's not that decentralization leads to things getting any better. A good example of this is Bitcoin, which is now under the monopoly of Chinese cryptofarms.
The rot is even deeper: the decentralization/federalization movement, even if it were successful in the first place and people left the centralized platforms en masse, would only result in the cycle being repeated once again.
What we need to do is go to the root of the issue and pluck it out from there.
Replies: >>2534
What about building an IB system riding on top of the Tribler system?
>>2509

What would that be, anon?
>>2419
>In what way?
Banning the author of some post without revealing their identity (while batch deletion of their posts is impossible);
Authentication for the operations that may be some hashes;
...
Replies: >>2721
What if 
>decentralized IB based on nodes 
>every user is a node, but metanodes are also nodes
>user subscribes to nodes to receive posts 
>nodes can either forward other nodes as meta nodes or just link to them and let the user choose to subscribe to them
>user can just create his own node and hope that people subscribe to him, or just forward his node through a meta node 
This system I just thought of one minute ago lets people hide spammers/glows while still allowing a certain degree of anonymity through meta nodes.
Replies: >>2700 >>2701
>>2699
kinda fchan hosted on I2P
>>2699
How can the user achieve anonymity through meta nodes? If post metadata is not provided (there is no author), there is no way to get rid of spam. Unless spinning up a node is difficult, a spammer can create nodes again and again. But if spinning up a node is difficult (there aren't lots of them), a node is more likely to be identified and the origin of a post can be figured out.
Replies: >>2702
>>2701
You can see where the posts came from, but only for the last step. If metanodes don't purge spam nodes, users will just remove those metanodes. Metanodes are kind of like mod accounts in a way, letting nodes through (e.g. users or other meta nodes) and removing nodes that are spam or whatever. 
e.g.
User A posts from his node to metanode M1, metanode forwards all posts to metanode M2.
M1 knows user A's posts and where they came from, but M2 only sees those posts as M1's posts. 
If user A is a spammer it's M1's responsibility to stop forwarding his node. If M1's mod is kill or the spammer isn't blocked for any other reason, M2 can just block the entire M1 node (but not the A node directly).
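A tiny sketch of that forwarding chain (structure and names invented for illustration): each hop re-originates posts, so downstream nodes only ever see their immediate neighbor, and a metanode that won't police its subscribers just gets cut off wholesale.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    blocked: set[str] = field(default_factory=set)        # neighbors this node refuses to relay
    subscribers: list["Node"] = field(default_factory=list)
    received: list[str] = field(default_factory=list)

    def receive(self, post: str, from_node: "Node") -> None:
        if from_node.name in self.blocked:
            return                                         # drop everything from a blocked neighbor
        self.received.append(post)
        # Re-originate: downstream nodes only ever see *this* node as the source.
        for sub in self.subscribers:
            sub.receive(post, from_node=self)

a, m1, m2 = Node("A"), Node("M1"), Node("M2")
m1.subscribers.append(m2)
m1.receive("hello", from_node=a)    # M2 sees this post as coming from M1, never from A
m2.blocked.add("M1")                # if M1 won't deal with its spammers, M2 cuts off the whole branch
print(m2.received)                  # ['hello']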
Replies: >>2704
>>2702
That is to say, the moderation expense can be driven arbitrarily high. A fully decentralized anonymous board that doesn't die to spam is logically impossible; why do fags keep trying to square the circle?
Replies: >>2707
>>2704
Meta nodes would have to accept users, instead of just forwarding everything by default. Moderation would only be as complex as you made it. Like private trackers but it's an IB.
Replies: >>2710 >>2718 >>2720
>>2707
There will be tons of duplicated effort with so many metanodes. Censorship can also be a problem if anon posts to a metanode that hates him. He has no way to know and no one else has any chance of getting the post.
Replies: >>2719
>>845 (OP) 
>imageboard problem
I don't think the time being spent to create a P2P/decentralized imageboard is really worth it. Either you have no anons posting whatsoever, or your user base quickly fills up with shit-stained r*dditors from cuck/kohlchan. (Other than that, an imageboard is nothing more than a simple web application; anything that's remotely close to ZeroNet will do the trick.)

Imageboards are far beyond their golden times, and their corpse is starting to smell.
Replies: >>2713
>>2712
Nobody asked for your shit stained opinion defeatist fag YOU smell.
Replies: >>2714
>>2713
Why yes thank you, I really should take a shower sometime.
>>2707
This makes it a web of trust, which is not anonymous.
Replies: >>2719
>>2710
There's duplicated effort, but I'd argue redundancy is good, especially for decentralization. 
>censorship
Users can subscribe to any public nodes (including user nodes), which includes any public part of the chain. 
>He has no way to know
He'll probably be subscribed to the node to receive messages, including his own posts. If he isn't receiving his own messages that means his node was removed. 
>>2718
Just add meta nodes that accept any node requests by default, maybe with some anti-spam features.
>>2707
That's just trading in anonymity for easier moderation. Highly doubt that would help the problem any.
It occurred to me that linkable ring signatures over the set of user accounts could be used for flagging reported posts. It's not infallible, since sybil attacks and ganging up to mass report are still possible, but it's no worse than existing systems while preserving anonymity.
>>2693
>Banning the author of some post without revealing their identity 
Maybe, but it's doubtful. At the end of the day, the modified ciphertext is only useful to the person who encrypted it in the first place. Perhaps an FHE database could be used, but there's an issue with these kinds of encrypted banlist schemes: you can't use a zero-knowledge proof of membership, since you need an identification that can be added to or removed from the database to enact the ban. If posts have an identifier (even encrypted and changing per-post!), then so long as it compares against the same database entry, you can compare pre- and post-ban databases to correlate that user's posts. Perhaps there's some way to worm out of that problem, but I'm too dumb to think of one. Perhaps some oblivious protocol that modifies the database entry on each post after authentication, but then you'd need a way for bans to work when enacted on posts older than your latest.
Replies: >>2722
>>2721
I don't know shit about crypto, but don't linkable ring signatures solve the whole problem of banning an author without revealing their identity? As long as it is difficult to get a second signature, a spammer cannot spam with multiple keys and cannot easily evade a ban. If there are enough keys (a large number of decoy keys can also be used), it would be difficult to deanonymize posters. A P2P database, if possible, can decentralize the signatures and keys.
Replies: >>2727
>>2722
The issue is that it's a much weaker guarantee of anonymity. With a few VPS instances, an adversary could compare every known post to the one they're targeting and consequently reduce your anonymity to pseudonymity. P2P doesn't solve this since it's only a matter of time to scrape/discover all the nodes, and decoy keys would only work if they break the linkability property. To put this into numbers, check out https://www.crypto51.app/ and see how cheap it is to fully compromise even very large p2p networks. It's also important to remember that anonymity is better than pseudonymity because it reduces the amount of personally identifying information down to what can fit in a single post instead of what can be inferred from an entire posting history. The only reason why this works for reporting (and it doesn't for the usual meaning of linkable ring signatures) is that mashing the report button doesn't disclose more personal details. A restricted scheme where you can only link signatures of the same plaintext provides an attacker with nothing useful outside of perhaps some big brain number/group theoretical wizardry.

On further thought, one way for linkable signatures to work would be if you could prevent people from accumulating them or comparing them. Perhaps some MPC protocol could manage it by keeping the signatures encrypted and only operating on the ciphertexts when the poster is online (requires their private key). It would be slow, but it might be worth looking into now that I think about it. The "trick" here being that you get around the problems with banlists (because linkable signatures effectively constitute one when used like that and might as well be replaced by auth tokens here) by restricting when posts can be checked against the database to online sessions, and only allowing the prospective post to be checked.
Replies: >>2729 >>2730
>>2727
Sorry for doubleposting, but I realized that a crucial difference in the schemes is that the MPC method is not deniable by default. That is, ring signatures can be constructed such that even with the private keys of everyone in the ring, you cannot determine who produced any particular signature. In the MPC case, if the glowies get your private key they are free to run the aforementioned comparison attack and therefore link all of your posts. Attempting to change the auth token to counter this just raises the problem of how to connect bans over old posts to the latest identity without allowing an attacker to do the same.
Replies: >>2730
>>2727
>>2729
>anon > pseudoanon
Yes. There is no problem with being anonymous. But banning needs some different definition or mechanism.
Right now, banning means not accepting posts from the same origin. There must be some data to prove posts are from the same origin. That is by definition not anonymous.
As long as that data is there, malicious nodes can cast wide nets and capture it. There can also not be a special mechanism limited to banning posters, because proving that a post correlates to another, for whatever purpose, can be misused by whoever wants to know this info.
Banning needs to be redefined, or it may not be possible. Either all posts look the same except for the content (metadata should be obfuscated as much as possible), or they differ in ways where correlation is very unlikely. Banning without correlation, is it possible?
Replies: >>2733
>>2730
>Yes. There is no problem with being anonymous. But banning needs some different definition or mechanism.
>Right now, banning means not accepting posts from the same origin. There must be some data to prove posts are from the same origin. That is by definition not anonymous.
Indeed, that's actually a rather nice/succinct way of looking at it.
>Banning without correlation, is it possible?
I think so. I wrote a post exploring this earlier ITT, and have been shilling it elsewhere for a while now in the hopes that someone smarter than me would pick it up. See >>1148. The main thrust behind the idea is that you can effectively banish users by encouraging them to self-censor, in this case by tying their anonymity to the average quality of their posts. The downside is that it requires striking a precarious balance given that it's fundamentally built around correlation - just not pseudonyms.
>Banning needs to be redefined, or it may not be possible.
In the end, the desire is to prevent new posts written by the author of a banned post from reaching the other users. So banning doesn't necessarily have to be server-side or absolute, but could instead be client-side and probabilistic. An unpalatable example would be something with vouchers/upboats where posting requires having accrued a minimum number that grows with the number of posts on the board. Vouchers could be treated like a currency, which is fungible and has multiple solutions for privacy-preserving transactions, and the need to meet an ever-higher cost incentivizes writing rule-abiding posts to keep up; failing to do so effectively results in banishment. An important property is that the moderation team should not be able to determine whether someone was actually banned or not, since that seems to imply the ability to arbitrarily de-anonymize.
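A toy illustration of that voucher economy; the cost curve and reward amounts are completely made up, the point is only that the posting cost rises with board activity, so someone who stops earning vouchers for rule-abiding posts silently prices themselves out:

import math

def posting_cost(board_post_count: int) -> int:
    # Cost to post grows slowly with the total size of the board.
    return 1 + int(math.log2(board_post_count + 2))

class Wallet:
    def __init__(self, vouchers: int = 5):
        self.vouchers = vouchers

    def try_post(self, board_post_count: int) -> bool:
        cost = posting_cost(board_post_count)
        if self.vouchers < cost:
            return False             # effectively banished until more vouchers are earned
        self.vouchers -= cost
        return True

    def reward_good_post(self, amount: int = 2) -> None:
        self.vouchers += amount      # rule-abiding posts replenish the balance

w = Wallet()
print(w.try_post(board_post_count=10), w.vouchers)   # True 1  (cost is 4)
print(w.try_post(board_post_count=10), w.vouchers)   # False 1 (must earn more first)
w.reward_good_post(5)
print(w.try_post(board_post_count=10), w.vouchers)   # True 2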
Replies: >>2735
>>2733
Still wrapping my head around >>1148. I looked up more on ring signatures and stuff. What follows is total bullshit. I don't know what I am talking about.
Found out Monero uses ring sigs: https://www.getmonero.org/resources/moneropedia/ringsignatures.html . It gave me an idea (assuming the board is on a p2p network, blockchain or something else): let posts be signed with ring signatures that are created at post creation, with broadcasting stuff to hide the origin node. Redefine banning as filtering bad posts. There is a group of banned posters. What are the chances of a post being signed with banned member(s) in the ring sig? The poster's identity is not compromised, because the adversary aims to correlate to a single poster, but the banning mechanism tries to correlate to a group of posters. If there is only enough information to prove with OK probability that a poster is a spammer, but very low probability that a poster is the same person who posted something earlier, this may do it.
But what sorts of info is enough for categorization but far from enough for specific correlation? Still thinking.
Replies: >>2736
>>2735
>What follows is total bullshit. I don't know what I am talking about.
We all got to start somewhere.
>What are the chances of a post being signed with banned member(s) in the ring sig? 
On the face of it, this doesn't seem to work because if you can compare a signature against a banlist, then you could make a list of size 1 for each user to compare against and therefore de-anonymize them. If you had some function that only worked on large banlists it would just be circumvented by making a bunch of fake entries. You would need to prevent offline comparisons like with the MPC scheme. On the other hand, there may be a way to achieve something along similar lines by splitting identities into factors. If a post is associated with some fraction f<1 of the user's factors, then it could be compared against the entries in a banlist of factors from other posts. Since these factors are merely a fraction of the complete set, banning is probabilistic. Unfortunately, this scheme is probably useless since it would be either overzealous in banning users (false positives) or do nothing most of the time. Then again, it could be treated like a "lives" system, where each factor that gets banned is one less factor you can use to post, and one less bit of anonymity - but this would lead to everyone being impacted by shitposters unless there was a way to award factors randomly. At that point, though, it sounds a bit like the upboat/currency schemes.

Going back to ring signatures, perhaps a slightly weaker formulation of them could allow your idea to work. As it stands, accumulating ring signatures cannot give an adversary any extra information. But what if it could? Essentially, it would be a scheme such that comparing a ring signature against the "product" of a set of ring signatures gives a probability that the signer has a signature in that set. If this probability is low when the user only has a small number of banned posts, then this makes it difficult for an adversary to mount offline attacks since they need to try n!/(k!(n-k)!) (n = number of posts, k = anonymity threshold) combinations to de-anonymize someone, which grows at n^k (IIUC). The biggest issue would be if this is an overestimate, since it relies on the number of combinations growing too large to reasonably attack.
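For a feel of the numbers, the combination count C(n, k) = n!/(k!(n-k)!) can be checked directly (example values chosen arbitrarily):

from math import comb

# Combinations an attacker must try for n candidate posts and an
# anonymity threshold of k posts per ring.
for n, k in [(100, 3), (1000, 3), (1000, 5)]:
    print(n, k, comb(n, k))
# 100 3 161700
# 1000 3 166167000
# 1000 5 8250291250200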
Replies: >>2739
>>2736
Right. I read the posts above again, went back to square one, and rethought this.
The goal is decentralized anonymous image board.
Decentralized: (((Shut it down))) can't shut everyone down. Bribe and corruption doesn't work when there is no master node.
Anonymous: There is no freedom of speech. Display your power level and get cancelled.
Image board: Text/data broadcasting platform for legitimate users. Call it a forum or anything else, the same thing.

Challenges:
State/ISP-level adversary/de-anonymiser: Assume no trust. Complete anonymity is needed. No compromise here; my previous suggestions would not be safe. ISPs are ratting you out and all systems are considered compromised.
>Timing correlation: Measuring response time to correlate posters.
>Sybil attack: Most nodes talk among other in secret to find out where new posts come from.
Spammer: This can be found everywhere. When de-platforming is not possible, they shut down the discourse by spamming day and night. The ratio of "bad" posters can be a lot higher than "good" posters.
>Shill Sybil attack: Most posts are slide or shill posts.

Potential solution:
Anonymous network (Tor/i2p/etc) bridged whispering broadcast nodes.
Timing correlation: random delay? (a rough sketch follows at the end of this post) Or jumping over other kinds of hops.
(Shill) Sybil attack: Sounds like the cryptocurrency consensus problem; can proof of x be useful? Each user only needs to pay a small amount of something to post, but a spammer would need to spend lots of shekels to make nodes and spam GPT-3.

For shills/spams:
Another solution is decentralized moderation, fighting fire with fire. This will be very expensive.

I am convinced anonymity should be the top priority, but that doesn't mean posting is quick or cheap. Thoughts on adapting proof of x to deter spammer/evil nodes?
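On the timing-correlation point above, a minimal sketch of the random-delay idea: queue outgoing posts and release them on a jittered schedule, so the network send time cannot be matched to the moment the user hit submit (the delay window is an arbitrary choice):

import random
import time

class DelayedBroadcaster:
    def __init__(self, min_delay: float = 5.0, max_delay: float = 120.0):
        self.min_delay = min_delay
        self.max_delay = max_delay
        self.queue = []    # list of (release_time, post)

    def submit(self, post: str) -> None:
        release = time.time() + random.uniform(self.min_delay, self.max_delay)
        self.queue.append((release, post))

    def due(self) -> list[str]:
        # Called periodically by the node's main loop; ships anything whose jitter has elapsed.
        now = time.time()
        ready = [p for t, p in self.queue if t <= now]
        self.queue = [(t, p) for t, p in self.queue if t > now]
        return ready

b = DelayedBroadcaster()
b.submit("post body")
print(b.due())   # [] until the random delay has passed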
Replies: >>2740 >>2805 >>2941
>>2739
>Thoughts on adapting proof of x to deter spammer/evil nodes?
Assuming x is some resource, I don't think proof of x is useful to us against anyone but schizos; see again just how cheap it is to overwhelm major cryptocurrencies, let alone something on the scale of an imageboard. Instead, we need to look at other qualities that differentiate us. IQcaptcha is a proof of IQ - it (or something like it) would filter tards, pajeets and robots. You could also have a web of trust system, since quite a few anons know other anons. Look also at how i2p and freenet prevent learning about the global network in order to prevent remote/global attacks. Ultimately, though, all of this only slows down concerted attacks - sybil is a really hard problem with no real solution.
>ISPs are ratting you out and all systems are considered compromised.
>Anonymous network
This. TLS and mixnets make this a non-issue. The real challenge is in state-level/global adversaries. Quite frankly, with the current state of the art, good fucking luck being 100% secure against them. The best we can hope for is making their job as hard as possible so they only strike when real shit goes down.
>Timing correlation
>random delay?
Indeed. Thankfully, imageboards don't have to be realtime, so it's really not a problem for us. However, timing attacks aren't just about message sending, but things like measuring how long it takes for you to do some crypto etc. can actually leak info. Shit like this is why writing crypto libraries is discouraged.
>Spammer
>Shill Sybil attack
Invite-only with anonymous banning solves this pretty well. The issue then becomes filtering admissions, which is something that actual criminals still struggle with against undercover cops. Shibboleths, blackmail, friend-to-friend, etc. aren't foolproof - so what could a relatively milquetoast Mongolian basket weaving forum possibly do? The only thing going for us is that we have a decent idea of what imageboard population/activity statistics look like, so we can detect when there's a wave of new accounts/coordinated activity.
>decentralized moderation
Ignoring concerns around privacy and preservation, decentralized moderation should be adopted anyway. Why should I cuck myself to some power-tripping jannie? So long as the posts are hosted p2p, I should be able to subscribe to moderation lists that I agree with or receive everything unfiltered. Centralized moderation only makes sense for centralized boards where the host has to contend with liability.
Replies: >>2741
>>2740
In the worst-case scenario, without any automated spammer detection or the ability to ban, a dozen honest jannies would be enough to delete most of the bad posts. Decentralized moderation can be done with inheritance, where one moderator can be a "fork" of another (to reduce duplicated effort, with zero communication needed between different mods). This can be done either with a public key signature, or with something more sophisticated like in Monero to protect the identity of the mod. Mod messages can be transported just like normal messages on the board.
The following should be implementable for mods right now (rough sketch after the list):
Deleting posts: push a list of deleted posts.
Regex/file hash filter: push a list of "banned" regexes and file hashes.
Unleash an NLP spam detector: bring on the Turing test and kill GPT-3, an eye for an eye.
NLP post tagger: e.g. if there are already ten threads categorized as "nigger lover", it can be programmed to throttle/delete them automatically.
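A rough sketch of how such a pushed delete/filter list could be applied on the receiving client, assuming posts are content-addressed by hash; all class and field names here are hypothetical:

import hashlib
import re

class FilterList:
    """One mod's pushed moderation data: deleted post hashes, banned regexes, banned file hashes."""
    def __init__(self, deleted_posts, banned_regexes, banned_file_hashes):
        self.deleted_posts = set(deleted_posts)
        self.banned_patterns = [re.compile(r, re.IGNORECASE) for r in banned_regexes]
        self.banned_file_hashes = set(banned_file_hashes)

    def hides(self, post_hash, body, attachments):
        if post_hash in self.deleted_posts:
            return True
        if any(p.search(body) for p in self.banned_patterns):
            return True
        for blob in attachments:
            if hashlib.sha256(blob).hexdigest() in self.banned_file_hashes:
                return True
        return False

def visible_posts(posts, filter_lists, local_overrides):
    # The local client applies every subscribed mod's list; the user can still override per post.
    for post_hash, body, attachments in posts:
        if post_hash in local_overrides:              # user forced show/hide
            if local_overrides[post_hash] == "show":
                yield post_hash, body
            continue
        if any(fl.hides(post_hash, body, attachments) for fl in filter_lists):
            continue
        yield post_hash, body

The pushed list itself would be signed by the mod who publishes it, exactly like any other message on the board.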
>State-level shit
The only deniable mixnet I can think of is Tor bridges. But I suppose coming up with a deniable p2p BitTorrent-like system, with messages steganographically encrypted inside data along with tons of dummy content, shouldn't be hard.
>invite-only
What I have observed is that any structure can be compromised. If there is no structure, then (((antifa is just an idea))) and the thing goes on. Relying on NLP and statistics over posts may be enough to score spam. I will check out spamassassin/rspamd and see how they do it.

The next step would be a system architecture. Will give it a try soon when I have free time.
>>845 (OP) 
>>864
This:
>Decentralized moderation won't work, because it will naturally centralize once load increases; there's basically no good incentive to moderate such a board for free.

We should just make a federated imageboard so that moderation is less centralized and the entire network doesn't fail if one server goes down. Then we can start moving federated sites to .onion or something similar if we want individual hosts to be more anonymous.
Replies: >>2748
>>2747
Decentralized moderation works. It's the same as sage/hide/filter, which many are already doing; this is just sharing your filters/"mark as spam"s. There is an incentive to keep shit away from your board, and at the very least filtering away bbc/fags would be trivial.
Replies: >>2775
>>2748
In theory that may sound nice and dandy, but we all know this requires the majority of users to be smart enough not to fall for the bait and/or feed the troll.

Besides, if someone is willing to put the effort into it (and trust me, someone will be autistic enough to actually pull through with that), he can just let the board overflow with shit until it's closer to a septic tank. There's nothing theoretical nor practical stopping him from doing that. And no matter how good your filters are and how smart you are, if the actual quality content just drowns in a huge wave of shit, you're just as badly off, if not more so, than you would've been with centralized moderation.
Replies: >>2777
>>2775
Not sure how you got to that conclusion. There is no difference between centralized and decentralized moderation if most users are on Tor. Anyone autistic enough can loop exit nodes and spam any board to hell, but the mods can't ban Tor users if most users are Tor users. Banning Tor users only harms normal users. If moderation is decentralized, users are not limited to the same set of moderators/jannies (who can be compromised). All mods are already doing it for free anyway, so not sure what difference that makes. If anything, there will be more moderators and the cost of moderation will be lower.
Replies: >>2794
>>2777
What is the storage scheme for decentralized moderation? How do you avoid filter bubbles and make an archive?

>If anything, there will be more moderators and the cost of moderation will be lower.
Doubtful. It is harder to reach consensus as the number of mods increases.
Replies: >>2798
Couldn't you just use cryptographic signatures for moderation? The signed moderation actions would then be shared inside the network (think certificate transparency or blockchains). Obviously a modified client could just ignore them, but it could provide a more traditional structure with board owners and moderation in a p2p network.
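A minimal sketch of such a signed moderation action, using PyNaCl's ed25519 signing as one possible primitive (any public-key signature scheme would do; the message layout is invented for illustration):

import json
from nacl.signing import SigningKey, VerifyKey
from nacl.exceptions import BadSignatureError

# A mod identity is just a keypair; the public key is the mod's only "name".
mod_key = SigningKey.generate()

def sign_action(key, action):
    payload = json.dumps(action, sort_keys=True).encode()
    return {"pubkey": bytes(key.verify_key).hex(),
            "payload": payload.decode(),
            "signature": key.sign(payload).signature.hex()}

def verify_action(signed):
    try:
        VerifyKey(bytes.fromhex(signed["pubkey"])).verify(
            signed["payload"].encode(), bytes.fromhex(signed["signature"]))
        return json.loads(signed["payload"])
    except BadSignatureError:
        return None  # tampered or forged; drop it

# Example: publish a delete; any node can check it against its subscribed mod list.
signed = sign_action(mod_key, {"type": "delete", "post_hash": "AAAA"})
assert verify_action(signed) is not None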
>>2794
There is no consensus that needs to be reached. From a node's perspective, filtered posts are hidden or outright deleted (CP); hidden posts are less likely to be rebroadcast to other nodes (configurable). There is no global state, other than that posts are supposed to flood all nodes by being gradually broadcast node to node. Of course, the storage limit makes the board behave like an imageboard with bump and thread limits.
Posts are identified by their hashes and only new posts get stored, so every unseen post is new. There won't be conflict except posts may take a long time to reach a given node, but that is good for preventing timing attacks.
Replies: >>2799
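A minimal sketch of the content-addressed dedup and flooding described above; peer.enqueue() is a stand-in for whatever the transport layer actually exposes:

import hashlib
import json
import random

class PostStore:
    def __init__(self):
        self.posts = {}  # post_hash -> post dict

    def post_hash(self, post):
        # Identity is derived only from content, never from sender or arrival time.
        return hashlib.sha256(json.dumps(post, sort_keys=True).encode()).hexdigest()

    def receive(self, post, peers):
        h = self.post_hash(post)
        if h in self.posts:
            return                      # already seen, ignore (prevents rebroadcast loops)
        self.posts[h] = post            # a real client would also tag local ID/timestamp here
        self.flood(h, post, peers)

    def flood(self, h, post, peers):
        # Rebroadcast with a random delay to a random subset of peers,
        # which also blurs timing correlation.
        for peer in random.sample(peers, k=min(3, len(peers))):
            peer.enqueue(h, post, delay_seconds=random.uniform(5, 300))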
>>2798
>There is no consensus needed to be reached.
The question still stands: how do mods cooperate? You can't simply subscribe to a list of mods and call it a day. A conversation can't be held if posts are misclassified as spam.
>There won't be conflict except posts may take a long time to reach a given node, but that is good for preventing timing attacks.
So the posts will appear out of order every time you refresh the thread?
>inb4 the decentralized imageboard becomes a chaotic chat room
Replies: >>2800
>>2799
This is how the design works. 6 gorillion hours in vim. I still need time to think about the best architecture, so this may be changed.

    --------               --------
    |Node 1|---------------|Node 2|
    --------               --------
       |                      |
       |                      |
       |                      |
       |                      |
       |                      |
    --------               --------
    |Node 3|---------------|Node 4|
    |Mod A |               |Mod B |
    --------               --------
Lines represent known and reachable peers. This is arbitrarily chosen. The connection is done through a mixnet or some other anonymous network. (Rotate the connections randomly to stay anonymous.)
Every node has a post store, a signature-based (or other ID mechanism) moderator list, a filter list and a deleted list.
By default, all lists are empty.
For simplicity, assume there is only one board (no /tech/,/b/,etc).

Example exchange that showcases the moderation system:
Node 3 decides to publish his moderations, using the signature Mod A.
Node 4 adds Mod A's signature to his mod list.
Node 2 decides to publish his moderations, using the signature Mod B.
Node 1 adds Mod B's signature to his mod list.
Node 2 posts loli waifu (post ID >>AAAA). Node 1 and Node 4 get the post.
Node 1 marks >>AAAA with a local ID/timestamp.
Node 4 marks >>AAAA with a local ID/timestamp.
Node 1's filter/delete lists are empty, so the post appears on his client.
Node 4's filter/delete lists are empty. The post is not hidden.
Both Node 1 and 4 rebroadcast the post with the normal (exponential backoff) delay.
Node 2 gets >>AAAA, which is already known, and ignores the post.
Node 3 gets >>AAAA.
Node 3 marks >>AAAA with a local ID/timestamp.
Node 3 has empty lists, so the post appears.
Node 3 thinks "certain combinations of 0 and 1 are illegal".
Node 3 presses delete; >>AAAA is deleted from his local store.
Node 3 publishes new moderation after combining it with other stuff, signed as Mod A.
Node 1 gets Mod A's moderation; the signature is not in his mod list. He ignores it and rebroadcasts it later.
Node 4 gets the same; the signature is in his mod list. He merges it with his deleted list. >>AAAA is hidden (but not yet deleted).
Node 4 is a weeb autist; loli is truth.
Node 4 sees loli posts being deleted (through notifications/monitoring of the deleted list) and undeletes them.
Node 4 publishes new moderation after combining it with other stuff, signed as Mod B.

You can see how this is going. 

Mods don't cooperate. Mod A doesn't even need to know about Mod B. Users really do just subscribe to a list of mods and call it a day.
If posts are misclassified, users have every chance to override it and even publish their changes.
Every post seen is given a local post ID and timestamp; the local post store (a database or just plain files) interacts with the UI to display those posts.
Node 2's post may take a long time to reach Node 3 if both Node 1 and 4 are slow. One problem is that bad content is not quickly removed. In that case, users can subscribe to multiple Mods (with priority overriding each other).

You are thinking of this in terms of centralized operation of "a board". There are no official moderators, no central authority, no absolute consensus. You may only get a post a day after it was made. Moderation is similar to Gentoo overlays, but for keywords/masks and deletes only.
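A minimal sketch of the per-node state that exchange implies, with mod identities reduced to opaque public-key strings; this is my reading of the design, not a spec:

from dataclasses import dataclass, field

@dataclass
class NodeState:
    posts: dict = field(default_factory=dict)             # post_hash -> post
    mod_list: set = field(default_factory=set)            # subscribed mod public keys
    hidden: set = field(default_factory=set)              # post hashes hidden by subscribed mods
    local_overrides: dict = field(default_factory=dict)   # post_hash -> "show" / "hide"

    def apply_moderation(self, mod_pubkey, deleted_hashes):
        # Only moderation signed by a subscribed mod has any effect; everyone else is ignored
        # (but their messages are still rebroadcast so other nodes can use them).
        if mod_pubkey not in self.mod_list:
            return
        self.hidden.update(deleted_hashes)

    def undelete(self, post_hash):
        # The user is the final authority: a local override beats every subscribed mod.
        self.local_overrides[post_hash] = "show"

    def is_visible(self, post_hash):
        if post_hash in self.local_overrides:
            return self.local_overrides[post_hash] == "show"
        return post_hash not in self.hidden

Node 4's undelete in the example is just the local override path here; nothing has to be negotiated with Mod A.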
>>2800
>You are thinking this in terms of centralized operation of "a board". There is no official moderators, no central authority, no absolute consensus.
Such a structure could be implemented as well if you wanted to.
4f824bfd53630d01ad095cb67f91437763dbadb99515e3ba8a0c7bf93edc1dfd.jpg
[Hide] (21.4KB, 435x435)
Anything that doesn't support centralized moderation and the ability to ban users from a given board across the network (see: you can't just switch instances to avoid bans) is doomed to remain an extreme niche. And there's no way to ban users across the network without allowing anyone and everyone to identify a user's IP.

The only good way to do decentralized imageboards is to have key servers that control all the traffic. If you post to a board from any instance, the post gets redirected to the key server, which then decides whether to allow it or not. Other instances only mirror the board's contents and can't modify them. If the key server goes down, you can only view the board. Some kind of inheritance rule can be used to determine the new owner of a board in case the original key server is down for an extended period.
>>2802
And before some hurrdurr retard misunderstands and pisses their pants, every server can own various boards, but a given board cannot be owned by multiple servers at the same time.
Replies: >>2804
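A minimal sketch of that one-board-one-owner rule from a mirror's point of view: the board is bound to a single owner key, and a mirror only accepts updates carrying a valid signature from that key (PyNaCl again; everything else is hypothetical):

from nacl.signing import VerifyKey
from nacl.exceptions import BadSignatureError

class BoardMirror:
    def __init__(self, board_name, owner_pubkey):
        # One board, exactly one owner key; a second "owner" is simply rejected.
        self.board_name = board_name
        self.owner_key = VerifyKey(owner_pubkey)
        self.approved_posts = []

    def accept_update(self, post_blob, owner_signature):
        try:
            self.owner_key.verify(post_blob, owner_signature)
        except BadSignatureError:
            return False      # not approved by the board owner; the mirror stays read-only
        self.approved_posts.append(post_blob)
        return True

If the owner key goes silent, mirrors can keep serving approved_posts read-only, which matches the "you can only view the board" behaviour described above.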
>>2800
>Mods don't cooperate.
>multiple Mods (with priority overriding each other)
What do you mean?
>If posts are misclassified, users have all the chances to override it and even publish their changes. 
then get filtered by another user again because he is lazy and simply subscribes to the list. Passive Sybil attack?
>One problem is bad contents are not quickly removed.
A bigger problem is how many users are willing to audit the mod history (reviewing even more bad content).
>All posts seen before is given a local post id and timestamp
Yea I know it's like p2p chats.
>You are thinking this in terms of centralized operation of "a board".
Because I want cooperative moderation. You expect the lazy subscribers to become new mods, who are unable to work 24/7. You told me "there will be more moderators and the cost of moderation will be lower", but everyone will be ruled by different authorities, dreaming in their filter bubbles, jumping >>up and >>down, curious about missing posts and trying to refetch banned content (only if it is properly >>linked rather than mentioned by context).

>>2802
>>2803
Yes, decentralized broadcasting (over a mixnet) combined with centralized servers and moderation is the way.
>Some kind of inheritance rules can be used to determine who's the new owner of a board in case the original key server is down for an extended period.
You don't have to. It's easier and better to call the new board "XXXX Gen#2" and re-evaluate the new mods, while waiting for the old board to revive.
Replies: >>2805
>>2802
>>2804
See >>2739. The reasons for this design were made clear: anonymity and full user control override all other goals.
Any solution centralized in one man or a small international clique will be corrupted. Every man cucks; it's just a matter of time and pressure. He may be the CEO of based in his prime, but eventually age and time will be his end, not to mention family and other attachments. I hope to approximate the final solution.
Any solution which the user cannot modify, host and manipulate by himself will be subverted. No hosting solution survives a government raid, a fire, or plain ol' peaceful protesters. If you host it in a vault, they can cut your power or physically remove it otherwise. But they can't come for all of us.
Centralized hosting doesn't work because there will only ever be a handful of hosts; no one wants to shoulder the responsibility. If every user is a host, they can't pinpoint any one of them.
Anonymity is the most important goal because we are going to the gulag otherwise.

There are many existing solutions, but none of them takes decentralization to the extreme and anonymity as the top priority. This is what is needed for the final solution.

My responses to individual questions:
>What do you mean?
Mod lists are defined by the user of the node, mods really don't cooperate. You can make a list of shops to buy parts from, but the shops don't have to talk to each other.
>Passive sybil attack
Not sure what this means. But choosing which mods to add to your list can be thought of like the different groups of the warez scene, where people get recommended good mods by others. They will sort themselves out.
>how many users are willing to audit the mod history
This is more of a UI problem than an architectural or user problem. Hidden/deleted posts can be displayed as a small box like this, so the user doesn't need to check anything.
-----------------------------
|Anonymous recved date #AAAA|
|Nigger nigger nigger       |
-----------------------------
--------------------------------------------
|#BBBB [Post deleted by Mod A: mother dies]|
|Actions: (Undelete) (delete)              |
--------------------------------------------
-----------------------------
|Anonymous recved date #CCCC|
|>>BBBB                     |
|Doggo of reflection.jpg    |
-----------------------------
Using this system without moderating is like installing Gentoo but pulling in all of GNOME, KDE and systemd. No one to blame but the user himself. You also underestimate how many posters on high-PPH boards hide and filter stuff, especially when it is easy.
>ruled by different authorities, dreaming in their filter bubbles
Not sure how different that is from the current split between numerous alt boards, a few high-PPH boards and the normielands. If filters and moderation are made explicit and manually controlled, this is much better than the current state: at least you get to choose and change everything you want. The curse of freedom is that there are always people who exploit this freedom; freedom is not free. The filter list needs watering.
>curious about missing posts
A UI problem. Also, posts are accepted but hidden, so refetching is not needed; see the exchange in the example.
>lazy subscribers to become new mods, unable to work 24/7
I never expected that. That's why the mod list is a list instead of a single entry. You can get a gook mod for the Asian timezones and a mutt for freedomland.

Assuming you are not glowing/shilling, you should
1. understand where this idea came from (read the thread, at least >>2739)
2. understand the idea and implementation (made this clear in the example >>2800)
3. understand that the implementation or specific mechanisms can be changed so that none of those small problems matter (specifically how moderation is shared or merged, and how posts are displayed)
Replies: >>2938 >>2941
>>891
A more usenet-like approach and maybe even using a newsreader instead of a fucking browser would be great.
Truth of the matter is you aren't going to route around, or develop around, the natsec state, which is currently in bed with neo-bolsheviks.
>>2805
>I didn't expect this. That's why the mod list is a list instead of an entry. You can get a gook mod for asian timezone and a mutt for freedomland.
This could also be automated to a pretty good degree; I don't think most spam has to be a problem nowadays.
Replies: >>2941
>>2805
>posts are accepted but hidden, refetching is not needed
Spam and cp coming
>Mod lists are defined by the user of the node, mods really don't cooperate.
>people gets recommended by others on good mods
Better not to share the same big board; your dream is too big. A waste of energy distributing user trust across different mod groups. A waste of disk space for garbage.
>3. understand the implementation or specific mechanisms can be changed to behave in the way where none of those small problem matters (specifically how moderation is shared or merged, how are posts displayed)
Does not mean it will be easy
>2. understand the idea and implementation (made this clear in the example >>2800)
Too simple, because mods do not cooperate and collaboration is delegated to the clever users?
>1. understand where this idea came from (read the thread, at least >>2739)
>decentralization to the extreme and anonymity as the top
>spamming
Heck, everyone, not just the mods, had better be ready to consume a lot of shit. No wonder you keep talking nonsense about moderation.

>>2938
>not a problem nowadays
<decentralization to the extreme and anonymity as the top
Do something
Replies: >>2945 >>2947
>>2941
>Spam and cp
They can be both hidden and encrypted (especially attachments), decrypted only in memory when viewed. Encryption can also be deniable, e.g. via https://github.com/crashdump/covert
>dream too big
Gotta dream big against (((big brother))). Not sure about you, but having board after board destroyed and deplatformed is all too tiresome.
>better not share the same big board
>waste of disk space for garbage
Of course different topics are contained in different boards. Different deletion strategies can be used for different data, much like BitTorrent seeding: you can leech, seed to a ratio of 2, seed for days, seed until storage reaches a configured limit, etc. Shit is configurable.
>Does not mean it will be easy
I don't see how it is any more difficult or different from modding games (which 4-year-olds do daily) if the API is mapped to C/Python/Lua.
>Too simple
KISS. Don't you have an ad blocker too? Assuming some user IQ is not too much to ask, especially for a privacy-focused project.
>ready to consume a lot of shit
>nonsense about moderation
How is this moderation system nonsense? You can either hide in your little Tor board and wait months for your gold content, then wait for the board to get popular, fill up with spam/mod wars and eventually get shut down. Or you can do something about the reality of shit with a decentralized system and decentralized moderation from the start.

>Do something
This is doing something. No more repeating the eternal cycle of new board -> slow board -> fast board -> shit/glow board -> shut it down. Every time a board goes down, posters scatter and a fraction of them move to shittier sites or log off forever. Decentralization, hell even anonymity, would not be needed in a high-trust group of people. Yet they keep pushing and will keep pushing until the times change or all posters are gone. You are right, fucking do something. They are already shutting everyone down physically; how long does this site have, how long do you or I have?
>>2941
><decentralization to the extreme and anonymity as the top
>Do something
Who are you quoting?
>>2800
Complete decentralization of moderation would make it impossible for a community to develop. What you are describing is a way to make ActivityPub single-user-instance moderation easier, but that only works for a twitter-type website.
In order to have a community you need to have people that choose to view the same thing, that choose to subject themselves to the same authority.

>>2802
I think this is the best solution, combined with decoupling the identity system (used to identify the board owner and moderators) from the hosting system. The identity system should be completely decentralized, in a way similar to Keybase/Keyoxide, whereas hosting/moderation would have board-level centralization. Of course the identities of the board owner and moderators should be verifiable by every node through cryptography.

The above process also incentivizes people to mirror the board, as those given mod and vol positions on a board will feel more comfortable mirroring the whole thing. Of course moderators (BO, mod, vol, etc.) will be able to moderate from every mirror.
If all hosts get taken down, it is extremely easy to reorganize somewhere else and for users to check that it is the same thing.

This way you provide the infrastructure for communities to form, and allow webserver owners to control what is posted on their site while stopping them from crippling a community by abusing their power over the community structure.
Replies: >>3021
>>3019
Can you explain why this won't work for imageboards? A twitter type website without identity and links is an image board.
>community
Never been a fan of "communities". I see them as non-organic constructs that sway participants in specific directions for money and fame. This is quite different from anything else, almost like the creation of the internet vs. just another site. Only people who yell community are people who either power trip over others by placing themselves at the top of the "community" or helpless faggots who can't think for themselves and willingly submit themselves to authority. Ironically most leftfaggots and liberals suck up to whatever authorities tell them to, including fighting for more communities and more authority. By putting the user as the highest authority, he has all the freedom to submit to or banish any authority (even being one himself) without kicking anyone out (or moving to other sites). I can see myself tortured and mind fucked to compromise any community. They just need to frame some cp and you are off to jail. No one is trustworthy, other than yourself. This is the cathedral and the bazaar, but for forums. The argument is similar.
Replies: >>3022 >>3042
>>3021
>A twitter type website without identity and links is an image board.
Does it have a catalog with live threads? Who gets to decide what stays on that catalog? 
I've been using Pleroma on the fediverse, which is basically decentralized Twitter. Someone even built fediverse software where all posts are anonymized: https://humungus.tedunangst.com/r/honk

One could potentially code an addon for it where the user "watches" threads he chooses, creating a custom catalog. But how do you find which threads to watch in a random sea of posts without nicknames?
>rely on hashtags
Possibly, but we're quite far from the "decentralized chan" model we're talking about, and I'm not even sure this thing would work at scale, as in whether high-quality discussions would take place on it.

If you have never tried the fediverse, or have only seen the pozzed parts of it, I'd suggest getting an account on FSE, or even running your own instance.
I personally prefer the chan model: boards, threads, the overall segregation of topics and users into boards and webring chans, with gradually evolving board culture. With the state of the chans these past few years, fedi has been my alternative though.

>community
>Only people who yell community are people who either power trips over others by placing themselves at the top of the "community" or helpless faggots who can't think for themselves and willingly submit themselves to authority.
Well, that's because they are useless without groups. Libshits also really love Twitter though. What happens is they only love groups that function inefficiently and allow them to bring everyone down to their own level. They absolutely hate groups where high-quality, free discussion takes place, because there posts stand on their own merit and they are exposed for the vermin they are.

Also, that's why a healthy dose of MODS=FAGS is always welcome. But to me, groups and community are needed both online and irl simply for organizational purposes. 
Btw I don't think your idea is inherently wrong but it creates quite a different thing from what we're using here.

>They just need to frame some cp
That's why one should use boards where he trusts the moderation, or browse without downloading images unless he chooses to do so.
>>3021
>A twitter type website without identity and links is an image board.
Wrong reduction. Do you like twitter type moderation, jack off in your small group huh?
>I can see myself tortured and mind fucked to compromise any community.
Adblock filters (not spam filters) work great because ads for normies and moralfags are easy to recognize, especially without JavaScript. Who the fucking freak wants to live in a global cesspool just because we can have toggles for every piece of shit out there? Carry your guns or else get shot the instant you stick your head out of the window.
>Be there spam, expect no community.
>Doomed to living in lifelong fear.
Replies: >>3043
>>3042
>Do you like twitter type moderation
Never used Twitter. But what is the difference between Twitter mods and a group of board mods & jannies?
>Who the fucking freak wants to live in a global cesspool
Want it or not, faggots and glowrunners exist. They will try to get into your little bubble of safe space and spam the fuck out of it.
Mods/jannies won't cut it because all groups can be compromised.
With registration/invite only, either no one but mod cocksuckers will be there, or it gets famous enough that glowfuckers infiltrate it. Then every user there is off to jail because they are not anonymous.
uBlock lists are maintained by volunteers, not so different from what was proposed. The toggles are guns. If you are not a mod here, you can't do anything if the mod starts spamming cp.
There is always spam and "community" is bullshit because everyone has a different idea of spam. Election doesn't work. Centralized authority can be cucked or murdered.
Living is suffering if you are not an NPC; this merely shifts the suffering from the eyes to a "mark as spam" button.
Replies: >>3045
>>3043
Twitter: a circlejerk, not knowing/caring that the board is on fire and everyone sane is fleeing this cuckpool. Shit bubbles produce useless shit.
Independent IB: unattractive, incapable mods just die alone, without wasting anons' time and money on uninteresting posts.
>They will try to get into your little bubble of safe space and spam the fuck out of it.
Size doesn't matter, big brother has a (((big dick)))
>cucked and compromised
ISPs and users can be, too.
>If you are not a mod here, you can't do anything if the mod starts spamming cp.
>Election doesn't work.
Just back the fuck off you shitheads!
>"community" is bullshit because everyone has a different idea of spam
Cope. Choose. Construct. Sure, we cannot find comfort in arbitrary combinations of faggots (your big board theory).
>The toggles are guns.
Escapism. Shit, I thought this "inb4" was unnecessary. So lame.
>There is always spam
You're not thinking in the right direction. Go find another solution (crypto coins?). I've had enough of your dead plans.
Replies: >>3046
>>3045
Fucking word filter lost my whole post. Going brief.
Stop nigger screeching, tell me what is wrong with it. You didn't say anything except >you are wrong because you are wrong
>size
Can't come for all of us.
>ISPs
Out of scope; a p2p physical layer with a mixnet should do by then.
>Users
Decentralized mods
>arbitrary combinations of faggots
Mods: arbitrary combinations of faggots that choose other arbitrary combinations of faggots as jannies
Registered users/invites: arbitrary combinations of faggots that choose other arbitrary combinations of faggots as users
<my big board theory
Mods: users pick their own mods; if everyone but me glows, I can be the mod
Users: the usual distribution of arbitrary combinations of faggots
>Escapism
Know what is not escapism? Grab your crystal bow in minecraft and nuke (((villages))) and sheep.
Deletes work for the user, and also for other users that register the remover as a mod. It is not "hiding".
Gated forums are hiding in a basement with other faggots, jacking each other off and hoping no glowniggers or spammers find you. The ultimate escapism. Think about it logically.
>Not thinking in the right direction
All ears man, tell me what is the holy way. And I will tell you why I am not on it.
>I'm enough with your dead plans
Noooo boss, I really need this budget to make it work.
Replies: >>3052
I see this thread is getting some activity again. Can't say I have much to offer, aside from one idea that has been bouncing around my head lately. Private invite boards are a circlejerk waiting to happen, but consider the following:

>1. no user accounts, just a single per-board passkey required to make a post on that board
>2. no mods, just a nuke button that anyone with the post key can press
>3. when things get lame, or the passkey is leaked publicly, nuke and regroup

Mods suck, accounts suck and spam sucks. Dildo anon sucks. There are a lot of things that suck currently, but there are still many good anons. A while ago a few of us exchanged contacts on Tox in the event of annudah shoah. If more of us did this, the bootstrapping process might not actually be too troublesome. It would just be a matter of creating a board, hosting some gamenights and giving the key to anyone who doesn't seem like a massive tard, fag, or fed. Might be too exclusive to gain traction, but idk, it's just an idea. It obviously needs some more development to be viable.

On the topic of decentralization, this would map fine onto a public/private keypair for lurking/posting. Migration to a decentralized network could be done very incrementally: add support for some anonymization protocols, retrofit existing IB software to run proxy servers on the clearnet, create client software, etc.
>>3047
Addendum, some pretty obvious things (rough sketch of the first one after the list):

>Require one nukepost every 10 minutes for an hour for it to go through (or some equivalent hot head deterrent)
>Require that nukeposters solve a captcha (so that the above can't be circumvented by script)
>Make a moderated public board for support and questions (to ensure people can join gamenights and other filters)
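A minimal sketch of the "one nukepost every 10 minutes for an hour" rule, just to show the check itself is trivial; the captcha step is assumed to happen before a timestamp is recorded:

import time

WINDOW = 10 * 60          # 10 minutes
REQUIRED_WINDOWS = 6      # one full hour of sustained intent

def nuke_approved(nuke_timestamps, now=None):
    """True only if at least one nukepost landed in each of the last six 10-minute windows."""
    now = time.time() if now is None else now
    for i in range(REQUIRED_WINDOWS):
        window_start = now - (i + 1) * WINDOW
        window_end = now - i * WINDOW
        if not any(window_start <= t < window_end for t in nuke_timestamps):
            return False   # a gap means the hothead cooled off (or it was a drive-by)
    return True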
>>3047
That is how people work physically: forming close groups with the trustworthy people around you. But once you allow others to join in, there will be faggots and women who just want to ruin everything. Things get lame very quickly, and people get bored after a few loops. Your idea works if we live in a rural no-nigger zone, and I look forward to the day that happens.
But we live in a society.
>>3046
>Mods: users picks their own mods
>Mods don't cooperate
Doesn't know how, blindly follows suit
Divides up the power against spamming attack
Loses sight of the overall situation
>if everyone but me glows, can be the mod
Ends up helpless and washed away by the troops, faggot says what?
>Users: usual distribution of arbitrary combination of faggots
Homeless
Finds no community
Becomes homeless mobs
Provides training data for GPT-X, which however can also be a big weapon for fighting stylometry
<The ultimate escapism.
No I worship not any existing solution
>tell me what is the holy way
I do not know
>You didn't say anything except >you are wrong because you are wrong
You are wrong; you haven't explained why >more users is equal to >more mods and >more power without forming communities
>Decentralized mods for big board
Doesn't exist yet. A groundless conception for handling 100000000x more spammers
Replies: >>3056
>>3052
Appreciate you pointing out the gap.
More users -> more mods/power should be explained better.
It is generally true that top posters are small in number, but a large portion of any forum's users are lurkers. When there are more users, there are more old dwellers and top posters. Of course, excluding bots, there are also lots of shills and faggots. All users have the same power to publish moderation. All of them have full admin power, like a git clone of a repo: you can do whatever the fuck you want with it. That means one user can mass-delete by regex, run GPT-3 & friends to categorize spam, etc. And that is only one user. Less autistic lurkers can also contribute by enabling publishing of their hides/deletes; many are hiding and filtering already, so it may as well not go to waste. When publishing moderation is easy and there are a few oldfags, it can be expected that among the mod providers there will be quite a few high-quality mods, like Linux distros. Since every mod has full power, all it takes is a few to suppress many real shills. Since mod forking is possible, it becomes easy to pick and choose desired changes from multiple mods, which allows a wide range of flavors. The best part of it is combining mods to reuse moderation, therefore reducing the mod power needed to moderate a big board. This doesn't divide the effort to moderate; circlejerkers can still form their own groups and publish as a single mod. It's just that users can choose which mods to use. Assuming all real people: more users means more good lurkers/posters, and moderation and publishing being easy plus moderation combination means more moderation power.
While writing this, I figured bots/DDoS are going to be a big problem. There are two possible solutions: one is to limit the post rate, by means of challenges (proof of work or something else) or simple rate limiting; the other is to use a statistical/neuralnet spam filter.
Replies: >>3064
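A minimal sketch of the statistical-filter option: a crude token-frequency scorer in the spirit of spamassassin/rspamd, trained on whatever spam/ham the local user has labelled (the corpora and threshold here are placeholders):

import math
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

class TokenSpamScorer:
    """Naive Bayes-ish scorer: trained on locally labelled spam/ham, runs entirely client-side."""
    def __init__(self, spam_corpus, ham_corpus):
        self.spam = Counter(t for doc in spam_corpus for t in tokens(doc))
        self.ham = Counter(t for doc in ham_corpus for t in tokens(doc))
        self.spam_total = sum(self.spam.values()) or 1
        self.ham_total = sum(self.ham.values()) or 1

    def score(self, text):
        # Sum of log-likelihood ratios; positive means "looks like the spam corpus".
        s = 0.0
        for t in tokens(text):
            p_spam = (self.spam[t] + 1) / (self.spam_total + 1)
            p_ham = (self.ham[t] + 1) / (self.ham_total + 1)
            s += math.log(p_spam / p_ham)
        return s

# Example: a node could auto-hide (not delete) anything above a tuned threshold.
scorer = TokenSpamScorer(spam_corpus=["buy cheap pills now"], ham_corpus=["install gentoo"])
print(scorer.score("cheap pills cheap pills") > 0.0)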
>>3056
Sounds good but how does it work at a large scale?
>rate limiting
Queuing and disordering messages is not fun
Potential spam also delays the transfer
>statistical/neuralnet spam filter
Srsly gonna fuck with AIs?
>reuse moderation
>moderation combination
How many trustworthy mods are you going to choose from? A lot?
>form their own groups and publish as a single mod
Niggas have frens?
>moderation and publish is easy
Easy for bribed niggas too
>thinks building trust is easy
Replies: >>3065
>>3064
>rate limiting
Know this too, but don't have a better idea at the moment.
>fuck with AIs
They have been fucking with us with that for years. Only fair we do the same for defense.
>how many trustworthy mods
>easy for bribed niggas
By keeping the mods anonymous as well, combining mods becomes a self-advertising and competitive process. Say mods are identified by signatures; mods can also sign some posts to advertise themselves (ultra-tripfaggotry). Some of them are going to become known as niggers because users find out they are compromised. Among oldfags, a market of high-quality mod providers will develop. There will be competition as poor mods are replaced by better mods through word of mouth (>implying ASDF isn't the most based mod) and other funny shilling. It is a dynamic process. Also, a combination of mods can be done as a recursive list, that is, I publish a mod list that includes some other mods, and users only have to include me to get a package of mods (see the recursive-list sketch after this post). This process should lead to a healthy, competitive environment of mods (motivation can be fame through tripfagging or signing donation links), which should produce at least a few very reliable mods.
>have frens
Someone will have, doesn't have to be me
>how does it work at a large scale
I honestly can't say for sure. My plan is to hack something together and see how it plays out. But I am such a retarded code monkey that I need to get my shit together before I can work on this (like developing an anonymous git frontend with the owner identified by a signature). At least it will be fun to see it fail if this doesn't work.
Replies: >>3071
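A minimal sketch of resolving such a recursive mod list, where a published list can contain both deleted post hashes and other mods to include, with cycle protection so two mods including each other can't recurse forever; all names are hypothetical:

def resolve_mod_list(mod_pubkey, published_lists, _seen=None):
    """
    published_lists: mod_pubkey -> {"deleted": [post hashes], "includes": [other mod pubkeys]}
    Returns the full set of deleted post hashes implied by subscribing to one mod.
    """
    seen = set() if _seen is None else _seen
    if mod_pubkey in seen or mod_pubkey not in published_lists:
        return set()                       # cycle or unknown mod: contributes nothing
    seen.add(mod_pubkey)
    entry = published_lists[mod_pubkey]
    deleted = set(entry.get("deleted", []))
    for included in entry.get("includes", []):
        deleted |= resolve_mod_list(included, published_lists, seen)
    return deleted

# Example: subscribing to "package" transitively pulls in mod_a's and mod_b's deletes.
lists = {
    "package": {"deleted": [], "includes": ["mod_a", "mod_b"]},
    "mod_a": {"deleted": ["AAAA"], "includes": []},
    "mod_b": {"deleted": ["BBBB"], "includes": ["package"]},  # cycle, handled by `seen`
}
print(resolve_mod_list("package", lists))  # {'AAAA', 'BBBB'}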
>>845 (OP) 
>trying to solve a sociopolitical problem with technology
>barely trying at that
Replies: >>3068 >>3074
>>3067
also the word "soc*al" is filtered for some reason, good job jannies
Replies: >>3072
>>3065
>a few
>very reliable mods
>recursive list
>yes! trusting trust is easy peasy
>trust me
<glowniggas stealthily bubble up, nobody knows
Replies: >>3074
>>3068
social fags rly?
>>3067
>trying to solve a sociopolitical problem with technology
Never tried to. How long until the (((problem))) is solved? Much longer than I want. Before that, it will be harder and harder to communicate anonymously without getting sites nuked.
>>3071
Don't know how bright you are, but you seem to be very keen on misreading this stuff. There are no "official" mods; the reliable mods are replaceable at any time, for any reason, by the user. Much like tripfags come and go, nobody cares. Even if the famous mods are glowniggers, their work can be dismissed completely just by removing them from your mod list. They cannot do anything at all to you in terms of moderation if you ignore them. You also don't need to trust me or anyone else. You will not know who I am except by my public key as the vanilla version author/releaser, which is unlinked to any other service. There is no trust involved because by default there are no mods; users choose mods and can remove them any time they want. They don't need to trust anyone. Without this model, users have to trust the mods, the admins and the software (because the instance is not self-hosted). Such concerns are completely nullified by the design of this system.
Replies: >>3086
>>1148
Who decides what qualifies as a good or bad post, and how do you avoid becoming yet another reddit clone?
How do users start posting, given that they won't have previous good posts to validate their first post?
How do you get people to care about the whole thing and keep track of their post history?
>>3074
>reliable mods are replaceable
<a few
>for any reason
<less candidates
>nobody cares
<brainless followers
>if you ignore them
<if you catch them
>no mod no trust
<you mod you cuck
Nobody is gonna pay for your cesspool
Replies: >>3090
>>3086
>if you catch them
Funny coming from a glowfucker like you. You can say the same for any other forum model.
>pay for your cesspool
>implying it is not going to be fully free and open source
lmao, you think I want shekels? A piece of software designed for anonymity and privacy that is not fully modifiable, understandable and self-hostable? You can shut your glowing retard mouth, no one will fall for that. I am not leaving any possible trace from the software to myself, including any form of donation/crypto addresses. All related communication will be hosted as part of the repository with something like https://github.com/MichaelMure/git-bug.
As for the repository itself, I am still thinking. It may even be possible to implement git on top of this thing. That'd be quite interesting.
Replies: >>3094
>>3090
Can you at least lay out why a system designed for anonymity first and foremost would actually lead to good discussions? All I see so far are ideaguy plans and a lot of unfounded optimism. If everybody has 100%, totally-for-real guaranteed privacy and obscurity of identity, but it's a /b/-tier stream of shit where you have to spend a day pre-configuring filters to see actually worthwhile posts, then the Fediverse already exists, minus the anon-friendly claims.
Replies: >>3102
>the same for any other forum model
You fail by a trillion miles with that decentralized blindfold
>>implying it is not going to be fully free and open source
>foss psyops
Bandwidth is not free, I should say
Replies: >>3102
>>3098
>fail
How? So you get to trust your site admin to keep no logs and the mods not to glow, yet self-hosting with mods as a choice somehow fails much harder?
>foss psyops
Yeah foss is a psyop, better go back to installing windows with privacy settings on, that will keep you safe and anonymous.
>>3094
It doesn't. That was never mentioned anywhere; lots of posts keep shoving imagined goals onto this thing that it doesn't claim to do. Better discussion is not part of the goal.
The only goals of this plan are a forum that is
1. relatively safe from the feds for everyone using it (i.e. you won't get jailed for what you post). It does that by eliminating metadata, relying on anonymous transport, and perhaps deniable encryption of data.
2. destruction-resistant (can't be taken down, from inside or outside). Outside: deplatforming is prevented by being decentralized; they can't come for all of us. Inside: decentralized mods & dev mean there is no central group to infiltrate; they would have to invest major resources to become the majority. Even if bots get involved, users can use bots too, and mod combination makes it possible to pool effort to counteract bots and shills. Development is also decentralized, with no organization beyond a git-based issue tracker. Anyone can make a release signed with their own key. Nothing is really official; binary releases may not even exist, to force users to be conscious of where their software comes from.
While good discussion is not a goal of this plan, it certainly can be a result. By giving users the ability to easily moderate and combine their results, a certain number of users will be able to enjoy the wild west of the internet. In this environment free speech is allowed, and eventually truth will prevail.
>ideaguy plans
>unfounded optimism
Yes, it is. I am just a retarded pajeet-tier coder. I'm not trying to get you to use this thing or buy this shit. Who cares if my optimism is unfounded; I am just trying to get people to poke at it and see if I missed anything. I am going to implement it however long it takes. Call me an ideaguy or whatever, at least I am doing something to counter the (((nwo)))'s censorship and gradual closing-off of the internet. Better than trusting the vax to kill them off and doing nothing while they cuck the fuck out of your forums and your world, then moaning and bitching about it when another site goes down again.
>Fediverse
>minus the anon-friend claims
Then it is shit. Federation never works; only a tiny portion of users self-host, and they are very easy to take down, just like the webring.
Replies: >>3112 >>3114
>>3102
>good discussion is not a goal of this plan
Nobody cares about where you hide your secret shit you fucking shithead
>see if I missed anything
Told you already, nobody will contribute the bandwidth you need for a cesspool
>>3102
Fair enough, guess I suck cocks and can't read.
>1. relatively safe from the feds for everyone using it (ie you won't get jailed for what you post)
While a worthwhile goal, I think it's bad to assume feds need some kind of post directly linking to a person to jail them. Laws anywhere in the world are grey enough that an adversary the size of a government can just do some psyops and creative interpretation, no need for material evidence.
>eliminating metadata and relying on anonymous transport, and perhaps deniable encryption of data
Now this I can get behind. Anonymous transport is probably outside of the scope of such a project. Consider I2P and Tor. While, in theory, I2P glows less and has less identifiable traffic patterns when looking at the network from outside, there are a lot less people using it. So you can't necessarily tell what resources people are connecting to, but it's easier to make a complete map of users around the world than it is with Tor. An acceptable form of transport could be plain old obfuscation, innocuous posts on popular resources with media containing the information you really want to transfer. This solution also crosses over with deniable encryption. As for metadata, any P2P solution inevitably generates it just because you need to connect to your peers and they'd have no real incentive to erase it. Starting from the project inception, there's also the problem of git itself having built-in metadata. Some of it is needed for your repo to properly function, but most of it is not good for anon. You could say that there's no need to keep precise times of commits, just to make sure the tree of commits is chronologically accurate.
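One possible way to do the commit-time coarsening mentioned above: GIT_AUTHOR_DATE and GIT_COMMITTER_DATE are standard git environment variables, so a small wrapper could pin both to midnight UTC of the commit day (the wrapper itself is just a sketch):

import os
import subprocess
from datetime import datetime, timezone

def commit_with_coarse_date(message):
    """Commit staged changes with author/committer time rounded down to midnight UTC."""
    midnight = datetime.now(timezone.utc).strftime("%Y-%m-%dT00:00:00+0000")
    env = dict(os.environ,
               GIT_AUTHOR_DATE=midnight,
               GIT_COMMITTER_DATE=midnight)
    subprocess.run(["git", "commit", "-m", message], env=env, check=True)

# Ordering of commits is still preserved by the parent links in the DAG,
# so the history stays chronologically consistent even with identical timestamps.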
>destruction resistant (can't be taken down, from inside or outside)
This is also something I don't agree with. Can't-be-taken-down-from-inside is achievable, with enough gatekeeping and warding off the inevitable infighting and faggotry that seeps in. But there is no need for firm opposition against an enemy that overwhelms you with money, resources, human resources and (supposedly) public opinion. If your idea does end up becoming reliably decentralized, why not simply split the network based on areas of jurisdiction? Node A is located in the US, node B is located in Romania. Firewalls of both are configured such that they're unreachable from the host country to avoid obvious liability. It's illegal to talk about subject X in node A's jurisdiction, so you host it on node B and use an anonymous or just obfuscating network protocol to access it from node A should you want to read it, and in reverse. The greatest weapon in fighting deplatforming on the Internet in current year is bureaucratic red tape. The more of it you need to go through and the smaller the targets, the less effort will be expended toward taking you down (in theory).
>I am going to implement it how ever long it takes. Call me ideaguy or what ever, at least I am doing something
Well, then I wish you good luck. Hopefully you do implement something, even if it's 5% of what you laid out. Ideaguys, as the name implies, only come up with ideas and don't do any actual coding.
>federation never works only a tiny portion of users selfhosts and they are very easy to take down, just like the webring
I think both a P2P network and a federated one can coexist, if you think my splitting by jurisdiction idea isn't full of shit.

>Mod combination
Here's what I'm personally concerned with. Say you and I are having a conversation inside some thread. I have my mod list, you have yours, and they are distinct with some overlap. Now a third person joins, also with a somewhat distinct mod list. We start talking and posts start disappearing because the filters aren't uniform, so now there's a better chance of a potentially good conversation devolving into useless metafaggotry about muh filters and muh mods. It would make more sense to have the OP designate mods for their own thread. That solves the hypothetical I described and also lets you filter threads based on mod tags.
Replies: >>3151
>>3114
Been busy, replying late.
>bad to assume feds need some kind of post directly linking to a person to jail them
Sadly true. I imagine their machine learning behavioral system can predict who is going to post what on the forum before it happens. But they are not ready to apply it generally yet, otherwise I'd have been jailed already. In this case, if the anonymity provided by this plan is good enough, they will not be able to discern where a post came from (or even when/how). But yeah, poor opsec and too much normalfag activity lead to easy correlation, so govs know it's you. No amount of software can offset stupidity. This system can at least provide a level of confidence that peers cannot know where a message comes from. It would be similar to a letter in a bottle: the reader can guess from the words in the message and the time he finds it, but the search range would be too big to narrow down to a small group of people. Of course, similar messages sent multiple times allow the reader to guess with confidence who the sender may be. I don't see any way out of that, though. At least I hope the system makes good enough bottles. Dummy messages are also an option.
>just make sure the tree of commits is chronologically accurate
Good idea.
>split the network based on areas of jurisdiction
>bureaucratic red tape
It only looks like that. But what recent events taught me is that all governments are in (((it))) together. Laws don't matter at all when they can just label you a terrorist or say you kidnapped an AI-generated girl. They all work together against you. This is why nodes are supposed to be indistinguishable from each other, including where they come from. Otherwise anonymity is compromised.
What is illegal is also very easily changed. Better to just screw their censorship completely. They will just keep on pushing and pushing until wrongthink is illegal. They won't stop just because you complied with something lesser.
>posts start disappearing
This is more of a policy/interface problem. The disappearance of a post can just be a hidden/collapsed box where the user can clearly see why it was collapsed. This situation also happens with symmetric moderation, usually seen in a pinned thread: a person comes in late and sees replies to posts that have been deleted. Without archives, users can only infer what happened. With a small policy change, moderation changes in this system can be reflected in a non-destructive manner, which handles this situation even better than the original model.
Moderation actions can even be scriptable to allow complex policies, such as ignoring moderation changes if a thread is watched, etc.
Replies: >>3158
>>3151
>I imagine their machine learning behavioral system can predict who is going to post what on the forum before it happens.
No, I didn't mean anything fantastical. Just that convincing the court of public opinion and forging some shit is enough when you're nobody. Having your node be inaccessible from the host country also makes it less likely to fall under scrutiny, see being a nobody.
>It only looks like that. But what recent events taught me all governments are in (((it)))
Does that disprove anything I said? It's very dangerous to assume the feds, or globohomo, or whatever are some sort of omnipotent, all-powerful entity that somehow unites every world leader. That's just blackpilling yourself or driving your own project into an inescapable corner. If you don't believe my comment about red tape, try looking for any time a country attempted an extradition from another hostile country, or how long and hard you have to fuck with a company for it to actually bother going after you in an international court. Claiming that we are living under an incredibly competent, hyper-vigilant global regime with Minority Report-tier AI predictions, on /tech/ of all places, is just odd. Kernels are leaky shit, OSes are unusable dogshit, Google can't stop AI from confusing niggers with gorillas, but ||you||, anon, are being watched by AI God.

Why not just start small? Assume someone wants to shit up your proposed network with GPT-3 or fedposting, whether fed or skid. How would you deal with that, or what tools would you bolt onto clients for it? Realistically, that's all you're going to face, since even Tor boards only have to deal with this.
Replies: >>3161
>>3158
>fantastical
This is not fantastical. Human behavior is largely determined by personality (thought patterns) and environment. Even simple statistics can correlate what personalities are more likely to do something given a situation. Check out how machine learning works; it is just the repeated application of statistics. It is the same as asking what is the probability that this pixel array is a cat; instead, they ask what is the probability that this person is of personality x given the behavior they exhibit and under what situation. Then they ask again: what is the probability that this person of personality x will do y? Psychologists have been doing this for ages without computers. More data allows for more fine-grained "personalities" and more accurate predictions. Not magic at all. Anyone can rig something up to do this simple correlation, with enough data and computers.
Most technologies came from military and government projects; see computers, radio communication, the internet. Machine learning caught on just a few years back, so what could they have been doing with it? They also have all the government IDs and data leaks.
>somehow unite every world leader
>competent
At least they managed to get all major countries to nuke their economies and trigger riots over a seasonal flu. Even if this is not true, assuming a higher threat level leads to better countermeasures for even smaller threats. I never said they are competent. But they are definitely resourceful and full of momentum.
>on /tech/ of all places
Makes this place better than the rest.
>blackpilling
Blackpill is a perception. Just because it looks bad doesn't mean it isn't true. Even if it turns out to be false, additional measures to counter an assumed bigger threat are going to help with dealing with smaller threats. Blackpill is an excuse for doing nothing. I am going to die anyway; may as well do something useful and worthy. If they are powerful, good, I have the chance to win against something much greater than me. If they kill me, even better; fuck this gay earth, who wants to live in the same world as them? Don't be blackpilled, use this "pill" to do something about it.
Replies: >>3164
>>3161
>Even simple statistics can correlate what personalities are more likely to do something given a situation
> It is the same as asking what is the probability that this pixel array is a cat, instead, they ask what is the probability that this person is of personality x given the behavior they exhibit and under what situation
You are reducing the idea of ML and AI way too much. No, it really isn't the same. Recognizing cats, if by cats we mean a carefully curated array of breeds, is something modern tech can indeed do. When pushed further, however, you run into problems that get written about in terms of an "AI winter". Not to mention that "recognizing what a person will do next" has so many variables and context attached to it that it's insurmountably hard. Yes, I recognize that military tech will always be ahead of what we non-glows are going to see. But that doesn't mean this gap crosses over into the realm of science fiction.
>Blackpill is a perception. Just because it looks bad doesn't mean it isn't true
If you imagine a great threat and try to build a system meant to counteract it, with very few resources in comparison, you will burn yourself out. Incremental design is the only thing that actually works. Yes, it does lead to problems down the line, but so does literally every model of designing something.

Bringing the topic back to actual decentralization, there was no mention of Urbit in this thread.
>Urbit is a personal OS designed from scratch to run peer-to-peer applications. It solves the hard problems of implementing a peer-to-peer network (including identity, NAT traversal, and exactly-once delivery) in the kernel so app developers can focus on business logic.
Sounds too good to be true, so there are definitely downsides not mentioned. One of them is that, much like Lokinet (now OXEN), you have to shell out. Unlike Lokinet, however (I'm not giving you $1500 for a node, fuck off), you can actually try it without any investment. They have gay cosmic names for their network: planets are what you'd usually pay for, meaning data gets stored permanently. Comets are temporary, however, which is how I'd encourage anons to try it if they're bored.
>anonymity
Not provided by the protocol. Run it over Tor or a VPN. There's also a concept of unique identity associated with planets, but free comets are fine. Though I've heard that the latter are relegated to pleb status on a lot of resources there.
>personal OS
Probably what's unique about it since it aims to be an "Internet replacement". However, it's written in the most autistic language imaginable and it seems like the only working user-friendly client for this network is a fucking browser-in-disguise "app", which means you'd likely need to learn how to work with their CLI version.
Replies: >>3165
>>3164
>reducing too much
"recognizing what a person do next" of courses takes too much computational power. This is the only thing stopping them to mass apply this to everyone. But properly scoped problems are still solvable. Identifying a cat is possible and has been done, but identifying a specific race of cat is too difficult. Predicting any behavior is too hard, but trimming down the choices to three or four and it is tractable. Problems such as probability of posting on this forum, posting about a certain topic, etc are definitely possible. They just need to cleverly shove numerous variables into categories or scores (n-axis personality) to summarize the big data they've got. Think about how jewgle get to suggest users with specific search results out of all possible results, and they are asking "are these results (that I want to show him) close enough to what he wants?" Same deal and may be even more complex.
>burn out
I don't intend to do that. The plan is that it is not possible to tell where a message comes from, and I am not making a new overlay network; that alone avoids most of the trouble. If anything, implementing country boundaries would require more work. The best part is that the information is just not there: the receiver cannot infer anything from metadata when there is none.
>urbit
Why does everything have to be an OS? I don't get this at all. Trying to do much more than what a project should focus on is the best way to get bugs and an unmanageable mess.
Here's an idea that's probably been tried before. 
>posts have no identifiers other than the hash 
>users can be "moderators" and broadcast lists of hashes that break a certain rule (e.g. CP mod that flags the posts that are CP, spam moderator that flags spam, random faggot that flags posts he doesn't like)
>users can choose to ignore, save the hash list from moderators and just mark the posts, save the hashes and don't download those posts, or save the hashes and download those posts but without the images, on a per moderator basis
Replies: >>3822
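A minimal sketch of those per-moderator policies as a client-side lookup deciding how much of a flagged post to fetch; the policy names are made up:

from enum import Enum

class Policy(Enum):
    IGNORE = "ignore"            # pretend the mod said nothing
    MARK = "mark"                # fetch everything, just label the post
    SKIP_POST = "skip_post"      # don't fetch the post at all
    SKIP_IMAGES = "skip_images"  # fetch the text, drop the attachments

def fetch_plan(post_hash, flagged_by, my_policies):
    """flagged_by: mod_id -> set of flagged hashes; my_policies: mod_id -> Policy."""
    plan = {"fetch_text": True, "fetch_images": True, "marked": False}
    for mod_id, hashes in flagged_by.items():
        if post_hash not in hashes:
            continue
        policy = my_policies.get(mod_id, Policy.IGNORE)
        if policy is Policy.MARK:
            plan["marked"] = True
        elif policy is Policy.SKIP_IMAGES:
            plan["fetch_images"] = False
        elif policy is Policy.SKIP_POST:
            return {"fetch_text": False, "fetch_images": False, "marked": True}
    return plan

# Example: the CP mod is trusted hard, the random faggot only gets to add a label.
policies = {"cp_mod": Policy.SKIP_POST, "random_faggot": Policy.MARK}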
77419041.png
[Hide] (88.8KB, 460x460)
Anything I should know about FChannel and its developers? It doesn't look too bad tbh.
Judging just by the logo, it's made by cuckchanners and will be promoted there.
Not_big_surprise_[N6e5i49loRQ].webm
[Hide] (26.4KB, 00:01)
https://desuarchive.org/_/search/text/FChannel/
>>3793
Here are some of my ramblings on a somewhat different concept:
>to post, you need to solve a Proof of Work(PoW) problem
>the result you get is some string satisfying some requirement, which can be checked easily
>that string is used as your used ID for your post(s)
>want to change ID? You gotta spend some minutes mining a new PoW
>now all your posts can be banned/ignored by ID!

Troubles, to list just a few:
>any posted ID can be used by any malicious user to post CP and whatnot
>in fact, every posted ID at once because why not?
<hard to counter, even though such a change would probably be obvious to human eyes
<how about this: the PoW result is in fact a public/private key pair, and if there is a post with your public key in the thread already, you gotta also add some proof that you have the secret key
<that'd block identity theft at least
<(while we're at it, do note how i2p already uses key pairs to send each other data so that it cannot be decrypted by anyone but the holder of the secret key)
>ability to reuse old IDs, possibly not your own, maybe from threads in previous years, to bypass PoW
<no idea if that can be done, but what if the PoW check requirement is based on the current date (GMT)?
<then a day-old ID is already useless, and everyone is forced to change IDs at least daily to boot (rough sketch below)
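A minimal sketch of that scheme, combining hashcash-style mining with a keypair and the current GMT date as salt, so IDs expire daily and can't be reused without the private key (PyNaCl ed25519 for the signatures; everything else is a guess at the mechanics):

import hashlib
import os
from datetime import datetime, timezone
from nacl.signing import SigningKey, VerifyKey
from nacl.exceptions import BadSignatureError

ID_DIFFICULTY_BITS = 24  # placeholder; tune so mining an ID takes the intended few minutes

def leading_zero_bits(digest):
    n = int.from_bytes(digest, "big")
    return len(digest) * 8 - n.bit_length()

def today_gmt():
    return datetime.now(timezone.utc).strftime("%Y-%m-%d")

def mine_daily_id():
    """Mine an ID bound to a fresh keypair and today's date; the digest is the visible poster ID."""
    key = SigningKey.generate()
    pubkey = bytes(key.verify_key)
    while True:
        nonce = os.urandom(8)
        digest = hashlib.sha256(today_gmt().encode() + pubkey + nonce).digest()
        if leading_zero_bits(digest) >= ID_DIFFICULTY_BITS:
            return key, pubkey, nonce, digest.hex()

def verify_post(pubkey, nonce, claimed_id, post_body, signature):
    digest = hashlib.sha256(today_gmt().encode() + pubkey + nonce).digest()
    if leading_zero_bits(digest) < ID_DIFFICULTY_BITS or digest.hex() != claimed_id:
        return False                                    # stale (wrong day) or never-mined ID
    try:
        VerifyKey(pubkey).verify(post_body, signature)  # proves possession of the secret key
        return True
    except BadSignatureError:
        return False

A post would then ship pubkey, nonce and key.sign(body).signature alongside the body; once the GMT date rolls over, the whole ID stops verifying, which forces the daily rotation described above.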
What about a realtime p2p chat that you seed like a torrent?
some-squinty-eyed-smoldering.webp
[Hide] (305.9KB, 2048x2984)
>>845 (OP) 
why did you spell it "Decentralyzed"
are you the jews?
A decentralized imageboard, unaffected by US federal authority for any reason, is desperately needed.

So far zzzchan and tvch have been good, but a foolproof site without any central authority is very much needed.
>>3047
Maybe randomly assign different users to be mods for a specific amount of time. That way it's spread out and one central authority (kikes) can't infiltrate and destroy it.
Replies: >>13289
>>13286
>That way it's spread out and one central authority (kikes) can't infiltrate and destroy.
They can just create billions of "users", and then how often control lands on someone other than them becomes insignificant.
Replies: >>13307
>>13289
Well, find a way to distinguish real users from demoralizers. Maybe create your own captcha system instead of relying on Cloudflare's.