/tech/ - Technology

Technology & Computing


huggingface.png
[Hide] (124KB, 1488x1374)
Discussion about "AI"s, deep learning, llms and others.
entropy.pdf
(357.7KB)
1655965439529.png
[Hide] (107.1KB, 1309x430)
So it turns out GTP is just a progression of a thought experiment Claude Shannon described in the 1950s.

Truly there is nothing new under the sun.

3. THE SERIES OF APPROXIMATIONS TO ENGLISH
To give a visual idea of how this series of processes approaches a language, typical sequences in the approximations to English have been constructed and are given below. In all cases we have assumed a 27-symbol “alphabet,” the 26 letters and a space.
1. Zero-order approximation (symbols independent and equiprobable).
XFOML RXKHRJFFJUJ ZLPWCFWKCYJ FFJEYVKCQSGHYD QPAAMKBZAACIBZLHJQD.
2. First-order approximation (symbols independent but with frequencies of English text).
OCRO HLI RGWR NMIELWIS EU LL NBNESEBYA THEEI ALHENHTTPA OOBTTVA NAH BRL.
3. Second-order approximation (digram structure as in English).
ON IE ANTSOUTINYS ARE T INCTORE ST BE S DEAMY ACHIN D ILONASIVE TUCOOWE AT TEASONARE FUSO TIZIN ANDY TOBE SEACE CTISBE.
4. Third-order approximation (trigram structure as in English).
IN NO IST LAT WHEY CRATICT FROURE BIRS GROCID PONDENOME OF DEMONSTURES OF THE REPTAGIN IS REGOACTIONA OF CRE.
5. First-order word approximation. Rather than continue with tetragram, ..., n-gram structure it is easier and better to jump at this point to word units. Here words are chosen independently but with their appropriate frequencies.
REPRESENTING AND SPEEDILY IS AN GOOD APT OR COME CAN DIFFERENT NATURAL HERE HE THE A IN CAME THE TO OF TO EXPERT GRAY COME TO FURNISHES THE LINE MESSAGE HAD BE THESE.
6. Second-order word approximation. The word transition probabilities are correct but no further structure is included.
THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED.
The resemblance to ordinary English text increases quite noticeably at each of the above steps. Note that these samples have reasonably good structure out to about twice the range that is taken into account in their construction. Thus in (3) the statistical process insures reasonable text for two-letter sequences, but four-letter sequences from the sample can usually be fitted into good sentences. In (6) sequences of four or more words can easily be placed in sentences without unusual or strained constructions. The particular sequence of ten words “attack on an English writer that the character of this” is not at all unreasonable. It appears then that a sufficiently complex stochastic process will give a satisfactory representation of a discrete source.
The first two samples were constructed by the use of a book of random numbers in conjunction with (for example 2) a table of letter frequencies. This method might have been continued for (3), (4) and (5), since digram, trigram and word frequency tables are available, but a simpler equivalent method was used.
To construct (3) for example, one opens a book at random and selects a letter at random on the page. This letter is recorded. The book is then opened to another page and one reads until this letter is encountered. The succeeding letter is then recorded. Turning to another page this second letter is searched for and the succeeding letter recorded, etc. A similar process was used for (4), (5) and (6). It would be interesting if further approximations could be constructed, but the labor involved becomes enormous at the next stage.
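For the curious, here is roughly what Shannon's book trick for sample (3) looks like as code today: build a digram table from any text file and sample letter by letter. Just a sketch, untested; the corpus path is a placeholder.
```
import random
from collections import defaultdict

ALPHABET = "abcdefghijklmnopqrstuvwxyz "   # Shannon's 27-symbol alphabet

def digram_counts(text):
    # count how often each letter follows each other letter
    counts = defaultdict(lambda: defaultdict(int))
    text = "".join(c for c in text.lower() if c in ALPHABET)
    for prev, cur in zip(text, text[1:]):
        counts[prev][cur] += 1
    return counts

def second_order_sample(counts, length=120):
    # pick each next letter with probability proportional to its digram count
    out = [random.choice(list(counts))]
    for _ in range(length - 1):
        nxt = counts[out[-1]]
        if not nxt:
            break
        letters, weights = zip(*nxt.items())
        out.append(random.choices(letters, weights=weights)[0])
    return "".join(out).upper()

corpus = open("corpus.txt").read()          # placeholder: any plain-text book
print(second_order_sample(digram_counts(corpus)))
```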
Replies: >>7756
>>7755
It's GPT.
There being other ways to achieve the same thing doesn't mean all ways are the same. You may as well extend the author's idea to all behaviors. There are neural networks that make two-legged objects "learn" to walk.
Replies: >>7757
>>7756
hit post before finishing.
After that, you might as well claim human intelligence is only what the author described.
the_human_race_is_inferior1.PNG
[Hide] (36.9KB, 747x418)
jews_are2.PNG
[Hide] (28.7KB, 762x428)
furfags_are_better3.PNG
[Hide] (41.7KB, 752x417)
Spoiler File
(2.8MB, 1536x1994)
Spoiler File
(31KB, 624x326)
sorry about that, feel free to move to /b/, but i think we need a catch-all thread about AI for now. i guess ill make a collaged finished version of these images later plus a link to the long dead dall-e thread on /b/
reminder gentlemen, we can't expect god to do all the work lol. inputting soyjak yields nothing but food shit, and ZETA symbols surprisingly work but not the word zoophile
>context pls (project name is femoidfurry)
someone on huggingface created a furfag model based off a woke faggot's tweets. the results weren't too surprising at all but very concerning to say the least, picrel. why do they hate us skinfags so much? (will make a post about this on fedschan later on)
once i get a new gayming system ill get to work vandalizing furfag art. i guess the only way is to fight fire with fire
I have recently started playing with llms.
It is crazy how bluepilled llama2-based models are. The model I was using is https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b .
>This model stands out for its long responses, lower hallucination rate, and absence of OpenAI censorship mechanisms.
But I asked it about jews, the holocaust, racism and women's rights. It is almost as if the Holocaust is hardcoded to have "overwhelming evidence" all the time. I asked about Jews controlling the world and it always deflects to anti-semitic, conspiracy and offensive. On discrimination, it says it leads to violence and other things.
Even if I provide evidence to the contrary, it always deflects. It is probably caused by highly controlled training data taken from only the most kosher of sources.
It's time to train my own model with image board data and banned books.
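If I actually go through with it, it'll probably be a LoRA finetune rather than training from scratch. Rough sketch of the shape of it with transformers + peft, not tested; dump.txt and the hyperparameters are placeholders:
```
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

base = "NousResearch/Nous-Hermes-Llama2-13b"
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", r=16,
                                         lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"]))

# dump.txt: one post per line, scraped from wherever
ds = load_dataset("text", data_files="dump.txt")["train"]
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=512),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments("out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=16, num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```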
>>11483
I mean
no model is going to use data it wasn't trained on
Replies: >>11487
Chatgpt plus is king.
Replies: >>11487
>>11484
Right, but I specifically tell it about my evidence. It is the same as nigger breakfast.
>What would you have felt if you didn't have breakfast yesterday?
<I had breakfast yesterday, I felt full afterwards.
>Yes, but what if you didn't?
<I don't understand, I had breakfast yesterday
But for kikes and holocaust.
>>11485
>closed source garbage
Replies: >>11488 >>11519
>>11487
Nothing performs better than chatgpt, regardless of your stupid takes on open source and such. For work and coding, nothing even comes close to chatgpt plus.
Replies: >>11490
>>11488
Yeah fuck off. I don't care how well it performs if I can't tweak it. I play with llms because I can and will modify my model to do whatever I want.
If you use them for coding, kill yourself immediately for being such a nigger that you need ai to do your job. Go back to discuck if you have a problem with open source.
lolmato.jpg
[Hide] (68.4KB, 900x810)
AI is just another bubble for fools to dump all their money into. Kinda like crypto. The funniest part is they both depend on botnet tier hardware in order to exist.
>>11500
llama can run on cpu alone. It takes some disk space, but you can try it out. While many companies are dumping money on the buzzword train, ai can be useful in some cases.
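For example with llama-cpp-python; minimal CPU-only sketch, the model path and thread count are placeholders for whatever quantized gguf you have lying around:
```
from llama_cpp import Llama

# CPU-only run; no GPU needed, just RAM and patience
llm = Llama(model_path="models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048, n_threads=8)
out = llm("Q: What did Shannon mean by approximations to English?\nA:",
          max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```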
Replies: >>13923
>>11500
speaking like a complete ignoramus, i assume
>>11500
"AI" is algorithms that generate algorithms.

It may have become the hottest buzzword of the decade but it has valid uses, like those "AI" scaling algorithms for mpv that beat all the classical ones.
Replies: >>11514
terry-ad.jpg
[Hide] (16.2KB, 255x198)
>>11509
Why though, I just use a simple 2x or 3x scaler to play old games on my 32-bit ARM. And I can watch 640p videos without scaling at all, since I boot the system to 640x480 video mode.
Replies: >>11515
>>11514
I mean 480p
Ugh, I don't even like using this -p nomenclature.
>>11482 (OP) 
You won't find more conversation about these topics on the webring than /robowaifu/, OP.

>>11487
No, it absolutely is cucked by the kikes Anon, just as you suspect. And yes you'd need to train your own model to avoid the Globohomo doublethink and newspeak. The issue is the hardware requirements for training are formidable. We've been thinking about ways to distribute the training loads across Anon's computers, similar to Folding@home's approaches.
Replies: >>11522
>>11519
>thinking about ways to distribute
You can do it with https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node already.
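Training-side it roughly looks like the sketch below (untested, toy model and config values just to show the shape); the multi-node part is the deepspeed launcher plus a hostfile listing each box, as described at that link.
```
import torch
import deepspeed

class ToyModel(torch.nn.Module):            # stand-in for the real network
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(128, 1)
    def forward(self, x, y):
        return torch.nn.functional.mse_loss(self.net(x), y)

ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {"stage": 3},      # shard params/grads/optimizer state across nodes
}

model = ToyModel()
data = torch.utils.data.TensorDataset(torch.randn(1024, 128), torch.randn(1024, 1))
engine, _, loader, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(),
    training_data=data, config=ds_config)

for x, y in loader:
    loss = engine(x.to(engine.device), y.to(engine.device))
    engine.backward(loss)
    engine.step()
```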
Replies: >>11523
>>11522
Thanks for the advice, Anon.
>>11483
AIs think in completely different ways than humans.
Replies: >>11549
>>11538
tell us more
is this ai in the room with us now
Too fucking overhyped, it is fucking shit. It is only good for lazy people who don't want to code and for generating garbage viral ai tiktok videos for the internet. Not smart enough.
>>11483
Never ask these things about anything you wouldn't want your questioning of to become public knowledge. It doesn't matter what the truth value of the answer is.
Replies: >>12115
>>12107
Thanks but I am running it locally
i found a duplicate https://zzzchan.xyz/tech/thread/7755.html jannies kindly move/merge it here
How can I use AI to create a continuously mutating encryption algorithm to stay ahead of glowies?
image.png
[Hide] (1.2MB, 1024x1024)
why are some of my prompts in AI producing wild results despite coming from another AI's recommendation
Replies: >>13807 >>13808
>>13806
Can't help you unless you tell me your
>backend
>model
>sampler config
>prompt template
Replies: >>13912
>>13806
>1024x1024
let me guess, DALL-E 3?
Replies: >>13912
>>13807
i got no template.
>model
sdxl? juggernaut? realvis? my prompts so far are flexible as long as it states what to do
>backend
>sampler config
i have no idea what these are.
Zero? Karras?
>>13808
uh... flux? aura? i forgot. practically a pure digital painting lookalike.
Replies: >>13920
screenshot.png
[Hide] (410.7KB, 1236x966)
>>13912
>backend
Do you use locally hosted models on cpu, cuda or something else? If so, which software?
>sampler config
Bottom left section of picrel
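For reference, in diffusers terms "backend" and "sampler config" boil down to roughly this (sketch, untested; the model id is the stock SDXL base, the prompt and settings are made up):
```
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16)
pipe.to("cuda")                                   # backend: cuda vs cpu

# sampler config: DPM++ with Karras sigmas (the "Karras" mentioned above)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True)

img = pipe("pixel art portrait, orange sweater",
           num_inference_steps=28, guidance_scale=6.0).images[0]
img.save("out.png")
```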
>>11501
Yeah but I would like to have better performance + the ability to run generative image/voice/etc.  without selling a kidney for an RTX Goyvidia GPU. Plus most of this shit is built using CUDA which is proprietary and exclusive to the aforementioned cards.
Unfortunately AMD prefers to keep a thumb in its arse and piss on projects like ZLUDA that would actually make its cards competitive, let alone viable as an option for AI stuff. Hopefully some company winds up releasing a dedicated accelerator (NPU I guess) that you can run models well on for a non-organ-tier price.
AI is a fucking meme
i just like making pretty pictures
Replies: >>14205
Bing is pretty nice but limited uses
Davinci also has a nice free tier but its below Bing

what free AI services do you like to toy with
Replies: >>14205
portrait__pixel_art_style__pixels__pixel_art__sexy_girl_wearing_puffy_orange_sweater_and_pleated_red_skirt__blue_lens_nerdy-59774a86-9dd6-45f3-8849-28cdf4dfeafa.png
[Hide] (123.3KB, 512x512)
portrait__pixel_art_style__pixels__pixel_art__sexy_girl_wearing_puffy_orange_sweater_and_pleated_red_skirt__blue_lens_nerdy-744b27bd-72ee-4b77-9e50-d941e3ed7ec4.png
[Hide] (119.3KB, 896x512)
>>13997
same
>>14201
i tried Bing and got bored of it really quickly. I experimented with it for a while but realized it had the same amount of restrictions as DALL-E 2, if not way more, so I just gave up on it and went back to Stable Diffusion, which is what i was using before DALL-E 3 and I keep using it to this day.
as of right now the best free AI image gen websites for me are the huggingface demos of the FLUX models, Dev, Schnell and Merged.
but there's also 
https://fastflux.ai
there's little to no restriction and you can use it non-stop.
the main disadvantage of fastflux is that you have to resize the window you are running it in to get different sizes:
average window size = landscape image
smallest vertical window = square image
all images are also pretty small: 512x512 is the average square, 896x512 the average landscape image.
the last disadvantage is that downloaded images can only be saved as .webp
https://web.archive.org/web/20250130000238/https://old.reddit.com/r/LocalLLaMA/comments/1ibh9lr/why_deepseek_v3_is_considered_opensource/?rdt=35996

So something that's "open source" has been making the rounds among all the youtube talking heads (either glazing it or acting like it's so awesome), and the biggest selling point I see is that the chink AI cost $6 million while OpenAI gets billions and venture capital wonders what the fuck they are even doing with it.

Chink-glazers have been going out in full force too. Is it even that much better, or is it just the same as all the other shitty LLMs?
Replies: >>15308
>>15307
>reddit
It just performs around the same level as gpt with lower cost and fewer parameters. The reason it's 'better' is that they imagine what would happen once they increase the parameters and scale up to the same cost as gpt.
They are all still shit and I haven't been convinced otherwise yet.
Also stop using jewtube
Replies: >>15408
anyone running models or LLMs locally ?
how was your experience
Replies: >>15401 >>15410
>>15397
I am. It was pretty good for a while until the slop got to me. Turning up the XTC makes the model unbearably stupid.
>>15308
VoxNovel is nice for turning ancient text files into listenable audiobooks. A little bit on the janky side, but most AI programs are.

ComfyUI is neat in concept, but I'm still trying to figure out how to make it work in practice and use the things I want to use.

ComfyUI's addon community really seems to attract the worst of the worst when it comes to program maintenance: not only is there so much shit, there's so much shit that doesn't work, and there are no resources to figure out a fix. The number of yarn-ball tangles I've had to unravel thanks to pip and the python ecosystem just to get something working is annoying. On top of this, there's shit like Conda deciding "Oh hey, I wanna start the base environment every time you open a terminal, aren't I nice?" and it ends up breaking shit.

Is stable diffusion still cucked and anti-porn? 

Also, it seems that civitai now sets NSFW models to login-only download by default, even though it says "the maintainer wants this to be only downloaded by people logged in" or some shit like that.
Replies: >>15480
>>15397
Local llms have gotten pretty good lately at running on cpu alone. People have been downsizing 7b models to 2b while retaining nice quants; it's frankly amazing, real democratization of AI.
Generation times aren't terrible either, and there's WAY LESS looping than with corpo models.
I'll take waiting longer for a good reply over getting an instant slop reply with the cuck quotas, queues and looping that big corpo models tend to give you.
Replies: >>15489
How do chat bot clients maintain the you/me distinction? by which I mean, how do they keep the transformer from (correctly) predicting that the most probable next token after the end of "its" message is the beginning of "mine" and generating both sides of a conversation?
Replies: >>15480
>>15408
>cucked
Not when ran locally, speaking from experience
>>15476
The llm will end its turn by generating something like "\n{{user}}:", which the client detects as a stop sequence and halts generation.
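Toy version of what the client side does; just a sketch, the exact stop strings depend on the prompt template:
```
STOP = ["\n{{user}}:", "\nUser:"]            # template-dependent stop strings

def generate_reply(stream):
    # stream yields text chunks from the llm; cut off as soon as the model
    # starts writing the user's side of the conversation
    out = ""
    for chunk in stream:
        out += chunk
        for s in STOP:
            if s in out:
                return out.split(s)[0]
    return out

# example: the model tries to continue as the user, the client trims it
print(generate_reply(iter(["Sure, here", " you go.", "\n{{user}}: thanks"])))
```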
>>15410
tested Deepseek and Qwen with GPT4all on a work laptop (CPU only), it was really decent
Replies: >>15490
>>15489
Which model specifically did you use?
I've been using Huggingchat. I like it but I would prefer something I can use locally and through an API. Their service requires an account and has spy features.
Replies: >>15575 >>15586
>>15570
nigger, your cuckapi also has spy features
Replies: >>15586 >>15592
>>15570
>>15575
koboldcpp, open source, no spying
>>15575
Hugging only requires an email. You can use a burner. You can call yourself Lucifer Niggerbastard. If you use enough proxies, it's relatively private as long as you don't post anything identifiable.
fuuuuuck.webm
[Hide] (241.4KB, 1920x1080, 00:01)
>>11482 (OP) 
I decided to download the DeepSeek-R1 weights - a bit under 700GB, which I figured should fit on my HDD with 1TB capacity left.
Turns out git-lfs makes a copy of every file - one in the root and one under .git/lfs/objects - meaning I need 2x the capacity. What fucking retardation is this?
It looks like I should be able to safely delete whatever is under .git/lfs/objects. I'm running b3sum to make sure I'll only delete the files that have been downloaded already. Hopefully deleting those files during a clone won't break shit and force me to start over.

Actually scratch that, b3sum takes way too fucking long. I'll delete based on modification date and hope for the best.
Replies: >>15606 >>15610
>>15605
git lfs sucks. Just download directly
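e.g. with huggingface_hub, which stores each file once and (as far as I can tell) resumes interrupted downloads instead of starting over from 0B. Sketch; local_dir and max_workers are just what I'd use:
```
from huggingface_hub import snapshot_download

# grabs every file in the repo into local_dir, skipping ones already finished
snapshot_download(repo_id="deepseek-ai/DeepSeek-R1",
                  local_dir="DeepSeek-R1",
                  max_workers=4)
```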
som.jpg
[Hide] (214.2KB, 1154x800)
>>11482 (OP) 
Let me get the smart-aleck response out of the way: I know there's already an AI in-game. That doesn't count for what I'm talking about.

It's been my dream since I was a kid to have someone to play Secret of Mana with: the whole goddam game start to finish. Either ROM or SNES copy. I think I've given up on finding someone, so I'm wondering about the possibility of coding up an LLM AI. One that's able to access the start menu, switch between the spare player, and who I can somehow 'coordinate'/'talk' with for the teamwork required in-game.

...I have no idea where tf to even start for such a project, but I'd appreciate any thoughts. What are the current known "plays-a-game" AIs out there?
>>15607
anon... get a wife.  women love jrpgs
Replies: >>15613
>>15607
>...I have no idea where tf to even start for such a project, but I'd appreciate any thoughts.
Start by writing a client that can send commands to the game. Then make a table that has a list of commands and text prompts that match them. Send queries to the chatbot telling it to match a command in the table. Create functions that provide the status of the game to the chatbot. If that's too hard, manually ask the chatbot for instructions and input the commands.
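Crude skeleton of that loop; everything emulator- and model-specific is stubbed out as placeholders:
```
# send_input() and ask_llm() are hypothetical stubs for the emulator hookup
# and whatever chat model gets used
COMMANDS = {
    "attack":      "press B",
    "open_menu":   "press X",
    "switch_char": "press SELECT",
    "move_north":  "hold UP",
}

def ask_llm(prompt: str) -> str:
    raise NotImplementedError            # call your chatbot/API here

def send_input(button: str) -> None:
    raise NotImplementedError            # forward to the emulator/controller here

def step(game_state: str) -> None:
    prompt = (f"Game state: {game_state}\n"
              f"Pick exactly one command from {list(COMMANDS)} and reply "
              f"with only that word.")
    choice = ask_llm(prompt).strip().lower()
    send_input(COMMANDS.get(choice, "press B"))   # fall back to attack if it rambles
```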
Replies: >>15614
>>15605
>git
Is also a one-shot download.
If your internet cuts out in the middle, you have to restart the download from 0B.
If you shut off your computer, you have to restart the download from 0B.
Absolute garbage. git could easily (a retard like me has done this for other things) create a manifest of files with a version number, then edit the manifest as it downloads to reflect completed downloads.
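Something like this (hypothetical sketch; download() stands in for whatever actually fetches a file):
```
import json, os

MANIFEST = "manifest.json"   # hypothetical format: {"version": 1, "files": {name: done?}}

def load_manifest(names):
    if os.path.exists(MANIFEST):
        with open(MANIFEST) as f:
            return json.load(f)
    return {"version": 1, "files": {n: False for n in names}}

def mark_done(manifest, name):
    manifest["files"][name] = True
    with open(MANIFEST, "w") as f:
        json.dump(manifest, f)

def fetch_all(names, download):
    m = load_manifest(names)
    for name, done in m["files"].items():
        if not done:
            download(name)
            mark_done(m, name)   # survives a crash: next run skips finished files
```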
Replies: >>15611
>>15610
git lfs and git are completely different projects
Replies: >>15635
>>15607
Just saying but it shouldn't be hard to set up netplay on an emulator and find someone on dicksword or whatever to play with.
>>15608
Women only care about children once they go baby crazy, and half of all marriages end in divorce, wherein you end up paying child support and she takes not only half your assets but your kids as well; you only get to play games with the children if you bend the knee to her constantly.
>>15609
Thank you for answering the question.
>>15607
Anon, you know you can make a small-scale Gamenight thred over on /v/?

t. did just that for Trials of Mana/聖剣伝説3 3 years ago and played through the entire game with 2 other anons via mednafen
Replies: >>15630
>>15627
>t. did just that for Trials of Mana/聖剣伝説3 3 years ago and played through the entire game with 2 other anons via mednafen
If you aren't lying, I hope you treasure that memory. To find two people like that through a random /v/ thread is so precious. Especially two people that would actually play through it with you the entire way and don't do that "I'm not letting you move the screen, hahaha, isn't this funny?" thing half the time... you got very lucky and found two very kind souls that I hope you keep in your heart.
Replies: >>15639
1oz.jpg
[Hide] (143.7KB, 1218x624)
mfw someone made an AI rap song about the fellow that incessantly spams /pmg/ on 4chan
https://vocaroo.com/1fVQsUSqbptA
>>15611
my bad
Spoiler File
(1.7MB, 2200x3200)
>>15630
One of those two souls happened to be a drawfag who drew picrels for a different gamenight. sleepy/v/ on average has ~10 anons per gamenight, so the 2 anons who played with me are probably still around; the drawfag definitely is, considering he uses the same username when playing on sleepy serbs.
You just have to make bread and see what happens. I would play with (You), no homo.