/tech/ - Technology

Technology & Computing




huggingface.png
Discussion about "AI", deep learning, LLMs, and related topics.
Replies: >>11519
entropy.pdf
1655965439529.png
So it turns out GTP is just a progression of a thought experiment stated by Claude Shannon in the 1950s.

Truly there is nothing new under the sun.

3. THE SERIES OF APPROXIMATIONS TO ENGLISH
To give a visual idea of how this series of processes approaches a language, typical sequences in the approximations to English have been constructed and are given below. In all cases we have assumed a 27-symbol “alphabet,” the 26 letters and a space.
1. Zero-order approximation (symbols independent and equiprobable).
XFOML RXKHRJFFJUJ ZLPWCFWKCYJ FFJEYVKCQSGHYD QPAAMKBZAACIBZLHJQD.
2. First-order approximation (symbols independent but with frequencies of English text).
OCRO HLI RGWR NMIELWIS EU LL NBNESEBYA THEEI ALHENHTTPA OOBTTVA NAH BRL.
3. Second-order approximation (digram structure as in English).
ON IE ANTSOUTINYS ARE T INCTORE ST BE S DEAMY ACHIN D ILONASIVE TUCOOWE AT TEASONARE FUSO TIZIN ANDY TOBE SEACE CTISBE.
4. Third-order approximation (trigram structure as in English).
IN NO IST LAT WHEY CRATICT FROURE BIRS GROCID PONDENOME OF DEMONSTURES OF THE REPTAGIN IS REGOACTIONA OF CRE.
5. First-order word approximation. Rather than continue with tetragram, …, n-gram structure it is easier and better to jump at this point to word units. Here words are chosen independently but with their appropriate frequencies.
REPRESENTING AND SPEEDILY IS AN GOOD APT OR COME CAN DIFFERENT NATURAL HERE HE THE A IN CAME THE TO OF TO EXPERT GRAY COME TO FURNISHES THE LINE MESSAGE HAD BE THESE.
6. Second-order word approximation. The word transition probabilities are correct but no further structure is included.
THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED.
The resemblance to ordinary English text increases quite noticeably at each of the above steps. Note that these samples have reasonably good structure out to about twice the range that is taken into account in their construction. Thus in (3) the statistical process insures reasonable text for two-letter sequences, but four-letter sequences from the sample can usually be fitted into good sentences. In (6) sequences of four or more words can easily be placed in sentences without unusual or strained constructions. The particular sequence of ten words “attack on an English writer that the character of this” is not at all unreasonable. It appears then that a sufficiently complex stochastic process will give a satisfactory representation of a discrete source.
The first two samples were constructed by the use of a book of random numbers in conjunction with (for example 2) a table of letter frequencies. This method might have been continued for (3), (4) and (5), since digram, trigram and word frequency tables are available, but a simpler equivalent method was used. To construct (3) for example, one opens a book at random and selects a letter at random on the page. This letter is recorded. The book is then opened to another page and one reads until this letter is encountered. The succeeding letter is then recorded. Turning to another page this second letter is searched for and the succeeding letter recorded, etc. A similar process was used for (4), (5) and (6). It would be interesting if further approximations could be constructed, but the labor involved becomes enormous at the next stage.
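Shannon's book-sampling trick is trivial to reproduce on a computer. Here is a minimal Python sketch of the second-order (digram) approximation from sample (3): it tallies two-letter transition counts over the 27-symbol alphabet from any source text, then samples each next letter conditioned on the previous one. The corpus path and names are placeholders, not anything from the paper.

```
import random
import string
from collections import Counter, defaultdict

ALPHABET = string.ascii_uppercase + " "  # Shannon's 27-symbol alphabet

def digram_counts(text):
    """Count two-letter transitions over the 27-symbol alphabet."""
    # Normalize: uppercase, collapse anything non-alphabetic to a space.
    cleaned = "".join(c if c in ALPHABET else " " for c in text.upper())
    counts = defaultdict(Counter)
    for a, b in zip(cleaned, cleaned[1:]):
        counts[a][b] += 1
    return counts

def second_order_sample(counts, length=120):
    """Generate text whose digram statistics match the source."""
    out = [random.choice(list(counts))]
    for _ in range(length - 1):
        options = counts[out[-1]]
        if not options:  # dead end: restart from a random symbol
            out.append(random.choice(list(counts)))
            continue
        letters, weights = zip(*options.items())
        out.append(random.choices(letters, weights=weights)[0])
    return "".join(out)

if __name__ == "__main__":
    corpus = open("corpus.txt").read()  # placeholder: any English text
    print(second_order_sample(digram_counts(corpus)))
```

The same loop with word tokens instead of letters gives samples (5) and (6); only the tokenization changes.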
Replies: >>7756
>>7755
It's GPT.
There being other ways to achieve the same thing doesn't mean all ways are the same. You may as well extend the author's idea into all behaviors. There are neural networks that make two-legged objects "learn" to walk.
Replies: >>7757
>>7756
Hit post before finishing.
After that, you claim human intelligence is only what the author said.
the_human_race_is_inferior1.PNG
jews_are2.PNG
furfags_are_better3.PNG
Spoiler File
Spoiler File
Sorry about that, feel free to move to /b/, but I think we need a catch-all thread about AI for now. I guess I'll make a collaged, finished version of these images later, plus a link to the long-dead dall-e thread on /b/.
Reminder, gentlemen: we can't expect god to do all the work lol. Inputting soyjak yields nothing but food shit, and ZETA symbols surprisingly work, but not the word zoophile.
>context pls (project name is femoidfurry)
Someone on huggingface created a furfag model based off a woke faggot's tweets. The results weren't too surprising at all, but very concerning to say the least, picrel. Why do they hate us skinfags so much? (Will make a post about this on fedschan later on.)
Once I get a new gayming system I'll get to work vandalizing furfag art. I guess the only way is to fight fire with fire.
I have recently started playing with LLMs.
It is crazy how bluepilled llama2-based models are. The model I was using is https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b .
>This model stands out for its long responses, lower hallucination rate, and absence of OpenAI censorship mechanisms.
But I asked it about jews, the Holocaust, racism and women's rights. It is almost as if the Holocaust is hardcoded to have "overwhelming evidence" all the time. I asked about Jews controlling the world and it always deflects to "anti-semitic", "conspiracy" and "offensive". On discrimination, it says it leads to violence and other things.
Even if I provide evidence to the contrary, it always deflects. It is probably caused by highly controlled training data taken from only the most kosher of sources.
It's time to train my own model with image board data and banned books.
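For anyone who wants to poke at the same model locally, here is a minimal sketch using the standard Hugging Face transformers API. The prompt format and generation parameters are assumptions for illustration, not the model card's exact recipe.

```
# Minimal local-inference sketch for the model linked above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Nous-Hermes-Llama2-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",    # spread layers across available GPUs/CPU
    torch_dtype="auto",   # use the checkpoint's native precision
)

# Alpaca-style prompt; exact template is an assumption here.
prompt = "### Instruction:\nSummarize Shannon's n-gram experiment.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```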
>>11483
I mean
no model is going to use data it wasn't trained on
Replies: >>11487
ChatGPT Plus is king.
Replies: >>11487
>>11484
Right, but I specifically tell it about my evidence. It is the same as nigger breakfast.
>What would you have felt if you didn't have breakfast yesterday?
<I had breakfast yesterday, I felt full afterwards.
>Yes, but what if you didn't?
<I don't understand, I had breakfast yesterday
But for kikes and the Holocaust.
>>11485
>closed source garbage
Replies: >>11488 >>11519
>>11487
Nothing performs better than ChatGPT, regardless of your stupidity about open source and such. For work and coding, nothing even comes close to ChatGPT Plus.
Replies: >>11490
>>11488
Yeah, fuck off. I don't care how well it performs if I can't tweak it. I play with LLMs because I can and will modify my model to do whatever I want.
If you use them for coding, kill yourself immediately for being such a nigger as to get AI to do your job. Go back to discuck if you have problems with open source.
lolmato.jpg
AI is just another bubble for fools to dump all their money into, kinda like crypto. The funniest part is they both depend on botnet-tier hardware in order to exist.
>>11500
Llama can run on CPU alone. It takes some disk space, but you can try it out; see the sketch below. While many companies are dumping money into the buzzword train, AI can be useful in some cases.
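A minimal CPU-only sketch using the llama-cpp-python bindings; the model path and generation parameters are placeholders, and any GGUF-quantized checkpoint works.

```
# CPU-only inference via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-13b-chat.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,    # context window
    n_threads=8,   # CPU threads to use
)

result = llm("Q: What is a digram? A:", max_tokens=64, stop=["Q:"])
print(result["choices"][0]["text"])
```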
>>11500
Speaking like a complete ignoramus, I assume.
>>11500
"AI" is algorithms that generate algorithms.

It may have become the hottest buzzword of the decade, but it has valid uses, like those "AI" scaling algorithms for mpv that beat all the classical ones. For example:
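mpv can load a neural-network upscaler as a user shader. A minimal mpv.conf sketch, assuming you have downloaded one of the community FSRCNNX shader releases (the path and filename are placeholders):

```
# mpv.conf -- illustrative; shader path/filename are placeholders
glsl-shaders="~~/shaders/FSRCNNX_x2_8-0-4-1.glsl"   # NN-based luma upscaler
scale=ewa_lanczossharp                              # classical scaler for other passes
```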
Replies: >>11514
terry-ad.jpg
>>11509
Why though? I just use a simple 2x or 3x scaler to play old games on my 32-bit ARM. And I can watch 640p videos without scaling at all, since I boot the system to 640x480 video mode.
Replies: >>11515
>>11514
I mean 480p
Ugh, I don't even like using this -p nomenclature.
>>11482 (OP) 
You won't find more conversation about these topics anywhere on the webring than on /robowaifu/, OP.

>>11487
No, it absolutely is cucked by the kikes, Anon, just as you suspect. And yes, you'd need to train your own model to avoid the Globohomo doublethink and newspeak. The issue is that the hardware requirements for training are formidable. We've been thinking about ways to distribute the training load across Anons' computers, similar to Folding@home's approach.
Replies: >>11522
>>11519
>thinking about ways to distribute
You can do it with https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node already.
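The resource-configuration part of that page boils down to an MPI-style hostfile plus the deepspeed launcher. A minimal sketch; hostnames, slot counts, and the training script are placeholders:

```
# hostfile -- one line per node: hostname and its GPU slot count (placeholders)
worker-1 slots=4
worker-2 slots=4

# launch across all listed nodes; train.py and ds_config.json are placeholders
deepspeed --hostfile=hostfile train.py --deepspeed --deepspeed_config ds_config.json
```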
Replies: >>11523
>>11522
Thanks for the advice, Anon.
>>11483
AIs think in completely different ways than humans.
Replies: >>11549
>>11538
tell us more
is this ai in the room with us now
Too fucking overhyped; it is fucking shit. It is only good for lazy people who don't want to code and for generating garbage viral AI TikTok videos on the internet. Not smart enough.
>>11483
Never ask these things about anything where you wouldn't want the fact that you asked to become public knowledge. It doesn't matter what the truth value of the answer is.
Replies: >>12115
>>12107
Thanks, but I am running it locally.
I found a duplicate: https://zzzchan.xyz/tech/thread/7755.html. Jannies, kindly move/merge it here.
How can I use AI to create a continuously mutating encryption algorithm to stay ahead of glowies?