/tech/ - Technology

Technology & Computing




Nested-Virtualization-Experiment-How-Deep-can-we-Go-VirtualBox-VMware-41.png
[Hide] (81.3KB, 600x589)
I'd previously assumed that Electron-esque garbage like Snap and Flatpak were just a fad confined to lazy commercial software, but along with a slow general decline in community packager activity, I've recently noticed more and more dev projects like GIMP and Handbrake abandoning official Linux builds in distro-native package formats. Reading a bit about it, the underlying tools and standards for packaging appear to be in general decay, and I was surprised to see some distros like Ubuntu and Fedora making noises about completely abandoning their package managers at some (usually vague) point in the future!

Throughout the span of modern Linux distros, before you had to resort to manually installing every single version of a piece of software, there were pretty much always builds of whatever you needed available from either the developers themselves or some helpful person's PPA, as an alternative to waiting for the distro's repo to update from (sometimes painfully outdated) stable versions. Without that, Linux will become much less convenient to use at best, far more bloated and broken at worst.

It has been suggested by some, such as this article:
https://ludocode.com/blog/flatpak-is-not-the-future
that the main problem which allowed such moronic software to gain momentum (aside from security flimflam exaggerating its sandbox capabilities) was Linux's notoriously unstable ABI, a problem that is gradually fixing itself. But I worry the only real solution to lazy devs would be package managers that cleanly install multiple versions of the same dep, and an explicit way for packages to specify minimum/maximum versions for dep compatibility.
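For what it's worth, the Nix/Guix family already demonstrates the multiple-versions half of that fix: every package lives under a hash-keyed store path, so versions never collide. A minimal sketch, assuming a working Nix install (the store hashes below are made up for illustration):

nix-env -iA nixpkgs.openssl
ls /nix/store | grep -- '-openssl-'
# 8f2a...-openssl-1.1.1w   <- old version some binary still links against
# 91xb...-openssl-3.0.13   <- current version, coexisting side by side
# each binary's RUNPATH points at the exact store path it was built
# against, so min/max dep versions are enforced by construction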

I'm also unsure as to how much inertia there is behind this. In the case of Ubuntu, for instance, is this something that's definitely going to be forced down all the other big forks' (e.g.: Xubuntu, Lubuntu, Kubuntu, etc.) throats? Or is this something that will probably remain opt-out beyond Ubuntu Desktop? I'm also unsure of how strong awareness or backlash against it is, as in the case of Mint balking at Ubuntu upstream's silent replacement of their web browser packages:
https://archive.ph/hpMug

I realize this is a very Linux-specific problem, moreover one of less immediate severity for rolling distros, but is the heart of Linux's ecosystem dying? Or am I blowing this out of proportion?
>>4739 (OP) 
It's not the fault of Linux; the fault is with developers who can't code and pile dependencies on top of dependencies on top of dependencies.
Replies: >>4746
The issue here is two-fold.
First, nudevs jury-rigging their builds.
Second, the death spiral Linux has been in for the last 12 or so years.

The solution is the same as always. Don't use shit distros and cancer langs. And probably don't use Linux in general.

>unstable ABI
Don't use proprietary software. Fixed.
Replies: >>4746 >>4747 >>4773
Install Gentoo. Source-based distributions avoid this problem by design. There is definitely a long trend of windowsization and complification of software, but once again source-based distributions are the most effective way to fight this (unless you want to fork the universe) because they let you patch retarded shit and dependencies out easily.
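To make the "patch retarded shit out" part concrete, here's a minimal sketch of Portage's user patch mechanism (the package and patch file are just examples):

# Portage auto-applies anything under /etc/portage/patches/<category>/<package>/
mkdir -p /etc/portage/patches/media-gfx/gimp
cp remove-misfeature.patch /etc/portage/patches/media-gfx/gimp/
emerge --oneshot media-gfx/gimp   # patch gets applied during src_prepare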
Replies: >>4745 >>4746 >>4765
>>4744
Chances are any software that takes too long to compile on a typical home computer isn't worth using, too. A source-based distro shows you which software fails that test.
Replies: >>4755 >>4773
works-on-my-machine-starburst_2.png
[Hide] (236.3KB, 497x480)
>>4740
Excess dependencies (especially unused, spurious ones in very sloppy software) are certainly a problem, but even something simple enough to have few or no dependencies still must be packaged and tested as a deb/RPM/ebuild/etc.

Agreed that dep bloat has been getting steadily worse, even beyond Linux.

>>4741
>Don't use shit distros and cancer langs
The entire appeal of major Linux distros is that they're popular enough most critical functions (especially drivers!) work out-of-the-box, and most anything else can be installed or updated with minimal effort. Otherwise...
>don't use Linux in general
If it gets bad enough, sure, people will start jumping ship to the "write your own mobo drivers" Paleozoic Linux experience that less popular OSs like BSD, and unfinished meme OSs like Haiku, remain in to this day.
>Don't use proprietary software. Fixed.
Unstable ABI still causes tons of unnecessary bugs and QA burden even in open source software. Basically every other OS has a stable ABI, including other open ones like BSD.

>>4744
If I have to manually unfuck each update of each package to make it install and run properly, because the packager's default config wasn't good enough, that obviates the entire benefit of a package manager.
Replies: >>4747 >>4755
>>4746
> Basically every other OS has a stable ABI, including other open ones like BSD.
>>4741 is correct: the only reason you need a stable ABI is to support proprietary crapware that you can't just recompile every update. Since everything on Linux is assumed to be open source, it doesn't matter.
Replies: >>4762 >>4773
>>4745
Unfortunately not 100% true, but it's a very good heuristic in general. You may say that "I never had to care about this shit on other distros!!!1", but you always had to care and you always suffered the effects of it. You just didn't look. Just like software with security bugs was always insecure and didn't become insecure with the publication of an exploit.

>>4746
>If I have to manually unfuck each update of each package to make it install and run properly, because the packager's default config wasn't good enough, that obviates the entire benefit of a package manager.
Removing a feature you don't like is much easier than preparing everything related to the installation. The distro maintainers handle the second part. Especially on Gentoo, USE flags already do a lot of this work for you and it has a user patch system that often lets you reuse patches from old versions.
Replies: >>4758 >>4760
>>4755
>Removing a feature you don't like
That's something users should actually be expected to do (though as conveniently as possible) since it's a matter of personal taste. I'm thinking more objective "the package's default config is broken on your install" or "the default config bricks your install" issues. Granted, in theory that's not a source vs. binary distro thing, but a rolling vs. release distro thing. Though I can't think of any source-based distros that aren't rolling. 
>preparing everything related to the installation
./configure && make && make install isn't really that much harder. The most valuable thing source package managers add is automatic install of missing deps, which Autotools/pkg-config really ought to be capable of themselves (indeed, there have been some attempts at fixing that, such as auto-apt).
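For the Debian family, the closest thing that actually ships today is build-dep, which installs whatever build dependencies the packagers already declared; a sketch (assumes deb-src lines are enabled in sources.list):

apt-get source handbrake      # fetch upstream source plus distro patches
apt-get build-dep handbrake   # pull in everything needed to compile it
# auto-apt went further, watching configure's failed file lookups and
# offering to install the owning package on the fly, but it's long abandoned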
>The distro maintainers handle the second part
No, the job of distro maintainers isn't packaging, it's to test packages against each other, ensuring that (at least under default settings) you can reliably install and update stuff without breakage.

That's why the biggest fault point for "containerized" software is backports of bleeding-edge software (or, admittedly, "forwardports" of stale proprietary software) outside the stable(r) mainline repos of any given distro.
Replies: >>8504
>>4755
>Unfortunately not 100% true
I am calling your larp now. Explain what an ABI is and why you think it is so important, without copy-pasting from google
Replies: >>4761
>>4760
Other anon's point was obviously that some good software takes a long time to compile, even if most big software is useless bloat. IMHO obsession with build times for its own sake is deranged by definition, user compilation except when absolutely necessary is LARPing, and user time is overwhelmingly more valuable than dev time except for the most specialized bespoke applications.
Replies: >>4762 >>4763
>>4761
I thought you were responding to >>4747
I guess you had the good sense to ignore it.
>>4761
>Other anon's point was obviously that some good software takes a long time to compile, even if most big software is useless bloat. 
Correct.
>user compilation except when absolutely necessary is LARPing, and user time is overwhelmingly more valuable than dev time except for the most specialized bespoke applications.
Long build times make development harder and are usually a symptom of something going seriously wrong. The users pay for that already, even if they are unaware of it, through degraded software quality.
Replies: >>4764
>>4763
>Long build times make development harder
Only if everything is designed as a giant statically linked ball of mud that needs to be recompiled all at once
>are usually a symptom of something going seriously wrong
Though yeah that is true
Replies: >>4773
Even Debian is better than Ubuntu/Fedora.

>>4744
>Install Gentoo
This. Gentoo is the sanest GNU/Linux distro.
Also, see >>932
>snap
>flatpak
irrelevant noise. never used them. they will die like all other memes
a92335ef42e5dd49955e8f1831b78c489e645a6b471295539270a554f5e283c3.png
[Hide] (27.8KB, 381x399)
>snap
gay shit that's dependent on systemdicks and never even worked when i tried it anyways. you'd click "install" and nothing would happen, no error, nothing.
>flatpak
has the advantage of actually working since it's all packaged up with what it needs to run but doesn't have good integration with the system. good for if you can't find it from another source but that's about it.
>appimage
annoying as fuck, very little system integration unless you use appimagelauncher. devs rambling about how it's the future are retarded and should use a proper distribution method. (basic usage sketched at the end of this post)
>standard package managers
don't know why people think we should move away from these, they just work. people are just too lazy to run a repo i guess.
>building from source
most tedious to do if something goes wrong and nobody else has the same error. requires a lot of other bullshit to be installed to compile it but you can take pride in knowing that you built that shit from scratch muthafucka.
>a single executable/ELF/bash script you download in a tarball with a bunch of SOs
shiggy diggy
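For reference, since half this post is about them, the entire AppImage "distribution method" amounts to the following (sketch; the filename is hypothetical):

chmod +x Some_App-x86_64.AppImage                # mark the download executable
./Some_App-x86_64.AppImage                       # mounts its embedded squashfs and runs
./Some_App-x86_64.AppImage --appimage-extract    # or unpack to squashfs-root/ to inspect the bundled libs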
Replies: >>4771 >>4773
What happened to 0install? It's semi-related. https://github.com/0install/0install

>>4768
>flatpak has the advantage of actually working
But fagpak doesn't werk very well, see the link from OP: https://ludocode.com/blog/flatpak-is-not-the-future and https://flatkill.org/
Replies: >>4773 >>7363
3e8c5a03e3cbc2fe3b20d396d3dc001aac61ede00869f55be08e739551d6d5ef.gif
[Hide] (151.7KB, 606x423)
>>4739 (OP) 
>Are packages dying?
Yes, and about time too.
Traditional package managers are, for the most part, impractical for both the distro/PM maintainer and the application developer. The maintainer has to continuously add and update every single piece of software in existence, as well as test it against the other pieces of software in the repo so it doesn't break or get broken by them. Meanwhile the developer cannot immediately publish/update his software at any time without going through the unsurprisingly slow maintainer, or alternatively hosting a PPA and having *buntu users perpetually adding PPAs each time they want to try an application...
Now don't get me wrong, a package manager would be very useful for maintaining a limited selection of commonly used libraries, frameworks and programs. That way any user can quickly create a minimal working environment for regular tasks, and any developer can easily start writing and compiling software. Adding anything else would be a waste of time and energy.

And what's the deal with having a gorillion package managers and formats anyway? Oh right, "choice"!.. Users have been asking for a unified/standardized distro-agnostic packaging format for years now, if not decades, and none of the distros gave a shit. So a few "gifted" developers came up with their own solutions, and those were Flatpaks and AppImages, and they got widely adopted. You know why? Because they work.
I personally don't like Flatpaks and their bloated runtime, they don't even launch on my machine, but I'm a huge fan of AppImages and have been packaging my own for a while now. They're perfect for "freezing" versions of software that I want to preserve along with all their dependencies (incl. glibc), and they work everywhere, forever. And if it wasn't for the cancer that is dynamic linking on linux I would have ditched AppImages altogether and statically linked all my programs into native self-contained binaries instead.
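Packaging one is about as minimal; a sketch with appimagetool (AppDir contents abbreviated, all names hypothetical):

# MyApp.AppDir/ holds the binary, its libs, a .desktop file and an icon
appimagetool MyApp.AppDir MyApp-x86_64.AppImage
./MyApp-x86_64.AppImage   # deps frozen inside, runs on any distro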

>but what about [obvious flaw of flatpak/appimage]?
I don't care. If they work I'll use them, simple as that. Fix the root problem that inspired them in the first place and I'll stop using them. Speaking for myself and other average/casual linux users here.

>>4741
>>4747
>unstable ABI
<just recompile everything goy!

>>4745
<just stop using software with any degree of complexity! it's not like you actually need it to do actual work
A lot of modern software is overly bloated and complex for sure, but you're beyond retarded for equating complexity with uselessness.

>>4764
>Only if everything is designed as a giant statically linked ball of mud that needs to be recompiled all at once
Every web browser on linux is dynamically linked yet still takes ages to compile, your point is moot. Also, whatever minuscule amount of time you save when compiling a dynamically linked program is paid for by the user tenfold when starting the program. If you can't tell why, then read this: https://drewdevault.com/dynlib

>>4768
>>appimage
>very little system integration
Which is a good thing, isolation from the system is always a desirable quality and lets you run several versions of the program in parallel. Plus integration is always an option.
>people are just too lazy to run a repo i guess.
See above.

>>4771
>0install
Looks interesting, is there any distro that relies on it?
>>4773
I don't fucking want the /dev/looper to push his shit without going through the slow maintainer.  The /dev/looper should not have a direct line to my HDD.  If Linux becomes pozzed to that level, I'll switch to OpenBSD for sure.
Replies: >>4796
>>4773
>"choice"
>Users have been asking for a unified/standardized distro-agnostic packaging format for years now
Only Windows users, afaik. Choice is what makes different distros. You are part of the GKH gang who aims to dictate how people use your software because your software isn't designed with compatibility in mind (he also advocates CoC/anti-rescind shit).
Why and how distros came to be is about the choice and configuration of software, and this is mostly delivered through a package manager. Think of it as an LFS user posting their configs, patches and binaries. Without this user, everyone would have to test their own software, because no developer should have to test their software against a million different software combinations.
>inb4 they shouldn't have such combinations
What's next? Shipping binaries directly? Not sharing sources because users who compile them fuck it up? Or software that only works on the same model of computer the developer is using? It is not the developer's business to dictate which version of a library the user runs. If your shit doesn't work without a specific version of a library, choose a better library.
The few developers who have this problem depend on shit libraries with unstable ABIs. You want to freeze the versions of that shit because your software isn't flexible to version changes. Blame the unstable ABIs of the libraries you use.
Yes, one solution is to recompile. Non-rolling releases have the version-freezing problem that all their software must move at the same time, which reflects neither development nor user demands. Gentoo's solution handles this easily. The cost of depending on unstable libraries should be paid this way. Why share the source if you don't expect users to compile it? Sharing binaries will always have this problem as long as unstable-ABI libraries are used.
Replies: >>4796
1638423817252-1.jpg
[Hide] (97.9KB, 1024x768)
What do people mean by unstable ABI?
Hasn't it been strictly the SysV ABI since the beginning? Same executable format, registers / calling convention, stack layout, signals, syscall numbers.
Only 11 syscalls were ever removed, and plenty of legacy kernel interfaces are maintained (oss, old alsa, dnotify, a.out, etc.).
Or was there a change to posix and the semantics of syscalls?
The only issue I see is shared objects.
If I built glibc, SDL, xlib etc. from 20 years ago, what is stopping me from running Sim City 3000?
I've chrooted into very early versions of slackware successfully.
I've heard Linus talk about "not breaking userspace", perhaps I've misunderstood that.
Or is there such a thing as a "userspace ABI"?
Replies: >>4778 >>4780
sasteroids.png
[Hide] (302.4KB, 640x480)
>>4776
I've run old binaries from the 90's, for example a pinball game called Roll'em Up.  It needed an old libc or something, which I simply grabbed from an old debian release.  So I didn't even build anything at all, just checked what libs the program needed with "ldd" and then added them to my LD_LIBRARY_PATH.
Some games might not be possible to run like that, if the library no longer works (I don't know if you can still enable SVGAlib with modern kernels).
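Spelled out, the recipe is just this (sketch; the binary and paths are stand-ins):

ldd ./rollemup | grep 'not found'    # list the old libs it still wants
# copy the missing .so files from a period debian release into ./oldlibs
LD_LIBRARY_PATH=./oldlibs ./rollemup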
>>4776
They are not talking about the kernel ABI. Yes, there is a userspace ABI, such as glibc, mesa and other gpu shit.
>built glibc,SDL,xlib
That's the problem: it is technically possible to do that, depending on how those libraries play with version-specific quirks and low-level shit. Doing it would take a lot of effort to get right, and some versions may not be available.
Replies: >>4787
>>4773
>Meanwhile the developer cannot immediately publish/update his software at any time without going through the unsurprisingly slow maintainer
And that's a good thing. This kind of shit is security-critical infrastructure, and I'd much rather have the distro maintainers handle it than rely on every single developer not fucking it up by himself. It amazes me every time people go "yeah hygiene is great and all, but what I really want is shit and malaria in my kitchen!". Next you're going to say we should just download and execute random unverified executables from the net.
Replies: >>4796
>>4773
>Adding anything else would be a waste of time and energy
Exactly, which is why alternate repos like:
>hosting a PPA and having *buntu users perpetually adding PPAs each time they want to try an application...
Is a perfectly fine solution for that "5% of a user's software that must be bleeding edge" beyond the "95% of a user's software that doesn't need to be up to date except security patches".
>And what's the deal with having a gorillion package managers and formats anyway?
>isolation from the system is always a desirable quality
No it fucking isn't. I use the OS I do instead of another OS because it's designed a certain way, and enforces certain norms. I expect software I install to comply with those norms. That means its UX matches the OS's look & feel, its defaults are inherited from what I told the OS elsewhere, its libraries and APIs are those preferred by the OS, and it seamlessly takes advantage of all the resources the OS exposes through them.
>lets you run several versions of the program in parallel
Agreed this is a weakness with Linux, but fixing it doesn't and shouldn't require degrading everything to the level of a Java Swing app from the '90s.
>integration is always an option
Unless this "containerized" bullshit kills OSs entirely. Users should not be made to tolerate a Web 2.0-esque patchwork of discordant siloed bullshit as the minimal expectation for how software should work.
>takes ages to compile, your point is moot
That's final, fully optimized compilation, which is something a buildfarm does without human involvement; it's irrelevant to both development and non-LARPing end users. I'm referring to REcompilation during actual development, where competent use of caching, or fully incremental compilation in better toolchains, renders the difference between recompiling small versus large projects nonexistent.
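For example, with caching alone (a sketch, assuming gcc and ccache are installed), rebuild cost scales with what changed rather than with project size:

export CC="ccache gcc" CXX="ccache g++"
./configure && make -j"$(nproc)"    # first build pays the full cost, cold cache
touch src/one_file.c && make        # rebuild recompiles only that unit
ccache -s                           # show cache hit/miss statistics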
> https://drewdevault.com/dynlib
The solution to the above is more dynamic linking, not less. IMHO bigger dependencies (e.g.: Qt, SDL, Boost, etc.) need to be broken up into smaller independent pieces. Not to the meme degree of Node.js left-pad or whatever, but no single lib needs to be multiple MB in size.

>>4780
Exactly, on other OSs with stable ABIs like BSD/Win/Mac, libraries are expected to remain backward compatible, so you can reliably run almost any binary from a decade ago, and most binaries from decades ago, using modern libs. There are still some cunning tricks used to fix edge cases, like SxS in Windows, or bundles on Mac, both of which are about a billion times less awful and broken than the "containerization" discussed in this thread. Aside from old binaries, this focus on ABI stability also reduces breakage in up/downstream software and libraries, saving effort by other developers.
Replies: >>4796 >>4835
aebd380f69157a8a02139abd73394811fd803e8c307f1d9a7218524cc4b99daa.png
[Hide] (67KB, 640x400)
Your choices are stupid (slackware), broken (arch), out of date (debian), or painful (gentoo / ports trees).
I've been fairly happy with "broken".

huh, I need program X.
apt-get install X
...
Oh no, feature Fx is missing!
...
git clone goyhub.il/X
git checkout stable
./configure --enable-hiv
Checking if your computer is running... [yes]
Checking if we can write to stdout... [yes]
Checking your C compiler works... [yes]
Checking if 1 > 0... [yes]
[9000 lines of inane nonsense]
error. pkg-config: shitlib >= v1.23.45 Missing.

fuck.
wget freetard.gnu.org/archive/shitlib.1.23.45.tar.gz
./configure --prefix=/usr && make
oh. I guess I should remove the old version?
apt-get remove shitlib
Error: shitlib v1.23.44 has 420 reverse dependencies.

Do I just static link? Build a site-local package that replaces the old one? Or just say fuck it and make install?
It's never just one library either, and most likely you've run out of patience.
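One escape hatch for the "site-local package" option, sketched for Debian-family systems (assuming checkinstall is still in the repos):

./configure --prefix=/usr/local && make
checkinstall --pkgname=shitlib-local --pkgversion=1.23.45
# wraps "make install" into a real .deb, so apt can track and remove it later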

Lets forget about packaging for a moment.
The build systems are broken; packaging could be an extension of the build system.
cmake, premake, meson, scons, waf, et al. are nothing more than bloated reimplementations of autotools.
You either take the UNIX approach and keep it retarded -- plain old Makefiles and tarballs without install scripts.
Or you reinvent user space completely.
They might be obnoxious freetards, but the GUIX project deserves a *tip* of the hat.

TL;DR:
I've concluded that the problem is that debian/canonical/redhat are not spending enough on training sessions for women.
>>4788
>gentoo
>painful 
But it's not! Gentoo is mostly automated: you define global USE flags in your make.conf and run emerge. If you want/need to customize USE flags on a per-package basis, you do it just once per package in your package.use file/dir.
Install Gentoo >>932
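The whole ritual, sketched (the package and flags are just examples):

# /etc/portage/make.conf -- global defaults
USE="-systemd -gnome alsa"
# /etc/portage/package.use/mpv -- per-package overrides, set once
media-video/mpv vaapi -X
# then simply:
emerge --ask media-video/mpv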
Replies: >>4793
>>4788
Gentoo solves the apt example easily. There are package USE flags that turn things off or on. If there isn't one, it is also easy to patch the package build process.
Different software having different build systems can't be solved; the packaging process is the workaround that abstracts it from users. The build system doesn't do dependency management, and it shouldn't. Keeping the build system retarded and combining it with something else (a package manager) is the thing. Binary distributions' package management has this problem because different combinations of features and dependencies each need their own build hosted. Gentoo doesn't have this problem.
Never got into GUIX, how is it different from portage?
Replies: >>4795 >>4842
>>4789
>Gentoo is mostly automated
Only if you're using systemd. I ran into a lot of problems when installing it, and I had to stumble into almost decade-old forum posts to solve them.
Replies: >>4794 >>4807
>>4793
Works on my machine. Gentoo's default is openrc; how could you get a lot of problems?
https___bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com_public_images_0a897e7d-5bb0-49bb-8093-4b085390acaf_905x960.jpeg
[Hide] (115.8KB, 905x960)
>>4790
>Never got into GUIX, how is it different from portage?
It uses Stallman scheme instead of Python. And they use namespaces to create a consistent minimalist build environment for each package. So in theory you can build a guix package, then download the same package someone else built and they will be identical (reproducible builds).

The reason I stopped bothering is because the gnu autism makes some useful """non-free""" packages hard to get.
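The reproducibility claim is directly checkable, e.g. (a sketch, assuming a working guix):

guix build hello           # build the package (or fetch a substitute)
guix build hello --check   # rebuild locally and compare to the store item
# guix errors out if the outputs differ, i.e. the build isn't reproducible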
Replies: >>4807
>>4774
Then just compile from source, you sperg.

>>4775
<suggesting a uniform package standard to be adopted by distros?? you're LITERALLY preventing people from making and using their own package managers! muh dictator! muh coc!
<inb4 [made up argument]
<no it's actually the libraries that are unstable! libraries working on all other OS without fail is just a coincidence!
I refuse to believe this isn't bait.

>>4784
>I'd much rather have the distro maintainers handle it than rely on every single developer not fucking it up by himself
How can you guarantee that the maintainer won't fuck it up too?
You can keep using package managers, nobody is stopping you.

>>4787
>>hosting a PPA and having *buntu users perpetually adding PPAs each time they want to try an application...
>Is a perfectly fine solution for that "5% 50% of a user's software that must be bleeding edge not severely outdated"
FTFY. You keep swimming in your PPA soup while the rest of the world moves on.
>That means its UX matches the OS's look & feel [...] and it seamlessly takes advantage of all the resources the OS exposes through them.
AppImages already do this.
>its libraries and APIs are those preferred by the OS
AppImages bundle all libraries except video/GPU/driver libraries, which are pulled directly from the system; any library not bundled in an AppImage is automatically substituted with the system one. This mechanism allows AppImages to Just Werk™, but you can very easily change that by making the AppImage look for libraries in the system first before falling back to the bundled ones... You clearly have never used an AppImage in your life.
>its defaults are inherited from what I told the OS elsewhere
AppImages also do this but it depends on the program and whether it was packaged in a way that enables this, because users may want separate configs for the AppImage and the native app.
>Unless this "containerized" bullshit kills OSs entirely
AppImages are nothing more than binary + dependencies, how exactly is that a "container"?
>The solution to the above is more dynamic linking, not less.
<let's INCREASE the number of calls to disk and load MORE data into memory that won't be used!
Read the article again.

>>4788
That sounds very frustrating, but then again I can see this being a consistent problem on debian-based distros because of the severely outdated libraries... You could do all your compilation in a barebones CentOS/Rocky/Alma VM (or a chroot but I haven't tested that) and just wrap software you built into an AppImage to be used everywhere. It will never break.
>Do I just static link?
If you can, definitely.
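At the trivial end, static linking really is one flag; a sketch (musl-gcc assumed installed, since glibc actively resists fully static builds):

musl-gcc -static -O2 -o app app.c
ldd ./app   # prints "not a dynamic executable"
# the result runs on any Linux of the same arch, no library soup required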
I hate packages, they make software developers lazy. With packages you have a dependency web and are forced to have 20 different versions of the same library just to have everything working. Static linking often allows you to drop 80% of the library code. You can get even better results by tinkering with the library code directly, or by writing everything without relying on libraries at all.
o.png
[Hide] (1.5KB, 640x480)
>>4796
The point of going through maintainers is you're also going through a QA stage, which is why it's "slow".  But you can also run the bleeding-edge "unstable" release, and then you become part of the QA team.  Or you can just run the stable release, and then build the few packages you want newer versions of.  This is what I do most of the time, unless I'm running OpenBSD on hardware that needs -current for some new drivers or whatever.  Or sometimes I'll rebuild some package (not necessarily a newer version) just because I want it built differently.  I did that with Allegro, so I can run those games in the framebuffer console.
Replies: >>4809
>>4793
>Only if you're using systemd.
False.
((( SystemDick ))) only makes the GANOO/Lunix experience worse. SystemD doesn't add any useful features (or additional automation to Gentoo); it's better to simply choose the OpenRC installation iso & stage3 tarball, choose a profile that doesn't pull in SystemD (any profile without "systemd" in its name), and install OpenNTPD and a cron daemon (like cronie) if you need them. Have you actually even tried to install Gentoo? It's not hard, because the Handbook is a step-by-step guide. If anything, choosing SystemD makes the process harder. Just read the Handbook and the thread >>932


>>4795
>The reason I stopped bothering is because the gnu autism makes some useful """non-free""" packages hard to get.
You need to add an additional repo: https://github.com/wingo/guix-nonfree
>>4796
>suggesting a uniform package standard
No, you didn't do that; you suggested adopting a uniform binary distribution format instead of a package manager. Did I hit a nerve? Or is that the real bait? It's hard to tell these days.
After such adoption, as you expect, package managers on distros that adopt this will be gone soon. You are also encouraging development of software without regard for forward compatibility. And if it can be done this way, it will be done: software will become less and less compatible with different library combinations. Then you get your Windows-style "download the installer and all bundled libraries (x64-bit)".
If libraries are stable, there isn't much testing or trouble moving from version to version. Hell, you can even use a binary compiled for a different version of a library and still have it working after upgrading the library.
>You can keep using package managers
Except what you are advocating implies less support for package managers and user packaging.

This is different from the commercial software industry, where they try to make a product that just werks. The mindset is different. It is the developer's job to make sure their program is robust to API changes. Ideally it should work regardless of the versions of the libraries it uses, because the source and build instructions are publicly available. Programs are designed to be compatible with different environments. Before distros, every user was LFS and it was their job to integrate your software into their special environment. Now n00bs can apt this and that without worry because the package maintainers do this job. Snowflake software that only works with a certain version of libraries is usually bad.
Replies: >>4809 >>4842
>>4798
I remember you from /v/ I think, unless there are 2 fb spergs on zzz.
>The point of going through maintainers is you're also going through a QA stage
Which can also be patchy as fuck because maintainers don't always do their job correctly, and by the time a package makes it through it's already severely outdated anyway and probably unusable.
>This is what I do most of the time
That's great and I'm in the same boat but it doesn't solve the problem. You can't ask everyday users to just constantly compile/recompile everything all the damn time.
>I did that with Allegro, so I can run those games in the framebuffer console.
4 or 5? I'm somewhat intrigued.

>>4808
>Users have been asking for a unified/standardized distro-agnostic packaging format for years now
<no you did not suggest a new packaging format!!
Learn to read, schizo.
>package managers on distros that adopt this will be gone soon
God I fucking hope so, but they'll probably never be 100% gone because some autists like you will keep maintaining them for years to come.
>You are also encouraging development of software without regard for forward compatibility
???
<if libraries were stable, everything will be perfect!
Sure, let's organize a meeting with all the library developers across the world and tell them to play nice with each other! And let's NEVER stabilize the linux ABI!
>Hell, you can even use a binary compiled for a different version of a library and still have it working after upgrading the library.
Which already happens on almost every OS except linux, must be the pesky libraries!
>Snowflake software that only works with a certain version of libraries is usually bad
So basically linux?
Replies: >>4810 >>4812
>>4809
>maintainers don't always do their job correctly
And does the developer always do that?
>Learn to read
>appimage is just a format
It is not just a format, it is a binary distribution of a piece of software.
<if libraries were stable, everything will be perfect!
>let's NEVER stabilize the linux ABI!
??? The Linux kernel ABI is quite stable: "don't break the userland". It is the userland that's unstable. What, if not unstable libraries, is stopping me from using 5-year-old software with OpenGL now? Deprecated functions and changed behaviors. What is stopping a debian user from just installing a piece of new software without fucking over half of his ancient non-rolling-release install? Minimum library version requirements in that new software, and "untested" newer versions of libraries.
>So basically linux?
Any piece of software whose developer has to even think about cramming dependencies into their shit. So all windows binary distributions, where developers are so afraid of dll hell that they have to do that.

You seem to be conflating two problems. One is non-rolling distros, the other is unstable APIs.
Assume you are a developer writing a program for a non-rolling distro such as debian or ubuntu. It takes a long time for them to check every box, and users often have to reach for PPAs and shit. This is because their software is frozen rather than updated as released, and this is where the slowness comes from. It is an inherent problem of non-rolling distros; there's no way around it. You will get users asking for fixes which you can't provide, because their distro made the decision to freeze versions. Upgrading would probably change part of their install into testing or other strange versions too.
The other problem is unstable APIs. Software that can't work with newer library versions either uses shit libraries or is shit itself.
You "hate" package managers because of non-rolling releases; binary rolling distros like arch (gentoo is cheating in this comparison) don't have this problem and can always run testing software because they don't freeze versions. Major dependencies can be upgraded easily.
This leads to another problem, caused by unstable library APIs: old dynamically linked software doesn't work with newer versions of libraries. That's where a solution is needed. Say my whole system is on openssl 999 but a piece of software only works with 998. If the cause is an unstable ABI, I can just recompile that software. But if it is using a deprecated API or depends on changed behavior, the developer has to patch the software, or the user has to use appimages. Now, is it possible to develop software that doesn't break like that? Either the library has to be backward compatible or the software has to be forward compatible.
Replies: >>4818
>>4809
If the maintainers are shit, then you need to push for better ones, or volunteer, or switch to a different distro.  But that QA stage is critical, and under no circumstances will I use any big, bloated, complicated, and constantly changing modern OS that doesn't go through this step.  Already enough things break as it is.  For example that systemd bullshit: it broke spectacularly after an update, and it took some time for Ubuntu to get a fixed version going.  I'm not very surprised, since I run on 32-bit ARM, and it's much lower priority than x86.  Instead of waiting for Ubuntu to fix it, I simply flipped /sbin/init to point to busybox, and configured that program (with the oldschool /etc/inittab method).  Anyway, if this kind of problem can happen with Ubuntu (who are pretty damn good about testing their stuff), then it would likely be even worse if I was stuck downloading stuff straight from /dev/loopers who only deal with x86.
Also I'm using the same Allegro 4.4 that came with Ubuntu 16.04; I just did "apt-get source" on the library and rebuilt the packages with my settings.  I don't know if I'll upgrade to 5, especially not if they pull the same shit that SDL did with SDL2 (basically it will use Mesa if you don't have a working GPU, and I don't want a GPU, because everything I do worked fine in software rendering on much older computers than I have now).
Replies: >>4818
>>4810
>And does the developer always do that?
No they don't; stop being a smartass. But it's sure as hell easier not to fuck up managing a few pieces of software you personally wrote than thousands of pieces of software written by other people.
>It is not just a format, it is a binary distribution of a piece of software.
No shit? I never said AppImage/Flatpak were a packaging format, I said they were a response to there not being a standard packaging format. Read my post again.
>It is the userland that's unstable.
Does e.g. glibc count as userland? Because it sure as hell ain't stable despite literally everything depending on it.
>What, if not unstable libraries, is stopping me from using 5-year-old software with OpenGL now?
I can run 20+ y/o OpenGL software on wangblows just fine, I try running the corresponding loonix versions and all hell breaks loose. Same libraries, same build date, same everything.
>Any piece of software whose developer has to even think about cramming dependencies into their shit.
That's the stupidest arbitrary metric I've ever seen. You can never guarantee what a linux user has on his system, a gorillion combinations of software/libraries are possible and there's never a single common baseline. That's the "beauty" of choice I guess... Realistically speaking you have one of two options:
  -  Create a system with a fixed API/ABI and never ever change it, never let the user change it either, thus providing a stable foundation for developers to build software on without ever having to bundle system dependencies. This is windows.
  -  Continuously change your system and its components, and give users the "freedom" to do the same. Developers have zero foundation to build software on, so they either just provide the source and have the users go through the (often painful) compilation process, or release their software with all dependencies bundled down to the most basic. This is linux.

>You seem to be conflating two problems. One is non-rolling distros, the other is unstable APIs.
I think the two problems are very much linked.
<developer writes software on rolling release distro
<software doesn't work on fixed release distro because linux ABI changed
<users ask dev to compile for fixed release distro / dev asks users to compile the software themselves
<rolling distros mitigate the moody ABI by constantly updating and recompiling everything
<fixed distros just freeze everything and force themselves into a bubble, then two years later recompile everything (not necessarily applying any updates) and move to a different bubble, rinse and repeat
<both fixed and rolling distros can't be bothered to work together around the unstable linux ABI (e.g. by implementing a common packaging standard) and end everyone's suffering
<linux ABI keeps changing and further widening the divide between fixed and rolling distros

>You "hate" package managers because of non-rolling releases
I'm already daily driving a rolling distro, I would say I'm the least affected by linux's moodiness, but I don't bury my head in the sand and pretend there isn't a problem.
>Say my whole system is on openssl 999 but a piece of software only works with 998 [...] if it is using a deprecated API or depends on changed behavior, the developer has to patch the software
What if the developer already did? That makes it the fault of your distro for being fixed release. You can't have it both ways.
None of this would even be a problem if it weren't for linux's obsession with dynamic linking and "muh sharing resources!!" which only introduces more problems. If you read the link I posted earlier you'd find that resources are barely ever shared to begin with.
>Now, is it possible to develop software that doesn't break like that?
Yes, it's called static linking and making reproducible builds. Otherwise the developer must maintain the software, all its libraries, and the entire system it's going to run on, to keep all the moving parts synchronized.

>>4812
>If the maintainers are shit, then you need to push for better ones, or volunteer, or switch to a different distro.
<don't like our unstable house of cards? just add more cards, make better cards, or move to a different house of cards!
You are beyond delusional.
>Already enough things break as it is.
I wonder why...
>Ubuntu (who are pretty damn good about testing their stuff)
No they're fucking not; Ubuntu devs break shit all the time, it's just broken "relatively" less frequently than on other (rolling) distros, namely Arch and co.
>I don't know if I'll upgrade to 5
Only one way to find out.
>SDL2 (basically it will use Mesa if you don't have a working GPU, and I don't want a GPU, because everything I do worked fine in software rendering
I'm not sure I'm following. Mesa already supports software rendering, you're saying Mesa doesn't work on your older machines? Also AFAIK using the GPU is entirely optional in SDL2 to begin with since SDL2 is supposed to support devices with weak/no GPU.
Replies: >>4819 >>4822 >>4842
o.png
[Hide] (4.8KB, 640x480)
>>4818
SDL2 no longer supports the old, plain software Linux framebuffer that appeared in kernel 2.x as the fbcon/fbdev driver, with devices named /dev/fb0, /dev/fb1, etc.  This is the simplest graphics interface for Linux that doesn't suck, which is why I'm still using it to this day.
If I want to build SDL2, here's the relevant options (pic).
If you ignore all the X/Wayland/OpenGL/KMS stuff that I don't care about, and ignore all the stuff for other platforms I don't have, that leaves only directfb, which the maintainer marked as "non-buildable, plz send patches" in the docs.  But directfb itself is pretty much in limbo, and anyway it's more complicated than what I'm using now, so I don't want it.
Replies: >>4842
>>4818
You are right about static linking being a solution, but there are benefits to dynamic linking; I will get back to this later in this post.
What I don't get is why the developer should care about how the software is built and used. If you are releasing binaries, of course you want to static link/bundle all dependencies into your releases. If the source is available, integration of the software with the rest of the system is not the developer's business. Some people like to static link, some people like to dynamic link; as long as your software can be compiled, who cares.
Dynamic linking has benefits other than saving disk space. It allows users to swap a library globally without recompiling other software. Now this is where unstable APIs come in. We know the APIs aren't stable, but they aren't changing every patch version either. Say there is a critical security patch for openssl: if you static link or bundle it with your program, and you aren't around to make another release, your program will be a security hole. With dynamic linking, provided the API is stable enough, ancient unmaintained programs can still use new libraries without any change.
While it shouldn't be a developer's job to make releases with libraries, a package maintainer is exactly the person for that job. If you like static linking so much, you can make a distro that static links all programs. You can even make a universal appimage package manager.
But understand that it is debian and ubuntu's decision to dynamic link and be non-rolling, and their users by extension made the same decision. As long as there are people who like to configure their software differently, distros will exist in one way or another. The developer should write software that can be built and configured conforming to standards, and let the package maintainers do their job. That is to say, developers don't have to maintain a whole system; it is simply not their job. A developer maintains his code, not the binaries nor packages for specific systems.
>pretend there isn't a problem
It is a problem they chose: they chose to pay the price of dynamic linking instead of static linking and potentially a lot more rebuilding. Dynamic linking has its own problems, unstable APIs being an example, but so does static linking.
Replies: >>4826
pokething.gif
[Hide] (1.1MB, 500x375)
>>4822
Just re-link all ur static binaries!
> mfw Linux has a linker, but no unlinker
Replies: >>4830
>>4821
What an awful word salad you have posted...

>stupid question but if i were to make my own china-themed proprietary linux operating system should i start with LFS or BSD for the low-level kernel stuff then which init should i put next 
Your question is indeed stupid.
First, why would you choose a *BSD kernel if you want to use Linux? If you want to make a proprietary OS, then you should go with FreeBSD (because of the license). Second, you should either use the default init system or Runit. The default just works, and Runit is a sane alternative.

>all i ever wanted was a decent reliable linux distro
Install Gentoo.

>wangblows users just stick to our equipment and we dont complain that much if something breaks
When you use non-free software, you can't do anything if something breaks.


TL;DR I'm now on chemotherapy because of your cancerous post.
Replies: >>4830 >>4832
>>4826
>Just re-link all ur static binaries!
That's what static linking and appimages imply: someone has to recompile, or at least repackage, the software. But there is no unlinker.
>>4828
Why even bother replying? Real /tech/ discussion is too confusing.
Replies: >>4831
>>4830
Meh, it doesn't have to be one or the other.  OpenBSD uses static binaries in /bin and /sbin, to make system recovery easier (what if the linker itself is hosed?), but not for the rest, because then you start chewing up disk space for no fucking good reason.  And don't nobody give me that shit about just buying a new HDD, because I got some systems that use NAND flash and eMMC.  I'll do it when it makes sense (like I built a static busybox for my Linux computer) but otherwise they can fuck off with their bullshit.
Replies: >>4833 >>4842
>>4828
>being surprised that a namefag's post is cancerous
You only have yourself to blame.
>>4831
Agreed. Some packages have static bundled libraries by default (I heard some browsers bundle ffmpeg).
Off-topic question: have you ever managed to use mmc nand? Shit is unsupported everywhere; ubifs dropped it a long time ago.
Replies: >>4836
*mlc nand
Dynamic linking is a temporary measure against bloated software.
It doesn't fix it, so the software is bound to continue regressing until your dynamically linked system is bigger than previous statically linked systems. It was only ever a temporary countermeasure.

It's like Wangblows users buying SSDs to make Wangblows snappier: now their old SSDs are still too slow, because the core issue wasn't addressed and was allowed to get worse until the temporary countermeasure broke. Any modern HDD can read at 200MiB/s, and if that's not enough to move the rectangles of a GUI around on the screen then your software objectively sucks massively.

I look forward to seeing what the next temporary countermeasure to bloated software in the dynamic linking tech line is, we already have >>4787 suggesting taking the brain damage further.
Replies: >>4842
nand-Micron.JPG
[Hide] (193.1KB, 1600x1200)
>>4833
Mine looks kinda like this, but the markings are slightly different:
> 1730    1-7
> 29F64G08CBABA
> WP      B
> (rotated) X88J
I couldn't find any specific info about it, so I'm not sure exactly what type it is.  Anyway it's on my Cubietruck, and after doing all the prerequisite steps to make the flash usable for Linux, I simply used the sunxi-nand-part program that comes with sunxi-tools to divide it into two partitions:  a tiny FAT just for u-boot, and an ext4 for Linux.  Yeah, I'm not even using any fancy filesystem at all!  But then again this is just a recovery setup for me; I hardly ever boot into it.  It's there just in case I totally mess up both the HDD and SD card installs.  Also I'm using an old 3.4 kernel, because the NAND driver is gone in newer kernels.
egg-eating-snake-timothy-hacker.jpg
[Hide] (93.4KB, 900x600)
>>4788
>Or you reinvent user space completely.
>They might be obnoxious freetards, but the GUIX project deserves a *tip* of the hat.
There have been a number of implementations of clean support for multiple simultaneous versions on Linux; probably the oldest is Gobo.

>>4790
>There are package USE flags that turn things off or on. If there isn't one, it is also easy to patch the package build process.
That's still not quite a solution. I can also manually chroot separate installs of different dep versions for exotic software in whatever non-rolling binary distro, but that's something I have to do manually. What's needed is either a 1-click way to install software with multiple dep versions, or vast improvement to ABI stability.
>Keeping the build system retarded and combining it with something else (a package manager) is the thing
That would be nice, but I'm not aware of any actively developed permutation of autotools/pkg-config that can automatically download and install all deps from your package manager when it notices headers missing.

>>4796
>Then just compile from source
That would be even less vetted
<no it's actually the libraries that are unstable! libraries working on all other OS without fail is just a coincidence!
That actually is the problem, yes. To be clear, "ABI (in)stability" on Linux is as follows:
<Kernelspace:Kernelspace
Intentionally unstable, this is why out-of-tree drivers are such a giant PITA on Linux, and is different from other OSs (Win, Mac, BSD, etc.)
<Kernelspace:Userland
Intentionally actually stable, same as other OSs.
<Userland:Userland
Intended to be stable, usually NOT actually stable, though this IS getting better. Intentionally actually stable on other OSs. This is what we're mostly gnashing our teeth about in this thread.

>I refuse to believe this isn't bait
Agreed, the other anon obviously misread you, thinking "uniform package format" meant "uniform package manager". It's worth mentioning that exactly such a thing had managed to gain pretty wide support for several years, called PackageKit, but the main author sudoku'd the project by 2014 in preference for containerized formats.

>must be bleeding edge not severely outdated
Not actually important for most software for most users aside from critical security fixes

>you can very easily change that
Won't that break the clunky autoupdater (that most AppImages don't use)?
>AppImages are nothing more than binary + dependencies, how exactly is that a "container"?
Because they're also an entire read-only filesystem, leaning on goofy daemons to (optionally; in most AppImages, not even that) halfass various parts of what is normally handled by the OS or package manager. Even use of system deps as first priority before falling back to the AppImage's deps is highly unusual; basically all AppImages will prefer their copy of every dep, only using system deps that were excluded from the AppImage. And reconfiguring AppImages yourself is a PITA compared to operating any package manager.

<let's INCREASE the number of calls to disk and load MORE data into memory that won't be used!
>Read the article again.
Leaving aside that the low-level specifics of the article are hilariously inane (for instance, even 2 uses of a package mean memory/space savings over static; his complaint about not every symbol of a dynamic lib being used ignores that static builds won't unlink below the function level even with LTO; and transitive deps can be unlinked with dynamic but not static), all without mentioning the numerous massive advantages of dynamic linking beyond size and efficiency... My argument was that the present typical strengths of both approaches should be unified by making the granularity of dynamically linked libs finer, thus more fully minimizing the amount of unused or redundant code in memory or storage.

>>4808
>You are also encouraging development of software without regard for forward compatibility. And if it can be done this way, it will be done: software will become less and less compatible with different library combinations. Then you get your Windows-style "download the installer and all bundled libraries (x64-bit)".
Counterpoint: Windows software has excellent forward compatibility, its high API & ABI stability making the development of drop-in replacements for libraries and IPC clients ubiquitous, and most of those installers will add their libraries system-wide. Note: nothing resembling the containerized cancer metastasizing within Linux is common on Windows. Even when libs are distributed with software, they can be overridden with a simple drag and drop.

>>4831
>OpenBSD uses static binaries in /bin and /sbin, to make system recovery easier
That isn't just to make recovery easier, and they obviously aren't statically linked. BSD, unlike Linux, separates "base" from all other packages ("ports") and software, as a fundamental aspect of both the OS and the project's architecture.

>>4818
>Does e.g. glibc count as userland? Because it sure as hell ain't stable despite literally everything depending on it.
The glibc ABI is stable on Linux, anything built targeting an older version will reliably work on a newer version. A lot of Linux binaries needlessly use newer versions, but that's because they're retarded.
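You can check exactly what a binary demands (sketch; ./app is a stand-in):

objdump -T ./app | grep -o 'GLIBC_[0-9.]*' | sort -Vu
# the highest version printed is the minimum glibc the binary needs;
# build against an older glibc and it keeps running on everything newer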
<What, if not unstable libraries, is stopping me from using 5-year-old software with OpenGL now?
>I can run 20+ y/o OpenGL software on wangblows just fine, I try running the corresponding loonix versions and all hell breaks loose. Same libraries, same build date, same everything.
The Linux versions of those libs aren't ABI stable, simple as.
>Create a system with a fixed API/ABI and never ever change it, never let the user change it either, thus providing a stable foundation for developers to build software on without ever having to bundle system dependencies. Add mechanisms to allow multiple versions of the same library either distributed with the app (in directory w/ .exe) or system-wide (SxS, named versions). This is windows.

>>4819
>framebuffer
LARP, it's just a dirty hack implemented back in the day to substitute for HW text mode on platforms that lacked it, use KMS/DRM if you don't wanna load X/Wayland or accelerated GPU drivers.

Speaking of true VGA text mode, anybody know how to get this on modern Linux? It's the one thing I miss from my Win ≥XP partition for playing roguelikes.

>>4835
You're one step of contrarianism away from "Electron is sound in concept, actually".
DuckDuckGo.gif
[Hide] (243KB, 480x480)
>>4842
In OpenBSD, the binaries in /bin and /sbin are statically linked.  The binaries in /usr/bin and /usr/sbin aren't.  All of those are part of the OS proper, and not ports/packages, which themselves live under /usr/local.
NetBSD might be similar, but I never bothered to check if /bin and /sbin are statically linked.  They also shove 3rd party packages under /usr/local, but via a different package manager (pkgsrc), which allegedly also works in Linux as well (but nobody is using it there?)
Replies: >>4847
>>4842
Also, why are you calling me a LARPer when you can't even get your facts straight re: OpenBSD static binary linking?  I actually do use the framebuffer console every day in Linux.  It's not a dirty hack; it actually works well, unlike SVGAlib, which often required a reboot if the screen got corrupted.  But with fbdev I can nearly always recover from that kind of situation via Alt-SysRq (no reboot necessary).
I don't want the fucking KMS/DRM bullshit, when the simpler fbdev works fine.  Those new shits are designed under the assumption you got a GPU, that was the entire point of those new shits.
Replies: >>4847
o3.png
[Hide] (3.3KB, 640x480)
>>4842
Hey don't mind me, it's just a LARP.  It's not like I can actually play BBS door games on my framebuffer, without needing all kinds of stupid extra software like everyone else is having to use (SyncTERM and other dumb shits).  HAHAHA sucks to be larping and not forced to use bloated software. XD
Replies: >>4847
>>4843
>>4844
>>4845
Yeah, I admit I'm not terribly familiar with BSD; I assumed the implication was that software in ports couldn't dynamically link to deps in base. Looking a little into it, probably the oddest feature is /rescue, which hides a BusyBox-style single binary behind a pile of symlinks.
>why are you calling me a LARP
Attachment to fbdev is purely sentimental fetishism: it's a mere implementation detail, irrelevant to applications, and no more compact or clean than any other stack (not to mention slower than the text mode/serial/SSH consoles that preceded it). There is already a modern driver, SimpleDRM, that functions as a drop-in replacement for unaccelerated fallback video on VESA or whatever while remaining fully backward compatible with fbdev, and it allows easy replacement with an accelerated DRM/KMS driver if "abominable ARM SoC X" ever gets its GPU blob reverse engineered.
Replies: >>4859
>>4847
There's no /rescue in OpenBSD, that must be one of the others.  Instead you get static binaries in /bin & /sbin, and a backup of important config files in /var/backups.  That's enough to recover from a lot of problems.  At least if you know how to use /bin/ed like all real sysadmins do. :D
Anyway I don't want a fucking "modern graphics stack", I just want a plain, simple framebuffer, without extra layers of abstraction.  Text mode is an x86 thing, all other computers I owned were always in graphics mode.  Even my first 8-bit computer was that way.  So it's the IBM PC that's fucked-up and full of dirty hacks, not the other way around.  The reason Linux didn't start out with a framebuffer display is because Linus Torvalds was a poorfag who couldn't afford a Sun workstation, or even a cheapass Amiga or Atari with 68030, apparently. :DDD
Replies: >>4864
At this point, I would like to compile a list of alternative package managers &c. that can be used alongside what your distro offers:
>pkgsrc : https://www.pkgsrc.org
>guix : https://guix.gnu.org/ (great for lisp packages) https://github.com/wingo/guix-nonfree (there is a more up-to-date version around)
>Gentoo prefix : https://wiki.gentoo.org/wiki/Project:Prefix
>0install : https://github.com/0install/0install
>Ravenports : http://www.ravenports.com/ (especially interesting if you use FreeBSD or DragonFly BSD)
(also, unrelated but Slackware/Slackbuilds, CRUX Linux and Source Mage GNU/Linux (SMGL) have something that resembles ports)
>A chroot system or Docker
>((( flatpak ))) : https://flatpak.org/ https://flathub.org/ https://registry.fedoraproject.org/ https://community.kde.org/Guidelines_and_HOWTOs/Flatpak


I guess LFS is kind of related https://www.linuxfromscratch.org/ because you learn the PAIN of building a system and maintaining it. Perhaps it's bearable (no, it's not) with pkgsrc or guix?
I won't even include ((( snap ))) or AppImages (which are quite nice when compared to Fagpak or Snapshit) or the 1000 package managers that are made for programming languages (like Ruby Gems). Users of Windoze can use Cygwin or MSYS2. I guess Macfags have homebrew? Android has F-droid...


Oh, and TempleOS has supplemental ISOs.
>https://web.archive.org/web/20171218125834/http://www.templeos.org/Downloads
>https://web.archive.org/web/20170429174359/http://www.templeos.org/Wb/Home/Web/AppStore/AppStore.html
>https://archive.org/details/TempleOS_ISO_Archive
Replies: >>4863 >>11883
>>4862
I've used pkgsrc plenty and it's pure cancer.
An entire packaging infrastructure in Make and shell; anyone who has used those to write anything with more than 3 lines of code knows the sheer insanity of that. Even Satan's curse upon the programming world (JavaScript) would suck less there.
The main thing it has going for it is that it's portable unlike all the others.

I remain interested in the subject though, Guix seems like the most promising one.
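If anyone wants to dip a toe in, Guix installs alongside a normal distro via its official script, and everything it then does is per-user and rollback-able (a sketch; check the manual for current details):
# one-time bootstrap of the Guix daemon on a foreign distro
curl -O https://git.savannah.gnu.org/cgit/guix.git/plain/etc/guix-install.sh
sudo bash guix-install.sh
# per-user installs that never touch the host's package manager
guix install hello
guix package --roll-back    # undo the last transaction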
>>4859
>Text mode is an x86 thing, all other computers I owned were always in graphics mode.
IBM wasn't at all unusual for the era. Every 8-bit micro I'm aware of had dedicated text hardware that they also relied on for their "low-res graphics" modes, which on low-end platforms (PET, TRS-80, Speccy, etc.) were the ONLY "graphics" modes. Even 16-bit micros without truly dedicated text modes per se (ST, Amiga, X68k) had mandatory graphics acceleration with features that were also used to offload standard text modes in their default terminal emulators and OS/ROM CLI, rather than a much more demanding true SW framebuffer that was only used as a last resort to get around HW limitations. The same was true of eunuchs workstations/X-terminals, which invariably had pretty meaty graphics hardware (or a dedicated serial terminal with a second monitor at your desk!), even if that "GPU" was something silly like another m68k or i960 on a VME card.

I say this having grown up an Applefag, watching with smug contempt as adherents of rival microcomputer and set-top/arcade platforms boasted about the tradeoffs of their shmancy sprite blitter multi voice FM synth whatsits. Meanwhile whenever their games were ported to the Mac, all of that ended up reimplemented in pure software WITH enhancements, because Apple consciously forewent hardware acceleration of any kind in favor of a faster CPU, more RAM, and better A/V DACs.
Replies: >>4869
>>4864
Amiga Workbench uses the hardware for some stuff, like moving pointer and windows, yeah.  That doesn't really help you any if you're writing a game, because then you have to program the custom chips directly yourself, if you want your game to be optimized.  So there's nothing "free" in that sense.
But that's neither here nor there, since I was only talking about how those other platforms haven't got an actual text mode.  I frankly don't know about the C64 or Speccy, because my first computer was an Amstrad CPC, and that's only got graphics modes.  You can PRINT text and on the same screen draw some lines or whatever, it just fucking works.  As far as the text characters go, they're just a table of bitmaps in memory, and you can change them at will.  If you print a character, then change it, and print it again, now you have two different 8x8 graphics on the screen.  If you switch to another graphics mode, it's just the screen resolution and available number of colors that change, that's it.
Anyway that doesn't really matter.  The point is, none of my hardware has a "text mode" since I left x86 behind some years ago.  The question was why don't I use their new KMS shit that's designed for GPUs, since maybe it kinda-sorta has a fallback for software-only plebs.  My answer is: why should I jump through an extra layer of abstraction code that's just going to send me back a place much like where I already am.  It's just a waste of time.
If you're gonna dream of turning Linux into a faggier Windows, it should at least include completely moving away from X to Wayland on all hardware and flexible SELinux profiles that are kept up to date by people who used to be package maintainers. Not even touching AppImage or any of that other shit without both conditions being met.
Replies: >>4871
>>4870
>flexible
>SELinux
You can choose only one.
Replies: >>4873
>>4871
Android-style permissions are my idea of flexible, with autistic profile ricing allowed for everybody willing. The same design as the proposed Wayland permissions. Right now my only barriers against glowing or skid code are the package QA from my distro, unmaintained AppArmor profiles and the illusion of security from firejail and bubblewrap; so I completely resent the statement that package maintainers are a hindrance. Worst comes to worst, it's off to Gentoo or any of the two BSDs that matter.
Replies: >>4874
>>4873
If you don't even trust the software you are using, what makes you think selinux or whatever faggotry solution would do it? May as well go full qubes and put each program in its own vm (but roll your own because systemd).
Replies: >>4880
>>4739 (OP) 
I can't say whether package managers are -dying- per se, but I do feel that package managers have problems that linux devs and users aren't willing to talk about.

Obtuse or ZERO usability for offline installs or updates

I'm stuck with one particular distro on a low-power SBC setup because the box simply doesn't have internet access, and that distro is the only fucking system where you don't need to connect to the internet and use a package manager to download updates.

Not many of these distros had a sane way to set up offline package management, IF you were actually able to do offline package management with that distro in the first place.

I consider myself median-experience in the terminal. I'm not a newbie, but god forbid anyone who -is- a newbie and afraid of the terminal tries to do an obtuse setup of package management.

Bonus points if you have to download an entire repo's worth of software just to install 1% of it on your offline system.

Getting lost in Dependency Hell is a fucking massive cockblock

One little thing that just won't fucking work or compile because "fuck you"? Well, good luck trying to fix something that's definitely your problem, and definitely 'not' your fault.

Now, install issues are typically a pain in the ass regardless if it's a downloaded binary or a package managed one. The difference is, the proprietary style install-or-uninstall is easier on single-package cases.

When issues happen and your system breaks after you uninstall some packages, who's at fault then? The user, for using software that has no sanity checks? Or the developer, for designing a package manager that destroyed a crucial element of the system it needs to function?

Of course, an intrusive software that makes itself as hard as possible to uninstall will always be a problem no matter if it was installed via downloaded binaries or package managers.

When RTFM goes wrong

I've noticed a problem with command-line software and their manuals: 'They are great for showing your every way the program works, but can be absolutely horrible for teaching you how to actually use it.'

My main problem with -nix manuals on certain software is that they will explain every single detail of the command and how to use it... and then seemingly never elaborate on 'basic procedure or examples'.

It's like giving someone a list of ingredients and settings to bake stuff with, and leaving the actual baking process as either a two-sentence footnote or completely leaving it out with the assumption that the cook knows what to do.

This isn't even just a linux problem, it's a -nix problem all around. Package managers are one of the worst places to have this because it's one of the most important tools for any sort of -nix system use.

Now granted, not all linux software has this problem, but there is a significant amount of software that -does- have this problem.

Silver lining? At least it's actual offline documentation and not some garbage where the only docs are a bunch of outdated wiki articles.

Package managers are still awesome too

With a package manager, you can download and install an entire suite of software in one command. 

That's amazing. That's efficient. That's linux.

It's just that some people are dead set on "the old ways", because changing shit up alienates them: they're used to how it used to work. The thing is, Microshit and big tech can make this exact argument as a reason not to use -nix OSes, 'but they're guilty of changing shit for no reason too'!
Replies: >>4877 >>4878
>>4876
Given the actual purpose of package management, offline use is going to be an inherently inferior experience. But it is a solved problem with a couple of solutions:
<Use an offline package manager
This is a simple 3-step process: record a list of all the packages your offline "target" system has in a file, use that file on an online "source" system to download all the appropriate packages you want, then bring those packages back to your offline "target", where they're installed per instructions. Here's a good example of software that automates this for APT-based distros, and runs on Linux or Windows:
https://cube.camicri.com/
<Use an offline repo
This could be an install DVD, a handpicked repo of your own, or (if you're feeling ambitious/lazy) even a mirror of the entire repo, which even today is surprisingly small, 20GB-200GB depending on architecture, distro, and edition, small enough to fit on an SD card. e.g.:
https://help.ubuntu.com/community/Debmirror
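If you'd rather not install anything extra, there's also a bare-bones manual version of the first approach, assuming source and target run the same release and architecture ("somepackage" is a placeholder):
# on the offline target: list exactly what apt would fetch, without fetching it
apt-get install --print-uris -qq somepackage | cut -d"'" -f2 > urls.txt
# on an online machine: grab everything on the list
wget --input-file=urls.txt
# back on the target: install the carried-over debs
sudo dpkg -i ./*.deb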
>This isn't even just a linux problem, it's a -nix problem all around.
Nah, it's a CLI problem. Good WIMP GUIs offer a clean learning curve from the most intuitive way to do something (e.g.: look in the menu bar, see commands listed, click one) to the fastest way (e.g.: keyboard combo shown in the menu). For the CLI to match that, it would need a middle ground between "enter opaque magic incantations you memorized" and "plod through man pages", aside from "crib black box recipe from some scriptkiddy on Stack Exchange". A reasonable attempt at something like this was the Commando utility Apple included with its ancient A/UX UNIX for interactively constructing CLI commands, which worked sort of like Handbrake does with FFMPEG, but for most of the UNIX coreutils.
>With a package manager, you can download and install an entire suite of software in one command.
I personally think the greatest unique strength of package management isn't installation and uninstallation, but the ability to update all software at once. It's a shame MS/Apple/Google are dead set on restricting their ersatz package management to membership in a "curated" walled garden with only the clunkiest possible means of "sideloading", instead of just offering an unrestricted API for random 3rd parties to use. But until they fix that, package management is going to remain a unique perk of freetard OSs.
>>4876
Reddit spacing makes your post hard to read.

>Obtuse or ZERO usability for offline installs or updates
Debian has apt-offline (apparently there is GUI, too) which is relatively easy to use (if I recall correctly). With Gentoo/Artix/Guix, couldn't you build binary packages and just deliver those?

>Getting lost in Dependency Hell Is a fucking massive cockblock
Doesn't really happen in Gentoo.

>When issues happen and your system breaks when you uninstall some packages?
It only has happened to me when I have used --force switch. I was using Debian or Xubuntu at the time and I just fixed it by installing it again or running apt-get install -f.

>When RTFM goes wrong
The man-pages certainly should contain more examples.

>Package managers are still awesome too
>With a package manager, you can download and install an entire suite of software in one command. 
And you can keep everything up-to-date with a single command!
Replies: >>4879
>>4878
>apt-offline
Is ded, I linked its successor, Cube, above.
>build binary packages and just deliver those
Only if you knew which deps were installed on the target.
>Doesn't really happen in Gentoo
Source-based distros are functionally identical to binary-based distros: if the distro's repo maintainer fugs up, bad things happen. Period.

The only real solution is incremental snapshots allowing you to rollback changes, either using backups like Timeshift, or a deterministic package manager like Nix.
>>4874
I don't trust it to not overreach. This isn't the same as expecting every piece of software to have kernel breaking exploits. My security model is software that is constrained to do only what it says it's doing. The words of developers have 0 worth, so this is a pragmatic level of QA. Qubes is overkill for this (and because VMs just mean stacking more shit on top expecting different results). Misbehaving software that is too complex to have a proper security profile can rot inside a chroot jail.

Sadly, none of what I listed above applies to any GUI software running on X as it has no isolation mechanisms. That's where Wayland comes in, hopefully as soon as Nvidia stops drinking retard juice for at least a little while.
Replies: >>4888 >>4889
>>4880
>hopefully as soon as Nvidia stops drinking retard juice for at least a little while
Well, last year they got proprietary driver HW acceleration working for XWayland and added GBM in the 495.29.05 driver. So at least things are headed in the right direction.
>>4880
>words of developers have 0 worth
>using nvidia proprietary driver
At least trade for an AMD gpu with mainstream driver.
Replies: >>4891
>>4889
>using nvidia proprietary driver
Nouveau performance is unusable for anything except dinosaur chips, where it's merely garbage.
>AMD gpu
Not as embarrassing as before RDNA, but going team red still means eating at least a 10% performance gap. Granted, this is a somewhat academic point for anyone except giant institutional buyers so long as cryptoponzi scalpers are making the dGPU market radioactive.
>With the Flatpak version of Dolphin, you will not be able to preview videos because it does not contain the packages mentioned above
<https://fedoramagazine.org/how-i-customize-fedora-silverblue-and-fedora-kinoite/
Is it true that Flatpaks can't use the packages you have installed either via the traditional package manager or by installing another Flatpak package? Or is the problem related to what features have been enabled/disabled at compile-time? How many more resources does ((( Fedora ))) silverblue/kinoite use when compared to Fedora Workstation or the KDE spin of Fedora?
And wtf is a "Toolbox" in the context of Fedora?
Replies: >>4907
>>4901
>Is it true that Flatpaks can't use the packages you have installed either via the traditional package manager or by installing another Flatpak package?
No, though they do needlessly install slightly different versions of the same packages and use those instead.
>is the problem related to what features have been enabled/disable at compile-time?
Yes. To reduce the horrific bloat Flatpak unavoidably causes, most Flatpak builds of a given program will omit a bunch of features. This is in addition to various features that are totally impossible with Flatpak because of its perverse design.
>wtf is a "Toolbox" in the context of Fedora?
A way to install and use simple utilities without root access, using containers with podman.
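In practice it's a couple of commands (sketch from memory, details may vary between Fedora releases):
toolbox create              # builds a rootless container matching your Fedora release
toolbox enter               # shell inside it; your $HOME is shared with the host
sudo dnf install htop       # dnf works in here without touching the host system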
You ain't seen nothing goy.
Freeciv is about to abandon all compilers and platforms other than Emscripten.
Replies: >>4940 >>4948
[attached: freeciv-1.0-screenshot-city.png]
>>4938
I still haven't played that game.  The very early releases with Athena toolkit are pretty nice looking.  The oldest I could find was an old version 1.5 tarball that also uses plain old libXaw.  The tiles look like the same respectably crapi amateur retro graphics, which is fucken' perfect!
Replies: >>4952
[attached: freeciv-3-qt.jpg, Freeciv-webgl_100.jpeg]
>>4938
The project split in 2007 between the original game that is played natively:
http://freeciv.org/
And a server/client derivative that is only playable in a web browser:
http://freecivweb.org/
Both projects are still putting out new versions, though the web fork is more active.
>>4940
>The oldest I could find was an old version 1.5 tarball
Wut? The very first thing you see listed in their git is the original 1.0a release:
https://github.com/freeciv/freeciv-1.0
Replies: >>4954 >>4983
>>4952
Thanks, but I think it's just the same tiles, so the 1.5 is good enough.  I'm not gonna build any of this anyway, I just like the crapi graphics.  I collect crapi games, you see (and I always hate new games and later versions that try to modernise).  As far as playing it, I'm probably a lot better off to just run the original game in DOSBox, especially since it's got excellent keyboard support (an important feature to me).
>>4952
Well (a > 5) and we all know zeros are bullshit sandnigger gayops so actually your wrong.
Replies: >>4985
>>4983
1.0 was never publicly released
>sandnigger gayops
That's mostly the glyphs, zero came from a little further east, by way of muh aryans.
Replies: >>4986
>>4985
Curryniggers then. Thank you for helping me be racist more accurately.
Packages aren't dying, it's just corporations and cancers upon the FLOSS community doing what they do best: reinventing the wheel 10 times worse and forcing it on everyone.
They did it with init, ifconfig, login, now they're doing it with packaging.
install gentoo
Why hasn't there been a package manager that uses BitTorrent? File integrity would be checked automatically by the protocol, and contributing to a project would be extremely easy (just spend a bit of your bandwidth seeding the packages you've downloaded).
Replies: >>5743 >>6370 >>6735
>>5742
It would need regular http(s) as a fallback, since you don't want a package to be unavailable because nobody is seeding. Distros like Debian already have mirrors across the world so the theoretical usefulness is also somewhat limited.
Replies: >>5771 >>5778
>>5743
This fallback already exists because of webseed/webtorrent, and most clients implement it. The reason it's not needed as much is that a lot of universities offer to host mirror packages of Linux distros for free. However, if you do go this route, plugins or patches for something like libtorrent would be needed to get rid of the dogshit SHA-1 hashing and to implement the BEP for signing files with RSA.
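To illustrate how cheap the publishing side is: mktorrent can embed both a tracker and an HTTP webseed, so any existing mirror doubles as the fallback (URLs below are made up):
mktorrent \
  -a udp://tracker.example.org:6969/announce \
  -w https://mirror.example.org/pool/main/f/foo/foo_1.0_amd64.deb \
  -o foo_1.0_amd64.deb.torrent \
  foo_1.0_amd64.deb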
>>5743
The distro is free to seed the torrent. All those mirrors are free to seed the torrents too. In fact, anyone can become a mirror by simply firing up a torrent client. 

You have used torrents before, haven't you?
>>5742
Bittorrent is blocked by some firewalls and ISPs. There'd also be the issue of regularly updating the torrent whenever new package versions come out.
Imagine if IPFS worked well enough to serve the entire package repository of a distro.
>>5742
There have been a few tries at this, here's one from 2013 that also discusses previous efforts:
https://wiki.debian.org/DebTorrent
>>4771
> https://flatkill.org/
Someone wrote a response to that: https://theevilskeleton.gitlab.io/2021/02/11/response-to-flatkill-org.html
<"muh access to your ~ doesn't matter if we block access to your .bashrc!!!1"
>what are ransomware???
Also:
>defending that Flatpaks contain outdated libraries (in this case a library with a known vulnerability)

tl;dr
Flatpak still sucks.
Replies: >>7371
[attached: stone_soup.jpeg]
>>7363
My favorite line there is the ever popular
>okay, so flatpak's security is broken as shipped. but! you can sorta fix it by manually piling additional optional software on top of it.
Which, of course, also applies to LITERALLY ANY SOFTWARE using ACTUAL sandboxing.

It's the tech equivalent of:
https://1d4chan.org/wiki/Oberoni_Fallacy
>>4788

>The build systems are broken, packaging could be an extension of the build system.
>cmake, premake, meson, scons, waf, et al. are nothing more than bloated reimplementations of autotools.

This hits the nail right on the head imo. Packaging should be as easy as clicking a fucking button, yet it isn't.
This is because of various factors, such as there not being a format to specify metadata (package description, tags, etc) until very recently (appinfo.xml), and build systems not forcing users to make their software compatible with distro packaging standards.

I regularly package software for gentoo, and most of the time is spent fixing broken build systems and software. Either because they hardcoded paths, install shit in wrong locations, hardcode utilities and CFLAGS, make it impossible to cross compile, modify data in /usr/share/.mono or /usr/share/myshitapp at runtime, don't have a way to select whether a dependency is enabled or disabled (instead querying pkg-config and relying on its results), bundle some shitlib without an option to unbundle it, download shit from the internet resulting in inconsistent builds, don't build correctly with the latest compiler, or use some language/framework that breaks any of the former (e.g. the cancer that are rust and go requiring you to use their package managers).

All of these issues have been solved in various ways in various build systems for ages, but software developers are never taught how to use them correctly. A well-behaved autotools project requires only three lines worth of a script for packaging, a well-behaved cmake project only needs an extra "inherit cmake" to be packaged. Instead, I'm spending most of my time patching your broken cmake files because you decided to fucking call "make" on a submodule despite CMAKE_GENERATOR being set to "Ninja", instead of using whatever mechanisms cmake has for this shit, or you're installing a submodule's shared library that's already available system-wide, causing who knows how many headaches, because it happens to honor -DCMAKE_BUILD_SHARED=ON, even though the rest of your project doesn't.

The only build system I've yet to have any headaches with is meson. "inherit meson" has never fucking failed me, and I wish more people used it. It actually does bundled libraries properly, allowing distros to just unbundle them easily, and upstream devs to be happy that their libraries are statically linked.
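For contrast, here's roughly the entire cost of packaging a well-behaved meson project as an ebuild (package name and URLs are made up; the meson eclass supplies the configure/compile/install phases):
# my-app-1.0.ebuild -- everything below the metadata is inherited
EAPI=8
inherit meson

DESCRIPTION="Hypothetical well-behaved meson project"
HOMEPAGE="https://example.org/my-app"
SRC_URI="https://example.org/releases/${P}.tar.xz"

LICENSE="GPL-2"
SLOT="0"
KEYWORDS="~amd64"

DEPEND="dev-libs/glib:2"
RDEPEND="${DEPEND}"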

The only thing other than a good build system that we need, is a distro-agnostic method of specifying dependencies. Solus OS has the right idea on this, using "(bin)command-name", "(pkgconfig)library-name" or even "(python)library-name" as dependency specifications, since the names that shit is actually called programming-language wise are distro-agnostic, and their packages export a "provides" field, so the requirements can always be found regardless of how the packages are split up or named in the distro. You could probably generate a good chunk of these "provides" on a cross-distro basis by just scanning their package file listings, and make a tool to convert these dependency specifications to actual package names.

And with all of this, you essentially now have a system where a well-behaved dev just presses a fucking button and gets well-behaved and compliant package build scripts for 20 distros. Add some docker magic and you would even get the binaries and publish the repos.
But of course, all of this is wishful thinking. There are still library compatibility issues (what if a dev wants to statically link a slightly newer library? need toggles for this now... also rebuilds on rolling distros), and the tool would get hopelessly complex.

Flatpak is the easier solution -- just shove packaging standards and compatibility out of the window, give the dev a sandbox to fuck up anything in, and don't let them touch the system.
I don't mind flatpak's existence, but I do believe it's a broken remedy for a broken system, and it makes software devs even more greedy about their compliance. Attempting to package flatpak apps is sometimes an exercise in insanity, since devs are increasingly relying on bleeding edge libraries and specific commits of them, and sometimes those libraries just can't be statically linked.
My biggest hope right now is that flatpak simply becomes an option for the impatient or specific needs that distros just can't package, and distro packages continue to thrive with stable, even if a bit outdated, and well-integrated software. The main reason I use gentoo is because it makes it piss easy for me to patch out deficiencies in software (and not have to do it for every release) -- flatpak and every language enforcing static linking (npm, go, rust, dotnet, ...) are the antithesis to this.
I fear that instead it will become a vehicle for devs to be lazy and care about distro packaging even less than they already do. Talking to some devs about the virtues of dynamic linking and of working together with packagers to ensure their software integrates with a distro properly (the packagers can also help them do QA, being always the first testers) and to take care of bugs/exploits in libraries, reveals to me that most of them only care about their software, not about contributing to an operating system's ecosystem. They all want their little island, they all want to be incompatible with everything, and flatpak is perfect for that.
Replies: >>7782 >>7786 >>7792
>>7778
>All of these issues have been solved in various ways in various build systems for ages
It sounds like the problem is teaching developers to do that. Most developers did a degree in computer science, not software engineering. The practice of designing, implementing and building software is similar to the trades: it is important to learn good designs by example and by doing. People used to learn from a master of the craft, or an experienced worker, to develop their own skills.
Replies: >>7786
>>7778
>>7782
Sounds like a good opportunity to start writing some sort of "packaging handbook"... Since there are obviously anons with experience on the topic, why not pool resources and good practices into one place?
I'd rather just compile an .exe for Windows and let winefags mop it up. It just werks. I'm not playing your linuxnigger games, you've had 30 years to fix this shit.
Replies: >>7788
>>7787
Either distribute the source code or create a Flatpak or an Appimage you double wigger.
[attached: good_runtime_environment.jpg]
>>7778
>Packaging should be as easy as clicking a fucking button, yet it isn't.
Not a packager myself, but in my experience, when "./configure && make && make install"-ing tarballs, the most annoying pitfall by far compared to using a (whether binary or source based) package manager is the inability to automagically retrieve (or even enumerate a comprehensive list of!) deps. As I mentioned upthread, I'm aware of at least one piece of software that attempted to solve this problem:
https://help.ubuntu.com/community/AutoApt
Of course, it died almost a decade ago.
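For what it's worth, Debian-family distros still have apt-file, which at least turns a missing header into a package name, though that's a far cry from automatic dep retrieval:
sudo apt install apt-file && sudo apt-file update
# ./configure died with "gtk/gtk.h: No such file or directory"?
apt-file search gtk/gtk.h   # prints the -dev package(s) shipping that header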
>I don't mind flatpak's existence, but I do believe it's a broken remedy for a broken system, and it makes software devs even more greedy about their compliance.
I do. Ditto anything using the same placebo containerization (e.g.: AppImage, Snap, Steam), on the basis that they are a tumorous burden on billions of end-users as well as an inherent attack against the control of the OS over its installed software and their interoperability. HOWEVER, your complaints about your woes arising almost entirely from the sloppiness of the least competent devs make me think something almost exactly inverted from Flatpak, solely intended for devs, would be useful:
>most of the time is spent fixing broken build systems and software
>is downloading shit from the internet resulting in inconsistent builds
>is using some language/framework that breaks any of the former (e.g. the cancer that are rust and go requiring you to use their package managers).
>give the dev a sandbox to fuck up anything in, and don't let them touch the system
>Add some docker magic and you would even get the binaries and publish the repos
And what >>4796 said:
>You could do all your compilation in a barebones CentOS/Rocky/Alma VM (or a chroot but I haven't tested that)
As a last resort to get sloppy open source devs in line, what if in addition to tarballs, it became a common requirement to offer some sort of standardized diff'd VM image capable of building hash verified binaries without Internet access, reissued for each release? All tools, all deps, all caches, all configs, bundled together and ready to run. It would be useful in a variety of contexts, such as forcing devs to make sure they could get their software to build in something other than their own environment, allowing other devs to figure out how to reproduce bad devs' environments, and as a last resort for quickly getting a nightly buildbot on its feet.
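A low-tech approximation is already possible with Docker as the jig (image and target names here are made up): bake the toolchain, deps, and caches into an image while online, then prove the release build needs no network:
# while online: image with toolchain + deps + caches baked in
docker build -t myapp-builder .
# at release time: build with networking cut off, then record the artifact's hash
docker run --rm --network=none -v "$PWD":/src -w /src myapp-builder make
sha256sum myapp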
>My biggest hope right now is that flatpak simply becomes an option for the impatient or specific needs that distros just can't package, and distro packages continue to thrive with stable, even if a bit outdated, and well-integrated software.
That is my nightmare scenario. These pseudo-container platforms exist as the result of Linux being broken in a way no other OS is. Either Linux will be fixed, or Linux will be crushed to death under such software.

Making packaging easier would be good, but it would not be enough. The only solution is to do what every major OS other than Linux does: Stabilize the ABI (already slowly underway), and eliminate "dependency hell" with an explicit mechanism to effortlessly install and use multiple simultaneous versions of the same deps for software that explicitly specify minimum/maximum versions (None popularly adopted, but various exist: Nix, GUIX, Gobo, etc.).
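For a taste of what that mechanism looks like where it already exists, Nix can put two versions of the same lib side by side with no conflict (attribute names drift between nixpkgs revisions, so treat these as illustrative):
nix-shell -p openssl_1_1 --run 'openssl version'   # one store path
nix-shell -p openssl_3   --run 'openssl version'   # a different one, no clash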
>>4739 (OP) 
But why is pacman liked so much? I think its CLI doesn't make much sense, and removing orphaned packages is difficult. There is no equivalent of emerge --depclean or apt autoremove. You have to use pacman -Qdtq | pacman -Rs - but it will also remove AUR packages. If you want to remove packages that are no longer in the repos, as a workaround you can use rua upgrade to upgrade AUR packages and then manually pacman -Rs the packages that aren't found in AUR or official repos (rua will report these packages). Even Void's xbps has xbps-remove -RoO. I think there used to be issues with the pacman mechanism that saves new config files? It's also somewhat annoying that upgrades sometimes need "manual intervention" but the package manager or PKGBUILD script doesn't print anything; I check my RSS feeds before upgrading because of this. On Gentoo, you get a new post in eselect news list (and you get notified).
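Since half that paragraph is incantations, here they are again with annotations (run the removals as root):
# Arch: list true orphans (installed as deps, nothing requires them), feed to removal
pacman -Qdtq | pacman -Rs -
# Void: orphans plus obsolete packages in one built-in flag
xbps-remove -RoO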

re. pacman's CLI:
xbps is split into multiple binaries, which makes it more intuitive (and it follows the Eunuchs philosophy more closely); apt has subcommands; and emerge has --long options that make sense (in addition to short options). Why do you need to run pacman -Sy instead of just pacman -S? Why does -y = --refresh? It makes no sense to me. Why does pacman -S foobar install packages? The (only?) saving grace pacman has is that PKGBUILD scripts are easy to write and read (and that it's not apt/yum/dnf). But the same applies to: *BSD ports, APKBUILD (alpine linux), Slackbuilds (although package management is a bit painful on Slackware), whatever GoboLinux packages are called, and Gentoo's ebuilds (which are slightly more complex but, on the other hand, also more powerful). GNU Guix packages are also pretty easy to read/write, provided you know R5RS and Guile (the fact that they didn't invent a new language is a big pro for me when comparing Guix to NixOS).

I think I might switch to Void Linux and just package the few programs that aren't in Void repos, yet.


Some resources if you are interested in becoming a package maintainer
 Generic tutorial for new package maintainers: https://github.com/jubalh/awesome-package-maintainer 
>https://github.com/void-linux/void-packages/blob/master/CONTRIBUTING.md
>https://wiki.alpinelinux.org/wiki/Creating_an_Alpine_package & https://wiki.alpinelinux.org/wiki/Alpine_Linux:Contribute & https://wiki.alpinelinux.org/wiki/Category_talk:Developer_Documentation
>https://wiki.archlinux.org/title/PKGBUILD & https://wiki.archlinux.org/title/Arch_Build_System & https://wiki.archlinux.org/title/Namcap & https://wiki.archlinux.org/title/Category:Package_development
>https://wiki.gentoo.org/wiki/Project:Proxy_Maintainers (also check out GURU) & https://devmanual.gentoo.org & https://wiki.gentoo.org/wiki/Basic_guide_to_write_Gentoo_Ebuilds & https://wiki.gentoo.org/wiki/Category:Contributing_to_Gentoo & https://wiki.gentoo.org/wiki/Category:Gentoo_development
<(I also found these 2 videos: https://yewtu.be/watch?v=GY0NAAVp5mE & https://vid.puffyan.us/watch?v=3mwNpEowuVU Also, join #gentoo-dev-help @ Libera.chat)

>(yuck!) https://wiki.debian.org/Packaging/Intro
>(yuck!) https://docs.fedoraproject.org/en-US/package-maintainers/Packaging_Tutorial_GNU_Hello/ & https://rpm-packaging-guide.github.io & https://invidious.kavin.rocks/watch?v=woFtdIS6x0Q

My point here is that becoming a package maintainer is not hard (unless the program you want to package uses a retarded/broken build system). You only need to know Bash and how to build the program from a source tarball.  Maintaining packages does eat some of your time, though. Some services for package maintainers: https://openbuildservice.org & https://copr.fedorainfracloud.org & https://repology.org/projects/
>>4739 (OP) 
>but is the heart of Linux's ecosystem dying?
Isn't this the future you chose Anon? They tell me you can't have your cake and eat it too. If you want to pack open-sauce efforts with LGBTFAG+, stronk, independynts, and niggers, then you can't really expect a healthy ecosystem to follow-on from that right?

The Western civilization is dead now b/c Jew's intentional efforts at such. The groups above are their golems for such evil handiwork. Pajeets are bad, but these nuSoycaf groups are much worse. You can cherry-pick exceptional cases pro & con, but if the general drive is to literally pack the software industries with morons, what other outcome is possible? Rely on "AI" to fix everything for us now that we royally-fucked it? Lol, what could possibly go wrong?

Don't call it a grave.
>>4758
>Though I can't think of any source-based distros that aren't rolling
crux
Has anybody tried using Slackware, Crux or similar distro with Guix package manager, Pkgsrc or Ravenports? I was wondering if it's usable and whether it's a good idea.
Replies: >>12015 >>12030
>>4862
Don't forget Nix. It's both a distro and a package manager.
>>11847
>I was wondering if it's usable and whether it's a good idea.
It makes it easier to distro hop since your package manager comes with you. It's also useful if your company IT only lets you use one distro but it doesn't have the packages you want.
>>11847
At that point just run Slackware or Guix or whatever, why bother?

I have a VM with OmniOS and pkgsrc. Better than Linux, but then, so is almost every other OS.
>>4842
>The glibc ABI is stable on Linux, anything built targeting an older version will reliably work on a newer version.
https://github.com/ValveSoftware/Proton/issues/6051
https://bugzilla.redhat.com/show_bug.cgi?id=2129358
https://abi-laboratory.pro/?view=timeline&l=glibc
Replies: >>12060
>>12059
>program uses library in ways not specified by the documentation
>update changes implementation details
ABI still not broken.
Replies: >>12062
>>12060
>>program uses library in ways not specified by the documentation
Where is the documentation for DT_GNU_HASH? And what standard says it's the default symbol table?
https://blog.hiler.eu/win32-the-only-stable-abi/
>ABI still not broken.
<all those symbols removed in nearly every glibc version
Here's a (You) for effort
Replies: >>12063
>>12062
>Where is the documentation for DT_GNU_HASH
Exactly. It's an implementation detail. Go back to Windows, proprietary boy.