>>8668
>Every intelligent agent has goals. Otherwise, the intelligent agent is not detectable,
Agreed. It is for that reason I have long come to think that the idea of an "emotionless" logical AI is unrealistic, because what I suspect emotions boil down to is a state weighting mechanism for the formulation and prioritization of goals, a sort of tiebreaker for situations where "what is my next task?" has no direct answer. So any AI tasked with anything beyond the most trivially clear assignment must have something like emotions to avoid simply going limp when it runs out of logical instructions.
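To make that concrete, here's a toy sketch of the "emotions as tiebreaker" idea; every name in it is hypothetical, invented for illustration, not taken from any actual AI architecture. When no goal has an explicit logical priority, affective state weights decide instead of the agent stalling:
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Goal:
    name: str
    logical_priority: Optional[float]  # None = "what is my next task?" has no direct answer
    affect_tags: tuple                 # emotional states this goal appeals to

def pick_next_goal(goals, affect_state):
    # If any goal has an explicit logical priority, pure logic decides.
    ranked = [g for g in goals if g.logical_priority is not None]
    if ranked:
        return max(ranked, key=lambda g: g.logical_priority)
    # Otherwise the emotion weights act as the tiebreaker, so the agent
    # doesn't go limp when it runs out of logical instructions.
    return max(goals, key=lambda g: sum(affect_state.get(t, 0.0) for t in g.affect_tags))

goals = [Goal("explore the unmapped room", None, ("curiosity",)),
         Goal("recharge", None, ("fatigue",))]
print(pick_next_goal(goals, {"curiosity": 0.7, "fatigue": 0.4}).name)
# -> explore the unmapped room
```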
>Consider what makes two agents work independently instead of as one.
Soldiers in an army, organs in a person, cells in an organ, proteins in a cell? Beyond common goals, the distinction is somewhat arbitrary. I think instead of information, a better criterion for independence is dispensability. To accomplish some goal, can the entity in question be moved to a different role? Replaced? Modified? Duplicated? Perfectly? Outright destroyed? At insignificant cost? We don't have any examples of sapient minds where most of that is true, except perhaps extremely fanatical people. For many types of AI some or all of those would be inherently true, and that would make them much less like humanity, or even pets, and far more like tools.
>Those examples exhibit an inevitable conclusion. AIs serving humans is contradictory.
The literal message of both stories is "computer programming is hard", which pretty obviously stands in for the broader message of "precise language is hard", both typical of the era's SF. Of course, also typical of the era, both authors were seasoned professional scientists who also dabbled in engineering. This contrasts with more straightforward "killer AI" stories, largely a product of later eras of SF written by engineers, computer dweebs, laymen whose sole source material was other SF, or, worst of all, delusional social science majors.
>What is even humanity?
Evolving. Which is not the same thing as intentional self-destruction, at least not in principle.
>>8708
>This is because videos that are actually trying to teach or explain a subject in detail do not have to worry about attracting Attention Deficient Hyperactive Zoomers
Nah, that's just how video sites are. It's like complaining about the dumbass covers on retail pocket paperback novels: everyone does it.
Although I disagree on one point, not just for superficial reasons but because it fundamentally conflicts with "what justifies something being a video essay instead of, e.g., a text essay or infographic?":
>A video duration of less than 10 minutes
A good video is good because it does something impossible or ineffective in any other medium. In the case of video, that means something incorporating graphics and motion, possibly also sound, to communicate something that would be less clear in any other form. Contrariwise, the most common failure mode for video essayists is "camera pointed at talking head (entire video track could be deleted) reading a text essay or interviewing over Skype (entire audio track could be STT'd) maybe with some backing music and occasional stock video snippets or pictures of tangential relevance (poor multimedia integration)". Needless to say, that failure mode typically corresponds to very long runtimes and low production values, while the ideal format can communicate difficult points quickly at the cost of high production values.
The classic specimen of a perfect video essay from TV was The Mechanical Universe, which used simple 2D or 3D CGI with minimal narration to explain complex abstract math, broken into segments, some under a minute long. Something from YouTube similar in that regard is the aforementioned GameHut.