I was looking through the video editing thread on /v/ and figured it would be appropriate to have something similar here now that /tech/ exists. Except I think it would be ideal if the focus was on free software solutions, and also encompassing a broader spectrum of content rather than just videos.
In this thread give tips and ask questions about image editing, video editing, encoding, and anything else you think might be relevant for creating content.
Recording & Streaming
>OBS (Open Broadcaster Software)
>HandBrake (video transcoder built on FFmpeg's libraries)
>youtube-dl (Downloading video files from virtually any site)
Let me mention gallery-dl, an extremely useful tool for gathering large volumes of images and other media from most online art gallery sites. It even takes Nitter/Twitter URLs, although the default output file paths are weird. You can find it easily with a search, either on GitHub or on PyPI.
My copy of Vegas is getting kind of old, so I figured I'd give it a shot. It's pretty lean and smooth, which I like, but every time I jump to a different frame in the timeline it plays the audio at that specific point. Is there any way to disable this without disabling the audio from the timeline itself? I'm on the wangblows version if that makes a difference.
unironically a good video editor, what other video editor allows you to render a spinning 3d cube on top of a video?
Also OBS decided to fug my shit up, might have to look into the logs if it has any. So I had to fall back to the chad ffmpeg:
ffmpeg \
-f pulse -ac 2 -i default \
-r 60 \
-f x11grab $(slop -l -n 1 -f '-video_size %wx%h -i +%x,%y') \
-b:v 4M \
-b:a 80k \
output.mkv
As long as you don't move the window around it should be ok, but it isn't as good as ffmpeg's gdigrab input on Windows.
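For reference, a minimal sketch of the gdigrab route on Windows (the window title, audio device name, and bitrates here are placeholder assumptions, not anything specific):

```shell
# Capture a window by title with gdigrab, audio via dshow.
# "Some Window" and "Microphone" are hypothetical names; list your
# real devices with: ffmpeg -list_devices true -f dshow -i dummy
ffmpeg -f gdigrab -framerate 60 -i title="Some Window" -f dshow -i audio="Microphone" -b:v 4M -b:a 80k output.mkv
```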
I would also recommend Aegisub, absolute god tier subtitle editor - pretty sure most dedicated fansubbers use this program.
Is there an equivalent of WebM for Retards on Linux? It doesn't work under wine and I don't want to put in 100 different modifiers in a terminal just to make a file under 16MB or whatever the limit is for my Estonian bus enthusiast forums. A GUI is simpler for all the options it has. QWinFF doesn't have the features I want, most specifically how it dynamically changes commands to get it under a certain filesize.
HandBrake used to let you target file size in an earlier version, but they removed it for some reason. You could maybe try using that version, but it's quite old (0.9.5). A cleaner solution is probably some bash scripting, but that's out of my depth.
The target file size feature in WebM for Retards never worked properly for me anyway. These days I just do the bitrate calculations manually: target bitrate = target file size / duration. If you're using HandBrake you might need to account for the audio bitrate as well. Hopefully some other anon has a better, more retard-proof solution up their sleeve. I have almost no clue what I'm talking about.
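To make the arithmetic concrete, a sketch of that calculation in shell (all the numbers are assumed examples: 16 MB limit, 60 second clip, 96 kbps audio):

```shell
# bitrate = size / duration, minus audio; everything in kilobits
limit_mb=16
duration=60      # clip length in seconds
audio_kbps=96    # planned audio bitrate
total_kbits=$(( limit_mb * 8 * 1000 ))             # 16 MB ~ 128000 kbit
video_kbps=$(( total_kbits / duration - audio_kbps ))
echo "try: -b:v ${video_kbps}k -b:a ${audio_kbps}k"
```

Shaving a few percent off video_kbps leaves headroom for container overhead.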
> I don't want to put in 100 different modifiers in a terminal just to make a file under 16MB or whatever the limit is
It's 32MB here (literally read the post form, newfag), but are you really making videos that are so complex? My default use case for encoding a webm or MP4 is like:
ffmpeg -i input.mp4 -b:v video_bitrate -b:a audio_bitrate -s resolution -fs 16M -f webm output.webm
which is hardly difficult to modify with a bit of trial and error, since the only things I ever really change are the video bitrate and the resolution. I imagine you must be making something more complicated.
Unless I'm mistaken -fs just cuts off the video when it reaches the file size.
Is there a realistic replacement for GIMP at this point? Every update makes the UI worse and tools worse. It's become difficult to use these days.
Try Krita. It's targeted at mostly artfags, but it can also be used as an image manipulation program.
I have some txt files that I need to convert to mp4, any reliable program to do that?
How do you calculate estimated webm size, to help you pick bitrates without completely blind trial and error? I remember there was some method to do it.
>too many packets buffered for output stream 0:1, conversion failed!
On my first day too.
I don't know that much about ffmpeg and conversion, but the only approach whose size is reliably calculated is constant bitrate: just take the bitrate and multiply it by the duration in seconds, keeping in mind that bitrate is measured in bits. However, some sources say that constant rate factor (CRF) gives a less blocky picture during sudden movements than CBR does. With CRF there is no size estimation at all, though; if you need to hit a target size, the usual method is two-pass encoding with a bitrate computed from size / duration. I've also seen a manual method mentioned that encodes 5-10% of the video's frames in one pass with your chosen settings; it's supposed to be fast enough to estimate the final size and play around with the settings, but I have no idea if it works.
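For the two-pass route, the shape of the command pair looks roughly like this (filenames and bitrates are placeholder assumptions; the target bitrate comes from the size/duration math):

```shell
# Pass 1 only analyzes (no audio, null output); pass 2 does the real encode.
# Both passes must use the same codec and video bitrate.
ffmpeg -y -i input.mp4 -c:v libvpx-vp9 -b:v 2M -pass 1 -an -f null /dev/null
ffmpeg -i input.mp4 -c:v libvpx-vp9 -b:v 2M -pass 2 -c:a libopus -b:a 96k output.webm
```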
On a side note, does anybody have that ffmpeg image guide that circulated around boards for video posting?
How did no one post this.
thank you so much i was looking for this
When I make webms, I usually consult >>397 and the presets on https://developers.google.com/media/vp9/settings/vod .
it only works on windows but I like paint.net a lot
Krita is pretty good. The UI is a little weird but nowhere near as janky as GIMP's, and it has a few useful features that GIMP never had (like an actually usable color-to-alpha tool, or rectangles).
How can I split a single FLAC file into multiple FLAC files with the cue file, preferably with ffmpeg?
I like Openshot for video editing, although I think I had problems with it crashing when I used it on Linux.
Thanks, been looking for something like this and was worried I'd have to write something myself. Anything you know of for hashing large numbers of images and moving unique ones?
How does blender's video editing mode compare to kdenlives and such?
Audacity a shit, any alternatives? Not looking for anything fancy just very basic audio editing and maybe a few effects.
Thanks for this. I have a question; when downloading from tumblr, does it download the original uncompressed images or only the publicly available 1280px versions?
>Is there any way to disable this
Settings -> (uncheck) Scrub audio
>Also OBS decided to fug my shit up
What happened? I don't use it much but it runs pretty smoothly, granted I only use the internal ffmpeg encoder and never the stock one.
I already managed to capture application audio using it but there's always a delay of a few seconds and the audio sometimes gets choppy. Ironically I don't have that problem with OBS.
Use webm.py: https://github.com/Kagami/webm.py . It's a small wrapper for ffmpeg with very sane encode settings; I've backported them into my own vanilla ffmpeg scripts, it's that good.
To get a 16MB video you just do webm -i input.mp4 -l 16
Disable tool grouping and rearrange the tools any way you want, that should fix most/all UI woes. Alternatively downgrade to 2.8.22: https://download.gimp.org/mirror/pub/gimp/v2.8/windows/
>How do you calculate estimated webm size
Exact formula is covered here: https://trac.ffmpeg.org/wiki/Encode/H.264#twopass
On windows you can use CUETools, on linux you can use flacon (needs flac to be installed). This is the easiest, graphical way.
Otherwise google found this for ffmpeg: https://gist.github.com/Theldus/bf7b9fd7370f4ea4c57da5cb596e9da9
It crashes on windows too, no reason to use it when Shotcut exists.
obs works on my machine now
Any ffmpeg experts that can help me? I'm trying to create a program that makes creating webms and gifs easier, think of it like 'Webm for Retards' 2.0.
The idea is that you can accurately pick the time/frame that you want and see a thumbnail of the current time, but my problem is that I don't know how to accurately measure time, specifically frame-by-frame.
To give a more concrete problem: when the video is seeked to a specific HH:MM:SS time, I want to be able to see the frame number and/or press a key to move to the next frame. How could I accomplish this? Is there some trick I could use to convert a timestamp into a frame number and vice-versa?
Oh and I can get info with ffprobe as well, in case that has a way.
Cool, not an expert dude, but ffmpeg has dead easy video-to-gif commands; all you have to do is simplify them and call them from the UI, right? You can even do variable fps, feasibly up to 60 (I don't have the commands on hand right now). And if you don't already know, you can set cut times in ffmpeg; normally that requires you to watch the video and memorize the numbers for what you want to cut, which is exactly what your GUI would eliminate.
>To give a more concrete problem: when the video is seeked to a specific HH:MM:SS time, I want to be able to see the frame number and/or press a key to move to the next frame. How could I accomplish this? Is there some trick I could use to convert a timestamp into a frame number and vice-versa?
I don't know but read ffmpeg source code to get inspired and maybe you can rip it off, I know it has neat stuff like frame counters of a given input (on conversion.)
How do I improve the quality of gifs made with ffmpeg? Mostly talking about dithering and the color palette. I tried searching but can't figure out how to apply the shit I find. Pics related, it bleeds dithering out of its ass even though the image is full of flat solid colors.
The command used for pic related was:
ffmpeg -i "/path/to/file.mkv" -ss starttime -t duration -filter:v scale=w=400:h=400:force_original_aspect_ratio=decrease -y "path/to/out.gif"
How can I modify it to remove dithering, limit the number of colors to somewhere between 32-128, or make the color palette less retarded? I can't figure out the answer to any one of those questions.
The way Shotcut does it is represents time as HH:MM:SS:FF where FF is the frame number in a given second SS. For example a 60 FPS video would have FF values from 00 to 59 for every second... There was an alternative method from what I recall where you could just input HH:MM:SS.MS where MS is milliseconds and it automatically converts it to the first format... You could look into implementing that system.
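Assuming a constant frame rate, the conversion between a timestamp and a frame number is plain arithmetic; a sketch (the fps and timestamp below are made-up examples, and this breaks on variable-frame-rate files):

```shell
# frame = round(seconds * fps); seconds = frame / fps
fps=29.97
ts=83.417            # seconds, i.e. 00:01:23.417
frame=$(awk -v t="$ts" -v f="$fps" 'BEGIN { printf "%d", t * f + 0.5 }')
back=$(awk -v n="$frame" -v f="$fps" 'BEGIN { printf "%.3f", n / f }')
echo "$frame $back"
```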
The dithering algorithm in your picture is called "Positioned" and it's what MS Paint uses lol, you're better off using an algorithm called "Floyd-Steinberg". Pics related (in order) show the difference between the original image, no dithering, positioned dithering, Floyd-Steinberg (normal) and Floyd-Steinberg (reduced color bleeding).
Now I don't know how to tune ffmpeg for creating GIFs however here is a very simple way to do it using GIMP:
>export your video as PNG/JPEG frames using ffmpeg (resize if needed)
>make a new file in GIMP with the dimensions of the video and delete the default Background layer
>drag all frames into the GIMP window and they will be inserted on top of each other in a stack way (from bottom to top 1,2,3...etc)
<save the file as XCF (backup)
>click Image -> Mode -> Indexed
>- choose the maximum number of colors (128-256 is good)
>- check "Remove unused"
>- uncheck both "dithering transparency" and "dithering text"
>- choose Floyd-Steinberg dithering or F-S (reduced color bleeding) and click convert
>click File -> Export as and type filename.gif then go from there
Forgot to say, before you export go to
>Filters -> Animation -> Optimize (for GIF) or Optimize (Difference) (it doesn't matter much)
to reduce file size as much as possible.
>represents time as HH:MM:SS:FF where FF is the frame number in a given second SS
That won't make sense for videos with 29.97 fps for example. I also don't want to approximate the duration of frames in milliseconds, because that may lead to frames being skipped when you jump forward frame-by-frame, supposedly the frames don't always have the same duration.
For now I'm jumping forward 10 milliseconds when you press the arrow keys, and 100 if you hold shift. It's good enough for finding the frame you want, but it's obviously not ideal.
I need it for the aforementioned program, so Gimp is not an option. Exporting the animation as frames is something I didn't consider though, I suppose I could have that option so they can be used in a different program to finish the job, maybe even use imagemagick to create the final animation automagickally.
>That won't make sense for videos with 29.97 fps for example.
You can probably do HH:MM:SS:FF.FF who knows. Anyway vanilla ffmpeg supports HH:MM:SS.MSS where MSS is milliseconds so you can just go with that.
See the color mapping section.
>See the color mapping section.
This is exactly the kind of shit that I don't know how to apply. I get how to generate the palette since all I need to do is add ",palettegen" to the filters, but I have no idea how to then use it because all the examples use -filter_complex with seemingly completely different syntax and the ffmpeg documentation doesn't tell me anything about what the difference is.
I think this is a two-pass process: first pass generates the palette and stores it in a temporary location, then second pass uses that palette to encode. Check the "usage" section in the link.
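For reference, the usual two-command shape (paths, sizes, and color counts here are placeholder assumptions; the bracketed labels just name streams: [v] is the scaled video, [1:v] is the palette image, i.e. the second input's video stream):

```shell
# Pass 1: build a 128-color palette from the scaled video
ffmpeg -i input.mkv -vf "scale=400:-1:flags=lanczos,palettegen=max_colors=128" palette.png
# Pass 2: encode the gif, mapping colors through that palette
ffmpeg -i input.mkv -i palette.png \
       -filter_complex "scale=400:-1:flags=lanczos[v];[v][1:v]paletteuse=dither=floyd_steinberg" \
       output.gif
```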
I know it is, but the syntax suddenly goes from a relatively understandable "s1=xxx,s2=zzz" style to having brackets and semicolons in a nonsensical pattern.
Anyway I did get it to work but the results are worse than they were at the beginning. It generates a stupid palette that completely misses certain colors, the dithering becomes incredibly noisy and bloats the filesize, and the only dithering methods that even use the generated palette remotely correctly in the first place have the most noise and create a file that's literally 20 times larger than the non-paletted one. Even if I disable dithering, the filesize is still about 12% bigger despite looking like complete trash.
I think ffmpeg is a lost cause and I just need to use a different program to make a gif from extracted frames or something.
And here's an example of it missing colors.
Can't believe I spent an entire day fighting with that shit. Just exporting frames and putting them together in imagemagick immediately gives great results. Only problem is that it's having a stroke and jumping to the first frame over and over.
Look, I don't mean to discourage you but I think you have a long way to go in understanding how ffmpeg works to begin with before even attempting to leverage it with an automated script, let alone your own tool.
You need to google more until you get to a point where you already know how to do everything through commands, and only need a simple program to call these commands.
By the way if imagemagick isn't cutting it try graphicsmagick.
the truth is that's actually the best gif itt
>you have a long way to go in understanding how ffmpeg works to begin with before even attempting to leverage it with an automated script, let alone your own tool
I'm already doing it though. The problem is that I've googled everything I'm capable of and have exhausted my ability to get any more out of it. ffmpeg just doesn't seem to be capable of exporting good gif animations period.
Meanwhile imagemagick gives a glitchy result even though I literally copypasted the exact command someone mentioned, there's nothing esoteric about it, just input frames output gif.
magick frame_*.png output.gif
The command I used for the previous gif was a bit different but it's glitched no matter what. There's nothing wrong about the frames either, pic related.
Well yeah, that's the first gif made with imagemagick instead of ffmpeg.
That command was written for linux, take note of the * implying a wildcard. Windows doesn't work that way.
Well it successfully finds the frames anyway. It's not a call to Windows, it's something I'm giving as an input to imagemagick.
>Well yeah, that's the first gif made with imagemagick instead of ffmpeg.
I mean in that it's terrorizingly glitching
What the fuck, all I had to do was rename the files to frame_001 etc instead of frame_1. I can kind of guess why, but I don't understand why it only jumps to the first frame every time instead of repeating frame_1, frame_10, frame_11 or similar pattern. Maybe they were too similar so they got merged into a single frame which is why I only saw a single incorrect frame at a time in gimp.
Forgot, but here's the commands in case anyone wants a way to make gifs.
First, use ffmpeg to create a list of images from the video frames (as seen in >>1057 )
ffmpeg -ss 00:00:00.000 -i INPUTFILE -t 00:00:00.000 -filter:v scale=w=400:h=400:force_original_aspect_ratio=decrease OUTPUTPATH/frame_%04d.png
Put the start time after -ss and the duration after -t. w=400:h=400 is the maximum width and height of the gif you want, around 400-600 is good for gifs. Replace INPUTFILE and OUTPUTPATH with whatever you want, I recommend outputting to an empty folder since it creates tons of files.
If you want to crop it, you can put a crop filter before the scale.
You can also crop in the next step, but I don't have the command for that.
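In case it helps, a sketch of the crop variant of the frame-export command (the crop values, width:height:x:y measured from the top-left, are made-up examples):

```shell
# crop=640:360:100:50 takes a 640x360 region starting 100px from the left, 50px down
ffmpeg -ss 00:00:00.000 -i INPUTFILE -t 00:00:00.000 \
       -filter:v "crop=640:360:100:50,scale=w=400:h=400:force_original_aspect_ratio=decrease" \
       OUTPUTPATH/frame_%04d.png
```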
Next, use imagemagick to combine the frames into an optimized gif animation:
magick -delay 1xFRAMERATE INPUTPATH/frame_*.png -fuzz 2% -layers OptimizeFrame -layers OptimizeTransparency +fuzz OUTPUTFILE.gif
FRAMERATE must be the framerate of the video, for example 1x30 if the video is 30fps, see below. Again replace INPUTPATH and OUTPUTFILE with whatever. You can increase the fuzz value to improve filesize (it ignores pixels whose color changes less than 2%), but it may create artifacts, 1-3% seems usually fine.
You can find out the fps of a video by using ffprobe: ffprobe -hide_banner -i INPUTFILE there should be "xx fps" somewhere. If it's something like 29.97, then it's fine to just pretend it's 30.
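If you'd rather not dig through the banner output, ffprobe can print the rate directly; a sketch (the fraction below is a hardcoded example of what a 29.97 fps file typically reports):

```shell
# The real query would be:
#   ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate \
#           -of default=noprint_wrappers=1:nokey=1 INPUTFILE
rate="30000/1001"                       # assumed sample output
fps=$(echo "$rate" | awk -F/ '{ printf "%.2f", $1 / $2 }')
echo "$fps"
```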
Is there a way to get ffprobe's functionality with ffmpeg?
I'm trying to put together >>1026 and want to make a separate version with ffmpeg included with it so you don't have to install it separately, but I don't want to include both ffmpeg and ffprobe because they're 64MB each. Their file sizes are almost identical so I thought maybe ffmpeg has a command or something that makes it act like ffprobe.
My ffprobe is only about 150K. You can bundle a dynamically linked version. Or write a C wrapper around its library in your program (which should be what you're doing anyway).
I have no idea how to use it as a library. Literally every single library except stb_* is incompatible with my environment, and I'm not going to start digging for macros and types within the source code so I could nigger rig a dll. I would be much happier and my program would be much better if I was able to use it that way though.
Maybe forums with bigger market share are more suitable for you.
Look into the FFmpeg libraries (libavformat/libavcodec); I haven't personally used them but a lot of programs do. Pretty sure the ffprobe functionality is included, and besides, you can compile with only the codecs you want in order to shrink the size.
And seconding what >>1191 said, you're better off programming on linux, even when targeting windows. Make a linux VM and develop on it, should be much easier.