Real-time 2D to 3D conversion from H.264 video compression
Is it possible to do completely lossless encoding in H.264? By lossless, I mean that if I feed it a series of frames and encode them, and then extract all the frames from the encoded video, I get the exact same frames as the input, pixel by pixel, frame by frame.

What I have in mind is this: I generate a bunch of frames, encode the image sequence to an uncompressed AVI (with something like VirtualDub), and then apply lossless H.264 (the help files claim that setting -qp 0 gives lossless compression, but I am not sure whether that means there is no loss at any point of the process or just that the quantization step is lossless). I can then extract the frames from the resulting H.264 video with something like MPlayer.

I tried HandBrake first, but it turns out it doesn't support lossless encoding. With x264 the problem may be that my source AVI file is in the RGB colorspace instead of YV12, and I don't know how to feed a series of YV12 bitmaps to x264, or in what format, so I cannot even try.

In summary, what I want to know is whether there is a way to go from:

series of lossless bitmaps (in any colorspace) -> some transformation -> H.264 encode -> H.264 decode -> some transformation -> the original series of lossless bitmaps

EDIT: There is a very valid point that lossless H.264 may not make much sense. I am well aware that there is no way I could tell, with just my eyes, the difference between an uncompressed clip and one compressed at a high rate in H.264, but I don't think that makes it useless. For example, it could be useful for storing video for editing without taking huge amounts of space, losing quality, or spending too much encoding time every time the file is saved. However, even with -qp 0 and an RGB or YV12 source I still get some differences: minimal, but present. As sources I can use either AviSynth or lossless YV12 Lagarith (to avoid the colorspace compression warning).
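On the question of how to feed a series of YV12 bitmaps to x264: one route I believe works is to turn the image sequence into a Y4M stream first, since the x264 command-line tool reads .y4m directly. A rough sketch, where the file names, frame numbering pattern and frame rate are my own placeholders rather than anything from the original setup:

    ffmpeg -framerate 25 -i frame%06d.bmp -pix_fmt yuv420p frames.y4m
    x264 --qp 0 --output lossless-420.264 frames.y4m

The second step should be lossless from Y4M to H.264, but the first step converts RGB bitmaps to 4:2:0 YUV (what YV12 stores), which discards chroma resolution and rounds during the colorspace transform, so a round trip back to the original RGB bitmaps would not be bit-exact even at -qp 0. That conversion is a plausible source of the small differences mentioned above.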
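If the goal is a bit-exact round trip starting and ending in RGB, one option I know of is ffmpeg's libx264rgb encoder, which keeps the video in the RGB colorspace so there is no RGB-to-YUV conversion to lose bits in. Again a sketch with placeholder names, assuming an ffmpeg build with libx264 support:

    ffmpeg -framerate 25 -i frame%06d.bmp -c:v libx264rgb -qp 0 lossless-rgb.mkv
    ffmpeg -i lossless-rgb.mkv -pix_fmt bgr24 decoded%06d.bmp

The decoder may hand back planar GBR frames, but repacking those into the BGR order that BMP files use is only a byte reordering, so as far as I know it does not introduce any loss.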
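Whatever the encoding path, bit-exactness can be checked directly rather than by eye. ffmpeg's md5 muxer hashes the decoded frames, and forcing both files to the same pixel format first avoids mismatches that are only due to packed-versus-planar layout. The file names here are placeholders for the original clip and its lossless re-encode:

    ffmpeg -i original.avi -an -pix_fmt rgb24 -f md5 -
    ffmpeg -i lossless-rgb.mkv -an -pix_fmt rgb24 -f md5 -

If the two MD5 lines match, every decoded pixel is identical, frame for frame; if they differ, some step in the chain (typically a colorspace or chroma-subsampling conversion rather than the H.264 encode itself) was lossy.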