AVSEdit Plus is an AviSynth script editor with side-by-side preview and a video-encoding GUI for command-line encoders. It lets you:
- edit AviSynth scripts,
- preview script video results,
- buffer slow scripts and preview them in real time,
- enable side-by-side preview for two script video results,
- work with command-line encoders through an encoding GUI,
- create an encoding queue.
- This is a plugin for AviSynth, a video-processing tool in which the user writes scripts that apply filters and plugins to manipulate video data; AviSynth then makes these scripts act like video files. You can find AviSynth and learn about it at avisynth.nl.
- Install AviSynth and copy the plugins from the plugin package to the correct locations. To use QTGMC, write a script like the template below, using the multi-threaded plugin pack in this case. You will need to tweak multi-threaded scripts to suit your system, so read and follow the comments carefully and provide the values noted.
Where everything flows. In 3D.
Pantarheon 3D AviSynth Toolbox
Current version: 1.1
The Pantarheon 3D AviSynth Toolbox is a set of scripted functions for AviSynth. I wrote the Toolbox to complement the Bororo 3D plug-in because some things are simply difficult to do with the current version of Sony Vegas. And because not everyone has Sony Vegas (AviSynth is free, Vegas is not).
To use the Toolbox, first, if you have not done so yet, download and install AviSynth. Then either download the Windows Installer file for the Pantarheon 3D AviSynth Toolbox and install it, or download the .zip version of the Toolbox, unzip it, and copy the file Pantarheon3D.avsi to the AviSynth plugins directory (which will be something along the lines of C:\Program Files\AviSynth 2.5\plugins). Then read the rest of this page to learn how to use the Toolbox.
Basic Functions
The Toolbox contains a number of basic functions which allow you to multiplex the left and right views found in two separate videos into one video, using several of the common methods currently in use. All of these functions take two parameters, the first one with the left view, the second with the right view, like this:
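For instance, a minimal sketch using LeftRight3D (described below), with placeholder file names:

```avisynth
left = AviSource("left.avi")
right = AviSource("right.avi")
LeftRight3D(left, right)
```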
You can also name the two arguments left and right, and then you can list them in any order:
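As a sketch, the same call with the arguments listed in reverse order:

```avisynth
LeftRight3D(right=AviSource("right.avi"), left=AviSource("left.avi"))
```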
To illustrate the basic functions visually, we will not use actual left and right stereoscopic footage. For the left view we will use this:
And for the right view, we will use this:
This will allow you to see exactly what the various functions do without having to analyze the image to see which view is left, and which is right.
Additionally, to save on bandwidth we will, in most cases, reduce the size of the above images to one quarter, so the results will be smaller than the originals.
The most important basic functions are LeftRight3D, CrossEyed3D, TopDown3D, DownTop3D, and HDMI3D.
I said they were the most important ones because they have a very important property: They do not change the quality of the video. This is because all they do is arrange the two videos into one, and do so without changing their resolution. Here is what each of them does:
LeftRight3D will create a video whose width is double that of either the left or the right video (they both must be of the same size and pixel type, this is true of all Toolbox functions that take two arguments), and will place the left view in the left half of the new video, and the right view into its right half. So, for example, if your left.avi and right.avi are 1920x1080 pixels, the result will be 3840x1080 pixels. This is the best format to store all of your 3D videos in for archiving purposes, and then convert that to whatever format you need to publish your videos in.
CrossEyed3D does the same, but places the left view in the right half and the right view in the left half of the new video.
TopDown3D will create a video whose height is double that of the two original videos and will place the left view in the top half and the right view in the bottom half of the new video. So, if your originals are 1920x1080 pixels, the new video will have 1920x2160 pixels.
DownTop3D does the same but will store the left view in the bottom and the right view in the top half of the video.
By the way, these two formats are great for comparing how the different objects within your videos are shifted to the left and to the right in the two different views, a useful tool for learning 3D.
HDMI3D will produce a video in the HDMI v.1.4a format. It will create a new video whose height is twice the originals, plus 45. It will place the left view at the top and the right view at the bottom of the video, and leave 45 empty lines between the two.
There is a catch: It is impossible to create the HDMI v.1.4a 3D in the YUV format. Why? Because the YUV format compresses two lines at the same time. But the HDMI format always produces a video with an odd number of lines (2 * height + 45 = an odd number). That means that HDMI did not create the standard for storage in files but for video players and games to produce the image from some other format, or even on the fly. This is particularly clear when you consider that MPEG files use the YUV format. So, do not blame me, blame HDMI.
Anyway, you can always use AviSynth to convert your MPEG files into another format, such as YUY2, for example:
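A sketch of such a conversion (DirectShowSource is just one way of loading MPEG files; your decoder setup may differ):

```avisynth
left = ConvertToYUY2(DirectShowSource("left.mpg"))
right = ConvertToYUY2(DirectShowSource("right.mpg"))
HDMI3D(left, right)
```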
You can even combine it all into one line:
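The one-line equivalent of the same conversion, under the same assumption about the MPEG loader:

```avisynth
HDMI3D(ConvertToYUY2(DirectShowSource("left.mpg")), ConvertToYUY2(DirectShowSource("right.mpg")))
```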
I think, however, that it is obvious the multiline version is easier to write, easier to read, and easier to debug!
If you do not perform this conversion, don’t worry. HDMI3D checks what format the videos are in and will convert them to YUY2 as needed. Just do not be surprised when your videos converted into the HDMI v.1.4a 3D format end up in the YUY2 mode. It’s not a bug, it’s a feature.
Now, here are the remaining basic functions: LeftRight3DReduced, CrossEyed3DReduced, Yt3D, TopDown3DReduced, and DownTop3DReduced.
The main difference between them and the ones discussed above is they do not change the size of the original videos, so 1920x1080 originals will produce a 1920x1080 video. That means the originals will be squeezed to fit. That also means they lose one half of their resolution and, therefore, are not ideally suited for archiving purposes, only for delivery to those people whose software requires them (e.g., YouTube).
LeftRight3DReduced changes the width of the left and right video to one half, then places the reduced left view to the left half of the output and the right view to the right half of the output. So, if the left and right videos are both 1920x1080, they are reduced to 960x1080 each, placed next to each other, and the result will have 1920x1080 pixels.
CrossEyed3DReduced is the same, but the reduced left view goes to the right half and the reduced right view to the left half of each frame of the output video.
Yt3D is exactly the same as CrossEyed3DReduced. It exists as a separate function only because it is the 3D format used by YouTube, and having it as a separate function allows you to produce 3D videos for YouTube without having to remember just which format YouTube uses. It is simply a function of convenience, as are all the other functions with Yt3D in their name listed below.
TopDown3DReduced changes the height of the left and right videos by half, then places the reduced left view to the top half and the reduced right view to the bottom half of the output video. So, if your originals are 1920x1080, they are reduced to 1920x540 each, then combined to a 1920x1080 output.
DownTop3DReduced is the same, but the reduced left view goes to the bottom half and the reduced right view to the top half of the final output.
Please note there is no HDMI3DReduced function because the HDMI 1.4a 3D specification does not mention any reduced format.
Anaglyph Functions
For decades the main way of presenting 3D movies, videos, photographs, as well as comics and other graphics, was the anaglyph, which uses glasses with a different color lens in front of each eye. While most of us working with 3D would like the anaglyph to die of old age, it is still in use.
Therefore, the Toolbox allows you to create four basic types of anaglyphs, made possible by the MergeRGB function built into AviSynth. Many various algorithms for “better” anaglyphs exist, but they require more than an AviSynth script to create. If you need them, my Bororo 3D plug-in can create just about any anaglyph.
Here are the four anaglyph functions offered by the Toolbox. Like the basic functions, they take two arguments each, a left and a right clip:
Anaglyph produces the “classical” anaglyph, which is strictly monochrome (“black & white”). It can be viewed with red/blue, red/green, or red/cyan glasses, with the red lens going in front of the left eye.
The remaining three anaglyph functions create color anaglyphs (but read the next paragraph!): red/cyan, green/magenta, and yellow/blue, respectively.
There is a catch: Due to the way the MergeRGB function of AviSynth works, only videos in the RGB format can produce color anaglyphs. The functions still work, mind you, but you will end up with monochrome results. This may be exactly what you want, so the functions do not convert the left and right videos to RGB automatically. If you want, for example, a yellow/blue anaglyph in color from MPEG videos, you need to write something along these lines:
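A sketch of such a script (YellowBlue3D here is only a stand-in name, since the actual names of the color anaglyph functions are given in the list above; the MPEG loader is likewise an assumption):

```avisynth
# YellowBlue3D is a stand-in for the Toolbox's yellow/blue anaglyph function
left = ConvertToRGB32(DirectShowSource("left.mpg"))
right = ConvertToRGB32(DirectShowSource("right.mpg"))
YellowBlue3D(left, right)
```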
The same precaution holds true for any of the conversion-to-anaglyph functions listed below.
Extraction Functions
The Pantarheon 3D AviSynth Toolbox not only lets you combine two videos into one 3D video, it also makes it possible for you to extract the left or the right view from anything created with the Basic Functions into a 2D video. Note I only mentioned the Basic Functions, but not anaglyphs. That is because the anaglyphs do not have enough of the original video data available to reconstruct the left and right originals.
All of these functions have a name of the corresponding Basic Function followed by either ToLeft or ToRight respectively. They all take one parameter, namely the clip that contains the 3D video.
Here is a list of all the functions that extract the left view from a 3D video without having to resize the video frames:
- LeftRight3DToLeft
- CrossEyed3DToLeft
- TopDown3DToLeft
- DownTop3DToLeft
- HDMI3DToLeft
Next is the list of all the functions that extract the left view from a reduced 3D video. Because the original videos were reduced to half width or half height, these functions resize the extracted view back to its original size:
- LeftRight3DReducedToLeft
- CrossEyed3DReducedToLeft
- Yt3DToLeft
- TopDown3DReducedToLeft
- DownTop3DReducedToLeft
Note: LeftRight3DReducedToLeft was missing in version 1.0. If that is what you have, please download the current version.
Note that these functions cannot tell which 3D format a video actually uses; they simply assume it. That means that if you pass a reduced left/right video to the CrossEyed3DReducedToRight function, it will extract the left view of your left/right video.
The list of functions that extract the right view without having to resize it follows:
- LeftRight3DToRight
- CrossEyed3DToRight
- TopDown3DToRight
- DownTop3DToRight
- HDMI3DToRight
And here are the functions that extract the right view from reduced videos:
- LeftRight3DReducedToRight
- CrossEyed3DReducedToRight
- Yt3DToRight
- TopDown3DReducedToRight
- DownTop3DReducedToRight
Note: LeftRight3DReducedToRight was missing in version 1.0. If that is what you have, please download the current version.
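As a sketch, extracting the left view from a side-by-side archive file (the function name LeftRight3DToLeft follows the naming rule above; the file name is a placeholder):

```avisynth
AviSource("my3Dvideo.avi")
LeftRight3DToLeft()
```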
Conversion Functions
The Toolbox also contains various functions to convert among the various types of 3D formats. Each of them takes one parameter, the clip from which to convert. The names of the functions consist of the name of the format we are converting from, followed by To, followed by the name of the format we are converting to but without the final 3D (except for Yt3D):
In all of these functions, c refers to a clip. For example:
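A minimal sketch (the function name LeftRight3DToYt3D follows the naming rule above):

```avisynth
AviSource("LeftRight3D.avi")
LeftRight3DToYt3D()
```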
This will open LeftRight3D.avi and convert it from a LeftRight3D video into a Yt3D video suitable for upload to YouTube as a 3D video.
Note: A number of these functions were missing in version 1.0. If that is what you have, please download the current version.
Sample Scripts
Five sample scripts are included. One of them will work right off the bat. The remaining four require that you have an MPEG decoder installed on your system. Note that when running the 32-bit version of AviSynth on a 64-bit system, you need a 32-bit MPEG decoder. Additionally, three of the scripts require that you have DGDecode installed on your system (but install AviSynth first).
YouTube.avs is the one script that will work off the bat. It loads two files, left.avi (which just displays the word “Left”) and right.avi (which displays the word “Right”) and combines them into a YouTube compatible yt3d video. Since the two videos are not true 3D views, it just illustrates how a left and a right video are combined for YouTube.
Just right-click on the YouTube.avs file and play it with Windows Media Player.
DeTube.avs illustrates how to convert a YouTube yt3d video (Hello.mpg) into a color red/cyan anaglyph. It requires an MPEG decoder.
The remaining three samples require both the MPEG decoder and DGDecode. 720p.avs shows how to convert a 1080p YouTube yt3d video into a 720p color red/cyan anaglyph.
WhenIWas.avs shows how to extract the left view from a YouTube yt3d video, effectively converting it to a 2D video.
And finally, WhenIWasHDMI.avs shows how to convert a YouTube yt3d video into an HDMI v.1.4a video. And yes, that is me in all those pictures. ☺
Comments & Questions
If you have any comments, questions, or requests, I log in to the 3D Stereoscopic Production & Delivery section of the DVInfo Forum several times every day. It is a great forum, and it is the best place to contact me. Much better, by the way, than e-mailing me. I delete most of the e-mail that reaches me without reading it because I get way too many mails telling me I won a prestigious lottery, inherited millions of dollars and similar nonsense. So I could easily forward your e-mail to Spamcop by mistake. And even if I actually download your e-mail to my computer, chances are I will think it deserves a good and well thought-out reply, so I would not reply immediately, and then will get distracted. Contacting me in that forum is the best way.
Copyright © 2010 G. Adam Stanislav.
All rights reserved.
Since I've had a few commenters on my videos ask about VapourSynth, I figured it was time to give it a look. For those who don't know, VapourSynth is a Python-based video processing scripting system similar to AVISynth, and it can actually use AVISynth plugins. My main interest in VS is that it has a native QTGMC port, and before AVISynth+ 64-bit got stable, VS was a preferred method among some users for faster/more stable conversions.
It took me about a morning to get everything I needed and set up a sample script for QTGMC conversion. I decided to try to keep the setup as bloat-free as possible by using 'portable' versions of the apps involved.
Here was my ultimate workflow. I won't try to explain everything since I'm not totally fluent in Python, but adapting the following settings should allow you to get it to work:
BIG DISCLAIMER: This process may not work, may crash, or do other things to your system. Virus scanning all files is strongly encouraged, but not a 100% guarantee of safety.
You have been warned.
If you're on a deadline (and using Premiere Pro, After Effects, or Final Cut Pro), probably your best bet is to use a paid plugin like FieldsKit. And no, they aren't paying me to say that.
Also, this tutorial is for Windows 10. Most of the steps work for other OSes, but I won't cover the differences here.
Here's a video version of the tutorial:
First, grabbed the embeddable version of Python 3.7.X:
https://www.python.org/downloads/
(Click on the name of the latest version of Python 3.7, then scroll down to find the embeddable version.)
Then, downloaded the portable VapourSynth:
https://github.com/vapoursynth/vapoursynth/releases
(I'm using the 64-bit portable version.)
Then, grabbed VapourSynth Editor (VSEdit):
Extracted the Python archive to a directory, then extracted both VapourSynth and VSEdit to the same directory, in that order.
Now, for the needed plugins and VapourSynth Python modules:
FFmpegSource:
https://github.com/FFMS/ffms2/releases
havsfunc
https://github.com/HomeOfVapourSynthEvolution/havsfunc/releases
(The source code is what you want here.)
mvsfunc
https://github.com/HomeOfVapourSynthEvolution/mvsfunc/releases
adjust
https://github.com/dubhater/vapoursynth-adjust/releases
nnedi3_resample
https://github.com/mawen1250/VapourSynth-script
(Click on the 'Clone or download' button and select 'Download ZIP'.)
I also grabbed fmtconv for colorspace conversion:
https://github.com/EleonoreMizo/fmtconv/releases
Then, the VapourSynth versions of the needed QTGMC prerequisites:
https://github.com/dubhater/vapoursynth-mvtools/releases
https://github.com/dubhater/vapoursynth-nnedi3/releases
(You'll also need nnedi3_weights.bin from here. Left-click on the link, don't right-click/save)
Scanned all the above files for viruses.
Extracted the .py files to the main directory. Extracted the (64-bit) .dll files to the vapoursynth64/plugins directory.
Opened VSEdit. Made the following initial script (note that the backslashes in the input file path need to be 'escaped' with a second backslash):

import vapoursynth as vs
import havsfunc as haf
core = vs.get_core()
clip = core.ffms2.Source(source='F:\\directory\\input movie.mov')
clip = haf.QTGMC(clip, Preset='Slower', TFF=False)
clip = core.resize.Spline36(clip, 720, 540, matrix_in_s='709')
clip.set_output()
With the above settings, the source colorspace will be preserved, but the color matrix will be shifted to REC.709 on resize. To change both the output colorspace and color matrix, use:
clip = core.resize.Spline36(clip, 720, 540, format=vs.YUV422P10, matrix_in_s='709')

If you're coming from a source file with a non-recognized colorspace, you can use the following right after the ffms2 command:

clip = core.fmtc.resample (clip=clip, css='420')
clip = core.fmtc.bitdepth (clip=clip, bits=8)
Update: As UniveralAl1 mentions in a comment on the tutorial video, it may be possible to skip these fmtc steps by just converting once to YUV422P10 before QTGMC. The resulting script might look something like this:
import vapoursynth as vs
import havsfunc as haf
core = vs.get_core()
clip = core.ffms2.Source(source='E:\\Archive\\input video.avi')
clip = vs.core.resize.Point(clip, format = vs.YUV422P10)
clip = haf.QTGMC(clip, Preset='Slower', TFF=False)
clip = core.resize.Spline36(clip, 720, 540)
clip.set_output()
Please try this first rather than the final script below.
All of the above ended up being necessary for the video file I selected for testing due to it using 4:1:1 chroma subsampling, so here is my final script:
import vapoursynth as vs
import havsfunc as haf
core = vs.get_core()
clip = core.ffms2.Source(source='E:\\Archive\\input video.avi')
clip = core.fmtc.resample (clip=clip, css='420')
clip = core.fmtc.bitdepth (clip=clip, bits=8)
clip = haf.QTGMC(clip, Preset='Slower', TFF=False)
clip = core.resize.Spline36(clip, 720, 540, format=vs.YUV422P10, matrix_in_s='709')
clip.set_output()
Saving this script gives you a .vpy file.
To render it out, I used a combination of vspipe and FFMPEG as per the documentation. However, VapourSynth does not handle audio and video at the same time, so I had to use the mapping command in FFMPEG to copy over the audio separately:
vspipe --y4m Upscale.vpy - | ffmpeg -i pipe: -i 'C:\path\to\input movie.mov' -c:v prores -profile:v 3 -c:a copy -map 0:0 -map 1:1 'F:\Temp\output file.mov'
Preliminary testing gives me 67 fps average for VapourSynth and 77 fps average for AVS+ using the following script:
SetFilterMTMode('QTGMC', 2)
FFmpegSource2('ITVS Trailer.avi', atrack=1)
ConvertToYV12()
AssumeBFF()
QTGMC(Preset='Slower', EdiThreads=3)
Spline36Resize(720, 540)
Prefetch(10)