Steinberg Cubase 8 Manual
Export Audio Mixdown

The available file formats

Windows Media Audio Pro files (Windows only)

This is a continuation of the Windows Media Audio format developed by Microsoft. Due to the advanced audio codecs and lossless compression used, WMA Pro files can be decreased in size with no loss of audio quality. WMA Pro also offers the possibility of mixing down to 5.1 surround sound. The files have the extension “.wma”.

When you select “Windows Media Audio File” as the file format, you can click the “Codec Settings…” button to open the “Windows Media Audio File Settings” window. Note that the configuration options may vary, depending on the chosen output channels.

General Tab

In the Input Stream section, you set the sample rate (44.1, 48, or 96 kHz) and the bit resolution (16 bit or 24 bit) of the encoded file. Set these to match the sample rate and bit resolution of the source material. If no value matches that of your source material, use the closest available value that is higher than the actual value. For example, if you are using 20-bit source material, set the bit resolution to 24 bit rather than 16 bit.

• The setting in the Channels field depends on the chosen output and cannot be changed manually.

The settings in the Encoding Scheme section define the desired output from the encoder, e.g. a stereo or a 5.1 surround file. Make settings appropriate for the intended use of the file. For example, if the file will be downloaded or streamed on the internet, you might not want a very high bit rate. See below for descriptions of the options.

• Mode pop-up menu
The WMA Pro encoder can use either a constant bit rate or a variable bit rate for encoding to 5.1 surround, or it can use lossless encoding for encoding to stereo. The options on this menu are as follows:
Constant Bitrate
This encodes to a 5.1 surround file with a constant bit rate (set in the Bit Rate/Channels menu, see below). A constant bit rate is preferable if you want to limit the size of the final file. The size of a file encoded with a constant bit rate is always the bit rate times the duration of the file (illustrated in the sketch at the end of this section).

Variable Bitrate
This encodes to a 5.1 surround file with a variable bit rate, according to a quality scale (the desired quality is set in the Bit Rate/Quality menu, see below). When you encode with variable bit rates, the bit rate fluctuates depending on the character and complexity of the material being encoded. The more complex the passages in the source material, the higher the bit rate – and the larger the final file.

Lossless
This encodes to a stereo file with lossless compression.

• Bit Rate/Quality pop-up menu
This menu allows you to set the desired bit rate. The available bit rate settings vary depending on the selected mode and/or output channels (see above). If the Variable Bitrate mode is used, the menu allows you to select from various levels of quality, with 10 being the lowest and 100 the highest. Generally, the higher the bit rate or quality you select, the larger the final file will be. The menu also shows the channel format (5.1 or stereo).

Advanced tab

• Dynamic Range Control
These controls allow you to define the dynamic range of the encoded file. The dynamic range is the difference in dB between the average loudness and the peak audio level (the loudest sounds) of the audio. These settings affect how the audio is reproduced if the file is played on a Windows computer with a player from the Windows Media series and the player’s “Quiet Mode” feature is activated to control the dynamic range.
The dynamic range is automatically calculated during the encoding process, but you can also specify it manually. To do so, first put a checkmark in the box to the left by clicking in it, and then enter the desired dB values in the Peak and Average fields. You can enter any value between 0 and -90 dB. Note, however, that changing the Average value is usually not recommended, since it affects the overall volume level of the audio and can therefore have a negative effect on the audio quality.
The Quiet Mode in a Windows Media player can be set to one of three settings. Below, these settings are listed together with an explanation of how the Dynamic Range settings affect them:

• Off: If Quiet Mode is off, the dynamic range settings that were automatically calculated during the encoding will be used.
• Little Difference: If this is selected and you have not manually changed the dynamic range settings, the peak level will be limited to 6 dB above the average level during playback. If you have manually specified the dynamic range, the peak level will be limited to the mean value between the peak and average values you specified.
• Medium Difference: If this is selected and you have not manually changed the dynamic range settings, the peak level will be limited to 12 dB above the average level. If you have changed the dynamic range, the peak level will be limited to the peak value you specified.

• Surround Reduction Coefficients
Here you can specify the amount of volume reduction, if any, that is applied to the different channels in a surround encoding. These settings affect how the audio is reproduced on a system that cannot play back the file in surround, in which case the surround channels of the file are combined into two channels and played back in stereo instead. The default values should produce satisfactory results, but you can change them manually if you wish. You can enter any value between 0 and -144 dB for the surround channels, the center channel, the left and right channels, and the LFE channel, respectively.

Media tab

In these fields you can enter a number of text strings with information about the file – title, author, copyright information, and a description of its contents. This information is embedded in the file header and can be displayed by some Windows Media Audio playback applications.

RELATED LINKS
Surround Sound (Cubase Pro only) on page 558
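To make two of the relationships described above concrete – the size of a constant-bit-rate file and the peak limits applied by a player’s Quiet Mode – here is a minimal sketch in Python. The function names and the example values are illustrative assumptions, not part of Cubase or the Windows Media encoder.

```python
from typing import Optional

def cbr_file_size_bytes(bit_rate_kbps: float, duration_seconds: float) -> float:
    """Approximate size of a constant-bit-rate file: bit rate times duration."""
    return bit_rate_kbps * 1000 / 8 * duration_seconds

def quiet_mode_peak_limit(mode: str,
                          peak_db: Optional[float] = None,
                          average_db: Optional[float] = None,
                          auto_average_db: float = -20.0) -> Optional[float]:
    """Peak level limit applied during playback, following the rules listed above.

    peak_db/average_db are manually specified Dynamic Range values (leave them as
    None if the automatically calculated values are used). auto_average_db stands
    in for the average level calculated during encoding (a hypothetical value).
    """
    manual = peak_db is not None and average_db is not None
    if mode == "off":
        return None  # Quiet Mode off: the automatically calculated settings apply.
    if mode == "little":
        # Manual values: limit to the mean of peak and average.
        # Automatic values: limit to 6 dB above the average level.
        return (peak_db + average_db) / 2 if manual else auto_average_db + 6.0
    if mode == "medium":
        # Manual values: limit to the specified peak value.
        # Automatic values: limit to 12 dB above the average level.
        return peak_db if manual else auto_average_db + 12.0
    raise ValueError(f"unknown Quiet Mode: {mode}")

# A 4-minute mixdown at a constant 192 kbps comes to roughly 5.8 MB:
print(cbr_file_size_bytes(192, 240) / 1e6)                            # 5.76
# Manual Dynamic Range of -3 dB peak / -18 dB average under "Little Difference":
print(quiet_mode_peak_limit("little", peak_db=-3, average_db=-18))    # -10.5
```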
Synchronization

Background

What is synchronization?

Synchronization is the process of getting two or more devices to play back together at exactly the same speed and position. These devices can range from audio and video tape machines to digital audio workstations, MIDI sequencers, synchronization controllers, and digital video devices.

Synchronization basics

There are three basic components of audio/visual synchronization: position, speed, and phase. If these parameters are known for a particular device (the master), a second device (the slave) can have its speed and position “resolved” to the first, so that the two devices play in perfect sync with one another.

Position
The position of a device is represented either by samples (audio word clock), video frames (timecode), or musical bars and beats (MIDI clock).

Speed
The speed of a device is measured either by the frame rate of the timecode, the sample rate (audio word clock), or the tempo of the MIDI clock (bars and beats).

Phase
Phase is the alignment of the position and speed components to each other. In other words, each pulse of the speed component should be aligned with each measurement of the position for the greatest accuracy. Each frame of timecode should be perfectly lined up with the correct sample of audio. Put simply, phase is the very precise position of a synchronized device relative to the master (sample accuracy).

Machine control

When two or more devices are synchronized, the question remains: how do we control the entire system? We need to be able to locate to any position, play, record, and even jog and scrub the entire system using one set of controls.
Machine control is an integral part of any synchronization setup. In many cases, the device simply called “the master” will control the whole system. However, the term “master” can also refer to the device that is generating the position and speed references. Care must be taken to differentiate between the two.

Master and slave

Calling one device the “master” and another the “slave” can lead to a great deal of confusion. The timecode relationship and the machine control relationship must be differentiated in this regard. In this document, the following terms are used:

• The “timecode master” is the device generating position information or timecode.
• The “timecode slave” is any device receiving the timecode and synchronizing or “locking” to it.
• The “machine control master” is the device that issues transport commands to the system.
• The “machine control slave” is the device receiving those commands and responding to them.

For example, Cubase could be the machine control master, sending transport commands to an external device which in turn sends timecode and audio clock information back to Cubase. In that case, Cubase would also be the timecode slave at the same time. So calling Cubase simply “the master” is misleading.

NOTE
In most scenarios, the machine control slave is also the timecode master. Once it receives a play command, that device starts generating timecode for all the timecode slaves to synchronize to.

Timecode (positional references)

The position of any device is most often described using timecode. Timecode represents time using hours, minutes, seconds, and frames to provide a location for each device. Each frame represents a visual film or video frame; the sketch after the list below shows how such a position maps to an absolute frame number. Timecode can be communicated in several ways:

• LTC (Longitudinal Timecode) is an analog signal that can be recorded on tape. It should be used primarily for positional information. It can also be used for speed and phase information as a last resort, if no other clock source is available.
• VITC (Vertical Interval Timecode) is contained within a composite video signal. It is recorded onto video tape and is physically tied to each video frame.
• MTC (MIDI Timecode) is identical to LTC except that it is a digital signal transmitted via MIDI.
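As an illustration of how a timecode position maps to an absolute frame number, here is a minimal sketch in Python. It assumes a non-drop frame count (drop-frame counting, described under “Timecode standards” below, skips certain frame numbers and needs extra handling); the function name and example values are illustrative, not part of Cubase.

```python
def timecode_to_frames(hours: int, minutes: int, seconds: int, frames: int,
                       frame_count: int) -> int:
    """Convert a non-drop HH:MM:SS:FF timecode position to an absolute frame number.

    frame_count is the timecode standard's frames per second (e.g. 24, 25, or 30).
    """
    return ((hours * 60 + minutes) * 60 + seconds) * frame_count + frames

# One hour of 25 fps (PAL) timecode corresponds to 90 000 frames:
print(timecode_to_frames(1, 0, 0, 0, frame_count=25))   # 90000

# Note that this is a position, not a real time: at a 29.97 fps frame rate,
# frame 108 000 of 30 fps non-drop timecode arrives slightly later than
# one hour of clock-on-the-wall time.
print(timecode_to_frames(1, 0, 0, 0, frame_count=30) / 29.97 / 3600)  # ~1.001 hours
```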
Timecode standards

Timecode has several standards. The subject of the various timecode formats can be very confusing, due to the use and misuse of the shorthand names for specific timecode standards and frame rates. The reasons for this confusion are described in detail below. The timecode format can be divided into two variables: frame count and frame rate.

Frame count (frames per second)

The frame count of timecode defines the standard with which it is labeled. There are four timecode standards:

24 fps Film (F)
This frame count is the traditional count for film. It is also used for HD video formats and is commonly referred to as “24 p”. However, with HD video, the actual frame rate or speed of the video sync reference is slower, 23.976 frames per second, so timecode does not reflect the actual realtime on the clock for 24 p HD video.

25 fps PAL (P)
This is the broadcast video standard frame count for European (and other PAL countries’) television broadcast.

30 fps non-drop SMPTE (N)
This is the frame count of NTSC broadcast video. However, the actual frame rate or speed of the video format runs at 29.97 fps. This timecode clock does not run in realtime. It is slightly slower, by 0.1 %.

30 fps drop-frame SMPTE (D)
The 30 fps drop-frame count is an adaptation that allows a timecode display running at 29.97 fps to actually show the clock-on-the-wall time of the timeline by “dropping” or skipping specific frame numbers in order to “catch the clock up” to realtime.

Confused? Just remember to keep the timecode standard (or frame count) and the frame rate (or speed) separate.

Frame rate (speed)

Regardless of the frame counting system, the actual speed at which frames of video go by in realtime is the true frame rate. In Cubase the following frame rates are available:

23.9 fps (Cubase Pro only)
This frame rate is used for film that is being transferred to NTSC video and must be slowed down for a 2-3 pull-down telecine transfer. It is also used for the type of HD video referred to as “24 p”.
24 fps
This is the true speed of standard film cameras.

24.9 fps (Cubase Pro only)
This frame rate is commonly used to facilitate transfers between PAL and NTSC video and film sources. It is mostly used to correct for speed errors in such transfers.

25 fps
This is the frame rate of PAL video.

29.97 fps
This is the frame rate of NTSC video. The count can be either non-drop or drop-frame.

30 fps
This frame rate is no longer a video standard, but it has been commonly used in music recording. Many years ago it was the black-and-white NTSC broadcast standard. It is equal to NTSC video being pulled up to film speed after a 2-3 telecine transfer.

59.94 fps (Cubase Pro only)
This rate is also referred to as “60 p”. Many professional HD cameras record at 59.94 fps. While 60 fps could theoretically exist as a frame rate, no current HD video camera records at a full 60 fps as a standard rate.

Frame count vs. frame rate

Part of the confusion in timecode stems from the use of “frames per second” for both the timecode standard and the actual frame rate. When used to describe a timecode standard, frames per second defines how many frames of timecode are counted before one second on the counter increments. When describing frame rates, frames per second defines how many frames are played back during the span of one second of realtime. In other words: regardless of how many frames of video there are per second of timecode (frame count), those frames can be moving at different rates depending on the speed (frame rate) of the video format. For example, NTSC timecode (SMPTE) has a frame count of 30 fps, but NTSC video runs at a rate of 29.97 fps. So the NTSC timecode standard known as SMPTE is a 30 fps standard that runs at 29.97 fps in realtime.

Clock sources (speed references)

Once the position is established, the next essential factor for synchronization is the playback speed. Once two devices start playing from the same position, they must run at exactly the same speed in order to remain in sync. Therefore, a single speed reference must be used and all devices in the system must follow that reference. With digital audio, the speed is determined by the audio clock rate. With video, the speed is determined by the video sync signal. The sketch below illustrates how quickly even a tiny speed mismatch accumulates into an audible offset.
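As a rough illustration of why a shared speed reference matters, here is a minimal sketch in Python that computes how far two unsynchronized devices drift apart over time. The clock-error figure is a hypothetical example value, not a property of any particular device.

```python
def drift_samples(sample_rate_hz: float, clock_error_ppm: float, minutes: float) -> float:
    """Samples of offset accumulated when one device's clock is off by clock_error_ppm."""
    seconds = minutes * 60
    return sample_rate_hz * seconds * clock_error_ppm / 1_000_000

# Two devices nominally at 48 kHz, one running 50 ppm (0.005 %) fast - a plausible
# figure for an unsynchronized crystal oscillator - end up 8 640 samples apart
# (about 180 ms) over a 60-minute program:
samples = drift_samples(48_000, 50, 60)
print(samples, samples / 48_000 * 1000)   # 8640.0 samples, 180.0 ms
```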
Audio clock

Audio clock signals run at the speed of the sample rate used by a digital audio device and are transmitted in several ways:

Word clock
Word clock is a dedicated signal running at the current sample rate that is fed over BNC coaxial cables between devices. It is the most reliable form of audio clock and is relatively easy to connect and use.

AES/SPDIF Digital Audio
An audio clock source is embedded within AES and SPDIF digital audio signals. This clock source can be used as a speed reference. Preferably, the signal itself does not contain any actual audio (digital black), but any digital audio source can be used if necessary.

ADAT Lightpipe
ADAT Lightpipe, the 8-channel digital audio protocol developed by Alesis, also contains audio clock and can be used as a speed reference. It is transmitted via optical cables between devices.

NOTE
Do not confuse the audio clock embedded in the Lightpipe protocol with ADAT Sync, which has timecode and machine control running over a proprietary DIN plug connection.

MIDI clock

MIDI clock is a signal that uses position and timing data based on musical bars and beats to determine location and speed (tempo). It can perform the same function as a positional reference and a speed reference for other MIDI devices. Cubase supports sending MIDI clock to external devices but cannot slave to incoming MIDI clock.

IMPORTANT
MIDI clock cannot be used to synchronize digital audio. It is only used for MIDI devices to play in musical sync with one another. Cubase does not support being a MIDI clock slave.

The Project Synchronization Setup dialog

Cubase’s Project Synchronization Setup dialog provides a central place to configure a complex synchronized system. In addition to settings for timecode sources and machine control, project setup parameters are available along with basic transport controls for testing the system. To open the Project Synchronization Setup dialog, proceed as follows:
• On the Transport menu, select the “Project Synchronization Setup…” option.
• On the Transport panel, [Ctrl]/[Command]-click the Sync button.

The dialog is organized into sections that group related settings. The arrows shown between the various sections of the dialog indicate how settings in one section influence settings in another section. In the following, the available sections are described in detail.

The Cubase Section

At the center of the Project Synchronization Setup dialog is the Cubase section. It is provided to help you visualize the role that Cubase takes in your setup. It shows which external signals enter or leave the application.

Timecode Source

The Timecode Source setting determines whether Cubase is acting as the timecode master or slave. When set to “Internal Timecode”, Cubase is the timecode master, generating all position references for any other device in the system. The other options are for external timecode sources. Selecting any of these makes Cubase a timecode slave when the Sync button is activated.

Internal Timecode
Cubase generates timecode based on the project timeline and project setup settings. The timecode follows the format specified in the Project Setup section.

MIDI Timecode
Cubase acts as a timecode slave to any incoming MIDI timecode (MTC) on the port(s) selected in the MIDI Timecode section, to the right of the Timecode Source section. Selecting “All MIDI Inputs” allows Cubase to sync to MTC from any MIDI connection. You can also select a single MIDI port for receiving MTC.
ASIO Audio Device
This option is only available with audio cards that support the ASIO Positioning Protocol. These audio cards have an integrated LTC reader or ADAT sync port and can perform a phase alignment of timecode and audio clock.

VST System Link
VST System Link can provide all aspects of sample-accurate synchronization between other System Link workstations.

RELATED LINKS
Working with VST System Link on page 966

Timecode Preferences

When MIDI Timecode is selected, additional options become available in the Cubase section, providing several settings for working with external timecode (a conceptual sketch of how these settings interact follows the descriptions below).

Lock Frames
This setting determines how many full frames of timecode it takes for Cubase to establish sync or “lock”. If you have an external tape transport with a very short start-up time, try lowering this number to make lock-up even faster. This option can only be set to multiples of two.

Drop Out Frames
This setting determines the number of missed timecode frames it takes for Cubase to stop. Using LTC recorded on an analog tape machine can result in some amount of dropouts. Increasing this number allows Cubase to “free-wheel” over missed frames without stopping. Lowering this number causes Cubase to stop sooner once the tape machine has stopped.

Inhibit Restart ms
Some synchronizers still transmit MTC for a short period after an external tape machine has been stopped. These extra frames of timecode can sometimes cause Cubase to restart suddenly. The “Inhibit Restart ms” setting lets you control the amount of time in milliseconds that Cubase waits before restarting (ignoring incoming MTC) once it has stopped.
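The following is a conceptual sketch, in Python, of how a timecode slave typically uses these three settings when chasing incoming MTC. It is not Cubase’s actual implementation; the class, method names, and default thresholds are illustrative assumptions only.

```python
class TimecodeChase:
    """Conceptual sketch of timecode chase logic using the three settings above.

    Not Cubase's implementation - just an illustration of how Lock Frames,
    Drop Out Frames, and Inhibit Restart ms typically interact in a timecode slave.
    """

    def __init__(self, lock_frames=2, drop_out_frames=10, inhibit_restart_ms=200):
        self.lock_frames = lock_frames            # full frames needed to establish lock
        self.drop_out_frames = drop_out_frames    # missed frames tolerated before stopping
        self.inhibit_restart_ms = inhibit_restart_ms
        self.locked = False
        self.good_frames = 0
        self.missed_frames = 0
        self.stopped_at_ms = None

    def on_frame_received(self, now_ms):
        # Ignore incoming MTC for a while after stopping, so stray frames from a
        # synchronizer that keeps transmitting do not trigger a sudden restart.
        if self.stopped_at_ms is not None and now_ms - self.stopped_at_ms < self.inhibit_restart_ms:
            return
        self.stopped_at_ms = None
        self.missed_frames = 0
        self.good_frames += 1
        if not self.locked and self.good_frames >= self.lock_frames:
            self.locked = True                    # enough good frames: start chasing

    def on_frame_missed(self, now_ms):
        if not self.locked:
            return
        self.missed_frames += 1                   # free-wheel over short dropouts
        if self.missed_frames >= self.drop_out_frames:
            self.locked = False                    # too many missed frames: stop
            self.good_frames = 0
            self.stopped_at_ms = now_ms
```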