Here is a specific list of the actions needed to capture and publish the lecture:
1. Establish a Session Initiation Protocol (SIP) call to the TCS – use the remote control to select the
call from the directory or call history, and press the green ‘call’ button to confirm.
2. At the end of the recording session, drop the call by pressing the red ‘end call’ button.
3. Wait approximately twice the length of the recording for the server to process the video.
4. Access the Show and Share server and match the speaker names from the LDAP-based directory (a
one-off action), and optionally add chapter markers.
5. The video is now ready for end users to view.
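The wait in step 3 can be automated rather than timed by hand. The sketch below is illustrative only: it assumes a caller-supplied `check_status` function (for example, a query against the portal – the real Show and Share interface is not shown here) and uses the rule of thumb from step 3 that processing takes roughly twice the recording length.

```python
import time

def wait_for_publication(check_status, recording_minutes, poll_seconds=60):
    """Poll until the recorded lecture has been processed and published.

    check_status     -- callable returning True once the video is available
                        (hypothetical; e.g. an HTTP query against the portal)
    recording_minutes -- length of the captured lecture; processing is
                        expected to take roughly twice this long (step 3)
    Returns the number of minutes actually waited, or raises TimeoutError
    after three times the expected processing window.
    """
    expected_seconds = 2 * recording_minutes * 60   # rule of thumb from step 3
    deadline = time.monotonic() + 3 * expected_seconds  # generous safety margin
    waited = 0.0
    while not check_status():
        if time.monotonic() > deadline:
            raise TimeoutError("video still not published")
        time.sleep(poll_seconds)
        waited += poll_seconds
    return waited / 60
```

In practice the operator would only need to run such a script once per session; the subsequent LDAP name-matching (step 4) remains a manual, one-off action.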
Optionally, once the video has been encoded, the server can make a version available for
download and further editing. This might include adding title screens at the start of each lecture,
removing any dead space between lectures, and so on. If this is done, the edited version can then be
reloaded to the Show and Share portal and published. The speaker names and key word index will be
reproduced automatically without any manual intervention. A version of the TERENA lectures was
edited in this way using iMovie on a MacBook Pro. It is also possible to control the format of the final
video by arranging the two video windows (speaker and presentation) as is considered most
appropriate. Either the speaker or presentation video may be configured to take up the full screen, or
both videos can be viewed simultaneously, with the presentation occupying most of the screen and
the speaker appearing in the corner.
It is also possible for the incoming live feeds to be re-broadcast to allow remote learners to watch the
transmission. Where this is done, the remote learners may provide feedback using other
communication methods such as Twitter.
In cases where access to the live transmission needs to be extended to remote participants, this may
be achieved by bringing additional VC end points into the call using a Multipoint Control Unit (MCU)
device. Here remote participants would be able to interact with the lecturer as the lesson proceeded.
When many end points are included in this way, it is further recommended that remote sites interact
on an invitation-only basis to prevent too many interruptions to the lecture flow. Where services like
Twitter are employed, remote attendees may discuss or question lecture content without restriction
as the event proceeds.
Once the lecture is recorded the system can be set up to automatically add watermarks, bumpers and
trailers. These can be edited to provide information that is specific to the local circumstances. For
example, a bumper might include information about the university where the recording was made,
etc. In addition, the operator can manually add chapter markers to allow specific points in the
running order to be accessed directly.
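Conceptually, chapter markers are just a mapping from time offsets to titles, which a player resolves against the current playback position. A minimal sketch of that lookup is shown below; the chapter titles and offsets are invented for illustration and do not come from any particular recording.

```python
import bisect

# Illustrative chapter markers: (offset in seconds from the start of
# the recording, chapter title). Values are invented for this example.
chapters = [
    (0,    "Welcome and introduction"),
    (310,  "Lecture part 1"),
    (1750, "Questions"),
    (2100, "Lecture part 2"),
]

def chapter_at(seconds):
    """Return the title of the chapter covering the given playback time."""
    offsets = [off for off, _ in chapters]
    # Find the last marker at or before the requested time.
    i = bisect.bisect_right(offsets, seconds) - 1
    return chapters[i][1]
```

A viewer clicking a marker simply seeks to its offset; the lookup above is the reverse operation, used to highlight the current chapter during playback.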
The end product video can be viewed via a web browser that supports Flash. The Show and Share
portal is an integral part of the workflow and this exposes the metadata added by Cisco Pulse
Analytics. This metadata includes the lecturer voice profile and key word index. The lecturer voice
profile is exposed in the form of colour coding on the video’s timeline, with a different colour for each
speaker throughout the recording. The key word index is automatically produced based on a pre-
defined vocabulary, which can be further customised to reflect the needs of specific subject matters. A
lecture on a science subject like nuclear physics would clearly have different vocabulary requirements
from a lecture on politics.
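The vocabulary-driven indexing described above can be illustrated with a short sketch. The code below is a conceptual stand-in, not the Pulse Analytics implementation: the vocabulary set and transcript segments are invented, and real speech-to-text output would supply the timed text.

```python
import re
from collections import defaultdict

# Hypothetical pre-defined vocabulary for a nuclear-physics lecture;
# in the real system this is configured on the server per subject area.
VOCABULARY = {"isotope", "fission", "neutron", "reactor"}

def keyword_index(transcript_segments):
    """Build a keyword -> [timestamps] index from timed transcript text.

    transcript_segments -- iterable of (seconds, text) pairs, e.g. from
    speech-to-text output. Only words in the pre-defined vocabulary are
    indexed, mirroring the vocabulary-driven approach described above.
    """
    index = defaultdict(list)
    for seconds, text in transcript_segments:
        for word in re.findall(r"[a-z]+", text.lower()):
            if word in VOCABULARY:
                index[word].append(seconds)
    return dict(index)
```

Swapping in a politics vocabulary would change only the `VOCABULARY` set, which is the sense in which the index can be customised per subject matter.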
It is also possible to have a stand-alone video format (such as .MP4) produced as part of the
automated process, which can then be downloaded and edited offline. This would be useful when the