extract frames during progressive download and store in cache for scrubbing

Topics: Windows 8 Xaml, Windows Phone 8
Jun 20, 2013 at 4:04 AM
Edited Jun 20, 2013 at 4:17 AM
Is there a "fast scrub" feature available? By "fast scrub" I mean the feature some desktop editors advertise: "Fast Scrub offers some of the smoothest timeline performance in the business, so smooth that you can often check your work just by dragging through it."

How can I emulate fast scrub using Player Framework?
Is it possible to cache the frames on Windows Phone 8 using Player Framework?

If so, is the framework performant enough to extract frames during video playback, at the decoder phase?

Why? So that during scrubbing I can access the cached video frames. Right now scrubbing is slow to respond when using progressive download on WP8.

Is there any sample code or guidance on how to achieve this?
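In the abstract, the cache described above is just a timestamp-keyed frame store: decoded frames go in during playback, and the scrubber asks for the frame nearest a seek position. A minimal sketch of that idea in plain C# (all names here, such as FrameCache, are illustrative; no Player Framework APIs are involved):

```csharp
using System;
using System.Collections.Generic;

// Minimal timestamp-keyed frame cache for scrubbing (illustrative only).
// Decoded frames are added during playback; the scrubber asks for the
// frame nearest a seek position. A real implementation would bound memory
// far more carefully than this simple capacity-based eviction.
public class FrameCache
{
    private readonly SortedList<long, byte[]> _frames = new SortedList<long, byte[]>();
    private readonly int _capacity;

    public FrameCache(int capacity) { _capacity = capacity; }

    public void Add(long timestampTicks, byte[] pixels)
    {
        if (_frames.Count >= _capacity)
            _frames.RemoveAt(0); // evict the smallest (oldest) timestamp
        _frames[timestampTicks] = pixels;
    }

    // Return the cached frame closest to the requested position, or null.
    public byte[] Nearest(long timestampTicks)
    {
        if (_frames.Count == 0) return null;
        byte[] best = null;
        long bestDist = long.MaxValue;
        foreach (var kv in _frames)
        {
            long d = Math.Abs(kv.Key - timestampTicks);
            if (d < bestDist) { bestDist = d; best = kv.Value; }
        }
        return best;
    }
}
```

During a scrub gesture you would render `Nearest(position.Ticks)` into a WriteableBitmap overlay instead of forcing the pipeline to seek and decode.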
Jun 20, 2013 at 4:03 PM
To cache the content and ensure high-performance scrubbing, you could pre-download the video to isolated storage and load it from an IsolatedStorageFileStream.
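That first option might look roughly like the sketch below. IsolatedStorageFile, IsolatedStorageFileStream, and MediaElement.SetSource are the standard WP8 Silverlight APIs; the class and method names (VideoCache, DownloadToIsolatedStorage, PlayFromCache) and the callback wiring are illustrative assumptions, not Player Framework code:

```csharp
using System;
using System.IO;
using System.IO.IsolatedStorage;
using System.Net;
using System.Windows.Controls;

// Sketch: download the whole video into isolated storage, then play it
// from local flash so scrubbing no longer waits on the network.
public static class VideoCache
{
    public static void DownloadToIsolatedStorage(string url, string fileName, Action onDone)
    {
        var request = WebRequest.CreateHttp(url);
        request.BeginGetResponse(ar =>
        {
            using (var response = request.EndGetResponse(ar))
            using (var source = response.GetResponseStream())
            using (var store = IsolatedStorageFile.GetUserStoreForApplication())
            using (var target = store.CreateFile(fileName))
            {
                // manual copy loop to avoid assuming Stream.CopyTo is available
                var buffer = new byte[64 * 1024];
                int read;
                while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                    target.Write(buffer, 0, read);
            }
            onDone(); // note: still on a thread-pool thread here
        }, null);
    }

    public static void PlayFromCache(MediaElement player, string fileName)
    {
        var store = IsolatedStorageFile.GetUserStoreForApplication();
        var stream = store.OpenFile(fileName, FileMode.Open, FileAccess.Read);
        player.SetSource(stream); // must be called on the UI thread
    }
}
```

Since `onDone` fires on a background thread, you would marshal the `PlayFromCache` call back through `Deployment.Current.Dispatcher.BeginInvoke` before touching the MediaElement.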

Alternatively, you could create a MediaStreamSource and use that to load the source; this gives you complete control over how the media is loaded. However, this is not trivial and requires knowledge of the video format and low-level container parsing.

Lastly, if you were using smooth streaming (which it sounds like you are not), you could create an ISmoothStreamingCache implementation to persist and retrieve chunks from isolated storage on the fly.
Jun 20, 2013 at 4:18 PM
Edited Jun 20, 2013 at 4:20 PM
Thanks for the response.

Option #1 will not fly.

Option #3 seems ideal, but I would have to set up a streaming server at this point.

Option #2 is more appealing.
I tried option #2 previously, but the sample app crashed; as you noted, my knowledge of video formats and low-level container parsing is limited. I like this approach but need some sample code to guide me on option #2.
Jun 27, 2013 at 5:38 AM
"...require knowledge of the video format and low level container parsing..."

So I've found a solution: a custom MediaStreamSource ("VideoMediaStreamSource") used with the Player Framework.

Code below:
using System;
using System.Collections.Generic;
using System.IO;
using System.Windows.Media;

namespace MyBytes
{
    public class VideoMediaStreamSource : MediaStreamSource
    {
        //private Stream _videoStream;
        private Stream _audioStream;
        private WaveFormatEx _waveFormat;
        private byte[] _audioSourceBytes;
        private long _currentAudioTimeStamp;

        private MediaStreamDescription _audioDesc;

        private Stream _frameStream;

        private int _frameWidth;
        private int _frameHeight;

        private int _framePixelSize;
        private int _frameBufferSize;
        public const int BytesPerPixel = 4;   // 32 bit including alpha

        private byte[][] _frames = new byte[2][];
        private byte[] _frame = null;
        private int _currentReadyFrame;
        private int _currentBufferFrame;

        private int _frameTime;
        private long _currentVideoTimeStamp;

        private MediaStreamDescription _videoDesc;
        private Dictionary<MediaSampleAttributeKeys, string> _emptySampleDict =
            new Dictionary<MediaSampleAttributeKeys, string>();

        public byte[] CurrentFrameBytes
        {
            get { return _frame; }
        }

        public void WritePixel(int position, Color color)
        {
            // BitConverter.GetBytes(color).CopyTo(_frame, position * BytesPerPixel);

            int offset = position * BytesPerPixel;

            _frames[_currentBufferFrame][offset++] = color.B;
            _frames[_currentBufferFrame][offset++] = color.G;
            _frames[_currentBufferFrame][offset++] = color.R;
            _frames[_currentBufferFrame][offset++] = color.A;

            if (position < 10)
            {
                System.Diagnostics.Debug.WriteLine("Pixel at {3} is {0} {1} {2}", color.R, color.B, color.G, position);
            }
        }

        public VideoMediaStreamSource(Stream audioStream, int frameWidth, int frameHeight)
        {
            _audioStream = audioStream;

            _frameWidth = frameWidth;
            _frameHeight = frameHeight;

            _framePixelSize = frameWidth * frameHeight;
            _frameBufferSize = _framePixelSize * BytesPerPixel;

            // 30 frames per second (note: PAL is actually 25 fps; 30 is NTSC)
            _frameTime = (int)TimeSpan.FromSeconds((double)1 / 30).Ticks;

            _frames[0] = new byte[_frameBufferSize];
            _frames[1] = new byte[_frameBufferSize];

            _currentBufferFrame = 0;
            _currentReadyFrame = 1;
        }

        public void Flip()
        {
            int f = _currentBufferFrame;
            _currentBufferFrame = _currentReadyFrame;
            _currentReadyFrame = f;
        }

        private void PrepareVideo()
        {
            _frameStream = new MemoryStream();

            // Stream Description
            Dictionary<MediaStreamAttributeKeys, string> streamAttributes =
                new Dictionary<MediaStreamAttributeKeys, string>();

            streamAttributes[MediaStreamAttributeKeys.VideoFourCC] = "H264";
            streamAttributes[MediaStreamAttributeKeys.Height] = _frameHeight.ToString();
            streamAttributes[MediaStreamAttributeKeys.Width] = _frameWidth.ToString();
            streamAttributes[MediaStreamAttributeKeys.CodecPrivateData] = "000000012742000D96540A0FD8080F162EA00000000128CE060C88";
            MediaStreamDescription msd =
                new MediaStreamDescription(MediaStreamType.Video, streamAttributes);

            _videoDesc = msd;
        }

        private void PrepareAudio()
        {
            short BitsPerSample = 16;
            int SampleRate = 8000;          // change this to something higher if we output sound from here
            short ChannelCount = 1;
            int ByteRate = SampleRate * ChannelCount * (BitsPerSample / 8);

            _waveFormat = new WaveFormatEx();   // helper class from the MediaStreamSource samples
            _waveFormat.BitsPerSample = BitsPerSample;
            _waveFormat.AvgBytesPerSec = (int)ByteRate;
            _waveFormat.Channels = ChannelCount;
            _waveFormat.BlockAlign = (short)(ChannelCount * (BitsPerSample / 8));
            _waveFormat.ext = null; // ??
            _waveFormat.FormatTag = WaveFormatEx.FormatPCM;
            _waveFormat.SamplesPerSec = SampleRate;
            _waveFormat.Size = 0; // must be zero

            _audioStream = new System.IO.MemoryStream();
            _audioSourceBytes = new byte[ByteRate];

            // TEMP just load the audio buffer with silence
            for (int i = 0; i < _audioSourceBytes.Length; i++)
            {
                _audioSourceBytes[i] = 0;
            }

            // Stream Description
            Dictionary<MediaStreamAttributeKeys, string> streamAttributes = new Dictionary<MediaStreamAttributeKeys, string>();
            streamAttributes[MediaStreamAttributeKeys.CodecPrivateData] = _waveFormat.ToHexString(); // wfx
            MediaStreamDescription msd = new MediaStreamDescription(MediaStreamType.Audio, streamAttributes);
            _audioDesc = msd;
        }

        protected override void OpenMediaAsync()
        {
            // Init
            Dictionary<MediaSourceAttributesKeys, string> sourceAttributes =
                new Dictionary<MediaSourceAttributesKeys, string>();
            List<MediaStreamDescription> availableStreams =
                new List<MediaStreamDescription>();

            PrepareVideo();
            PrepareAudio();

            availableStreams.Add(_videoDesc);
            availableStreams.Add(_audioDesc);

            // a zero timespan is an infinite video
            sourceAttributes[MediaSourceAttributesKeys.Duration] =
                TimeSpan.FromSeconds(0).Ticks.ToString();

            sourceAttributes[MediaSourceAttributesKeys.CanSeek] = false.ToString();

            // tell Silverlight that we've prepared and opened our video
            ReportOpenMediaCompleted(sourceAttributes, availableStreams);
        }

        protected override void GetSampleAsync(MediaStreamType mediaStreamType)
        {
            if (mediaStreamType == MediaStreamType.Audio)
            {
                GetAudioSample();
            }
            else if (mediaStreamType == MediaStreamType.Video)
            {
                GetVideoSample();
            }
        }

        private void GetAudioSample()
        {
            int bufferSize = _audioSourceBytes.Length;

            // spit out one second
            _audioStream.Write(_audioSourceBytes, 0, bufferSize);

            // Send out the next sample
            MediaStreamSample msSamp = new MediaStreamSample(
                _audioDesc,
                _audioStream,
                0,
                bufferSize,
                _currentAudioTimeStamp,
                _emptySampleDict);

            _currentAudioTimeStamp += _waveFormat.AudioDurationFromBufferSize((uint)bufferSize);

            ReportGetSampleCompleted(msSamp);
        }

        //private static int offset = 0;
        private void GetVideoSample()
        {
            // seems like creating a new stream is the only way to avoid out of memory and
            // actually figure out the correct offset. that can't be right.
            _frameStream = new MemoryStream();
            _frameStream.Write(_frames[_currentReadyFrame], 0, _frameBufferSize);

            // Send out the next sample
            MediaStreamSample msSamp = new MediaStreamSample(
                _videoDesc,
                _frameStream,
                0,
                _frameBufferSize,
                _currentVideoTimeStamp,
                _emptySampleDict);

            _currentVideoTimeStamp += _frameTime;

            ReportGetSampleCompleted(msSamp);
        }

        protected override void CloseMedia()
        {
            _currentAudioTimeStamp = 0;
            _currentVideoTimeStamp = 0;
        }

        protected override void GetDiagnosticAsync(MediaStreamSourceDiagnosticKind diagnosticKind)
        {
            throw new NotImplementedException();
        }

        protected override void SwitchMediaStreamAsync(MediaStreamDescription mediaStreamDescription)
        {
            throw new NotImplementedException();
        }

        protected override void SeekAsync(long seekToTime)
        {
            _currentVideoTimeStamp = seekToTime;
            ReportSeekCompleted(seekToTime);
        }
    }
}

I get a null reference exception here -> ReportGetSampleCompleted(msSamp);

It happens when I call it from the async download:

                request = WebRequest.CreateHttp(url);
                request.AllowReadStreamBuffering = true;
                IAsyncResult result = request.BeginGetResponse(new AsyncCallback(this.RequestCallback), null);

    private void RequestCallback(IAsyncResult asyncResult)
    {
        HttpWebResponse response = request.EndGetResponse(asyncResult) as HttpWebResponse;
        Stream s = response.GetResponseStream();
        VideoMediaStreamSource vss = new VideoMediaStreamSource(s, videoframeW, videoframeH);
            () =>
        ...
    }
Please, can someone guide me here, or at least tell me what I may be missing?
I was thinking maybe the time to decode is too short in the AsyncCallback, or perhaps there is a threading issue here?
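One thing worth checking on the threading theory above (an assumption, not a confirmed diagnosis): BeginGetResponse callbacks run on a thread-pool thread, while MediaElement must be touched on the UI thread, and the stream descriptions (_videoDesc/_audioDesc) must be initialized before the first GetSampleAsync fires. A sketch of the callback with the dispatcher marshaling made explicit (myMediaElement is an illustrative name; Deployment.Current.Dispatcher.BeginInvoke is the standard Silverlight/WP8 way to get back to the UI thread):

```csharp
private void RequestCallback(IAsyncResult asyncResult)
{
    // still on a thread-pool thread here
    HttpWebResponse response = (HttpWebResponse)request.EndGetResponse(asyncResult);
    Stream s = response.GetResponseStream();
    var vss = new VideoMediaStreamSource(s, videoframeW, videoframeH);

    // MediaElement members may only be accessed from the UI thread;
    // hand the source over via the dispatcher.
    Deployment.Current.Dispatcher.BeginInvoke(() =>
    {
        myMediaElement.SetSource(vss);
    });
}
```

If the exception persists after this, a debugger breakpoint in GetVideoSample checking whether _videoDesc or _emptySampleDict is null when the sample is built would narrow it down further.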