.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/introduction/demo_batching.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here <sphx_glr_download_auto_examples_introduction_demo_batching.py>` to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_introduction_demo_batching.py:


==========================================================
Computing motion energy features from batches of stimuli
==========================================================

This example shows how to extract motion energy features from batches of a
video. When the stimulus is very high-resolution (e.g. 4K) or several hours
long, it might not be possible to load it into memory all at once. In such
situations, it is useful to load a small number of video frames at a time and
extract motion energy features from that subset of frames alone. To do this
properly, one must avoid convolution edge effects. This example shows how.

.. GENERATED FROM PYTHON SOURCE LINES 13-17

Features from stimulus
======================

First, we specify the stimulus we want to load.

.. GENERATED FROM PYTHON SOURCE LINES 17-24

.. code-block:: default

    import moten
    import numpy as np
    import matplotlib.pyplot as plt

    stimulus_fps = 24
    video_file = 'http://anwarnunez.github.io/downloads/avsnr150s24fps_tiny.mp4'

.. GENERATED FROM PYTHON SOURCE LINES 25-26

Load the first 300 images and spatially downsample the video.

.. GENERATED FROM PYTHON SOURCE LINES 26-36

.. code-block:: default

    small_vhsize = (72, 128)  # height x width
    luminance_images = moten.io.video2luminance(video_file, size=small_vhsize, nimages=300)
    nimages, vdim, hdim = luminance_images.shape
    print(vdim, hdim)

    fig, ax = plt.subplots()
    ax.matshow(luminance_images[200], vmin=0, vmax=100, cmap='inferno')
    ax.set_xticks([])
    ax.set_yticks([])
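At this downsampled size, 300 frames easily fit in memory. For longer or
higher-resolution stimuli the array grows quickly; a back-of-the-envelope
estimate (with a hypothetical duration and resolution, assuming one
``float64`` luminance value per pixel) illustrates why batching becomes
necessary:

```python
import numpy as np

# Hypothetical stimulus: one hour of 4K video at 24 fps,
# stored as a single float64 luminance array.
fps = 24
duration_s = 3600
height, width = 2160, 3840

nframes = fps * duration_s
bytes_per_frame = height * width * np.dtype(np.float64).itemsize
total_gb = nframes * bytes_per_frame / 1e9
print(f'{total_gb:.0f} GB')  # -> 5733 GB: far too large for RAM
```

For comparison, the 300 downsampled frames used in this example occupy only
about 22 MB (300 x 72 x 128 x 8 bytes).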
.. image:: /auto_examples/introduction/images/sphx_glr_demo_batching_001.png
    :alt: demo batching
    :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    72 128
    []

.. GENERATED FROM PYTHON SOURCE LINES 37-38

Next, we construct the pyramid and extract motion energy features from the
full stimulus.

.. GENERATED FROM PYTHON SOURCE LINES 38-47

.. code-block:: default

    pyramid = moten.pyramids.MotionEnergyPyramid(stimulus_vhsize=(vdim, hdim),
                                                 stimulus_fps=stimulus_fps,
                                                 filter_temporal_width=16)

    moten_features = pyramid.project_stimulus(luminance_images)
    print(moten_features.shape)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    (300, 2530)

.. GENERATED FROM PYTHON SOURCE LINES 48-54

Features from stimulus batches
==============================

Next, instead of computing the features from the full stimulus, we compute
them from separate but continuous stimulus chunks. These stimulus chunks are
the stimulus batches. We have to add some padding to the batches in order to
avoid convolution edge effects. The padding is determined by the temporal
width of the motion energy filter. By default, the temporal width is 2/3 of
the stimulus frame rate (``int(fps*(2/3))``). This parameter can be specified
when instantiating a pyramid by passing e.g. ``filter_temporal_width=16``.
Once the pyramid is defined, the parameter can also be accessed from the
``pyramid.definition`` dictionary.

.. GENERATED FROM PYTHON SOURCE LINES 54-57

.. code-block:: default

    filter_temporal_width = pyramid.definition['filter_temporal_width']

.. GENERATED FROM PYTHON SOURCE LINES 58-59

Finally, we define the padding window as half the temporal filter width.

.. GENERATED FROM PYTHON SOURCE LINES 59-63

.. code-block:: default

    window = int(np.ceil((filter_temporal_width/2)))
    print(filter_temporal_width, window)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    16 8
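To see why half the filter width is the right amount of padding, consider a
toy one-dimensional analogy (plain ``numpy`` convolution with a hypothetical
``np.hanning`` filter, not moten's actual spatiotemporal filter bank): a
batch padded by ``window`` samples on each side and trimmed by ``window``
afterwards reproduces the full-signal result exactly.

```python
import numpy as np

# Toy 1-D analogy of the batching scheme: convolve a middle batch
# with `window` samples of context on each side, then trim.
rng = np.random.default_rng(0)
signal = rng.standard_normal(300)  # stand-in for one pixel's time course
filt = np.hanning(16)              # hypothetical temporal filter, width 16
window = 8                         # half the filter width

full = np.convolve(signal, filt, mode='same')

# Middle batch [120:180], padded on both sides before filtering:
padded_batch = signal[120 - window:180 + window]
batched = np.convolve(padded_batch, filt, mode='same')[window:-window]

print(np.allclose(full[120:180], batched))  # True
```

Without the padding, ``np.convolve`` would implicitly zero-pad the batch
edges, and the first and last ``window`` samples of each batch would disagree
with the full-signal result; this is exactly the edge effect that the
trimming in the batching loop removes.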
.. GENERATED FROM PYTHON SOURCE LINES 64-65

Now we are ready to extract motion energy features in batches:

.. GENERATED FROM PYTHON SOURCE LINES 65-90

.. code-block:: default

    nbatches = 5
    batch_size = int(np.ceil(nimages/nbatches))
    batched_data = []
    for bdx in range(nbatches):
        start_frame, end_frame = batch_size*bdx, batch_size*(bdx + 1)
        print('Batch %i/%i [%i:%i]'%(bdx+1, nbatches, start_frame, end_frame))

        # Pad the batch with `window` frames on each side (when available)
        batch_start = max(start_frame - window, 0)
        batch_end = end_frame + window
        stimulus_batch = luminance_images[batch_start:batch_end]
        batched_responses = pyramid.project_stimulus(stimulus_batch)

        # Trim the padded frames off the edges
        if bdx == 0:
            batched_responses = batched_responses[:-window]
        elif bdx + 1 == nbatches:
            batched_responses = batched_responses[window:]
        else:
            batched_responses = batched_responses[window:-window]
        batched_data.append(batched_responses)
    batched_data = np.vstack(batched_data)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Batch 1/5 [0:60]
    Batch 2/5 [60:120]
    Batch 3/5 [120:180]
    Batch 4/5 [180:240]
    Batch 5/5 [240:300]

.. GENERATED FROM PYTHON SOURCE LINES 91-92

The batched features are exactly the same as the features computed from the
full stimulus.

.. GENERATED FROM PYTHON SOURCE LINES 92-94

.. code-block:: default

    assert np.allclose(moten_features, batched_data)

.. GENERATED FROM PYTHON SOURCE LINES 95-96

In this example, the stimulus (``luminance_images``) is already in memory,
so batching provides no benefit. However, there are situations in which the
stimulus cannot be loaded all at once. In such situations, batching is
necessary. One can modify the code above and write a function that loads
only the subset of frames that fits into memory (e.g. ``stimulus_batch =
load_my_video_frames_batch('my_stimulus_video_file.avi', batch_start,
batch_end)``).

.. rst-class:: sphx-glr-timing

**Total running time of the script:** ( 0 minutes 29.012 seconds)


.. _sphx_glr_download_auto_examples_introduction_demo_batching.py:


.. only:: html

 .. container:: sphx-glr-footer
    :class: sphx-glr-footer-example

  .. container:: sphx-glr-download sphx-glr-download-python

     :download:`Download Python source code: demo_batching.py <demo_batching.py>`

  .. container:: sphx-glr-download sphx-glr-download-jupyter

     :download:`Download Jupyter notebook: demo_batching.ipynb <demo_batching.ipynb>`


.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery `_