moten.extras module
Compute total motion energy from greyscale videos.
- class moten.extras.StimulusTotalMotionEnergy(video_file, size=None, nimages=inf, batch_size=1000, output_nonlinearity=<function pointwise_square>, dtype='float32', mask=None)[source]
- Bases: object

  Compute the principal components of the total motion energy.

  Total motion energy is defined as the squared difference between the previous and current frame. The pixel-by-pixel covariance of the total energy is computed frame-by-frame. Then, the spatial principal components are estimated. In a second pass, the temporal principal components are computed by projecting the total energy onto the spatial components.

  Parameters:
- video_file (str) – Full path to the video file. The video must be greyscale. 
- size (optional, tuple (vdim, hdim)) – The desired output image size. If specified, the image is scaled or shrunk to this size. If not specified, the original size is kept. 
- nimages (optional, int) – If specified, only nimages frames are loaded. 
- batch_size (optional, int) – Number of frames to process simultaneously while computing the pixel covariances. 
- output_nonlinearity (optional, function) – A pointwise function applied to the pixels of each frame. Defaults to the pointwise square. 
 
  Notes

  The time-by-pixel total motion energy matrix is defined as \(T\). Its singular value decomposition is \(T = U S V^{\intercal}\). The spatial components are \(V\) and the temporal components are \(U\). As implemented in this class:

  - The spatial components are the columns of \(V\).
  - The temporal components are scaled by their singular values (\(US\)).
  - The eigenvalues are the squared singular values (\(S^2\)).
 - Variables:
- decomposition_spatial_pcs (np.ndarray, (npixels, npcs)) 
- decomposition_temporal_pcs (list, (ntimepoints, npcs)) 
- decomposition_eigenvalues (np.ndarray, (min(npixels, nframes),)) 
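The decomposition conventions listed in the Notes can be sketched with plain NumPy on a synthetic time-by-pixel matrix (an illustration only, not moten's implementation):

```python
import numpy as np

# Synthetic stand-in for the time-by-pixel total motion energy matrix T.
rng = np.random.default_rng(0)
T = rng.standard_normal((100, 50))  # (ntimepoints, npixels)

U, S, Vt = np.linalg.svd(T, full_matrices=False)

spatial_pcs = Vt.T       # (npixels, npcs): the columns of V
temporal_pcs = U * S     # (ntimepoints, npcs): U scaled by the singular values
eigenvalues = S ** 2     # squared singular values

# The scaled temporal components equal the projection of T onto the spatial PCs.
assert np.allclose(temporal_pcs, T @ spatial_pcs)
```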
 
  Examples

  >>> import moten.extras
  >>> video_file = 'http://anwarnunez.github.io/downloads/avsnr150s24fps_tiny.mp4'
  >>> small_size = (36, 64)  # (vdim, hdim)
  >>> totalmoten = moten.extras.StimulusTotalMotionEnergy(video_file, small_size, nimages=300)
  >>> totalmoten.compute_pixel_by_pixel_covariance()
  >>> totalmoten.compute_spatial_pcs(npcs=10)
  >>> totalmoten.compute_temporal_pcs()

- compute_pixel_by_pixel_covariance(generator=None)[source]
  Compute the pixel-by-pixel covariance of the total energy video.

  Notes

  The covariance is estimated in batches.

  Parameters:
- generator (optional) – The video frame difference generator. Defaults to the video_file used to instantiate the class.
- Variables:
- covariance_pixbypix (np.ndarray, (npixels, npixels)) – The full covariance matrix. 
- covariance_nframes (int) – Number of frames used in estimating the covariance matrix. Defaults to the total number of frames in the video. 
- npixels (int) – Total number of pixels in the video (after downsampling). 
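The batched estimation described above can be sketched in plain NumPy. This is an illustration under stated assumptions, not the library's code; `batched_pixel_covariance` is a hypothetical helper that accumulates \(X^{\intercal}X\) one batch at a time:

```python
import numpy as np

def batched_pixel_covariance(frames, batch_size=1000):
    """Accumulate X.T @ X over batches of flattened, squared frames."""
    xtx = None
    nframes = 0
    batch = []
    for frame in frames:
        batch.append(np.square(frame).ravel())  # pointwise-square nonlinearity
        if len(batch) == batch_size:
            X = np.asarray(batch)
            xtx = X.T @ X if xtx is None else xtx + X.T @ X
            nframes += len(batch)
            batch = []
    if batch:  # leftover frames that did not fill a full batch
        X = np.asarray(batch)
        xtx = X.T @ X if xtx is None else xtx + X.T @ X
        nframes += len(batch)
    return nframes, xtx

# Example with random stand-ins for frame differences of shape (vdim, hdim) = (4, 5).
rng = np.random.default_rng(0)
frames = (rng.standard_normal((4, 5)) for _ in range(10))
n, XTX = batched_pixel_covariance(frames, batch_size=3)
```

Accumulating per batch keeps memory proportional to `batch_size * npixels` rather than loading the whole video at once.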
 
 
 - compute_spatial_pcs(npcs=None)[source]
  Compute the principal components from the pixel-by-pixel total energy covariance matrix.

  Parameters:
- npcs (optional, int) – Number of principal components to keep 
- Variables:
- decomposition_spatial_pcs (np.ndarray, (npixels, npcs)) 
- decomposition_eigenvalues (np.ndarray) 
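Computing spatial PCs from a pixel-by-pixel covariance matrix amounts to an eigendecomposition, since the eigenvectors of \(T^{\intercal}T\) are the columns of \(V\). A minimal NumPy sketch (an assumption about the approach, not moten's exact code):

```python
import numpy as np

# Synthetic time-by-pixel matrix and its pixel covariance.
rng = np.random.default_rng(0)
T = rng.standard_normal((100, 50))
cov = T.T @ T                           # (npixels, npixels)

evals, evecs = np.linalg.eigh(cov)      # eigh returns ascending eigenvalues
order = np.argsort(evals)[::-1]         # reorder to descending

npcs = 10
decomposition_eigenvalues = evals[order][:npcs]
decomposition_spatial_pcs = evecs[:, order][:, :npcs]  # (npixels, npcs)
```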
 
 
 - compute_temporal_pcs(generator=None, skip_first=False)[source]
  Extract the temporal principal components of the total motion energy.

  Parameters:
- generator (optional) – The video frame difference generator. Defaults to the video_file used to instantiate the class.
- skip_first (optional, bool) – By default, the first timepoint of every PC is set to zero, because the first timepoint corresponds to the difference between the first frame and nothing. If skip_first=True, the first timepoint is removed from the timecourse instead. 
 
- Variables:
- decomposition_temporal_pcs (list, (ntimepoints, npcs)) – The temporal components are scaled by their singular values (\(US\)). 
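The second pass described in the class overview can be sketched as a projection of the frame differences onto the spatial PCs, with the `skip_first` behavior handled at the end (a NumPy illustration with synthetic inputs, not the library's code):

```python
import numpy as np

rng = np.random.default_rng(0)
npixels, npcs, ntimepoints = 50, 10, 100

# Orthonormal stand-in for the spatial PCs, and random squared frame differences.
spatial_pcs = np.linalg.qr(rng.standard_normal((npixels, npcs)))[0]  # (npixels, npcs)
frame_diffs = rng.standard_normal((ntimepoints, npixels))            # (ntimepoints, npixels)

# Project the total energy onto the spatial components: US = T @ V.
temporal_pcs = frame_diffs @ spatial_pcs  # (ntimepoints, npcs)

skip_first = False
if skip_first:
    temporal_pcs = temporal_pcs[1:]  # drop the undefined first difference
else:
    temporal_pcs[0] = 0.0            # zero the first timepoint instead
```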
 
 
- moten.extras.pixbypix_covariance_from_frames_generator(data_generator, batch_size=1000, output_nonlinearity=<function pointwise_square>, dtype='float32')[source]
  Compute the pixel-by-pixel covariance from a video frame generator in batches.

  Parameters:
- data_generator (generator object) – Yields a video frame of shape (vdim, hdim) 
- batch_size (optional, int) – Number of frames to process simultaneously while computing the pixel covariances. 
- output_nonlinearity (optional, function) – A pointwise function applied to the pixels of each frame. 
 
  Examples

  >>> import moten
  >>> video_file = 'http://anwarnunez.github.io/downloads/avsnr150s24fps_tiny.mp4'
  >>> small_size = (36, 64)  # downsample to (vdim, hdim) 16:9 aspect ratio
  >>> fdiffgen = moten.io.generate_frame_difference_from_greyvideo(video_file, size=small_size, nimages=333)
  >>> nimages, XTX = moten.extras.pixbypix_covariance_from_frames_generator(fdiffgen)