Neuroglancer module

This module provides helper methods for creating precomputed data, along with the main class for converting NumPy arrays (images) into the precomputed format.

class library.image_manipulation.neuroglancer_manager.NumpyToNeuroglancer(animal: str, volume, scales, layer_type, data_type, num_channels=1, chunk_size=[64, 64, 1], offset=[0, 0, 0])

Bases: object

Contains a collection of methods used to transform NumPy arrays into the ‘precomputed’ cloud volume format. More info: https://github.com/seung-lab/cloud-volume
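
A minimal usage sketch based on the constructor and methods documented below; the animal ID, scales, volume shape, and output path are illustrative assumptions only:

    import numpy as np
    from library.image_manipulation.neuroglancer_manager import NumpyToNeuroglancer

    # Illustrative values only: scales are nm per voxel in (x, y, z).
    volume = np.zeros((1000, 1000, 100), dtype=np.uint8)
    ng = NumpyToNeuroglancer(
        animal="DK39",                 # hypothetical animal ID
        volume=volume,
        scales=(325, 325, 20000),
        layer_type="image",
        data_type=np.uint8,
        num_channels=1,
        chunk_size=[64, 64, 1],
    )
    layer_path = "/data/DK39/neuroglancer_data/C1"   # hypothetical output directory
    ng.init_precomputed(layer_path, volume_size=volume.shape)
    ng.add_downsampled_volumes(layer_path, chunk_size=[128, 128, 64], num_mips=3)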

add_downsampled_volumes(layer_path, chunk_size=[128, 128, 64], num_mips=3) None

Augments the ‘precomputed’ cloud volume with additional resolutions using chunked downsampling tasks of the form:

    tasks = tc.create_downsampling_tasks(cv.layer_cloudpath, mip=mip, num_mips=1, factor=factors, compress=True, chunk_size=chunks)

Parameters:
  • layer_path – path of the precomputed layer

  • chunk_size – list of chunk dimensions

  • num_mips – number of levels in the pyramid
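
The downsampling-task call referenced above follows the igneous/taskqueue conventions that accompany cloud-volume (assuming tc refers to igneous.task_creation, as is conventional); a sketch of how such tasks are typically created and run, with illustrative paths and values:

    import igneous.task_creation as tc
    from taskqueue import LocalTaskQueue

    layer_path = "file:///data/DK39/neuroglancer_data/C1"   # hypothetical layer path
    tq = LocalTaskQueue(parallel=4)
    tasks = tc.create_downsampling_tasks(
        layer_path,
        mip=0,                      # start from the highest-resolution tier
        num_mips=1,                 # add one extra pyramid level per pass
        factor=(2, 2, 1),           # downsample in x and y only
        compress=True,
        chunk_size=[128, 128, 64],
    )
    tq.insert(tasks)
    tq.execute()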

add_rechunking(outputpath, chunks=[64, 64, 64], mip=0, skip_downsamples=True) None

Augments the ‘precomputed’ cloud volume with additional chunk calculations, re-chunking the data into the given chunk size.

Parameters:
  • outputpath – path of the output file location

  • chunks – list of chunk sizes

  • mip – int, pyramid level

  • skip_downsamples – boolean; skip creating downsampled levels
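
Re-chunking a precomputed layer is commonly done with igneous transfer tasks that copy the data into a new layer with the target chunk size; a sketch under that assumption (paths are illustrative):

    import igneous.task_creation as tc
    from taskqueue import LocalTaskQueue

    src = "file:///data/DK39/neuroglancer_data/C1"             # hypothetical source layer
    dest = "file:///data/DK39/neuroglancer_data/C1_rechunked"  # hypothetical output layer
    tq = LocalTaskQueue(parallel=4)
    tasks = tc.create_transfer_tasks(
        src,
        dest,
        chunk_size=[64, 64, 64],    # target chunk size
        mip=0,
        skip_downsamples=True,      # only re-chunk; do not build a pyramid here
    )
    tq.insert(tasks)
    tq.execute()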

add_segment_properties(cloud_volume, segment_properties) None

Augments the ‘precomputed’ cloud volume with segment properties (label and ID metadata attached to the segmentation)

Parameters:
  • cloud_volume – CloudVolume object

  • segment_properties – dictionary of labels, ids
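
Neuroglancer stores segment properties as a small JSON ‘info’ file inside the layer directory; a sketch of writing one, assuming segment_properties is a list of (id, label) pairs:

    import json
    import os

    def write_segment_properties(layer_dir, segment_properties):
        # segment_properties is assumed to be a list of (segment_id, label) pairs.
        ids = [str(seg_id) for seg_id, _ in segment_properties]
        labels = [label for _, label in segment_properties]
        info = {
            "@type": "neuroglancer_segment_properties",
            "inline": {
                "ids": ids,
                "properties": [{"id": "label", "type": "label", "values": labels}],
            },
        }
        prop_dir = os.path.join(layer_dir, "segment_properties")
        os.makedirs(prop_dir, exist_ok=True)
        with open(os.path.join(prop_dir, "info"), "w") as f:
            json.dump(info, f)
        # The layer's top-level info must also reference "segment_properties".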

add_segmentation_mesh(layer_path, mip=0) None

Augments ‘precomputed’ cloud volume with segmentation mesh

Parameters:
  • layer_path – str, path of the precomputed layer

  • mip – int, pyramid level
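
Meshing is also usually delegated to igneous: per-chunk meshing tasks followed by manifest tasks that stitch the mesh fragments together. A sketch under that assumption (the layer path is illustrative):

    import igneous.task_creation as tc
    from taskqueue import LocalTaskQueue

    layer_path = "file:///data/DK39/neuroglancer_data/segmentation"  # hypothetical layer
    tq = LocalTaskQueue(parallel=4)

    tasks = tc.create_meshing_tasks(layer_path, mip=0, compress=True)
    tq.insert(tasks)
    tq.execute()

    tasks = tc.create_mesh_manifest_tasks(layer_path)
    tq.insert(tasks)
    tq.execute()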

init_precomputed(path: str, volume_size, starting_points=None) None

Initializes ‘precomputed’ cloud volume format (directory holding multiple volumes)

Parameters:
  • path – str of the file location

  • volume_size – size of the volume

  • starting_points – initial starting points

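Under the hood, initializing a precomputed layer amounts to writing an ‘info’ file that describes the volume; a minimal sketch using cloud-volume's standard API (all values illustrative):

    from cloudvolume import CloudVolume

    info = CloudVolume.create_new_info(
        num_channels=1,
        layer_type="image",
        data_type="uint8",
        encoding="raw",
        resolution=[325, 325, 20000],   # nm per voxel (x, y, z)
        voxel_offset=[0, 0, 0],
        chunk_size=[64, 64, 1],
        volume_size=[60000, 34000, 450],
    )
    vol = CloudVolume("file:///data/DK39/neuroglancer_data/C1", info=info)
    vol.commit_info()   # writes the info JSON that defines the precomputed directory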

init_volume(path: str) None

Initializes a ‘precomputed’ cloud volume (a ‘volume’ is an image stack at a single resolution)

Parameters:

path – path of file location

normalize_stack(layer_path, src_path=None, dest_path=None)

Normalizes the image stack; this performs essentially the same operation as the cleaning process.
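
The exact normalization is not spelled out here; purely as an illustration, a per-section contrast rescale of the kind often used for cleaning could look like this (the percentile clipping is an assumption, not the library's method):

    import numpy as np

    def normalize_section(img: np.ndarray) -> np.ndarray:
        # Rescale one section to the full uint8 range; illustrative only.
        img = img.astype(np.float32)
        lo, hi = np.percentile(img, (1, 99))          # clip extreme outliers
        img = np.clip((img - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
        return (img * 255).astype(np.uint8)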

process_image(file_key: tuple[int, str, any, str]) None

Reads the image and inserts it into the precomputed volume

Parameters:

file_key – tuple describing the image to process
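
A sketch of what processing one file_key might involve, writing a single section into the layer as one z-slice of the CloudVolume; the tuple layout and the axis transpose are assumptions:

    import numpy as np
    from cloudvolume import CloudVolume
    from tifffile import imread

    def insert_section(layer_path: str, file_key: tuple) -> None:
        index, filepath = file_key[0], file_key[1]   # assumed (index, filepath, ...) layout
        img = imread(filepath)                       # (rows, cols)
        img = img.T[..., np.newaxis]                 # -> (x, y, 1) as cloud-volume expects
        vol = CloudVolume(layer_path)
        vol[:, :, index] = img                       # write one z-slice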

process_image_mesh(file_key)

Reads the image and inserts it into the precomputed volume

Parameters:

file_key – tuple describing the image to process

process_image_shell(file_key)

Processes an image file and updates the precomputed volume.

Args:

file_key (tuple): A tuple containing the index, input file path, and progress directory.

Returns:

None

The function performs the following steps:

1. Extracts the index, input file path, and progress directory from the file_key.
2. Checks whether the image has already been processed by looking for a progress file.
3. Reads the image from the input file path.
4. Converts the image to the specified data type (MESHDTYPE).
5. Attempts to reshape the image.
6. Logs the unique IDs and their counts in the image.
7. Updates the precomputed volume with the processed image.
8. Creates a progress file to indicate that the image has been processed.
9. Deletes the image from memory.

If any step fails, appropriate error messages are printed, and the function returns early.
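
The progress-file bookkeeping in steps 2 and 8 can be sketched as follows; the marker-file naming is an assumption:

    import os

    def already_processed(progress_dir: str, filename: str) -> bool:
        # Step 2: an existing (empty) marker file means this section is done.
        return os.path.exists(os.path.join(progress_dir, filename))

    def mark_processed(progress_dir: str, filename: str) -> None:
        # Step 8: touch an empty marker so a restarted run can skip this section.
        os.makedirs(progress_dir, exist_ok=True)
        open(os.path.join(progress_dir, filename), "w").close()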

library.image_manipulation.neuroglancer_manager.calculate_chunks(downsample, mip)

Returns chunk sizes for the different resolutions of a ‘precomputed’ cloud volume image stack. The image stack is created from full-resolution images but must be chunked for efficient storage and loading in the browser. The default value is [64, 64, 64] but may be modified for different resolutions. More info: https://github.com/seung-lab/cloud-volume

Note: the highest-resolution tier is mip 0; mip increments as resolution decreases

Parameters:
  • downsample – boolean

  • mip – integer telling us which pyramid level we want

Returns:

dictionary of chunk sizes
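
A sketch of how such a lookup could be organized as a dictionary keyed by mip level; the specific chunk sizes are illustrative, not the library's actual table:

    def chunk_sizes_sketch(downsample: bool) -> dict:
        # Illustrative values only; the real table differs.
        if downsample:
            return {mip: [64, 64, 64] for mip in range(4)}
        return {0: [128, 128, 1], 1: [128, 128, 64], 2: [64, 64, 64], 3: [64, 64, 64]}

    # e.g. pick the chunks for the pyramid level being built:
    # chunks = chunk_sizes_sketch(downsample=False)[2]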

library.image_manipulation.neuroglancer_manager.calculate_factors(downsample, mip)

Scales are calculated by default using 2x2x1 downsampling

Parameters:
  • downsample – boolean

  • mip – which pyramid level to work on

Returns:

list of downsampling factors
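
With 2x2x1 downsampling at every level, the per-level factor is constant while the cumulative scale grows with mip; a small illustrative sketch:

    def downsample_factor_sketch(mip: int) -> list:
        # Per-level factor under the 2x2x1 scheme described above.
        return [2, 2, 1]

    def cumulative_scale_sketch(mip: int) -> list:
        # Voxel-size multiplier at a given mip when each level halves x and y.
        return [2 ** mip, 2 ** mip, 1]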

library.image_manipulation.neuroglancer_manager.get_segment_ids(volume)

Gets the unique values of a numpy array. This is used in Neuroglancer for the labels in a mesh.

Parameters:

volume – numpy array

Returns:

list of segment IDs
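
A sketch of extracting segment IDs with NumPy; pairing each ID with a string label (for segment properties) and excluding 0 as background are assumptions:

    import numpy as np

    def get_segment_ids_sketch(volume: np.ndarray) -> list:
        ids = [int(v) for v in np.unique(volume) if v != 0]   # 0 assumed to be background
        return [(v, str(v)) for v in ids]                     # (id, label) pairs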