I wanted to make a programmatically defined shader that uses the OMERO metadata in the `.zattrs` to set the levels, so I wrote some Neuroglancer Python code to do so in a general way. See below.
```python
import json
from decimal import Decimal

import neuroglancer
from cloudpathlib import S3Client

c = S3Client(no_sign_request=True)


def convert_scale(d):
    """Return a (scale, unit) tuple for an OME-zarr axis entry."""
    unit = d.get('unit', '')
    scale = Decimal(str(d.get('scale', 1)))
    if unit == 'millimeter':
        return scale, 'mm'
        # If I do the conversion to m using Decimal then everything works fine:
        # return scale / Decimal('1000'), 'm'
    elif unit == '':
        return scale, ''
    # Fail loudly on units this helper does not handle.
    raise ValueError(f'Unsupported unit: {unit!r}')


def define_interp_shader_code(omero_channels):
    """Build a Neuroglancer shader from OMERO channel metadata."""
    shader_lines = []
    for k, channel in enumerate(omero_channels):
        color = channel['color']
        active = channel['active']
        start = channel['window']['start']
        end = channel['window']['end']
        lo = channel['window']['min']
        hi = channel['window']['max']
        # Ensure the window end is not outside the range of the dtype.
        if end > hi:
            end = hi
        if channel.get('inverted', False):
            hi, lo = lo, hi
        shader_lines.append(
            f"#uicontrol bool channel{k}_visible checkbox(default={str(active).lower()});")
        shader_lines.append(
            f'#uicontrol vec3 channel{k}_color color(default="{color}");')
        shader_lines.append(f"vec3 channel{k} = vec3(0);")
        shader_lines.append(
            f"#uicontrol invlerp channel{k}_lut(range=[{start}, {end}], "
            f"window=[{lo}, {hi}], channel=[{k}]);")
    shader_lines.append('void main() {')
    for k in range(len(omero_channels)):
        shader_lines.append(
            f"if (channel{k}_visible) "
            f"channel{k} = channel{k}_color * channel{k}_lut(getDataValue({k}));")
    channel_names = [f'channel{k}' for k in range(len(omero_channels))]
    shader_lines.append('vec3 rgb = ' + ' + '.join(channel_names) + ';')
    shader_lines.append("float maxValue = max(max(max(rgb.x, rgb.y), rgb.z), 0.01);")
    shader_lines.append("vec4 rgba = vec4(rgb, maxValue);")
    shader_lines.append("vec4 render = min(rgba, vec4(1));")
    shader_lines.append("emitRGBA(render);")
    shader_lines.append('}')
    return "\n".join(shader_lines)


image_series_id = 1327426755
modality = 'EPI'
if modality == 'EPI':
    img_source = f's3://allen-genetic-tools/epifluorescence/{image_series_id}/ome_zarr_conversion/{image_series_id}.zarr/'
elif modality == 'STPT':
    img_source = f's3://allen-genetic-tools/tissuecyte/{image_series_id}/ome-zarr/'
else:
    raise ValueError(f'Unrecognized modality {modality}')

zattrs_loc = img_source + '.zattrs'
with c.CloudPath(zattrs_loc).open("r") as f:
    zattrs = json.load(f)

omero_channels = zattrs['omero']['channels']
shader_code = define_interp_shader_code(omero_channels)
channel_labels = [chan['label'] for chan in omero_channels]

viewer = neuroglancer.Viewer()
with viewer.txn() as s:
    axes = zattrs['multiscales'][0]['axes']
    axes[0]['name'] = 'c^'
    scale_unit_tuples = [convert_scale(a) for a in axes]
    coord_space = neuroglancer.CoordinateSpace(
        names=[a['name'] for a in axes],
        scales=[t[0] for t in scale_unit_tuples],
        units=[t[1] for t in scale_unit_tuples],
        coordinate_arrays=[
            neuroglancer.CoordinateArray(labels=channel_labels),
            None,
            None,
            None,
        ],
    )
    tform = neuroglancer.CoordinateSpaceTransform(output_dimensions=coord_space)
    s.layers[f"{image_series_id}"] = neuroglancer.ImageLayer(
        source=neuroglancer.LayerDataSource(url='zarr2://' + img_source, transform=tform),
        shader=shader_code,
    )
    s.layout = 'yz'
    s.showAxisLines = False
    s.crossSectionBackgroundColor = "#000000"
    s.showDefaultAnnotations = False
    s.crossSectionScale = 15.0 / (1000.0 * 1000.0)

state = viewer.state.to_json()
state["toolPalettes"] = {
    "Shader controls": {
        "row": 1,
        "query": "type:shaderControl"
    }
}
print(neuroglancer.url_state.to_url(state))
```
Now this script produces this link, which shows a black image; if you scroll, you get an image with varying downsampling levels across the image.
This is all due to the fact that the JSON state that is created has a slightly perturbed scale value rather than 0.00003 m, which matches the exact resolution of the dataset (and this dataset only has a single z plane).
I'm guessing this has to do with this function, which uses floating-point math to convert the scale to the base SI unit in Python. The same issue does not seem to occur when loading the source directly via the UI. It could probably be fixed by using the Decimal module, as in the commented-out code in this script.
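To illustrate the suspected precision loss, here is a minimal sketch (not the actual conversion code inside the Python API) comparing a float millimeter-to-meter conversion with a Decimal one, using the 0.03 mm resolution of this dataset:

```python
from decimal import Decimal

scale_mm = 0.03  # 30 um expressed in millimeters, as in this dataset

# Floating-point conversion to the base SI unit is not exact:
print(scale_mm / 1000)  # prints a value slightly below 3e-05, not 0.00003

# Decimal arithmetic keeps the value exact:
print(Decimal(str(scale_mm)) / Decimal('1000'))  # prints 0.00003
```

The float result is close enough to pass a casual glance, but it is a different number than 0.00003, which is enough to make the data coordinate space disagree with the global one.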
I'd also note that it's annoying that I have to set up these coordinate spaces with units at all, when all I really wanted to do was programmatically change the default c' to a c^.
The Python API should avoid unnecessary loss of precision when doing unit conversions. I think I was a bit more careful about that in the Neuroglancer JavaScript code than in the Python code.
Neuroglancer should still do something reasonable if there is a mismatch between the global coordinate space and the data coordinate space.
The OME-zarr spec does not specify a scale field within the axis metadata (I argued for allowing a scale within unit, but that also is not currently supported). Neuroglancer just silently ignores unknown fields currently, so it just ignores those scale properties. Instead, Neuroglancer infers the "base resolution" based on the coordinate transformation that is specified for the first scale level. Note: If the scale properties were not ignored, then the coordinate transformation would instead have to be an identity transform.
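For context, here is a minimal sketch of spec-conformant OME-zarr (v0.4) multiscales metadata; the axis names, units, and scale values are illustrative rather than taken from this dataset. Axis entries carry only `name`/`type`/`unit`, and the per-dimension scales live in each dataset's `coordinateTransformations`, which is where Neuroglancer infers the base resolution from:

```python
import json

# Illustrative OME-zarr (v0.4) multiscales metadata.  Axis entries have no
# "scale" field; the per-dimension scale lives in each scale level's
# coordinateTransformations, and Neuroglancer reads the base resolution
# from the transformation of the first scale level.
zattrs = {
    "multiscales": [{
        "version": "0.4",
        "axes": [
            {"name": "c", "type": "channel"},
            {"name": "z", "type": "space", "unit": "millimeter"},
            {"name": "y", "type": "space", "unit": "millimeter"},
            {"name": "x", "type": "space", "unit": "millimeter"},
        ],
        "datasets": [{
            "path": "0",
            "coordinateTransformations": [
                {"type": "scale", "scale": [1.0, 0.1, 0.03, 0.03]},
            ],
        }],
    }],
}

# A non-spec "scale" key inside an axis entry (as in the .zattrs this
# script reads) would simply be ignored by Neuroglancer.
print(json.dumps(zattrs["multiscales"][0]["axes"][1]))
```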
So I was trying to make some Neuroglancer links to an OME-zarr file that has a single plane of images.
Source:
s3://allen-genetic-tools/epifluorescence/1327426755/ome_zarr_conversion/1327426755.zarr/|zarr2: