Individual glacier data inspection#

This notebook will walk through steps to read in and organize velocity data and clip it to the extent of a single glacier. The tools we will use include xarray, rioxarray, geopandas, and flox.

To clip ITS_LIVE data to the extent of a single glacier, we will use a vector dataset of glacier outlines, the Randolph Glacier Inventory.

Learning goals#

Concepts#

  • Subset large raster dataset to area of interest using vector data

  • Examine different types of data stored within a raster object (in this example, the data is stored as a zarr datacube)

  • Handling different coordinate reference systems and projections

  • Dataset inspection using:

    • Xarray label- and index-based selection

    • Grouped computations and reductions

    • Visualization tools

  • Dimensional reductions and computations

  • Examining velocity component vectors

  • Calculating the magnitude of the displacement vector from velocity component vectors

Techniques#

  • Read in raster data using xarray

  • Read in vector data using geopandas

  • Manipulate and organize raster data using xarray functionality

  • Explore larger-than-memory data using dask and xarray

  • Troubleshooting errors and warnings

  • Visualizing Xarray data using FacetGrid objects

Other useful resources#

These are resources that contain additional examples and discussion of the content in this notebook and more.

Software + Setup#

First, let’s import the Python libraries that we’ll need for this notebook:

%xmode minimal
Exception reporting mode: Minimal
import geopandas as gpd
import os
import numpy as np
import xarray as xr
import rioxarray as rxr
import matplotlib.pyplot as plt
from shapely.geometry import Polygon
from shapely.geometry import MultiPolygon
from shapely.geometry import Point
import cartopy
import json
import urllib.request
import pandas as pd

%config InlineBackend.figure_format='retina'
from dask.distributed import Client, LocalCluster
import psutil
import logging
#cluster = LocalCluster(
#    n_workers = psutil.cpu_count(logical=True)-1, 
#    silence_logs = logging.ERROR, 
#    threads_per_worker=1,
#)
#client = Client(cluster)
#client

Reading in ITS_LIVE ice velocity dataset (raster data)#

We will use some of the functions we defined in the data access notebook in this notebook and others within this tutorial. They are all stored in the itslivetools.py file. If you cloned this tutorial from its github repository you’ll see that itslivetools.py is in the same directory as our current notebook, so we can import it with the following line:

import itslivetools

First, let’s read in the catalog again:

itslive_catalog = gpd.read_file('https://its-live-data.s3.amazonaws.com/datacubes/catalog_v02.json')

Next, we’ll use the find_granule_by_point() and read_in_s3() functions to read in the ITS_LIVE zarr datacube as an xarray.Dataset object.

The read_in_s3() function will take a url that points to a zarr data cube stored in an AWS S3 bucket and return an xarray dataset.

I started with chunk_size='auto', which will choose chunk sizes that match the underlying data structure (this is generally ideal). You can read more about choosing good chunk sizes here. If you want to use a different chunk size, specify it when you call the read_in_s3() function.
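For reference, here is a minimal sketch of what a function like read_in_s3() might look like; the real implementation lives in itslivetools.py, so treat the signature and defaults below as assumptions:

def read_in_s3(http_url: str, chunk_size='auto') -> xr.Dataset:
    # Open an ITS_LIVE zarr datacube from its cloud URL as a chunked xarray.Dataset.
    # Sketch only -- see itslivetools.py for the version used in this tutorial.
    return xr.open_dataset(
        http_url,
        engine='zarr',      # read the zarr store directly from cloud object storage
        chunks=chunk_size,  # 'auto' lets dask pick chunks that match the on-disk layout
    )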

url = itslivetools.find_granule_by_point([95.180191, 30.645973])
url
'http://its-live-data.s3.amazonaws.com/datacubes/v2/N30E090/ITS_LIVE_vel_EPSG32646_G0120_X750000_Y3350000.zarr'
dc = itslivetools.read_in_s3(url)
dc
<xarray.Dataset>
Dimensions:                     (mid_date: 25243, y: 833, x: 833)
Coordinates:
  * mid_date                    (mid_date) datetime64[ns] 2022-06-07T04:21:44...
  * x                           (x) float64 7.001e+05 7.003e+05 ... 8e+05
  * y                           (y) float64 3.4e+06 3.4e+06 ... 3.3e+06 3.3e+06
Data variables: (12/60)
    M11                         (mid_date, y, x) float32 dask.array<chunksize=(25243, 30, 30), meta=np.ndarray>
    M11_dr_to_vr_factor         (mid_date) float32 dask.array<chunksize=(25243,), meta=np.ndarray>
    M12                         (mid_date, y, x) float32 dask.array<chunksize=(25243, 30, 30), meta=np.ndarray>
    M12_dr_to_vr_factor         (mid_date) float32 dask.array<chunksize=(25243,), meta=np.ndarray>
    acquisition_date_img1       (mid_date) datetime64[ns] dask.array<chunksize=(25243,), meta=np.ndarray>
    acquisition_date_img2       (mid_date) datetime64[ns] dask.array<chunksize=(25243,), meta=np.ndarray>
    ...                          ...
    vy_error_modeled            (mid_date) float32 dask.array<chunksize=(25243,), meta=np.ndarray>
    vy_error_slow               (mid_date) float32 dask.array<chunksize=(25243,), meta=np.ndarray>
    vy_error_stationary         (mid_date) float32 dask.array<chunksize=(25243,), meta=np.ndarray>
    vy_stable_shift             (mid_date) float32 dask.array<chunksize=(25243,), meta=np.ndarray>
    vy_stable_shift_slow        (mid_date) float32 dask.array<chunksize=(25243,), meta=np.ndarray>
    vy_stable_shift_stationary  (mid_date) float32 dask.array<chunksize=(25243,), meta=np.ndarray>
Attributes: (12/19)
    Conventions:                CF-1.8
    GDAL_AREA_OR_POINT:         Area
    author:                     ITS_LIVE, a NASA MEaSUREs project (its-live.j...
    autoRIFT_parameter_file:    http://its-live-data.s3.amazonaws.com/autorif...
    datacube_software_version:  1.0
    date_created:               25-Sep-2023 22:00:23
    ...                         ...
    s3:                         s3://its-live-data/datacubes/v2/N30E090/ITS_L...
    skipped_granules:           s3://its-live-data/datacubes/v2/N30E090/ITS_L...
    time_standard_img1:         UTC
    time_standard_img2:         UTC
    title:                      ITS_LIVE datacube of image pair velocities
    url:                        https://its-live-data.s3.amazonaws.com/datacu...

Working with dask#

We can see that this is a very large dataset. The time dimension (mid_date) has a length of 25,243, and the x and y dimensions are each 833 units long. There are 60 variables within this dataset that exist along some or all of the dimensions (with the exception of the mapping data variable, which contains geographic information). If you expand one of the 3-dimensional variables (those that exist along x, y, and mid_date) you’ll see that it alone is very large (65 GB!). This is more than my computer can handle, so we will use dask, a Python package for parallelizing and scaling workflows that lets us work with larger-than-memory data efficiently.
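To see where that 65 GB figure comes from, you can check the uncompressed size of a single 3-dimensional variable yourself (a quick sanity check):

# 25,243 x 833 x 833 float32 values ~ 70 GB (about 65 GiB) if loaded into memory
print(f"{dc['v'].nbytes / 1e9:.1f} GB")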

dc.v
<xarray.DataArray 'v' (mid_date: 25243, y: 833, x: 833)>
dask.array<open_dataset-v, shape=(25243, 833, 833), dtype=float32, chunksize=(25243, 30, 30), chunktype=numpy.ndarray>
Coordinates:
  * mid_date  (mid_date) datetime64[ns] 2022-06-07T04:21:44.211208960 ... 201...
  * x         (x) float64 7.001e+05 7.003e+05 7.004e+05 ... 7.999e+05 8e+05
  * y         (y) float64 3.4e+06 3.4e+06 3.4e+06 ... 3.3e+06 3.3e+06 3.3e+06
Attributes:
    description:    velocity magnitude
    grid_mapping:   mapping
    standard_name:  land_ice_surface_velocity
    units:          meter/year

We are reading this in as a dask array. Let’s take a look at the chunk sizes:

Note

chunksizes shows the sizes of all chunks along each dimension, keyed by dimension name. chunks shows the same information; on a DataArray it is a tuple of block sizes rather than a mapping.
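For example, on a single variable (where the chunks are always self-consistent) the two properties look like this; the values shown in the comments are illustrative:

# .chunksizes is a mapping keyed by dimension name;
# .chunks on a DataArray is a positional tuple of the same block sizes
print(dc.v.chunksizes)  # e.g. Frozen({'mid_date': (25243,), 'y': (30, ..., 23), 'x': (30, ..., 23)})
print(dc.v.chunks)      # e.g. ((25243,), (30, ..., 23), (30, ..., 23))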

dc.chunksizes
ValueError: Object has inconsistent chunks along dimension y. This can be fixed by calling unify_chunks().

As suggested in the error message, apply the unify_chunks() method:

dc = dc.unify_chunks()

Great, that worked. Now we can see the chunk sizes of the dataset:

dc.chunksizes
Frozen({'mid_date': (25243,), 'y': (30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 23), 'x': (30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 23)})

Note

Setting the dask chunk size to 'auto' at the xr.open_dataset() step will use chunk sizes that most closely resemble the structure of the underlying data. To avoid imposing a chunk size that isn’t a good fit for that structure, we hold off on re-chunking until we have selected a subset of our area of interest from the larger dataset.

Check the CRS of the xarray object:

dc.attrs['projection']
'32646'

Let’s take a look at the time dimension (mid_date here). To start with, we’ll just print the first 10 values:

for element in range(10):
    print(dc.mid_date[element].data)
2022-06-07T04:21:44.211208960
2018-04-14T04:18:49.171219968
2017-02-10T16:15:50.660901120
2022-04-03T04:19:01.211214080
2021-07-22T04:16:46.210427904
2019-03-15T04:15:44.180925952
2002-09-15T03:59:12.379172096
2002-12-28T03:42:16.181281024
2021-06-29T16:16:10.210323968
2022-03-26T16:18:35.211123968

It doesn’t look like the time dimension is in chronological order, so let’s fix that:

Note

In some environments, the following cell produces an error and can lead to a dead kernel. We’ll troubleshoot this in the next sub-section in case you run into it.

dc_timesorted = dc.sortby(dc['mid_date'])

Note: work-around in case the above cell produces an error#

Note

In some test cases, the above cell triggered an error. While the current version runs fine, I’m including a work-around in the following cells in case you encounter this error as well. If you don’t, feel free to skip ahead to the next sub-section heading.

Let’s follow some internet advice and try to fix this issue. We will actually start over from the very beginning and read in the dataset using only xarray and not dask.

dc_new = xr.open_dataset(url, engine='zarr')

Now we have an xr.Dataset that is built on numpy arrays rather than dask arrays, which means we can re-index along the time dimension using xarray’s ‘lazy’ functionality:

dc_new_timesorted = dc_new.sortby(dc_new['mid_date'])
for element in range(10):
    print(dc_new_timesorted.mid_date[element].data)
1986-09-11T03:31:15.003252992
1986-10-05T03:31:06.144750016
1986-10-21T03:31:34.493249984
1986-11-22T03:29:27.023556992
1986-11-30T03:29:08.710132992
1986-12-08T03:29:55.372057024
1986-12-08T03:33:17.095283968
1986-12-16T03:30:10.645544000
1986-12-24T03:29:52.332120960
1987-01-09T03:30:01.787228992

Great, much easier. Now, we will chunk the dataset.

dc_new_timesorted = dc_new_timesorted.chunk()
dc_new_timesorted.chunks
Frozen({'mid_date': (25243,), 'y': (833,), 'x': (833,)})
dc_new_timesorted
<xarray.Dataset>
Dimensions:                     (mid_date: 25243, y: 833, x: 833)
Coordinates:
  * mid_date                    (mid_date) datetime64[ns] 1986-09-11T03:31:15...
  * x                           (x) float64 7.001e+05 7.003e+05 ... 8e+05
  * y                           (y) float64 3.4e+06 3.4e+06 ... 3.3e+06 3.3e+06
Data variables: (12/60)
    M11                         (mid_date, y, x) float32 dask.array<chunksize=(25243, 833, 833), meta=np.ndarray>
    M11_dr_to_vr_factor         (mid_date) float32 dask.array<chunksize=(25243,), meta=np.ndarray>
    M12                         (mid_date, y, x) float32 dask.array<chunksize=(25243, 833, 833), meta=np.ndarray>
    M12_dr_to_vr_factor         (mid_date) float32 dask.array<chunksize=(25243,), meta=np.ndarray>
    acquisition_date_img1       (mid_date) datetime64[ns] dask.array<chunksize=(25243,), meta=np.ndarray>
    acquisition_date_img2       (mid_date) datetime64[ns] dask.array<chunksize=(25243,), meta=np.ndarray>
    ...                          ...
    vy_error_modeled            (mid_date) float32 dask.array<chunksize=(25243,), meta=np.ndarray>
    vy_error_slow               (mid_date) float32 dask.array<chunksize=(25243,), meta=np.ndarray>
    vy_error_stationary         (mid_date) float32 dask.array<chunksize=(25243,), meta=np.ndarray>
    vy_stable_shift             (mid_date) float32 dask.array<chunksize=(25243,), meta=np.ndarray>
    vy_stable_shift_slow        (mid_date) float32 dask.array<chunksize=(25243,), meta=np.ndarray>
    vy_stable_shift_stationary  (mid_date) float32 dask.array<chunksize=(25243,), meta=np.ndarray>
Attributes: (12/19)
    Conventions:                CF-1.8
    GDAL_AREA_OR_POINT:         Area
    author:                     ITS_LIVE, a NASA MEaSUREs project (its-live.j...
    autoRIFT_parameter_file:    http://its-live-data.s3.amazonaws.com/autorif...
    datacube_software_version:  1.0
    date_created:               25-Sep-2023 22:00:23
    ...                         ...
    s3:                         s3://its-live-data/datacubes/v2/N30E090/ITS_L...
    skipped_granules:           s3://its-live-data/datacubes/v2/N30E090/ITS_L...
    time_standard_img1:         UTC
    time_standard_img2:         UTC
    title:                      ITS_LIVE datacube of image pair velocities
    url:                        https://its-live-data.s3.amazonaws.com/datacu...

You can see in the object above that, while we technically now have a ‘chunked’ dataset, each variable is stored as one single chunk that spans the entire array. That’s not what we want, so let’s re-chunk the data using the chunking scheme of the original dc object:

chunking_dict = dc.chunksizes
chunking_dict
Frozen({'mid_date': (25243,), 'y': (30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 23), 'x': (30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 23)})
dc_rechunk = dc_new_timesorted.chunk(chunking_dict)
dc_rechunk.chunks
Frozen({'mid_date': (25243,), 'y': (30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 23), 'x': (30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 23)})

Great, now we have our ITS_LIVE dataset, organized by time and with appropriate chunking. Let’s move on and read in vector data describing some physical features we’d like to examine with the ITS_LIVE dataset.

Incorporating glacier outlines (vector data)#

Vector data represent discrete features. They contain geometry data as well as attribute data about the features. For a more in-depth description of vector data, read this. We will use vector data to focus our analysis on specific glaciers. The dataset we will be using is called the Randolph Glacier Inventory (RGI). It is a very important dataset for glaciology research; you can read more about it here.
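As a minimal illustration of this geometry-plus-attributes structure (a toy example, not RGI data), a GeoDataFrame pairs attribute columns with a geometry column:

# one feature with two attribute columns and a square polygon geometry in lat/lon
toy = gpd.GeoDataFrame(
    {'name': ['glacier_a'], 'area_km2': [1.2]},            # attribute data
    geometry=[Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])],  # geometry data
    crs='EPSG:4326',
)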

Read in vector data#

We will read in just one region of the RGI (region 15, SouthAsiaEast). RGI data is downloaded in lat/lon coordinates. We will project it to match the coordinate reference system (CRS) of the ITS_LIVE dataset and then select an individual glacier to begin our analysis. ITS_LIVE data is in the Universal Transverse Mercator (UTM) coordinate system, and each datacube is projected to the UTM zone specific to its location. You can read more about these concepts here.

se_asia = gpd.read_parquet('rgi7_region15_south_asia_east.parquet')

Handling projections#

Check the projection of the ITS_LIVE dataset.

dc.attrs['projection']
'32646'

This means the data is projected to UTM zone 46N.
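If you want to confirm what that EPSG code corresponds to, pyproj (a dependency of geopandas) can decode it; this is just a convenience check:

from pyproj import CRS

CRS.from_epsg(int(dc.attrs['projection'])).name  # 'WGS 84 / UTM zone 46N'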

#project rgi data to match itslive
#we know the epsg code from the 'projection' attribute of the dc object (it is also stored in the 'spatial_epsg' attr of the mapping variable)
se_asia_prj = se_asia.to_crs(f'EPSG:{dc.attrs["projection"]}')
se_asia_prj.head(3)
rgi_id o1region o2region glims_id anlys_id subm_id src_date cenlon cenlat utm_zone ... zmin_m zmax_m zmed_m zmean_m slope_deg aspect_deg aspect_sec dem_source lmax_m geometry
0 RGI2000-v7.0-G-15-00001 15 15-01 G078088E31398N 866850 752 2002-07-10T00:00:00 78.087891 31.398046 44 ... 4662.2950 4699.2095 4669.4720 4671.4253 13.427070 122.267290 4 COPDEM30 173 POLYGON Z ((-924868.476 3571663.111 0.000, -92...
1 RGI2000-v7.0-G-15-00002 15 15-01 G078125E31399N 867227 752 2002-07-10T00:00:00 78.123699 31.397796 44 ... 4453.3584 4705.9920 4570.9473 4571.2770 22.822983 269.669144 7 COPDEM30 1113 POLYGON Z ((-921270.161 3571706.471 0.000, -92...
2 RGI2000-v7.0-G-15-00003 15 15-01 G078128E31390N 867273 752 2000-08-05T00:00:00 78.128510 31.390287 44 ... 4791.7593 4858.6807 4832.1836 4827.6700 15.626262 212.719681 6 COPDEM30 327 POLYGON Z ((-921061.745 3570342.665 0.000, -92...

3 rows × 29 columns

len(se_asia_prj)
18587

Crop RGI to ITS_LIVE extent#

Now we have a raster object (the ITS_LIVE dataset) and a vector dataset containing many (18,587!) features, which in this case are individual glacier outlines. We first want to crop the vector dataset by the extent of the raster dataset so that we can look only at the glaciers that are covered by the velocity data. In the first notebook, accessing_s3_data, we defined a function to create a vector object that describes the footprint of a raster object. Instead of defining it again here we can call it from the itslivetools script that we imported at the beginning of this notebook.
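For reference, a function like get_bounds_polygon() could be sketched roughly as follows; the version we actually call lives in itslivetools.py, so treat the details here as assumptions:

def get_bounds_polygon(input_xr: xr.Dataset) -> gpd.GeoDataFrame:
    # Build a one-row GeoDataFrame whose polygon traces the raster footprint (sketch only).
    xmin, xmax = float(input_xr.x.min()), float(input_xr.x.max())
    ymin, ymax = float(input_xr.y.min()), float(input_xr.y.max())
    footprint = Polygon([(xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax)])
    return gpd.GeoDataFrame(geometry=[footprint], crs=f"EPSG:{input_xr.attrs['projection']}")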

#first, get vector bbox of itslive
bbox_dc = itslivetools.get_bounds_polygon(dc)
bbox_dc['geometry']
0    POLYGON ((700132.500 3300067.500, 799972.500 3...
Name: geometry, dtype: geometry

The CRS of the dc and bbox_dc objects should be the same, but it’s a good idea to check.

#first check the projection of dc. Notice that it is stored in the attributes of the dc object
dc.attrs['projection']
'32646'
#then check the crs of bbox_dc. Do this by accessing the crs attribute of the bbox_dc object
bbox_dc.crs
<Projected CRS: EPSG:32646>
Name: WGS 84 / UTM zone 46N
Axis Info [cartesian]:
- E[east]: Easting (metre)
- N[north]: Northing (metre)
Area of Use:
- name: Between 90°E and 96°E, northern hemisphere between equator and 84°N, onshore and offshore. Bangladesh. Bhutan. China. Indonesia. Mongolia. Myanmar (Burma). Russian Federation.
- bounds: (90.0, 0.0, 96.0, 84.0)
Coordinate Operation:
- name: UTM zone 46N
- method: Transverse Mercator
Datum: World Geodetic System 1984 ensemble
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
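If you prefer a programmatic check, a single assertion works (assuming both objects carry the EPSG codes shown above):

# raises an AssertionError if the two coordinate reference systems disagree
assert bbox_dc.crs.to_epsg() == int(dc.attrs['projection'])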

If they weren’t the same, you could use the code that is commented out below to reproject the bbox_dc object to match dc.

#project from latlon to local utm 
#bbox_dc = bbox_dc.to_crs(f'EPSG:{dc.attrs["projection"]}')
#bbox_dc
#plot the outline of the itslive granule and the rgi dataframe together
fig, ax = plt.subplots()

bbox_dc.plot(ax=ax, facecolor='None', color='red')
se_asia_prj.plot(ax=ax, facecolor='None')
<Axes: >
[Figure: RGI glacier outlines (black) plotted over the footprint of the ITS_LIVE datacube (red)]

The plot above shows the coverage of the vector dataset, in black, relative to the extent of the raster dataset, in red. We will use the geopandas .clip() method to clip the RGI object (se_asia_prj) by the bbox_dc object, which represents the footprint of the raster data cube.

#subset rgi to bounds 
se_asia_subset = gpd.clip(se_asia_prj, bbox_dc)
se_asia_subset.head()
rgi_id o1region o2region glims_id anlys_id subm_id src_date cenlon cenlat utm_zone ... zmin_m zmax_m zmed_m zmean_m slope_deg aspect_deg aspect_sec dem_source lmax_m geometry
16373 RGI2000-v7.0-G-15-16374 15 15-03 G095930E29817N 930178 752 2005-09-08T00:00:00 95.929916 29.817003 46 ... 4985.7314 5274.0435 5142.7660 5148.8170 27.024134 139.048110 4 COPDEM30 756 POLYGON Z ((783110.719 3302487.481 0.000, 7831...
16374 RGI2000-v7.0-G-15-16375 15 15-03 G095925E29818N 930160 752 2005-09-08T00:00:00 95.925181 29.818399 46 ... 4856.2790 5054.9253 4929.5560 4933.6890 44.126980 211.518448 6 COPDEM30 366 POLYGON Z ((782511.360 3302381.154 0.000, 7825...
16376 RGI2000-v7.0-G-15-16377 15 15-03 G095915E29820N 930107 752 2005-09-08T00:00:00 95.914583 29.819510 46 ... 5072.8910 5150.6196 5108.5020 5111.7217 23.980000 219.341537 6 COPDEM30 170 POLYGON Z ((781619.822 3302305.074 0.000, 7816...
16371 RGI2000-v7.0-G-15-16372 15 15-03 G095936E29819N 930215 752 2005-09-08T00:00:00 95.935554 29.819123 46 ... 4838.7646 5194.8840 5001.5117 4992.3706 25.684517 128.737870 4 COPDEM30 931 POLYGON Z ((783420.055 3302493.804 0.000, 7834...
15879 RGI2000-v7.0-G-15-15880 15 15-03 G095459E29807N 928789 752 1999-07-29T00:00:00 95.459374 29.807181 46 ... 3802.1846 4155.1255 4000.2695 4000.4404 28.155806 116.148640 4 COPDEM30 776 POLYGON Z ((737667.211 3300277.169 0.000, 7377...

5 rows × 29 columns

We can use the geopandas .explore() method to take a look at the RGI dataset:

se_asia_subset.explore()
/home/emmamarshall/miniconda3/envs/itslive_tutorial/lib/python3.11/site-packages/folium/features.py:1102: UserWarning: GeoJsonTooltip is not configured to render for GeoJson GeometryCollection geometries. Please consider reworking these features: [{'rgi_id': 'RGI2000-v7.0-G-15-16433', 'o1region': '15', 'o2region': '15-03', 'glims_id': 'G095721E29941N', 'anlys_id': 929520, 'subm_id': 752, 'src_date': '2005-09-08T00:00:00', 'cenlon': 95.7211016152286, 'cenlat': 29.940902187781784, 'utm_zone': 46, 'area_km2': 0.340954350813452, 'primeclass': 0, 'conn_lvl': 0, 'surge_type': 0, 'term_type': 9, 'glac_name': None, 'is_rgi6': 0, 'termlon': 95.72222864596793, 'termlat': 29.937137080413784, 'zmin_m': 4657.792, 'zmax_m': 5049.5625, 'zmed_m': 4825.1104, 'zmean_m': 4839.4185, 'slope_deg': 23.704372, 'aspect_deg': 145.20973, 'aspect_sec': 4, 'dem_source': 'COPDEM30', 'lmax_m': 891}, {'rgi_id': 'RGI2000-v7.0-G-15-12194', 'o1region': '15', 'o2region': '15-03', 'glims_id': 'G095869E30315N', 'anlys_id': 929951, 'subm_id': 752, 'src_date': '2005-09-08T00:00:00', 'cenlon': 95.86889789565677, 'cenlat': 30.3147685, 'utm_zone': 46, 'area_km2': 8.797406997273084, 'primeclass': 0, 'conn_lvl': 0, 'surge_type': 0, 'term_type': 9, 'glac_name': None, 'is_rgi6': 0, 'termlon': 95.89518363763428, 'termlat': 30.307036248571297, 'zmin_m': 4642.1445, 'zmax_m': 5278.752, 'zmed_m': 5011.06, 'zmean_m': 4993.9243, 'slope_deg': 12.372513, 'aspect_deg': 81.418945, 'aspect_sec': 3, 'dem_source': 'COPDEM30', 'lmax_m': 4994}, {'rgi_id': 'RGI2000-v7.0-G-15-11941', 'o1region': '15', 'o2region': '15-03', 'glims_id': 'G095301E30377N', 'anlys_id': 928228, 'subm_id': 752, 'src_date': '2007-08-20T00:00:00', 'cenlon': 95.30071978915663, 'cenlat': 30.3770025, 'utm_zone': 46, 'area_km2': 0.267701958906151, 'primeclass': 0, 'conn_lvl': 0, 'surge_type': 0, 'term_type': 9, 'glac_name': None, 'is_rgi6': 0, 'termlon': 95.30345982475616, 'termlat': 30.380097687364806, 'zmin_m': 5475.784, 'zmax_m': 5977.979, 'zmed_m': 5750.727, 'zmean_m': 5759.621, 'slope_deg': 41.069595, 'aspect_deg': 350.3331518173218, 'aspect_sec': 1, 'dem_source': 'COPDEM30', 'lmax_m': 807}] to MultiPolygon for full functionality.
https://tools.ietf.org/html/rfc7946#page-9
  warnings.warn(
[Interactive map of the clipped RGI glacier outlines]