degurba 0.1__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- degurba-0.1/PKG-INFO +105 -0
- degurba-0.1/README.md +81 -0
- degurba-0.1/degurba/__init__.py +3 -0
- degurba-0.1/degurba/io.py +491 -0
- degurba-0.1/degurba/load_data.py +162 -0
- degurba-0.1/degurba/main.py +193 -0
- degurba-0.1/degurba/utils.py +102 -0
- degurba-0.1/degurba.egg-info/PKG-INFO +105 -0
- degurba-0.1/degurba.egg-info/SOURCES.txt +12 -0
- degurba-0.1/degurba.egg-info/dependency_links.txt +1 -0
- degurba-0.1/degurba.egg-info/requires.txt +2 -0
- degurba-0.1/degurba.egg-info/top_level.txt +1 -0
- degurba-0.1/setup.cfg +4 -0
- degurba-0.1/setup.py +23 -0
degurba-0.1/PKG-INFO
ADDED
@@ -0,0 +1,105 @@
Metadata-Version: 2.2
Name: degurba
Version: 0.1
Summary: Urban Boundary Extraction Software Based on Degree of Urbanization
Home-page: https://github.com/djw-easy/degurba
Author: Your Name
Author-email: djw@lreis.ac.cn
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Requires-Python: >=3.7
Description-Content-Type: text/markdown
Requires-Dist: rasterio
Requires-Dist: scipy
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

# Urban Boundary Extraction Software Based on Degree of Urbanization

## Project Introduction

This project provides a tool for urban boundary extraction based on the Degree of Urbanization (DEGURBA) algorithm. By integrating multi-source geospatial data, it offers a fast, flexible, and efficient way to extract urban boundaries for a given time and place, providing technical support and data for researchers.


<center>
(a) Gridded population of Beijing in 2020.
(b) Grid-cell-level classification result by DEGURBA.
(c) Local unit classification result by DEGURBA.
</center>

## Features

- **Multi-source Data Support**: Downloads WorldPop and GPWv4 gridded population data.
- **Grid Cell Classification**: Classifies grid cells into urban centers, urban clusters, and rural grid cells based on population density, contiguity, and population size.
- **Local Unit Classification**: Overlays the grid cell classification onto local spatial units and further classifies them into urban areas, semi-dense areas, and rural areas.
- **Flexible and Efficient**: Users can generate urban boundary data for different times, locations, and classification accuracy requirements.

## Installation

### Install using QGIS

#### 1. Install QGIS
- Download and install QGIS, version 3.20 or higher.

#### 2. Install rasterio
- In the QGIS OSGeo4W Shell, run `pip install rasterio`.

#### 3. Configure the Plugin
- Place the project code folder into the QGIS plugin directory.
- Open QGIS, click "Manage Plugins", and install the "DEGURBA" plugin.
- Open the Processing Toolbox and select the appropriate tool.

### Install using Python

You can install the package with pip:

~~~
pip install degurba
~~~

## Usage with QGIS

1. **Download Population Data**:
   - Use the "download worldpop grid data" or "download gpwv4 grid data" tool to select the desired dataset, country, year, and clipping area (optional).

2. **Grid Cell Classification**:
   - Use the "Grid Cell Classification" tool, input the population grid data, and output raster data.

3. **Local Unit Classification**:
   - Use the "Local Units Classification" tool, input the grid cell classification result, and output local unit data (vector data).

## Example

Using Beijing's data for 2020 as an example:

1. Download the WorldPop population grid data for Beijing, setting the MASK layer to Beijing's vector boundary.





2. Perform grid cell classification on the downloaded population grid data.





3. Overlay the classification results onto Beijing's local spatial units (such as districts) to complete the local unit classification.





## Notes

- Ensure compatibility between the QGIS version and the plugin.
- When downloading data, make sure to select the correct dataset and parameters.
- When performing local unit classification, ensure the input grid cell classification data is complete and accurate.
degurba-0.1/README.md
ADDED
@@ -0,0 +1,81 @@
# Urban Boundary Extraction Software Based on Degree of Urbanization

## Project Introduction

This project provides a tool for urban boundary extraction based on the Degree of Urbanization (DEGURBA) algorithm. By integrating multi-source geospatial data, it offers a fast, flexible, and efficient way to extract urban boundaries for a given time and place, providing technical support and data for researchers.


<center>
(a) Gridded population of Beijing in 2020.
(b) Grid-cell-level classification result by DEGURBA.
(c) Local unit classification result by DEGURBA.
</center>

## Features

- **Multi-source Data Support**: Downloads WorldPop and GPWv4 gridded population data.
- **Grid Cell Classification**: Classifies grid cells into urban centers, urban clusters, and rural grid cells based on population density, contiguity, and population size.
- **Local Unit Classification**: Overlays the grid cell classification onto local spatial units and further classifies them into urban areas, semi-dense areas, and rural areas.
- **Flexible and Efficient**: Users can generate urban boundary data for different times, locations, and classification accuracy requirements.

## Installation

### Install using QGIS

#### 1. Install QGIS
- Download and install QGIS, version 3.20 or higher.

#### 2. Install rasterio
- In the QGIS OSGeo4W Shell, run `pip install rasterio`.

#### 3. Configure the Plugin
- Place the project code folder into the QGIS plugin directory.
- Open QGIS, click "Manage Plugins", and install the "DEGURBA" plugin.
- Open the Processing Toolbox and select the appropriate tool.

### Install using Python

You can install the package with pip:

~~~
pip install degurba
~~~
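Once installed, the package can be used from Python. The core building block is `degurba.io.Raster`, which stores the band as a NumPy masked array so that nodata cells (and NaNs, for floating-point rasters) drop out of any statistics. Below is a self-contained sketch of that masking behaviour using a synthetic array instead of a GeoTIFF; the `-99999` nodata value is a hypothetical example, not something this README specifies:

```python
import numpy as np

# Synthetic 2x3 "population band"; -99999 plays the role of the nodata value.
nodata = -99999.0
band = np.array([[10.0, nodata, 30.0],
                 [np.nan, 50.0, 60.0]])

# degurba.io.Raster applies the same two-step masking:
arr = np.ma.masked_array(band, band == nodata)  # mask nodata cells
arr = np.ma.masked_array(arr, np.isnan(arr))    # mask NaNs for float rasters
print(arr.sum())  # only the valid population cells are counted
```

Because the mask travels with the array, any later windowed read or zonal statistic over this raster automatically ignores the invalid cells.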

## Usage with QGIS

1. **Download Population Data**:
   - Use the "download worldpop grid data" or "download gpwv4 grid data" tool to select the desired dataset, country, year, and clipping area (optional).

2. **Grid Cell Classification**:
   - Use the "Grid Cell Classification" tool, input the population grid data, and output raster data.

3. **Local Unit Classification**:
   - Use the "Local Units Classification" tool, input the grid cell classification result, and output local unit data (vector data).

## Example

Using Beijing's data for 2020 as an example:

1. Download the WorldPop population grid data for Beijing, setting the MASK layer to Beijing's vector boundary.





2. Perform grid cell classification on the downloaded population grid data.





3. Overlay the classification results onto Beijing's local spatial units (such as districts) to complete the local unit classification.





## Notes

- Ensure compatibility between the QGIS version and the plugin.
- When downloading data, make sure to select the correct dataset and parameters.
- When performing local unit classification, ensure the input grid cell classification data is complete and accurate.

degurba-0.1/degurba/io.py
ADDED
@@ -0,0 +1,491 @@
import os
import json
import math
import numpy as np
import rasterio as rio
from rasterio import crs
from affine import Affine
from osgeo import gdal, ogr
from rasterio import features
from rasterio.windows import Window
from rasterio.transform import guard_transform
from rasterio.warp import calculate_default_transform, reproject


def clip_raster(in_raster,
                cutline_shp,
                out_raster,
                srcSRS=None, dstSRS=None,
                srcNodata=None, dstNodata=None,
                **kwargs):
    """Clip a raster with a shapefile.

    Args:
        in_raster: path to the input raster.
        cutline_shp: path to the cutline shapefile.
        out_raster: path to the output (clipped) raster.
        srcSRS, dstSRS: source/destination spatial reference; defaults are
            read from the input raster.
        srcNodata, dstNodata: source/destination nodata values; defaults are
            read from the input band.
    """
    ds = gdal.Open(in_raster)
    band = ds.GetRasterBand(1)
    if srcSRS is None:
        srcSRS = ds.GetProjection()
    if dstSRS is None:
        dstSRS = srcSRS
    if srcNodata is None:
        srcNodata = band.GetNoDataValue()
    if dstNodata is None:
        dstNodata = srcNodata
    clip_options = gdal.WarpOptions(format='GTiff',
                                    cutlineDSName=cutline_shp,
                                    cropToCutline=True,
                                    srcSRS=srcSRS, dstSRS=dstSRS,
                                    srcNodata=srcNodata, dstNodata=dstNodata,
                                    **kwargs)
    ds_clip = gdal.Warp(out_raster, ds, options=clip_options)
    ds = band = ds_clip = None


def rowcol(x, y, affine, op=math.floor):
    """Get the row/col index for an x/y coordinate."""
    r = int(op((y - affine.f) / affine.e))
    c = int(op((x - affine.c) / affine.a))
    return r, c


def bounds_window(bounds, affine):
    """Create a full-cover rasterio-style window."""
    w, s, e, n = bounds
    row_start, col_start = rowcol(w, n, affine)
    row_stop, col_stop = rowcol(e, s, affine, op=math.ceil)
    return (row_start, row_stop), (col_start, col_stop)


def window_bounds(window, affine):
    (row_start, row_stop), (col_start, col_stop) = window
    w, s = affine * (col_start, row_stop)
    e, n = affine * (col_stop, row_start)
    return w, s, e, n


def beyond_extent(window, shape):
    """Check if a window references pixels beyond the raster extent."""
    (wr_start, wr_stop), (wc_start, wc_stop) = window
    return wr_start < 0 or wc_start < 0 or wr_stop > shape[0] or wc_stop > shape[1]


def geometry_bounds(geometry):
    """Return a (left, bottom, right, top) bounding box.

    Parameters
    ----------
    geometry: GeoJSON-like feature (implements __geo_interface__),
        feature collection, or geometry.

    Returns
    -------
    tuple
        Bounding box: (left, bottom, right, top)
    """
    geom_types = {'Polygon', 'MultiPolygon'}

    geom = getattr(geometry, '__geo_interface__', None) or geometry

    try:
        geom_type = geom["type"]
    except (KeyError, TypeError):
        raise ValueError("geometry must be a GeoJSON-like mapping")

    if geom_type not in geom_types.union({'GeometryCollection'}):
        raise ValueError(
            "geometry type must be Polygon, MultiPolygon, or "
            "GeometryCollection, not {}".format(geom_type))

    if 'bbox' in geom:
        return tuple(geom['bbox'])

    geom = geom.get('geometry') or geom

    # geometry must be a geometry, GeometryCollection, or FeatureCollection
    if not ('coordinates' in geom or 'geometries' in geom or 'features' in geom):
        raise ValueError(
            "geometry must be a GeoJSON-like geometry, GeometryCollection, "
            "or FeatureCollection"
        )

    if geom_type == 'Polygon':
        coords = np.vstack(geom['coordinates'])
        left, right = np.min(coords[:, 0]), np.max(coords[:, 0])
        bottom, top = np.min(coords[:, 1]), np.max(coords[:, 1])
        return (left, bottom, right, top)

    if geom_type == 'MultiPolygon':
        # A MultiPolygon must contain at least one Polygon
        coords = []
        for c in geom['coordinates']:
            coords.append(np.vstack(c))
        coords = np.vstack(coords)
        left, right = np.min(coords[:, 0]), np.max(coords[:, 0])
        bottom, top = np.min(coords[:, 1]), np.max(coords[:, 1])
        return (left, bottom, right, top)


def geometries_bounds(geometries):
    """Return a (left, bottom, right, top) bounding box.

    Parameters
    ----------
    geometries : iterable over geometries (GeoJSON-like objects)

    Returns
    -------
    tuple
        Bounding box: (left, bottom, right, top)
    """
    bounds = []
    for geometry in geometries:
        bounds.append(geometry_bounds(geometry))
    bounds = np.array(bounds)
    left, right = np.min(bounds[:, 0]), np.max(bounds[:, 2])
    bottom, top = np.min(bounds[:, 1]), np.max(bounds[:, 3])
    return (left, bottom, right, top)


def geometry_window(geometry, affine):
    bounds = geometry_bounds(geometry)
    window = bounds_window(bounds, affine)
    return window


def overlap(shape, win):
    height, width = shape
    (r_start, r_stop), (c_start, c_stop) = win

    # Clamp the window to the raster extent
    or_start = max(min(r_start, height), 0)
    or_stop = max(min(r_stop, height), 0)
    oc_start = max(min(c_start, width), 0)
    oc_stop = max(min(c_stop, width), 0)

    return (or_start, or_stop), (oc_start, oc_stop)


def boundless_array(arr, window):
    """Return a masked numpy array for a window of arr.

    Parameters
    ----------
    arr: numpy array, 2D or 3D
        If arr is a 3D numpy array, its shape must be (channels, height, width).
    """
    dim3 = False
    if len(arr.shape) == 3:
        dim3 = True
        channels, height, width = arr.shape
    elif len(arr.shape) != 2:
        raise ValueError("Must be a 2D or 3D array")
    else:
        height, width = arr.shape

    # unpack for readability
    (wr_start, wr_stop), (wc_start, wc_stop) = window

    # Calculate overlap
    (olr_start, olr_stop), (olc_start, olc_stop) = overlap((height, width), window)

    # Calc dimensions
    overlap_shape = (olr_stop - olr_start, olc_stop - olc_start)
    if dim3:
        window_shape = (channels, wr_stop - wr_start, wc_stop - wc_start)
    else:
        window_shape = (wr_stop - wr_start, wc_stop - wc_start)

    # create an array of nodata values
    out = np.ma.MaskedArray(
        np.zeros(shape=window_shape, dtype=arr.dtype),
        mask=True)

    # Fill with data where overlapping
    nr_start = olr_start - wr_start
    nr_stop = nr_start + overlap_shape[0]
    nc_start = olc_start - wc_start
    nc_stop = nc_start + overlap_shape[1]
    if dim3:
        out[:, nr_start:nr_stop, nc_start:nc_stop] = \
            arr[:, olr_start:olr_stop, olc_start:olc_stop]
    else:
        out[nr_start:nr_stop, nc_start:nc_stop] = \
            arr[olr_start:olr_stop, olc_start:olc_stop]

    return out


class Vector(object):

    def __init__(self, path, layer=0):
        if not os.path.isabs(path):
            path = os.path.abspath(path)
        if not os.path.exists(path):
            raise ValueError("The path {} does not exist.".format(path))
        self.ds = ogr.Open(path, update=1)
        self.layer = self.ds.GetLayer(layer)
        self.layer_def = self.layer.GetLayerDefn()
        self.columns = []
        for i in range(self.layer_def.GetFieldCount()):
            field_def = self.layer_def.GetFieldDefn(i)
            self.columns.append(field_def.GetName())

    def __getitem__(self, column):
        """Return the values of a column; 'geometry' returns GeoJSON-like dicts."""
        if (column != 'geometry' and
                column not in self.columns):
            raise KeyError("The column {} does not exist.".format(column))

        values = []
        feature_count = self.layer.GetFeatureCount()
        if not feature_count:
            return values

        for i in range(feature_count):
            feature = self.layer.GetFeature(i)
            if column == 'geometry':
                geom = feature.GetGeometryRef()
                value = json.loads(geom.ExportToJson())
            else:
                value = feature.GetField(self.columns.index(column))
            values.append(value)

        return values

    @property
    def geometry(self):
        return self.__getitem__('geometry')

    def _type_convert(self, type):
        type_convert = {'int': ogr.OFTInteger64,
                        'float': ogr.OFTReal,
                        'string': ogr.OFTString}
        return type_convert[type]

    def create_field(self, name, type,
                     width=50,
                     values=None):
        feature_count = self.layer.GetFeatureCount()
        if values is not None and len(values) != feature_count:
            raise ValueError(
                'The length of input values must be {}. '.format(feature_count))

        type = self._type_convert(type)
        field_def = ogr.FieldDefn(name, type)
        field_def.SetWidth(width)
        self.layer.CreateField(field_def)

        for i in range(feature_count):
            feature = self.layer.GetFeature(i)
            if values is None:
                feature.SetFieldNull(name)
            else:
                feature.SetField(name, values[i])
            self.layer.SetFeature(feature)

        self.columns.append(name)

    def close(self):
        self.ds.Destroy()

    def __del__(self):
        self.close()


class Raster(object):
    """A raster source wrapped as a masked numpy array.

    Parameters
    ----------
    raster: numpy array or a path to a raster source
        If raster is a numpy array, its shape should be 2D or 3D.
        If raster is a 3D numpy array, its shape must be (channels, height, width).
    """

    def __init__(self, raster, affine=None, crs=None, nodata=None, band=1) -> None:
        self.nodata = nodata
        self.affine = affine
        self.crs = crs
        self.band = band
        if isinstance(raster, np.ndarray):
            if affine is None or crs is None:
                raise ValueError(
                    "Specify affine transform and crs for numpy arrays")
            self.array = raster
        elif isinstance(raster, str):
            if not os.path.isabs(raster):
                raster = os.path.abspath(raster)
            src = rio.open(raster, 'r')
            self.affine = guard_transform(src.transform)
            self.crs = src.crs
            self.array = src.read(self.band, masked=True)
            src.close()
        # create a mask array from the nodata value
        if self.nodata is not None:
            self.array = np.ma.masked_array(self.array,
                                            self.array == self.nodata)
            # add a NaN mask (if necessary)
            if np.issubdtype(self.array.dtype, np.floating):
                self.array = np.ma.masked_array(self.array,
                                                np.isnan(self.array))
        else:
            self.array = np.ma.masked_array(self.array, None)
        self.shape = self.array.shape

    def read(self,
             bounds=None,
             window=None,
             boundless=True,
             only_array=False):
        """Perform a read against the underlying array source.

        Parameters
        ----------
        bounds: bounding box
            in w, s, e, n order, iterable, optional
        window: rasterio-style window, optional
            bounds OR window is required;
            specifying both or neither raises an exception
        boundless: boolean
            allow windows/bounds that extend beyond the dataset's extent,
            default: True;
            partially or completely masked arrays are returned as appropriate.
        only_array: boolean
            return only a masked numpy array, default: False

        Returns
        -------
        Raster object with updated affine and array info
        """
        # Calculate the window
        if bounds and window:
            raise ValueError("Specify either bounds or window")

        if bounds:
            win = bounds_window(bounds, self.affine)
        elif window:
            win = window
        else:
            raise ValueError("Specify either bounds or window")

        if not boundless and beyond_extent(win, self.shape):
            raise ValueError(
                "Window/bounds is outside dataset extent and boundless reads are disabled")

        out = boundless_array(self.array, window=win)

        if only_array:
            return out

        c, _, _, f = window_bounds(win, self.affine)  # c ~ west, f ~ north
        a, b, _, d, e, _, _, _, _ = tuple(self.affine)
        new_affine = Affine(a, b, c, d, e, f)

        return Raster(out, new_affine, self.crs)

    def read_from_geometry(self, geometries, boundless=True, all_touched=False):
        """
        Parameters
        ----------
        geometries : iterable over geometries (GeoJSON-like objects)
        all_touched : boolean, optional
            If True, all pixels touched by geometries will be burned in. If
            False, only pixels whose center is within the polygon or that
            are selected by Bresenham's line algorithm will be burned in.

        Returns
        -------
        Raster object with updated affine and array info
        """
        if not isinstance(geometries, (tuple, list)):
            geometries = [geometries]
        bounds = geometries_bounds(geometries)
        window = bounds_window(bounds, self.affine)

        clipped = self.read(window=window, boundless=boundless)
        geometry_mask = features.geometry_mask(
            geometries=geometries,
            out_shape=clipped.shape,
            transform=clipped.affine,
            all_touched=all_touched,
            invert=True)
        array = np.ma.masked_array(clipped.array, ~geometry_mask)

        return Raster(array, clipped.affine, clipped.crs)

    def xy(self, row, col):
        y = self.affine.f + (row + 0.5) * self.affine.e
        x = self.affine.c + (col + 0.5) * self.affine.a
        return (x, y)

    def index(self, x, y):
        col = int((x - self.affine.c) // self.affine.a)
        row = int((self.affine.f - y) // abs(self.affine.e))
        return (row, col)

    def reproject(self, epsg):
        dst_crs = crs.CRS.from_epsg(epsg)
        height, width = self.shape
        left = self.affine.c
        top = self.affine.f
        right = self.affine.c + self.affine.a * width
        bottom = self.affine.f + self.affine.e * height
        dst_transform, dst_width, dst_height = calculate_default_transform(
            src_crs=self.crs,
            dst_crs=dst_crs,
            width=width,
            height=height,
            left=left,
            bottom=bottom,
            right=right,
            top=top
        )
        # Determine the nodata value
        if self.nodata is None:
            if np.issubdtype(self.array.dtype, np.floating):
                dst_nodata = np.nan
            else:
                dst_nodata = np.iinfo(self.array.dtype).max
        else:
            dst_nodata = self.nodata
        dst_array = np.empty((dst_height, dst_width), dtype=self.array.dtype)
        # Reproject
        reproject(
            # source parameters
            source=self.array,
            src_crs=self.crs,
            src_transform=self.affine,
            # destination parameters
            destination=dst_array,
            dst_transform=dst_transform,
            dst_crs=dst_crs,
            dst_nodata=dst_nodata,
            num_threads=4)
        return Raster(dst_array, dst_transform, dst_crs, dst_nodata)

    def save(self, path, nodata=None):
        # Determine the nodata value
        if nodata is None:
            if self.nodata is None:
                if np.issubdtype(self.array.dtype, np.floating):
                    nodata = np.nan
                else:
                    nodata = np.iinfo(self.array.dtype).max
            else:
                nodata = self.nodata
        arr = self.array.filled(nodata)
        with rio.open(path, 'w',
                      driver='GTiff',
                      nodata=nodata,
                      height=self.shape[0],
                      width=self.shape[1],
                      count=1,
                      dtype=self.array.dtype,
                      crs=self.crs,
                      transform=self.affine,
                      compress='lzw') as dst:
            dst.write(arr, 1)
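The windowing helpers in `io.py` can be exercised without any raster on disk. The sketch below re-states `rowcol` and `bounds_window` as they appear above; the `SimpleAffine` namedtuple is a minimal stand-in for `affine.Affine` (only the `.a`, `.c`, `.e`, `.f` coefficients are read by these helpers), so the example runs without GDAL or rasterio installed:

```python
import math
from collections import namedtuple

# Stand-in for affine.Affine: a = pixel width, c = west edge,
# e = pixel height (negative for north-up grids), f = north edge.
SimpleAffine = namedtuple("SimpleAffine", "a b c d e f")

def rowcol(x, y, affine, op=math.floor):
    r = int(op((y - affine.f) / affine.e))
    c = int(op((x - affine.c) / affine.a))
    return r, c

def bounds_window(bounds, affine):
    w, s, e, n = bounds
    row_start, col_start = rowcol(w, n, affine)
    row_stop, col_stop = rowcol(e, s, affine, op=math.ceil)
    return (row_start, row_stop), (col_start, col_stop)

# A 0.01-degree north-up grid whose north-west corner is at (116.0 E, 40.5 N).
aff = SimpleAffine(0.01, 0.0, 116.0, 0.0, -0.01, 40.5)

# bounds_window floors the north-west corner and ceils the south-east one,
# so the returned window fully covers the requested (w, s, e, n) bounds.
win = bounds_window((116.105, 40.195, 116.295, 40.395), aff)
print(win)  # ((10, 31), (10, 30))
```

This "full cover" rounding is why `Raster.read` pairs `bounds_window` with `boundless_array`: a window computed this way may extend past the raster edge, and the boundless read fills the overhang with masked cells.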
degurba-0.1/degurba/load_data.py
ADDED
@@ -0,0 +1,162 @@
import os
import zipfile
import tempfile
import urllib.request
from glob import glob
from .io import Raster, Vector


gpw_v4_unadjusted_1km = {
    "name": "Gridded Population of the World (GPW), v4 ( 30 arc-second )",
    "base_url": "https://sedac.ciesin.columbia.edu/downloads/data/gpw-v4/gpw-v4-population-count-rev11/gpw-v4-population-count-rev11_{year}_30_sec_tif.zip",
}

gpw_v4_adjusted_1km = {
    "name": "Gridded Population of the World (GPW), v4 ( 30 arc-second )",
    "base_url": "https://sedac.ciesin.columbia.edu/downloads/data/gpw-v4/gpw-v4-population-count-adjusted-to-2015-unwpp-country-totals-rev11/gpw-v4-population-count-adjusted-to-2015-unwpp-country-totals-rev11_{year}_30_sec_tif.zip",
}

gpw_datasets = {
    'gpw_v4_unadjusted_1km': gpw_v4_unadjusted_1km,
    'gpw_v4_adjusted_1km': gpw_v4_adjusted_1km,
    "valid_years": [2000, 2005, 2010, 2015, 2020]
}

worldPop_unadjusted_1km = {
    "name": "Unconstrained individual countries 2000-2020 ( 1km resolution )",
    "base_url": "https://data.worldpop.org/GIS/Population/Global_2000_2020_1km/{year}/{country_upper}/{country_lower}_ppp_{year}_1km_Aggregated.tif"
}

worldPop_adjusted_1km = {
    "name": "Unconstrained individual countries 2000-2020 UN adjusted ( 1km resolution )",
    "base_url": "https://data.worldpop.org/GIS/Population/Global_2000_2020_1km_UNadj/{year}/{country_upper}/{country_lower}_ppp_{year}_1km_Aggregated_UNadj.tif"
}

wp_datasets = {
    'worldPop_unadjusted_1km': worldPop_unadjusted_1km,
    'worldPop_adjusted_1km': worldPop_adjusted_1km,
    "valid_years": list(range(2000, 2021))
}

wp_info = {"Algeria": "dza", "Angola": "ago", "Benin": "ben", "Botswana": "bwa", "Burkina Faso": "bfa", "Burundi": "bdi", "Cameroon": "cmr", "Cape Verde": "cpv",
           "Central African Republic": "caf", "Chad": "tcd", "Comoros": "com", "Congo": "cog", "Cote d'Ivoire": "civ", "Democratic Republic of the Congo": "cod",
           "Djibouti": "dji", "Egypt": "egy", "Equatorial Guinea": "gnq", "Eritrea": "eri", "Ethiopia": "eth", "Gabon": "gab", "Gambia": "gmb", "Ghana": "gha",
           "Guinea": "gin", "Guinea-Bissau": "gnb", "Kenya": "ken", "Lesotho": "lso", "Liberia": "lbr", "Libya": "lby", "Madagascar": "mdg", "Malawi": "mwi", "Mali": "mli",
           "Mauritania": "mrt", "Mauritius": "mus", "Mayotte": "myt", "Morocco": "mar", "Mozambique": "moz", "Namibia": "nam", "Niger": "ner", "Nigeria": "nga", "Réunion": "reu",
           "Rwanda": "rwa", "Saint Helena, Ascension and Tristan da Cunha": "shn", "Sao Tome and Principe": "stp", "Senegal": "sen", "Seychelles": "syc", "Sierra Leone": "sle",
           "Somalia": "som", "South Africa": "zaf", "South Sudan": "ssd", "Sudan": "sdn", "Swaziland": "swz", "Tanzania": "tza", "Togo": "tgo", "Tunisia": "tun", "Uganda": "uga",
           "Western Sahara": "esh", "Zambia": "zmb", "Zimbabwe": "zwe", "Anguilla": "aia", "Antigua and Barbuda": "atg", "Argentina": "arg", "Aruba": "abw", "Bahamas": "bhs",
           "Barbados": "brb", "Belize": "blz", "Bermuda": "bmu", "Bolivia": "bol", "Bonaire, Sint Eustatius and Saba": "bes", "Brazil": "bra", "British Virgin Islands": "vgb",
           "Canada": "can", "Cayman Islands": "cym", "Chile": "chl", "Colombia": "col", "Costa Rica": "cri", "Cuba": "cub", "Curaçao": "cuw", "Dominica": "dma", "Dominican Republic": "dom",
           "Ecuador": "ecu", "El Salvador": "slv", "Falkland Islands (Malvinas)": "flk", "French Guiana": "guf", "Greenland": "grl", "Grenada": "grd", "Guadeloupe": "glp", "Guatemala": "gtm",
           "Guyana": "guy", "Haiti": "hti", "Honduras": "hnd", "Jamaica": "jam", "Martinique": "mtq", "Mexico": "mex", "Montserrat": "msr", "Nicaragua": "nic", "Panama": "pan", "Paraguay": "pry",
           "Peru": "per", "Puerto Rico": "pri", "Saint Barthélemy": "blm", "Saint Kitts and Nevis": "kna", "Saint Lucia": "lca", "Saint Martin": "maf", "Saint Pierre and Miquelon": "spm",
           "Saint Vincent and the Grenadines": "vct", "Sint Maarten (Dutch part)": "sxm", "Suriname": "sur", "Trinidad and Tobago": "tto", "Turks and Caicos Islands": "tca", "United States of America": "50",
           "United States Virgin Islands": "vir", "Uruguay": "ury", "Venezuela": "ven", "Afghanistan": "afg", "Armenia": "arm", "Azerbaijan": "aze", "Bahrain": "bhr", "Bangladesh": "bgd", "Bhutan": "btn",
           "Brunei Darussalam": "brn", "Cambodia": "khm", "China": "chn", "Cyprus": "cyp", "Georgia": "geo", "Hong Kong": "hkg", "India": "ind", "Indonesia": "idn", "Iran": "irn", "Iraq": "irq",
           "Israel": "isr", "Japan": "jpn", "Jordan": "jor", "Kazakhstan": "kaz", "Kuwait": "kwt", "Kyrgyz Republic": "kgz", "Lao People's Democratic Republic": "lao", "Lebanon": "lbn", "Macao": "mac",
           "Malaysia": "mys", "Maldives": "mdv", "Mongolia": "mng", "Myanmar": "mmr", "Nepal": "npl", "North Korea": "prk", "Oman": "omn", "Pakistan": "pak", "Palestinian Territory": "pse", "Philippines": "phl",
           "Qatar": "qat", "Saudi Arabia": "sau", "Singapore": "sgp", "South Korea": "kor", "Sri Lanka": "lka", "Syrian Arab Republic": "syr", "Taiwan": "twn", "Tajikistan": "tjk", "Thailand": "tha", "Timor-Leste": "tls",
           "Turkey": "tur", "Turkmenistan": "tkm", "United Arab Emirates": "are", "Uzbekistan": "uzb", "Vietnam": "vnm", "Yemen": "yem", "Albania": "alb", "Andorra": "and", "Austria": "aut", "Belarus": "blr",
           "Belgium": "bel", "Bosnia and Herzegovina": "bih", "Bulgaria": "bgr", "Croatia": "hrv", "Czech Republic": "cze", "Denmark": "dnk", "Estonia": "est", "Faroe Islands": "fro", "Finland": "fin", "France": "fra",
           "Germany": "deu", "Gibraltar": "gib", "Greece": "grc", "Holy See (Vatican City State)": "vat", "Hungary": "hun", "Iceland": "isl", "Ireland": "irl", "Isle of Man": "imn", "Italy": "ita", "Latvia": "lva",
|
|
63
|
+
"Liechtenstein": "lie", "Lithuania": "ltu", "Luxembourg": "lux", "Macedonia": "mkd", "Malta": "mlt", "Moldova": "mda", "Monaco": "mco", "Montenegro": "mne", "Netherlands": "nld", "Norway": "nor", "Poland": "pol",
|
|
64
|
+
"Portugal": "prt", "Romania": "rou", "Russian Federation": "rus", "San Marino": "smr", "Serbia": "srb", "Slovakia (Slovak Republic)": "svk", "Slovenia": "svn", "Spain": "esp", "Sweden": "swe", "Switzerland": "che",
|
|
65
|
+
"Ukraine": "ukr", "United Kingdom of Great Britain & Northern Ireland": "gbr", "American Samoa": "asm", "Australia": "aus", "Cook Islands": "cok", "Fiji": "fji", "French Polynesia": "pyf", "Guam": "gum",
|
|
66
|
+
"Kiribati": "kir", "Marshall Islands": "mhl", "Micronesia": "fsm", "Nauru": "nru", "New Caledonia": "ncl", "New Zealand": "nzl", "Niue": "niu", "Northern Mariana Islands": "mnp", "Palau": "plw",
|
|
67
|
+
"Papua New Guinea": "png", "Samoa": "wsm", "Solomon Islands": "slb", "Tokelau": "tkl", "Tonga": "ton", "Tuvalu": "tuv", "Vanuatu": "vut", "Wallis and Futuna": "wlf"}
|
|
68
|
+
|
|
69
|
+
|
|
70
|
+
class Dataset:
|
|
71
|
+
_datasets = {
|
|
72
|
+
'gpw_v4_unadjusted_1km',
|
|
73
|
+
'gpw_v4_adjusted_1km',
|
|
74
|
+
'worldPop_unadjusted_1km',
|
|
75
|
+
'worldPop_adjusted_1km'
|
|
76
|
+
}
|
|
77
|
+
|
|
78
|
+
def __init__(
|
|
79
|
+
self,
|
|
80
|
+
) -> None:
|
|
81
|
+
pass
|
|
82
|
+
|
|
83
|
+
def _find_url(self, dataset, year, country):
|
|
84
|
+
country_upper, country_lower = None, None
|
|
85
|
+
|
|
86
|
+
if dataset.startswith('worldPop'):
|
|
87
|
+
country_lower = wp_info[country]
|
|
88
|
+
print("The country that will be downloaded is {}".format(
|
|
89
|
+
country))
|
|
90
|
+
country_upper = country_lower.upper()
|
|
91
|
+
if year not in wp_datasets['valid_years']:
|
|
92
|
+
raise ValueError(
|
|
93
|
+
"The dataset {} in {} is not avaliable. ".format(dataset, year))
|
|
94
|
+
|
|
95
|
+
if dataset.startswith('worldPop'):
|
|
96
|
+
self.dataset = wp_datasets[dataset]
|
|
97
|
+
self.url = self.dataset['base_url'].format(year=year, country_upper=country_upper,
|
|
98
|
+
country_lower=country_lower)
|
|
99
|
+
elif dataset.startswith('gpw_v4'):
|
|
100
|
+
self.dataset = gpw_datasets[dataset]
|
|
101
|
+
self.url = self.dataset['base_url'].format(year=year)
|
|
102
|
+
|
|
103
|
+
def _download(self, dataset) -> None:
|
|
104
|
+
if dataset.startswith('worldPop'):
|
|
105
|
+
try:
|
|
106
|
+
urllib.request.urlretrieve(self.url, self.file_path)
|
|
107
|
+
except urllib.error.ContentTooShortError:
|
|
108
|
+
self._download()
|
|
109
|
+
else:
|
|
110
|
+
try:
|
|
111
|
+
urllib.request.urlretrieve(self.url, self.fp.name)
|
|
112
|
+
except urllib.error.ContentTooShortError:
|
|
113
|
+
self._download()
|
|
114
|
+
|
|
115
|
+
def download(
|
|
116
|
+
self,
|
|
117
|
+
dataset,
|
|
118
|
+
year,
|
|
119
|
+
country=None,
|
|
120
|
+
file_path=None
|
|
121
|
+
) -> None:
|
|
122
|
+
"""Download and load Population Counts Dataset.
|
|
123
|
+
Args:
|
|
124
|
+
dataset (string): dataset name to download.
|
|
125
|
+
year: year of dataset to download.
|
|
126
|
+
country: country of dataset to download, only needed when the dataset is worldpop.
|
|
127
|
+
"""
|
|
128
|
+
year = int(year)
|
|
129
|
+
if dataset not in self._datasets:
|
|
130
|
+
raise ValueError(
|
|
131
|
+
"The dataset {} is not avaliable. ".format(dataset))
|
|
132
|
+
|
|
133
|
+
self._find_url(dataset=dataset, year=year, country=country)
|
|
134
|
+
|
|
135
|
+
if not os.path.isabs(file_path):
|
|
136
|
+
file_path = os.path.abspath(file_path)
|
|
137
|
+
|
|
138
|
+
if not os.path.exists(file_path):
|
|
139
|
+
self.file_path = file_path
|
|
140
|
+
else:
|
|
141
|
+
raise ValueError("The path {} is not exist. ".format(file_path))
|
|
142
|
+
|
|
143
|
+
if dataset.startswith('gpw_v4'):
|
|
144
|
+
self.fp = tempfile.NamedTemporaryFile(suffix='.zip', mode='w', delete=False)
|
|
145
|
+
|
|
146
|
+
print("Downloading " + os.path.basename(self.file_path) + " ... ")
|
|
147
|
+
self._download(dataset)
|
|
148
|
+
if dataset.startswith('gpw_v4'):
|
|
149
|
+
z = zipfile.ZipFile(self.fp.name, 'r')
|
|
150
|
+
target_dir = os.path.dirname(self.file_path)
|
|
151
|
+
z.extract(path=target_dir)
|
|
152
|
+
z.close()
|
|
153
|
+
tif_path = glob(os.path.join(target_dir, '*.tif'))[0]
|
|
154
|
+
os.rename(tif_path, self.file_path)
|
|
155
|
+
self.fp.close()
|
|
156
|
+
print('Done')
|
|
157
|
+
|
|
158
|
+
def mask(self, shp: str):
|
|
159
|
+
vector = Vector(shp)
|
|
160
|
+
raster = Raster(self.file_path)
|
|
161
|
+
raster = raster.read_from_geometry(vector.geometry)
|
|
162
|
+
raster.save(self.file_path)
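`_find_url` above fills a `base_url` template from the dataset registry (the `wp_datasets` and `gpw_datasets` dicts are defined earlier in the file and not shown here). A minimal sketch of that templating, with a hypothetical URL pattern standing in for the registry's real one:

```python
# Hypothetical template in the style of a wp_datasets 'base_url' entry; the
# real URL patterns live in the (unshown) dataset registry, not here.
base_url = "https://example.org/{year}/{country_upper}/{country_lower}_{year}.tif"

country_lower = "chn"  # e.g. wp_info["China"]
url = base_url.format(year=2020,
                      country_upper=country_lower.upper(),
                      country_lower=country_lower)
print(url)  # → https://example.org/2020/CHN/chn_2020.tif
```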
@@ -0,0 +1,193 @@
import numpy as np
from scipy import ndimage
from .io import Raster
from .utils import zonal_stats


class DEGURBA:

    grid_cells_l1_cla = {
        'urban_centres': 1,
        'urban_clusters': 2,
        'rural_grid_cells': 3
    }
    local_units_l1_cla = {
        'cities': 1,
        'towns_semi_dense_areas': 2,
        'rural_areas': 3
    }
    grid_cells_l2_cla = {
        'urban_centre': 11,
        'dense_urban_cluster': 21,
        'semi_dense_urban_cluster': 22,
        'suburban_peri_urban_grid_cells': 23,
        'rural_cluster': 31,
        'low_density_rural_grid_cells': 32,
        'very_low_density_rural_grid_cells': 33
    }
    local_units_l2_cal = {
        'city': 11,
        'dense_town': 21,
        'semi_dense_town': 22,
        'suburban_peri_urban_area': 23,
        'village': 31,
        'dispersed_rural_area': 32,
        'mostly_uninhabited_area': 33
    }

    def __init__(self,
                 pn=None,
                 affine=None,
                 crs=None,
                 nodata=None,
                 band=1) -> None:
        if pn is not None:
            self.pn = Raster(pn, affine=affine, crs=crs, nodata=nodata, band=band)

    def _get_urban_centres(self, pn):
        '''Identify the urban centres (high-density clusters) in four steps.

        Args:
            pn (numpy array): population counts array.
        '''
        # First step, identify cells with at least 1500 inhabitants
        urban_centres_mask = pn >= 1500
        if isinstance(urban_centres_mask, np.ma.masked_array):
            urban_centres_mask = urban_centres_mask.data
        urban_centres_mask = urban_centres_mask.astype(np.byte)
        # Second step, identify groups of contiguous cells using the "four-point contiguity" method
        s = np.array(
            [
                [0, 1, 0],
                [1, 1, 1],
                [0, 1, 0]
            ]
        )
        label, num_features = ndimage.label(urban_centres_mask, structure=s)
        # Third step, remove groups whose total number of inhabitants is less than 50000
        for i in range(1, num_features + 1):
            mask = label == i
            if np.sum(pn[mask]) < 50000:
                urban_centres_mask[mask] = 0
        # Fourth step, fill gaps and smooth borders using the iterative 'majority rule'
        w = np.array(
            [
                [1, 1, 1],
                [1, 0, 1],
                [1, 1, 1]
            ]
        )
        for i in range(1, num_features + 1):
            mask = label == i
            mask = mask.astype(np.byte)
            while True:
                mask = ndimage.convolve(
                    mask, weights=w, mode='constant', cval=0)
                mask = np.logical_and(mask >= 5, urban_centres_mask == 0)
                if 0 == np.count_nonzero(mask):
                    break
                urban_centres_mask[mask] = 1
        return urban_centres_mask.astype(np.bool_)

    def _get_urban_clusters(self, pn, urban_centres_mask):
        """Identify the urban clusters (moderate-density clusters) in four steps.

        Args:
            pn (numpy array): population counts array.
            urban_centres_mask (numpy array): urban centres.
        """
        # First step, identify cells with at least 300 inhabitants
        urban_clusters_mask = pn >= 300
        if isinstance(urban_clusters_mask, np.ma.masked_array):
            urban_clusters_mask = urban_clusters_mask.data
        urban_clusters_mask = urban_clusters_mask.astype(np.byte)
        # Second step, identify groups of contiguous cells using the "eight-point contiguity" method
        s = np.array(
            [
                [1, 1, 1],
                [1, 1, 1],
                [1, 1, 1]
            ]
        )
        label, num_features = ndimage.label(urban_clusters_mask, structure=s)
        # Third step, remove groups whose total number of inhabitants is less than 5000
        for i in range(1, num_features + 1):
            mask = label == i
            if np.sum(pn[mask]) < 5000:
                urban_clusters_mask[mask] = 0
        # Fourth step, overlay the urban centres on the urban clusters to identify the final urban clusters
        urban_clusters_mask = np.logical_and(urban_clusters_mask,
                                             urban_centres_mask == 0)
        return urban_clusters_mask.astype(np.bool_)

    def _get_rural_grid_cells(self, pn, urban_centres_mask, urban_clusters_mask):
        """Identify the rural grid cells (mostly low-density cells) that are not identified as urban centres or as urban clusters.

        Args:
            pn (numpy array): population counts array.
            urban_centres_mask (numpy array): urban centres.
            urban_clusters_mask (numpy array): urban clusters.
        """
        rural_grid_cells_mask = pn >= 0
        if isinstance(rural_grid_cells_mask, np.ma.masked_array):
            rural_grid_cells_mask = rural_grid_cells_mask.data
        rural_grid_cells_mask = np.logical_and(rural_grid_cells_mask,
                                               ~urban_centres_mask)
        rural_grid_cells_mask = np.logical_and(rural_grid_cells_mask,
                                               ~urban_clusters_mask)
        return rural_grid_cells_mask.astype(np.bool_)

    def classify_grid_cells_l1(self):
        pn_array = self.pn.array
        urban_centres = self._get_urban_centres(pn_array)
        urban_clusters = self._get_urban_clusters(
            pn_array, urban_centres)
        rural_grid_cells = self._get_rural_grid_cells(
            pn_array, urban_centres, urban_clusters)
        grid_cells_clas = [urban_centres,
                           urban_clusters, rural_grid_cells]
        grid_cells_l1 = np.zeros(
            shape=urban_centres.shape, dtype=np.int8)
        for grid_cells_cla, index in zip(grid_cells_clas,
                                         self.grid_cells_l1_cla.values()):
            grid_cells_l1[grid_cells_cla] = index
        grid_cells_l1 = Raster(
            grid_cells_l1, affine=self.pn.affine, crs=self.pn.crs, nodata=0)

        return grid_cells_l1

    def classify_local_units_l1(self, local_units, field=None,
                                grid_cells_l1=None, all_touched=False):
        """
        Parameters:
        -----------
        local_units: path to a vector source, io.Vector object, or ndarray
        grid_cells_l1: the result of classify_grid_cells_l1
        """
        if grid_cells_l1 is None:
            grid_cells_l1 = self.classify_grid_cells_l1()

        def classify(grid_cells):
            total_count = grid_cells.count()
            if not total_count:
                return 0
            urban_centres = self.grid_cells_l1_cla['urban_centres']
            urban_centres_cells_r = np.count_nonzero(
                grid_cells == urban_centres) / total_count

            if urban_centres_cells_r >= 0.5:
                return self.local_units_l1_cla['cities']

            rural_grid_cells = self.grid_cells_l1_cla['rural_grid_cells']
            rural_grid_cells_r = np.count_nonzero(
                grid_cells == rural_grid_cells) / total_count

            if urban_centres_cells_r < 0.5 and rural_grid_cells_r < 0.5:
                return self.local_units_l1_cla['towns_semi_dense_areas']

            if rural_grid_cells_r >= 0.5:
                return self.local_units_l1_cla['rural_areas']

        local_units = zonal_stats(
            local_units, grid_cells_l1, field=field,
            zone_func=classify, all_touched=all_touched)
        return local_units
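The first three steps of `_get_urban_centres` (density threshold, four-point contiguity, cluster population threshold) can be sketched on a toy array; the flood fill below stands in for `scipy.ndimage.label` so the sketch only needs numpy, and the fourth (smoothing) step is omitted:

```python
import numpy as np

def urban_centres(pop, cell_thresh=1500, cluster_thresh=50000):
    """Toy urban-centre rule: keep 4-connected groups of cells that each
    hold >= cell_thresh inhabitants and together hold >= cluster_thresh."""
    dense = pop >= cell_thresh                 # step 1: dense cells
    seen = np.zeros(pop.shape, dtype=bool)
    keep = np.zeros(pop.shape, dtype=bool)
    rows, cols = pop.shape
    for r in range(rows):
        for c in range(cols):
            if not dense[r, c] or seen[r, c]:
                continue
            # step 2: flood-fill one 4-connected group (replaces ndimage.label)
            stack, group = [(r, c)], []
            seen[r, c] = True
            while stack:
                y, x = stack.pop()
                group.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols \
                            and dense[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        stack.append((ny, nx))
            # step 3: keep only groups that are populous enough in total
            if sum(pop[y, x] for y, x in group) >= cluster_thresh:
                for y, x in group:
                    keep[y, x] = True
    return keep

pop = np.array([[20000, 20000, 0,    0],
                [20000, 20000, 0, 1600],
                [    0,     0, 0,    0]])
uc = urban_centres(pop)  # the 2x2 block qualifies; the lone 1600 cell does not
```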
@@ -0,0 +1,102 @@
import numpy as np
import os
from .io import Raster, Vector, geometry_window, overlap
from .io import geometry_bounds


def stat_func(array, stat):
    """
    Parameters
    ----------
    array: numpy masked array
    stat: name of the statistic to compute
    """
    stats = {'min': np.ma.min, 'max': np.ma.max,
             'mean': np.ma.mean, 'sum': np.ma.sum,
             'count': np.ma.count, 'std': np.ma.std,
             'median': np.ma.median,
             'range': lambda x: np.ma.max(x) - np.ma.min(x)}
    return stats[stat](array)


def zonal_stats(vector,
                raster,
                field,
                affine=None,
                crs=None,
                nodata=None,
                stat=None,
                zone_func=None,
                all_touched=False
                ):
    """
    Parameters
    ----------
    vector: path to a vector source, io.Vector object, or ndarray
    raster: path to a raster source or io.Raster object
    field: str, optional
        field in the vector to write the results to
        defaults to None
    affine: Affine instance
        required only for ndarrays, otherwise it is read from src
    crs: str, dict, or CRS; optional
        Coordinate reference system that defines how the dataset's pixels map
        to locations on, for example, a globe or the Earth.
    nodata: int or float, optional
    stat: str
        Which statistic to calculate for each zone.
        The options are min, max, mean, sum, count, std, median, range.
    zone_func: callable
        function to apply to the zone ndarray prior to computing stats
    all_touched: bool, optional
        Whether to include every raster cell touched by a geometry, or only
        those having a center point within the polygon.
        defaults to `False`
    """
    if stat and zone_func:
        raise ValueError("Specify either stat or zone_func, not both.")

    if isinstance(vector, str):
        if not os.path.exists(vector):
            raise ValueError("The vector {} does not exist.".format(vector))
        else:
            vector = Vector(vector)

    if isinstance(raster, str):
        if not os.path.exists(raster):
            raise ValueError("The raster {} does not exist.".format(raster))
        else:
            raster = Raster(raster, affine=affine, crs=crs, nodata=nodata)

    values = []
    for geometry in vector['geometry']:
        clip_raster = raster.read_from_geometry([geometry], all_touched=all_touched)
        array = clip_raster.array
        geometry_mask = ~array.mask
        if not np.any(geometry_mask):
            # The geometry covers no cell centre; fall back to the cell under
            # the centre of its bounding box.
            left, bottom, right, top = geometry_bounds(geometry)
            center_x, center_y = (left + right) / 2, (bottom + top) / 2
            center_row, center_col = raster.index(center_x, center_y)
            value = raster.array[center_row, center_col]
            values.append(int(value))
            continue

        if stat is not None:
            value = stat_func(array, stat)
            values.append(int(value))
            continue

        # execute zone_func on the masked zone ndarray
        if zone_func is not None:
            if not callable(zone_func):
                raise TypeError('zone_func must be a callable '
                                'which accepts a single '
                                '`zone_array` arg.')
            value = zone_func(array)
            values.append(int(value))
            continue

    if field is not None:
        vector.create_field(name=field, type='float', values=values)
    return vector
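`stat_func` dispatches to `numpy.ma` reductions, which skip masked (nodata) cells, so for example the `count` of a zone is the number of valid cells only. A small self-contained illustration of that dispatch table:

```python
import numpy as np

# Same dispatch idea as stat_func, trimmed to a few statistics.
stats = {'min': np.ma.min, 'max': np.ma.max, 'mean': np.ma.mean,
         'sum': np.ma.sum, 'count': np.ma.count,
         'range': lambda a: np.ma.max(a) - np.ma.min(a)}

# One masked cell stands in for a nodata pixel outside the zone geometry.
zone = np.ma.masked_array([10, 20, 30, 40], mask=[False, False, False, True])
print(stats['sum'](zone), stats['count'](zone))  # → 60 3
```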
@@ -0,0 +1,105 @@
Metadata-Version: 2.2
Name: degurba
Version: 0.1
Summary: Urban Boundary Extraction Software Based on Degree of Urbanization
Home-page: https://github.com/djw-easy/degurba
Author: Your Name
Author-email: djw@lreis.ac.cn
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Requires-Python: >=3.7
Description-Content-Type: text/markdown
Requires-Dist: rasterio
Requires-Dist: scipy
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

# Urban Boundary Extraction Software Based on Degree of Urbanization

## Project Introduction

This project provides a tool for urban boundary extraction based on the Degree of Urbanization (DEGURBA) algorithm. By integrating multi-source geospatial data, it offers a fast, flexible, and efficient way to extract urban boundaries for specific times and locations, providing technical support and data for researchers.

![]()
<center>
(a) Gridded population of Beijing in 2020.
(b) Grid cell level classification result by DEGURBA.
(c) Local unit classification result by DEGURBA.
</center>

## Features

- **Multi-source Data Support**: Supports downloading WorldPop and GPW v4 gridded population data.
- **Grid Cell Classification**: Classifies grid cells into urban centres, urban clusters, and rural grid cells based on population density, contiguity, and size.
- **Local Unit Classification**: Overlays the grid cell classification results onto local spatial units and further classifies them into cities, towns and semi-dense areas, and rural areas.
- **Flexible and Efficient**: Users can generate urban boundary data for different times, locations, and classification accuracy requirements.

## Installation

### Install using QGIS

#### 1. Install QGIS
- Download and install QGIS, version 3.20 or higher.

#### 2. Install rasterio
- In the QGIS OSGeo4W Shell, run `pip install rasterio`.

#### 3. Configure Plugin
- Place the project code folder into the QGIS plugin directory.
- Open QGIS, click "Manage and Install Plugins", and install the "DEGURBA" plugin.
- Open the Processing Toolbox and select the appropriate tools.

### Install using Python

You can install the package using pip:

~~~
pip install degurba
~~~

## Usage with QGIS

1. **Download Population Data**:
   - Use the "download worldpop grid data" or "download gpwv4 grid data" tool to select the desired dataset, country, year, and clipping area (optional).

2. **Grid Cell Classification**:
   - Use the "Grid Cell Classification" tool, input the population grid data, and output raster data.

3. **Local Unit Classification**:
   - Use the "Local Units Classification" tool, input the grid cell classification result data, and output local unit data (vector data).

## Example

Using Beijing's data for the year 2020 as an example:

1. Download the WorldPop population grid data for Beijing, setting the MASK layer to Beijing's vector boundary.

![]()

![]()

2. Perform grid cell classification on the downloaded population grid data.

![]()

![]()

3. Overlay the classification results onto Beijing's local spatial units (such as districts) to complete the local unit classification.

![]()

![]()

## Notes

- Ensure compatibility between the QGIS version and the plugin.
- When downloading data, make sure to select the correct dataset and parameters.
- When performing local unit classification, ensure that the input grid cell classification result data is complete and accurate.
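The local unit classification described above follows a majority rule over a unit's grid cells: at least 50% urban-centre cells makes a city, at least 50% rural cells makes a rural area, and anything else is a town or semi-dense area. A minimal sketch of that rule, using the package's level-1 codes (1 = urban centre, 2 = urban cluster, 3 = rural grid cell):

```python
def classify_local_unit(cells):
    """cells: level-1 class codes of the grid cells inside one local unit."""
    n = len(cells)
    if n == 0:
        return 0  # no valid cells in the unit
    if cells.count(1) / n >= 0.5:
        return 1  # city
    if cells.count(3) / n >= 0.5:
        return 3  # rural area
    return 2      # town / semi-dense area

print(classify_local_unit([1, 1, 1, 3]))  # → 1 (mostly urban-centre cells)
```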
@@ -0,0 +1,12 @@
README.md
setup.py
degurba/__init__.py
degurba/io.py
degurba/load_data.py
degurba/main.py
degurba/utils.py
degurba.egg-info/PKG-INFO
degurba.egg-info/SOURCES.txt
degurba.egg-info/dependency_links.txt
degurba.egg-info/requires.txt
degurba.egg-info/top_level.txt
@@ -0,0 +1 @@

@@ -0,0 +1 @@
degurba
degurba-0.1/setup.cfg
ADDED
degurba-0.1/setup.py
ADDED
@@ -0,0 +1,23 @@
from setuptools import setup, find_packages

setup(
    name='degurba',
    version='0.1',
    author='Your Name',
    author_email='djw@lreis.ac.cn',
    description='Urban Boundary Extraction Software Based on Degree of Urbanization',
    long_description=open('README.md', 'r', encoding='utf-8').read(),
    long_description_content_type='text/markdown',
    url='https://github.com/djw-easy/degurba',
    packages=find_packages(),
    classifiers=[
        'Programming Language :: Python :: 3',
        'License :: OSI Approved :: MIT License',
        'Operating System :: OS Independent',
    ],
    python_requires='>=3.7',
    install_requires=[
        'rasterio',
        'scipy'
    ],
)