climb_factor_gem 0.1.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +7 -0
- data/LICENSE +2 -0
- data/README.md +83 -0
- data/lib/climb_factor/filtering.rb +124 -0
- data/lib/climb_factor/geometry.rb +84 -0
- data/lib/climb_factor/low_level_math.rb +59 -0
- data/lib/climb_factor/physiology.rb +104 -0
- data/lib/climb_factor.rb +86 -0
- metadata +50 -0
checksums.yaml
ADDED
@@ -0,0 +1,7 @@
---
SHA256:
  metadata.gz: 3de080785de7f1f7833f2d69a8323197e026bb3e621dfd395539f152ce9966ca
  data.tar.gz: 4b697e0f6cea23b5c2fbdb5b650ac90e7912b5367b87d0d96a3cce7efda9e1c7
SHA512:
  metadata.gz: 7f3b2ff8ce4b33182e3407b7f214b711404f888ee8d6dae3d06f5d07c543c2d86afe38015f1315a9480f32ad9d7217079b8043bed3e2e3de14974f30933ad47c
  data.tar.gz: 3fac167ac1c40cc6d30f2ae4639cd5eb40f1adefdaf00eb3fba6723abca53539679778af68d0963931b230966484a7b59b72c9723f9f27ba296ded124933a388
data/LICENSE
ADDED
data/README.md
ADDED
@@ -0,0 +1,83 @@
Climb Factor
=====
This is a Ruby gem adaptation of the software authored by Ben Crowell, the original source for which you can find [here](https://bitbucket.org/ben-crowell/kcals).

## Description

This software estimates your energy expenditure from running or walking,
based on a GPS track or a track generated by an application such as
Google Maps or [onthegomap](https://onthegomap.com).

The model used to calculate the results is described in this paper:

B. Crowell, "From treadmill to trails: predicting performance of runners,"
https://www.biorxiv.org/content/10.1101/2021.04.03.438339v2 , doi: 10.1101/2021.04.03.438339

The output from this software is most useful if you want to compare
one run to another, e.g., if you want to know how a mountain run with lots of elevation gain
compares with a flat run at a longer distance, or if you want to project whether doing a certain
trail as a run is feasible for you.

## Use through the web interface

The web interface is available [here](https://www.lightandmatter.com/cf).

## Use

Add this to your Gemfile:

```
gem 'climb_factor_gem'
```

To invoke it, you must provide an array of horizontal/vertical pairs (in meters!):

```
hv = [[0, 1], [1.1, 4], [3.4, 6], ...]
ClimbFactor.estimate(hv)
```

See `climb_factor.rb` for more details and additional options.
## Filtering

The parameters `filtering` and `xy_filtering` both represent horizontal distances
in units of meters. Their default values are 200 and 30, respectively. These are meant to get rid of bogus
oscillations in the data. Any elevation (z) changes that occur over horizontal distances less
than the value of `filtering` will tend to get filtered out, and likewise any horizontal motion that occurs
over horizontal distances less than the value of `xy_filtering`.
To turn off filtering, set the relevant parameter to 0.

The choice of the vertical filtering parameter
can have a huge effect on the total elevation gain, but the
effect on the calorie expenditure is usually fairly small.
There are several reasons why it may be a good idea to set a fairly large value of the vertical
(`filtering`) parameter:

1. If the resolution of the horizontal track is poor, then it may appear to go up and down steep hillsides,
when in fact the real road or trail contours around them.

2. If elevations from GPS are being used (which is a bad idea), then random fluctuations in the GPS
elevations can cause large errors.

3. If the elevations are being taken from a digital elevation model (DEM), which is generally a good
idea, then there may still be certain types of errors.
Trails and roads are intentionally constructed so as not to go up and down steep hills, but
the DEM may not accurately reflect this. The most common situation seems to be one in which
a trail or road takes a detour into a narrow gully in order to maintain a steady grade.
The DEM currently used by this software has a horizontal resolution of 30 meters.
If the gully is narrower than this, then the DEM doesn't know about the gully, and
the detour appears to be a steep excursion up and then back down the prevailing slope.
I have found empirically that setting filtering=60 m is roughly the minimum required
to eliminate this type of artifact, which makes sense because a detour into a 30-meter
gully probably does involve about 60 meters of horizontal travel.

My rules of thumb for setting the filtering are as follows:

* For most runs with relatively short and not insanely steep hills, a vertical filtering
parameter of 60 m works well. Using a higher filtering value leads to wrong results, because
the hills get smoothed out entirely.

* For very steep runs with a lot of elevation gain, in rugged terrain, it's necessary to use a
larger filtering value of about 200 m. Otherwise the energy estimates are much too high.
This is the software's default.

The mileage derived from a GPS track can vary quite a bit depending on the resolution of the GPS data.
Higher resolution increases the mileage, because small wiggles get counted in. This has a big effect on
the energy calculation, because the energy is mostly sensitive to mileage, not gain.
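The effect described in this last paragraph is easy to demonstrate with a standalone sketch (illustrative numbers only; this is not part of the gem). The same sinusoidal "road" is measured once at 10 m sampling and once at 100 m sampling:

```ruby
# Measured length of a wiggly path depends on sampling resolution.
# A road weaving +-5 m around a straight 1000 m line, sampled every
# 10 m (fine) and every 100 m (coarse).
def path_length(points)
  points.each_cons(2).sum { |(x1, y1), (x2, y2)| Math.hypot(x2 - x1, y2 - y1) }
end

wiggly = (0..100).map { |k| x = 10.0 * k; [x, 5.0 * Math.sin(x / 20.0)] }
fine   = path_length(wiggly)                              # all samples kept
coarse = path_length(wiggly.each_slice(10).map(&:first))  # every 10th sample
puts "fine: #{fine.round(1)} m, coarse: #{coarse.round(1)} m"
```

Since the coarse samples are a subset of the fine ones, the fine track always measures at least as much distance, which is why mileage, and with it the energy estimate, creeps upward with GPS resolution.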
data/lib/climb_factor/filtering.rb
ADDED
@@ -0,0 +1,124 @@
require_relative 'low_level_math'

module ClimbFactor
  module Filtering
    def self.resample_and_filter_hv(hv,filtering,target_resampling_resolution:30.0)
      # The default for target_resampling_resolution is chosen because typically public DEM data isn't any better than this horizontal resolution anyway.
      if filtering>target_resampling_resolution then filtering=target_resampling_resolution end
      k = (filtering/target_resampling_resolution).floor # Set resampling resolution so that it divides filtering.
      while target_resampling_resolution*k<filtering do k+=1 end
      if k%2==1 then k+=1 end # make it even
      resampling_resolution = filtering/k
      resampled_v = resample_hv(hv,resampling_resolution)
      result = []
      0.upto(resampled_v.length-1) { |i|
        result.push([resampling_resolution*i,resampled_v[i]])
      }
      return result
    end

    def self.resample_hv(hv,desired_h_resolution)
      # Inputs a list of [h,v] points, returns a list of v values interpolated to achieve the given constant horizontal resolution (or slightly more).
      # This is similar to add_resolution_and_check_size_limit(), but that routine kludgingly does multiple things.
      tot_h = hv.last[0]-hv[0][0]
      n = (tot_h/desired_h_resolution).ceil+1
      dh = tot_h/(n-1) # actual resolution, which will be a tiny bit better
      result = []
      m = 0 # index into hv
      0.upto(n-1) { |i| # i is an index into resampled output
        h = hv[0][0]+dh*i
        while m<hv.length-1 && hv[m+1][0]<h do m += 1 end
        s = (h-hv[m][0])/(hv[m+1][0]-hv[m][0]) # interpolation fraction ranging from 0 to 1
        result.push(CfMath.linear_interp(hv[m][1],hv[m+1][1],s))
      }
      return result
    end

    def self.do(v0,w)
      # inputs:
      #   v0 is a list of floating-point numbers, representing the value of a function at equally spaced points on the x axis
      #   w = width of rectangular window to convolve with; will be made even if it isn't
      # output:
      #   a fresh array in the same format as v0; doesn't modify v0
      # v0's length should be a power of 2 and >=2

      if w<=1 then return v0 end

      if w%2==1 then w=w+1 end

      # Remove DC and detrend, so that start and end are both at 0.
      # -- https://www.dsprelated.com/showthread/comp.dsp/175408-1.php
      # After filtering, we put these back in.
      #   v0 = original, which we don't touch
      #   v = detrended
      #   v1 = filtered
      # For the current method, it's only necessary that n is even, not a power of 2, and detrending
      # isn't actually needed.
      v = v0.dup
      n = v.length
      slope = (v.last-v[0])/(n.to_f-1.0)
      c = -v[0]
      0.upto(n-1) { |i|
        v[i] = v[i] - (c + slope*i)
      }

      # Copy the unfiltered data over as a default. On the initial and final portions, where part of the
      # rectangular kernel hangs over the end, we don't attempt to do any filtering. Using the filter
      # on those portions, even with appropriate normalization, would bias the (x,y) points, effectively
      # moving the start and finish line inward.
      v1 = v.dup

      # convolve with a rectangle of width w:
      sum = 0
      count = 0
      # Sum the initial portion for use in the average for the first filtered data point:
      sum_left = 0.0
      0.upto(w-1) { |i|
        break if i>n/2-1 # this happens in the unusual case where w isn't less than n; we're guaranteed that n is even
        sum_left = sum_left+v[i]
      }
      # The filter is applied to the middle portion, from w to n-w:
      if w<n then
        sum = sum_left
        w.upto(n) { |i|
          if i>=v.length then break end # wasn't part of original algorithm, but needed for some data sets...?
          if v[i].nil?
            raise("coordinate #{i} of #{w}..#{n} is nil, length of vector is #{v.length}, for v0=#{v0.length}, w=#{w}")
          end
          sum = sum + v[i]-v[i-w]
          j = i-w/2
          break if j>n-w
          if j>=w && j<=n-w then
            v1[j] = sum/w
          end
        }
      end

      # To avoid a huge discontinuity in the elevation when the filter turns on, turn it on gradually
      # in the initial and final segments of length w:
      # FIXME: leaves a small discontinuity
      sum_left = 0.0
      sum_right = 0.0
      nn = 0
      0.upto(2*w+1) { |i|
        break if i>n/2-1 # unusual case, see above
        j = n-i-1
        sum_left = sum_left+v[i]
        sum_right = sum_right+v[j]
        nn = nn+1
        if i%2==0 then
          ii = i/2
          jj = n-i/2-1
          v1[ii] = sum_left/nn
          v1[jj] = sum_right/nn
        end
      }

      # put DC and trend back in:
      0.upto(n-1) { |i|
        v1[i] = v1[i] + (c + slope*i)
      }
      return v1
    end
  end
end
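The middle section of `do` above is a running boxcar average. The underlying idea can be sketched in isolation (a naive version, without the gem's detrending and endpoint handling):

```ruby
# Naive boxcar (rectangular-window) moving average over evenly spaced
# samples. Windows are simply clipped at the array ends here, rather
# than handled the careful way Filtering.do does.
def boxcar(v, w)
  return v.dup if w <= 1
  n = v.length
  (0...n).map { |i|
    lo = [i - w / 2, 0].max
    hi = [i + w / 2, n - 1].min
    window = v[lo..hi]
    window.sum / window.length
  }
end

noisy  = [0.0, 10.0, 0.0, 10.0, 0.0, 10.0, 0.0, 10.0]
smooth = boxcar(noisy, 4)
# The 0/10 oscillation is pulled in toward the mean of 5.
```

Widening `w` suppresses shorter-wavelength oscillations, which is how a filtering distance, divided by the resampling resolution, translates into a window width in samples.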
data/lib/climb_factor/geometry.rb
ADDED
@@ -0,0 +1,84 @@
require_relative 'low_level_math'

module ClimbFactor
  module Geom
    def self.earth_radius(lat)
      # https://en.wikipedia.org/wiki/Earth_radius#Geocentric_radius
      a = 6378137.0 # earth's equatorial radius, in meters
      b = 6356752.3 # polar radius
      slat = Math::sin(CfMath.deg_to_rad(lat))
      clat = Math::cos(CfMath.deg_to_rad(lat))
      return Math::sqrt( ((a*a*clat)**2+(b*b*slat)**2) / ((a*clat)**2+(b*slat)**2) ) # radius in meters
    end

    def self.cartesian_to_spherical(x,yy,z,lat0,lon0)
      # returns [lat,lon,altitude], in units of degrees, degrees, and meters
      # See Geom.spherical_to_cartesian() for a description of the coordinate systems used and the transformations.
      # Calculate a first-order approximation to the inverse of the polyconic projection:
      r0 = earth_radius(lat0)
      zz = z+r0
      slat0 = Math::sin(CfMath.deg_to_rad(lat0))
      clat0 = Math::cos(CfMath.deg_to_rad(lat0))
      r = Math::sqrt(x*x+yy*yy+zz*zz)
      y = clat0*yy+slat0*zz
      zzz = -slat0*yy+clat0*zz
      lat = CfMath.rad_to_deg(Math::asin(y/r))
      lon = CfMath.rad_to_deg(Math::atan2(x,zzz))+lon0
      1.upto(10) { |i| # more iterations to improve the result
        x2,y2,z2 = spherical_to_cartesian(lat,lon,z,lat0,lon0)
        dx = x-x2
        dy = yy-y2
        break if dx.abs<1.0e-8 and dy.abs<1.0e-8
        lat = lat + CfMath.rad_to_deg(dy/r0)
        lon = lon + CfMath.rad_to_deg(dx/(r0*clat0)) if clat0!=0.0
      }
      return [lat,lon,z]
    end

    def self.spherical_to_cartesian(lat,lon,alt,lat0,lon0)
      # Inputs are in degrees, except for alt, which is in meters.
      # Returns [x,y,z] in meters.
      # The "cartesian" coordinates are not actually cartesian. They're coordinates in which
      # (x,y) are from a polyconic projection https://en.wikipedia.org/wiki/Polyconic_projection
      # centered on (lat0,lon0), and z is altitude.
      # (In older versions of the software, z was distance from center of earth.)
      # Outputs are in meters. The metric for the projection is not exactly euclidean, so later
      # calculations that treat these as cartesian coordinates are making an approximation. The error
      # should be tiny on the scales we normally deal with. The important thing for our purposes is
      # that the gradient of z is vertical.
      lam = CfMath.deg_to_rad(lon)
      lam0 = CfMath.deg_to_rad(lon0)
      phi = CfMath.deg_to_rad(lat)
      phi0 = CfMath.deg_to_rad(lat0)
      cotphi = 1/Math::tan(phi)
      u = (lam-lam0)*Math::sin(phi) # is typically on the order of 10^-3 (half the width of a USGS topo)
      if u.abs<0.01
        # use taylor series to avoid excessive rounding in calculation of 1-cos(u)
        u2 = u*u
        u4 = u2*u2
        one_minus_cosu = 0.5*u2-(0.0416666666666667)*u4+(1.38888888888889e-3)*u2*u4-(2.48015873015873e-5)*u4*u4
        # max error is about 10^-27, which is a relative error of about 10^-23
      else
        one_minus_cosu = 1-Math::cos(u)
      end
      r0 = earth_radius(lat0)
      # Use initial latitude and keep r0 constant. If we let r0 vary, then we also need to figure
      # out the direction of the g vector in this model.
      x = r0*cotphi*Math::sin(u)
      y = r0*((phi-phi0)+cotphi*one_minus_cosu)
      z = alt
      return [x,y,z]
    end

    def self.interpolate_raster(z,x,y)
      # z = array[iy][ix]
      # x,y = floating point, in array-index units
      ix = x.to_i
      iy = y.to_i
      fx = x-ix # fractional part
      fy = y-iy
      z = CfMath.interpolate_square(fx,fy,z[iy][ix],z[iy][ix+1],z[iy+1][ix],z[iy+1][ix+1])
      return z
    end
  end
end
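As a sanity check on the projection above, here is a trimmed-down restatement of the forward polyconic mapping (a sketch, not the gem's code; the Taylor-series branch for small u is omitted for brevity). The projection center maps to the origin, and a small northward displacement maps to y ≈ r0·Δφ:

```ruby
DEG = Math::PI / 180.0

# Geocentric earth radius, as in Geom.earth_radius.
def geocentric_radius(lat_deg)
  a = 6378137.0   # equatorial radius, m
  b = 6356752.3   # polar radius, m
  s = Math.sin(lat_deg * DEG)
  c = Math.cos(lat_deg * DEG)
  Math.sqrt(((a * a * c)**2 + (b * b * s)**2) / ((a * c)**2 + (b * s)**2))
end

# Forward polyconic projection centered on (lat0, lon0), as in
# Geom.spherical_to_cartesian but without the small-u Taylor branch.
def polyconic_xy(lat, lon, lat0, lon0)
  phi = lat * DEG
  phi0 = lat0 * DEG
  u = (lon - lon0) * DEG * Math.sin(phi)
  cot = 1.0 / Math.tan(phi)
  r0 = geocentric_radius(lat0)
  [r0 * cot * Math.sin(u), r0 * ((phi - phi0) + cot * (1 - Math.cos(u)))]
end

x, y = polyconic_xy(34.0, -117.0, 34.0, -117.0)   # center -> origin
x2, y2 = polyconic_xy(34.01, -117.0, 34.0, -117.0) # 0.01 deg north -> ~1112 m
```
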
data/lib/climb_factor/low_level_math.rb
ADDED
@@ -0,0 +1,59 @@
module ClimbFactor
  module CfMath
    def self.deg_to_rad(x)
      return 0.0174532925199433*x
    end

    def self.rad_to_deg(x)
      return x/0.0174532925199433
    end

    def self.pythag(x,y)
      return Math::sqrt(x*x+y*y)
    end

    def self.interpolate_square(x,y,z00,z10,z01,z11)
      # https://en.wikipedia.org/wiki/Bilinear_interpolation#Unit_square
      # The crucial thing is that this gives results that are continuous across boundaries of squares.
      w00 = (1.0-x)*(1.0-y)
      w10 = x*(1.0-y)
      w01 = (1.0-x)*y
      w11 = x*y
      norm = w00+w10+w01+w11
      z = (z00*w00+z10*w10+z01*w01+z11*w11)/norm
      return z
    end

    def self.linear_interp(x1,x2,s)
      return x1+s*(x2-x1)
    end

    def self.add2d(p,q)
      return [p[0]+q[0],p[1]+q[1]]
    end

    def self.sub2d(p,q)
      return [p[0]-q[0],p[1]-q[1]]
    end

    def self.dot2d(p,q)
      return p[0]*q[0]+p[1]*q[1]
    end

    def self.scalar_mul2d(p,s)
      return [s*p[0],s*p[1]]
    end

    def self.normalize2d(p)
      return scalar_mul2d(p,1.0/mag2d(p))
    end

    def self.dist2d(p,q)
      return mag2d(sub2d(p,q))
    end

    def self.mag2d(p)
      return Math::sqrt(dot2d(p,p))
    end
  end
end
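The unit-square weights in `interpolate_square` can be checked with a standalone restatement (the division by `norm` is skipped here, since the four weights sum to 1 algebraically):

```ruby
# Bilinear blend on the unit square, the same weights used by
# CfMath.interpolate_square. x, y in [0,1]; z00..z11 are the corner values.
def bilinear(x, y, z00, z10, z01, z11)
  z00 * (1 - x) * (1 - y) + z10 * x * (1 - y) + z01 * (1 - x) * y + z11 * x * y
end

corners = [100.0, 104.0, 102.0, 110.0] # elevations at the four grid corners

center = bilinear(0.5, 0.5, *corners)  # the average of the four corners
edge   = bilinear(1.0, 0.25, *corners) # depends only on z10 and z11
```

Along an edge of the square the result depends only on that edge's two corners, which is exactly the continuity-across-square-boundaries property the comment in the gem calls crucial.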
data/lib/climb_factor/physiology.rb
ADDED
@@ -0,0 +1,104 @@
#=========================================================================
# @@ physiological model
#=========================================================================

# For the cr and cw functions, see Minetti, http://jap.physiology.org/content/93/3/1039.full

module ClimbFactor
  module Phys
    def self.minetti(i)
      # cost of running or walking, in J/kg.m
      if $running then
        a,b,c,d,p = [26.073730183424228, 0.031038121935618928, 1.3809948743424785, -0.06547207947176657, 2.181405714691871]
      else
        a,b,c,d,p = [22.911633035337864, 0.02621471025436344, 1.3154310892336223, -0.08317260964525384, 2.208584834633906]
      end
      cost = (a*((i**p+b)**(1/p)+i/c+d)).abs
      if true then
        # "recreational" version
        cutoff_i = -0.03
        if i<cutoff_i then cost=[cost,minetti(cutoff_i)].max end
      end
      return cost
    end
    # Five-parameter fit to the following data:
    #   c is minimized at imin, and has the correct value cmin there (see comments in i_to_iota())
    #   slopes at +-infty are Minetti's values: sp=9.8/0.218; sm=9.8/-1.062 for running,
    #     sp=9.8/0.243; sm=9.8/-1.215 for walking
    #   match Minetti's value at i=0.0
    # Original analytic work, with p=2 and slightly different values of sp and sm:
    #   calc -e "x0=-0.181355; y0=1.781269; sp=9.8/.23; sm=9.8/-1.2; a=(sp-sm)/2; c=a/[(sp+sm)/2]; b=x0^2(c^2-1); d=(1/a)*{y0-a*[sqrt(x0^2+b)+x0/c]}; a(1-1/c)"
    #   a = 25.3876811594203
    #   c = 1.47422680412371
    #   b = 0.0385908791280687
    #   d = -0.0741786448190981
    # I then optimized the parameters further, including p, numerically, to fit the above criteria.
    # Also checked that it agrees well with the polynomial for a reasonable range of i values.

    def self.minetti_original(i) # their 5th-order polynomial fits; these won't work well at extremes of i
      if $running then return minetti_cr(i) else return minetti_cw(i) end
    end

    def self.i_to_iota(i)
      # convert i to a linearized scale iota, where iota^2=[C(i)-C(imin)]/c2 and sign(iota)=sign(i-imin)
      # The following are the minima of the Minetti functions.
      if $running then
        imin = -0.181355
        cmin = 1.781269
        c2 = 66.0 # see comments at minetti_quadratic_coeffs()
      else
        imin = -0.152526
        cmin = 0.935493
        c2 = 94.0 # see comments at minetti_quadratic_coeffs()
      end
      if i.class != Float then i=i.to_f end
      if i.infinite? then return i end
      c = minetti(i)
      if c<cmin then
        # happens sometimes due to rounding
        c = cmin
      end
      result = Math::sqrt((c-cmin)/c2)
      if i<imin then result= -result end
      return result
    end

    def self.minetti_quadratic_coeffs()
      # my rough approximation to Minetti, optimized to fit the range that's most common
      if $running then
        i0 = -0.15
        c0 = 1.84
        c2 = 66.0
      else
        i0 = -0.1
        c0 = 1.13
        c2 = 94.0
      end
      b0 = c0+c2*i0*i0
      b1 = -2*c2*i0
      b2 = c2
      return [i0,c0,c2,b0,b1,b2]
    end

    def self.minetti_cr(i) # no longer used
      # i = gradient
      # cr = cost of running, in J/kg.m
      if i>0.5 || i<-0.5 then return minetti_steep(i) end
      return 155.4*i**5-30.4*i**4-43.3*i**3+46.3*i**2+19.5*i+3.6
      # note that the 3.6 is different from their best value of 3.4 on the flats, i.e., the polynomial isn't a perfect fit
    end

    def self.minetti_cw(i) # no longer used
      # i = gradient
      # cw = cost of walking, in J/kg.m
      if i>0.5 || i<-0.5 then return minetti_steep(i) end
      return 280.5*i**5-58.7*i**4-76.8*i**3+51.9*i**2+19.6*i+2.5
    end

    def self.minetti_steep(i) # no longer used
      g = 9.8 # m/s2=J/kg.m
      if i>0 then eff=0.23 else eff=-1.2 end
      return g*i/eff
    end
  end
end
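To make the five-parameter fit concrete, here is a standalone evaluation of the running branch (parameters copied from `minetti` above; the `$running` switch and the recreational cutoff are omitted). On the flat it reproduces the 3.6 J/(kg·m) constant term of the original polynomial:

```ruby
# Five-parameter cost-of-running fit, running-branch parameters from
# Phys.minetti. i is the gradient; the result is in J/(kg.m).
def run_cost(i)
  a, b, c, d, p = [26.073730183424228, 0.031038121935618928,
                   1.3809948743424785, -0.06547207947176657, 2.181405714691871]
  (a * ((i**p + b)**(1 / p) + i / c + d)).abs
end

flat   = run_cost(0.0)  # ~3.6 J/(kg.m), the flat-running treadmill cost
uphill = run_cost(0.10) # a 10% grade is substantially more expensive
```
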
data/lib/climb_factor.rb
ADDED
@@ -0,0 +1,86 @@
require_relative "climb_factor/filtering"
require_relative "climb_factor/physiology"
require_relative "climb_factor/low_level_math"

module ClimbFactor
  def self.estimate(hv,nominal_distance:nil,filtering:200.0)
    # inputs:
    #   All inputs are in units of meters.
    #   hv = list of [horizontal,vertical] coordinate pairs
    #   nominal_distance = if supplied, scale horizontal length of course to equal this value
    #   filtering = get rid of bogus fluctuations in vertical data that occur on horizontal scales less than this
    # output:
    #   cf = a floating-point number representing a climb factor, expressed as a percentage
    hv = Filtering.resample_and_filter_hv(hv,filtering)
    rescale = 1.0
    if !nominal_distance.nil? then
      rescale = nominal_distance/(hv.last[0]-hv[0][0])
    end
    stats,hv = integrate_gain_and_energy(hv,rescale)
    c,h = stats['c'],stats['h'] # energy cost in joules and (possibly rescaled) horiz distance in meters
    cf = 100.0*(c-h*Phys.minetti(0.0))/c
    return cf
  end

  def self.integrate_gain_and_energy(hv,rescale,body_mass:1.0)
    # integrate to find total gain, slope distance, and energy burned
    # returns [stats,hv], where:
    #   stats = {'c'=>c,'d'=>d,'gain'=>gain,'i_rms'=>i_rms,...}
    #   hv = modified copy of input hv, with predicted times added, if we have the necessary data
    v = 0 # total vertical distance (=0 at end of a loop)
    d = 0 # total distance along the slope
    gain = 0 # total gain
    c = 0 # cost in joules
    first = true
    old_h = 0
    old_v = 0
    i_sum = 0.0
    i_sum_sq = 0.0
    iota_sum = 0.0
    iota_sum_sq = 0.0
    baumel_si = 0.0 # compute this directly as a check
    t = 0.0 # integrated time, in seconds
    h_reintegrated = 0.0 # if rescale!=1, this should be the same as nominal_h
    k = 0
    hv.each { |a|
      h,v = a
      if !first then
        dh = (h-old_h)*rescale
        dv = v-old_v
        dd = Math::sqrt(dh*dh+dv*dv)
        h_reintegrated = h_reintegrated+dh
        d = d+dd
        if dv>0 then gain=gain+dv end
        i = 0
        if dh>0 then i=dv/dh end
        if dh>0 then baumel_si=baumel_si+dv**2/dh end
        # In the following, weight by dh, although normally this doesn't matter because we make the
        # h intervals constant before this point.
        i_sum = i_sum + i*dh
        i_sum_sq = i_sum_sq + i*i*dh
        dc = dd*body_mass*Phys.minetti(i)
        # in theory it matters whether we use dd or dh here; I think from Minetti's math it's dd
        c = c+dc
        #if not ($split_energy_at.nil?) and d-dd<$split_energy_at_m and d>$split_energy_at_m then
        #  $stderr.print "at d=#{$split_energy_at}, energy=#{(c*0.0002388459).round} kcals\n"
        #  # fixme -- implement this in a better way
        #end
        k = k+1
      end
      old_h = h
      old_v = v
      first = false
    }
    n = hv.length-1.0
    h = h_reintegrated # should equal $nominal_h; may differ from hv.last[0]-hv[0][0] if rescale!=1
    i_rms = Math::sqrt(i_sum_sq/h - (i_sum/h)**2)
    i_mean = (hv.last[1]-hv[0][1])/h
    i0,c0,c2,b0,b1,b2 = Phys.minetti_quadratic_coeffs()
    e_q = h*body_mass*(b0+b1*i_mean+b2*i_rms)
    cf = (c-h*body_mass*Phys.minetti(0.0))/c
    stats = {'c'=>c,'h'=>h,'d'=>d,'gain'=>gain,'i_rms'=>i_rms,'i_mean'=>i_mean,'e_q'=>e_q,
             'cf'=>cf,'baumel_si'=>baumel_si,
             't'=>t}
    return [stats,hv]
  end
end
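The climb-factor formula `cf = 100*(c - h*C(0))/c` used in `estimate` can be illustrated on an idealized constant-grade course, bypassing the resampling and filtering steps (this sketch re-declares the running cost locally rather than loading the gem):

```ruby
# Running cost, the running branch of Phys.minetti without the cutoff.
def run_cost(i)
  a, b, c, d, p = [26.073730183424228, 0.031038121935618928,
                   1.3809948743424785, -0.06547207947176657, 2.181405714691871]
  (a * ((i**p + b)**(1 / p) + i / c + d)).abs
end

# Climb factor for a course of horizontal length h at a constant grade:
# the percentage of the total energy in excess of a flat course of the
# same horizontal length, i.e. cf = 100*(c - h*C(0))/c.
def climb_factor(h, grade)
  d = h * Math.sqrt(1 + grade**2)      # distance along the slope
  c = d * run_cost(grade)              # energy per kg for the whole course
  100.0 * (c - h * run_cost(0.0)) / c
end

flat_cf  = climb_factor(10_000.0, 0.0)  # a flat course has cf = 0
hilly_cf = climb_factor(10_000.0, 0.10) # a sustained 10% grade, roughly 40%
```
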
metadata
ADDED
@@ -0,0 +1,50 @@
--- !ruby/object:Gem::Specification
name: climb_factor_gem
version: !ruby/object:Gem::Version
  version: 0.1.3
platform: ruby
authors:
- Benjamin Crowell
autorequire:
bindir: bin
cert_chain: []
date: 2023-08-14 00:00:00.000000000 Z
dependencies: []
description:
email:
executables: []
extensions: []
extra_rdoc_files: []
files:
- LICENSE
- README.md
- lib/climb_factor.rb
- lib/climb_factor/filtering.rb
- lib/climb_factor/geometry.rb
- lib/climb_factor/low_level_math.rb
- lib/climb_factor/physiology.rb
homepage: https://bitbucket.org/hello-drifter/climb-factor-gem
licenses:
- GPL-2.0-or-later
metadata: {}
post_install_message:
rdoc_options: []
require_paths:
- lib
required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: 1.9.0
required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
requirements: []
rubygems_version: 3.0.3.1
signing_key:
specification_version: 4
summary: A library that uses a scientifically validated model to determine the energetic
  cost of running up and down hills.
test_files: []