extended_fragment_cache 0.0.1
- data/README +109 -0
- data/lib/extended_fragment_cache.rb +210 -0
- metadata +52 -0
data/README
ADDED
@@ -0,0 +1,109 @@
=ExtendedFragmentCache

== About

The extended_fragment_cache plugin provides content interpolation and an
in-process memory cache for fragment caching. It also integrates the
features of Yan Pritzker's memcache_fragments plugin, since they both
operate on the same methods.

== Installation

1. This plugin requires that the memcache-client gem is installed.
   # gem install memcache-client

2. Install the plugin OR the gem
   $ script/plugin install svn://rubyforge.org/var/svn/zventstools/projects/extended_fragment_cache
   - OR -
   # gem install extended_fragment_cache

== In-Process Memory Cache for Fragment Caching

Fragment caching has a slight inefficiency: it requires two lookups
within the fragment cache store to render a single cached fragment.
The two cache lookups are:

1. The read_fragment method invoked in a controller to determine if a
   fragment has already been cached, e.g.:
     unless read_fragment("/x/y/z")
       ...
     end
2. The cache helper method invoked in a view that renders the fragment, e.g.:
     <% cache("/x/y/z") do %>
       ...
     <% end %>

This plugin adds an in-process cache that saves the value retrieved from
the fragment cache store. The in-process cache has two benefits:

1. It cuts in half the number of read requests sent to the fragment cache
   store. This can result in a considerable saving for sites that make
   heavy use of memcached.
2. Retrieving the fragment from the in-process cache is faster than going
   to the fragment cache store. On a typical dev box, the savings are
   relatively small, but they would be noticeable in a standard production
   environment using memcached (where the fragment cache could be remote).

Peter Zaitsev has a great post comparing the latencies of different
cache types on the MySQL Performance blog:
http://www.mysqlperformanceblog.com/2006/08/09/cache-performance-comparison/

The plugin automatically installs a before_filter on the ApplicationController
that flushes the in-process memory cache at the start of every request.
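The read-through memoization described above can be sketched in plain Ruby. This is a stand-alone illustration only; `CountingStore` and `LocalFragmentCache` are hypothetical stand-ins, not the plugin's actual classes:

```ruby
# A stub backend that counts reads, standing in for the fragment cache store.
class CountingStore
  attr_reader :reads

  def initialize(data)
    @data = data
    @reads = 0
  end

  def read(key)
    @reads += 1
    @data[key]
  end
end

# The in-process layer: the first read for a key goes to the backend;
# repeat reads within the same request are served from a local hash.
class LocalFragmentCache
  def initialize(store)
    @store = store
    @local = {}
  end

  def read_fragment(key)
    content = @local[key]
    if content.nil?
      content = @store.read(key)
      @local[key] = content
    end
    content
  end

  # Called at the start of each request, like the plugin's before_filter.
  def clear!
    @local.clear
  end
end

store = CountingStore.new("/x/y/z" => "<p>cached</p>")
cache = LocalFragmentCache.new(store)
cache.read_fragment("/x/y/z")  # hits the backend
cache.read_fragment("/x/y/z")  # served from the local hash
puts store.reads               # prints 1
```

The controller's read_fragment and the view helper's cache call both resolve the same key, which is why memoizing the first lookup halves the backend traffic.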
== Content Interpolation for Fragment Caching

Many modern websites mix a lot of static and dynamic content. The more
dynamic content you have in your site, the harder it becomes to implement
caching. In an effort to scale, you've implemented fragment caching
all over the place. Fragment caching can be difficult if your static content
is interleaved with your dynamic content. Your views become littered
with cache calls, which not only hurts performance (multiple calls to the
cache backend) but also makes them harder to read. Content
interpolation allows you to substitute dynamic content into a cached fragment.

Take this example view:
  <% cache("/first_part") do %>
    This content is very expensive to generate, so let's fragment cache it.<br/>
  <% end %>
  <%= Time.now %><br/>
  <% cache("/second_part") do %>
    This content is also very expensive to generate.<br/>
  <% end %>

We can replace it with:
  <% cache("/only_part", {}, {"__TIME_GOES_HERE__" => Time.now}) do %>
    This content is very expensive to generate, so let's fragment cache it.<br/>
    __TIME_GOES_HERE__<br/>
    This content is also very expensive to generate.<br/>
  <% end %>

The latter is easier to read and induces less load on the cache backend.

We use content interpolation at Zvents to speed up our JSON methods.
Converting objects to their JSON representation is notoriously slow.
Unfortunately, in our application, each JSON request must return some unique
data. This makes caching tedious because 99% of the content returned is
static for a given object, but there's a little bit of dynamic data that
must be sent back in the response. Using content interpolation, we cache
the object in JSON format and substitute the dynamic values in the view.

This plugin integrates Yan Pritzker's extension that allows content to be
cached with an expiry time (from the memcache_fragments plugin), since they
both operate on the same method. This allows you to do things like:

  <% cache("/only_part", {:expire => 15.minutes}) do %>
    This content is very expensive to generate, so let's fragment cache it.
  <% end %>

== Bugs, Code and Contributing

There's a RubyForge project set up at:

http://rubyforge.org/projects/zventstools/

Anonymous SVN access:

$ svn checkout svn://rubyforge.org/var/svn/zventstools

Author: Tyler Kovacs (tyler dot kovacs at gmail dot com)
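The substitution step at the heart of content interpolation can be sketched in a few lines of plain Ruby. This is a minimal illustration independent of Rails; the method name `interpolate` is ours, not part of the plugin:

```ruby
# Minimal sketch of content interpolation: a cached fragment contains
# placeholder tokens, and dynamic values are substituted at render time.
# Mirrors the plugin's use of String#sub (first occurrence only).
def interpolate(fragment, interpolation = {})
  content = fragment.dup
  interpolation.each { |key, value| content.sub!(key.to_s, value.to_s) }
  content
end

cached = "Static content __TIME_GOES_HERE__ more static content"
puts interpolate(cached, "__TIME_GOES_HERE__" => "2006-11-03 12:00")
# prints: Static content 2006-11-03 12:00 more static content
```

Note that each placeholder is replaced only once per fragment, so placeholder names should be unique within a cached block.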
data/lib/extended_fragment_cache.rb
ADDED
@@ -0,0 +1,210 @@
# In-Process Memory Cache for Fragment Caching
#
# Fragment caching has a slight inefficiency: it requires two lookups
# within the fragment cache store to render a single cached fragment.
# The two cache lookups are:
#
# 1. The read_fragment method invoked in a controller to determine if a
#    fragment has already been cached, e.g.:
#      unless read_fragment("/x/y/z")
#        ...
#      end
# 2. The cache helper method invoked in a view that renders the fragment, e.g.:
#      <% cache("/x/y/z") do %>
#        ...
#      <% end %>
#
# This plugin adds an in-process cache that saves the value retrieved from
# the fragment cache store. The in-process cache has two benefits:
#
# 1. It cuts in half the number of read requests sent to the fragment cache
#    store. This can result in a considerable saving for sites that make
#    heavy use of memcached.
# 2. Retrieving the fragment from the in-process cache is faster than going
#    to the fragment cache store. On a typical dev box, the savings are
#    relatively small, but they would be noticeable in a standard production
#    environment using memcached (where the fragment cache could be remote).
#
# Peter Zaitsev has a great post comparing the latencies of different
# cache types on the MySQL Performance blog:
# http://www.mysqlperformanceblog.com/2006/08/09/cache-performance-comparison/
#
# The plugin automatically installs a before_filter on the
# ApplicationController that flushes the in-process memory cache at the
# start of every request.

module ActionController
  module Caching
    module ExtendedFragments
      # Add a local_fragment_cache object and accessor.
      def self.append_features(base) #:nodoc:
        super
        base.class_eval do
          @@local_fragment_cache = {}
          cattr_accessor :local_fragment_cache
        end

        # Add a before_filter to flush the local cache before every request.
        base.before_filter({}) do |c|
          @@local_fragment_cache.clear
        end
      end
    end

    module Fragments
      # Override read_fragment so that it checks the local_fragment_cache
      # object before going to the fragment_cache_store backend.
      def read_fragment(name, options = nil)
        return unless perform_caching

        key = fragment_cache_key(name)
        self.class.benchmark "Fragment read: #{key}" do
          content = ApplicationController.local_fragment_cache[key]
          if content.nil?
            content = fragment_cache_store.read(key, options)
            ApplicationController.local_fragment_cache[key] = content
          end
          content
        end
      end

      # Override write_fragment so that writes also populate the
      # local_fragment_cache object.
      def write_fragment(name, content, options = nil)
        return unless perform_caching

        key = fragment_cache_key(name)
        self.class.benchmark "Cached fragment: #{key}" do
          ApplicationController.local_fragment_cache[key] = content
          fragment_cache_store.write(key, content, options)
        end
        content
      end
    end
  end
end

# Content Interpolation for Fragment Caching
#
# Many modern websites mix a lot of static and dynamic content. The more
# dynamic content you have in your site, the harder it becomes to implement
# caching. In an effort to scale, you've implemented fragment caching
# all over the place. Fragment caching can be difficult if your static content
# is interleaved with your dynamic content. Your views become littered
# with cache calls, which not only hurts performance (multiple calls to the
# cache backend) but also makes them harder to read. Content
# interpolation allows you to substitute dynamic content into a cached
# fragment.
#
# Take this example view:
#   <% cache("/first_part") do %>
#     This content is very expensive to generate, so let's fragment cache it.<br/>
#   <% end %>
#   <%= Time.now %><br/>
#   <% cache("/second_part") do %>
#     This content is also very expensive to generate.<br/>
#   <% end %>
#
# We can replace it with:
#   <% cache("/only_part", {}, {"__TIME_GOES_HERE__" => Time.now}) do %>
#     This content is very expensive to generate, so let's fragment cache it.<br/>
#     __TIME_GOES_HERE__<br/>
#     This content is also very expensive to generate.<br/>
#   <% end %>
#
# The latter is easier to read and induces less load on the cache backend.
#
# We use content interpolation at Zvents to speed up our JSON methods.
# Converting objects to their JSON representation is notoriously slow.
# Unfortunately, in our application, each JSON request must return some unique
# data. This makes caching tedious because 99% of the content returned is
# static for a given object, but there's a little bit of dynamic data that
# must be sent back in the response. Using content interpolation, we cache
# the object in JSON format and substitute the dynamic values in the view.
#
# This plugin integrates Yan Pritzker's extension that allows content to be
# cached with an expiry time (from the memcache_fragments plugin), since they
# both operate on the same method. This allows you to do things like:
#
#   <% cache("/only_part", {:expire => 15.minutes}) do %>
#     This content is very expensive to generate, so let's fragment cache it.
#   <% end %>

module ActionView
  module Helpers
    # See ActionController::Caching::Fragments for usage instructions.
    module CacheHelper
      def cache(name = {}, options = nil, interpolation = {}, &block)
        begin
          content = @controller.cache_erb_fragment(block, name, options, interpolation) || ""
        rescue MemCache::MemCacheError, StandardError
          # On any cache error, fall back to empty content.
          content = ""
        end

        interpolation.keys.each { |k| content.sub!(k.to_s, interpolation[k].to_s) }
        content
      end
    end
  end
end

module ActionController
  module Caching
    module Fragments
      # Called by CacheHelper#cache
      def cache_erb_fragment(block, name = {}, options = nil, interpolation = {})
        unless perform_caching
          content = block.call
          interpolation.keys.each { |k| content.sub!(k.to_s, interpolation[k].to_s) }
          return content
        end

        buffer = eval("_erbout", block.binding)

        if cache = read_fragment(name, options)
          buffer.concat(cache)
        else
          pos = buffer.length
          block.call
          write_fragment(name, buffer[pos..-1], options)
          interpolation.keys.each do |k|
            # Use sub rather than sub!: sub! returns nil when the
            # placeholder is absent, which would clobber the buffer.
            buffer[pos..-1] = buffer[pos..-1].sub(k.to_s, interpolation[k].to_s)
          end
          buffer[pos..-1]
        end
      end
    end
  end
end

class MemCache
  # The read and write methods are required to get fragment caching to
  # work with the Robot Co-op memcache-client code.
  # http://rubyforge.org/projects/rctools/
  #
  # Lifted shamelessly from Yan Pritzker's memcache_fragments plugin.
  # This should really go back into the memcache-client core.
  # http://skwpspace.com/2006/08/19/rails-fragment-cache-with-memcached-client-and-time-based-expire-option/
  def read(key, options = nil)
    get(key)
  rescue MemCache::MemCacheError, StandardError
    ActiveRecord::Base.logger.error("MemCache Error: #{$!}")
    false
  end

  def write(key, content, options = nil)
    expiry = (options && options[:expire]) || 0
    set(key, content, expiry)
  rescue MemCache::MemCacheError, StandardError
    ActiveRecord::Base.logger.error("MemCache Error: #{$!}")
  end
end
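One pitfall worth noting in error handling like the above: in Ruby, a bare rescue matches StandardError and all of its subclasses, so a bare rescue listed first makes any later, more specific clause unreachable. A small stand-alone demonstration (DemoError is an arbitrary name for illustration):

```ruby
# A bare rescue catches StandardError and its subclasses, so ordering
# the specific clause first is what makes it reachable.
class DemoError < StandardError; end

def classify(err)
  begin
    raise err
  rescue DemoError
    :specific
  rescue
    :generic
  end
end

puts classify(DemoError.new)    # prints specific
puts classify(RuntimeError.new) # prints generic
```

If the clauses were swapped, classify(DemoError.new) would return :generic, since DemoError is a StandardError.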
metadata
ADDED
@@ -0,0 +1,52 @@
--- !ruby/object:Gem::Specification
rubygems_version: 0.8.10
specification_version: 1
name: extended_fragment_cache
version: !ruby/object:Gem::Version
  version: 0.0.1
date: 2006-11-03
summary: Integrates memcache-client compatibility code and time-based expiry from Yan Pritzker's memcache_fragments
require_paths:
- lib
email: tyler.kovacs@zvents.com
homepage: http://blog.zvents.com/2006/11/3/rails-plugin-extended-fragment-cache
rubyforge_project:
description:
autorequire: extended_fragment_cache
default_executable:
bindir: bin
has_rdoc: true
required_ruby_version: !ruby/object:Gem::Version::Requirement
  requirements:
  - - ">"
    - !ruby/object:Gem::Version
      version: 0.0.0
  version:
platform: ruby
authors:
- Tyler Kovacs
files:
- lib/extended_fragment_cache.rb
- README
test_files: []

rdoc_options: []

extra_rdoc_files:
- README
executables: []

extensions: []

requirements: []

dependencies:
- !ruby/object:Gem::Dependency
  name: memcache-client
  version_requirement:
  version_requirements: !ruby/object:Gem::Version::Requirement
    requirements:
    - - ">="
      - !ruby/object:Gem::Version
        version: 1.0.3
  version: