django-postpone-index 0.0.1__py3-none-any.whl → 0.0.3__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
1
1
  Metadata-Version: 2.4
2
2
  Name: django-postpone-index
3
- Version: 0.0.1
3
+ Version: 0.0.3
4
4
  Summary: Postpone index creation to provide Zero Downtime Migration feature
5
5
  Home-page: https://github.com/nnseva/django-postpone-index
6
6
  Author: Vsevolod Novikov
@@ -169,6 +169,43 @@ calling the `apply_postponed` migration command again. All not-applied indexes w
169
169
  **NOTICE** the `apply_postponed` management command doesn't implement any explicit locking. Avoid starting this
170
170
  command concurrently with itself or another `migrate` command on the same database.
171
171
 
172
+ ## Intermediate migration state
173
+
174
+ Unlike standard Django migrations, using the `postpone_index` package leaves the database in an *intermediate migration state*
175
+ after the `migrate` management command has finished:
176
+
177
+ - the new model structure is applied
178
+ - indexes to be deleted have already been deleted
179
+ - indexes to be created are *not created yet*
180
+
181
+ Be aware that if you introduce a new unique index or constraint, the database does not yet enforce uniqueness,
182
+ because the corresponding index has not been created at this point.
183
+
184
+ Your code now works as expected everywhere, except for code that relies on the new unique constraints introduced by the applied migrations.
185
+
186
+ Run the `apply_postponed run` management command to create these indexes and make them effective (see the sketch at the end of this section).
187
+
188
+ Any error raised during `apply_postponed run` execution is stored in the corresponding `PostponedSQL` model instance.
189
+
190
+ You can see erroneous entries using the `apply_postponed list` command; they are marked with `[E]` at the start of the line.
191
+
192
+ You can also see the error details using the format parameter, for example `apply_postponed list -f '... %(error)s'`.
193
+
194
+ The `apply_postponed run -x` command stops execution on the first error. The error is reported to the standard error and logging streams.
195
+
196
+ The `apply_postponed run` command (without the `-x` parameter) doesn't stop on errors, but outputs a warning to the log stream instead.
197
+
198
+ When an error happens, it is most probably caused by non-unique records. Fix the data and execute
199
+ `apply_postponed run` again to create the index.
200
+
201
+ After a successful `apply_postponed run` execution, the migration state is finalised and is equivalent to applying the migrations
202
+ without the `postpone_index` package at all.
203
+
204
+ The `apply_postponed run` management command marks all successfully executed `PostponedSQL` instances as `done`. You can see the `[X]` mark
205
+ at the start of the corresponding line produced by the `apply_postponed list` management command.
206
+
207
+ You can clean up `done` instances using the `apply_postponed cleanup` management command. This step is optional.
208
+
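For orientation, here is a minimal sketch of the cycle described above, using Django's `call_command` API (the equivalent `python manage.py apply_postponed ...` invocations work the same way). The subcommand names (`run`, `list`, `cleanup`) and options (`--database`, `--unapplied`) come from the `apply_postponed` command shipped in this release; everything else is a placeholder.

```python
# Minimal sketch, assuming a configured Django project with postpone_index installed.
from django.core.management import call_command

# 1. Apply migrations: index/constraint creation is recorded instead of executed.
call_command("migrate")

# 2. Inspect what is still postponed on the chosen database alias.
call_command("apply_postponed", "list", "--unapplied", "--database", "default")

# 3. Build the postponed indexes CONCURRENTLY; without -x, errors are logged
#    and the remaining jobs keep running.
call_command("apply_postponed", "run", "--database", "default")

# 4. Optionally drop the records already marked as done.
call_command("apply_postponed", "cleanup", "--database", "default")
```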
172
209
  ## Django testing
173
210
 
174
211
  Django migrates the test database before running tests. Always use the `POSTPONE_INDEX_IGNORE = True` setting to avoid postpone index
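The testing advice above amounts to a one-line settings override. A minimal sketch, assuming a dedicated test settings module (the file name and project name are illustrative; the setting name itself comes from the package code):

```python
# settings_test.py -- illustrative overlay over the project settings
from myproject.settings import *  # noqa: F401,F403  (placeholder project name)

# Create indexes immediately while migrating the test database,
# instead of postponing them to apply_postponed.
POSTPONE_INDEX_IGNORE = True
```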
@@ -0,0 +1,23 @@
1
+ postpone_index/__init__.py,sha256=s1NgdtOn7SO7scs-uAaq1134F7Luh-JjB5Ghra3D2fQ,84
2
+ postpone_index/_version.py,sha256=pBZsQt6tlL02W-ri--X_4JCubpAK7jjCSnOmUp_isjc,704
3
+ postpone_index/admin.py,sha256=TAZxaciQkEBNhReYulsU5VopPCyABtMLZvCSxk0CfYE,820
4
+ postpone_index/apps.py,sha256=1ap2HY49uXVXjY9UWK3___fjPe22wqFOSOnuH2kIAxs,200
5
+ postpone_index/migration_utils.py,sha256=02rm77mz_MAufl5vdqKPV1CAd1-3bRk6kSy4MIxKsEo,752
6
+ postpone_index/models.py,sha256=Aeow27GSHCkklLnUAGnRwwaCaLrQ11b2Sg7vIZIybGg,3580
7
+ postpone_index/testing_utils.py,sha256=ycxp7ovY6JceOvNpotK7OGAebhdHicXJ-0fWJmaq9fo,1224
8
+ postpone_index/utils.py,sha256=wYVPsQRs3LN-ZtvfSWrDDqFFp0OCkVxW9j7w9yqd47o,3823
9
+ postpone_index/contrib/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
10
+ postpone_index/contrib/postgis/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
11
+ postpone_index/contrib/postgis/base.py,sha256=a5h1k11BVSOflSSesTxObKAEYNWtisFLRbApzHeW3c0,289
12
+ postpone_index/contrib/postgis/schema.py,sha256=eantuNhFYWDngDnAeURJurpW58UztPnjTIySmUlXx7s,340
13
+ postpone_index/contrib/postgres/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
14
+ postpone_index/contrib/postgres/base.py,sha256=Xq90nJkxGSgR8Im9JPnFepeMb0-bur9sRjPpm3zMd6U,281
15
+ postpone_index/contrib/postgres/schema.py,sha256=qmJRxBCKfwWVrM7P2vSaaR4aAl_C6-2qMrKPiQQ9QpU,10383
16
+ postpone_index/management/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
17
+ postpone_index/management/commands/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
18
+ postpone_index/management/commands/apply_postponed.py,sha256=wVbw0ugUyPuYwUikGnn5MvP9l9OsEIZja9sb6nsMtOw,9107
19
+ postpone_index/sql/start.sql,sha256=44yFbZTjknjXUvl7MpO1uuPv-CVBsZcnUDuKc8mVj3k,871
20
+ django_postpone_index-0.0.3.dist-info/METADATA,sha256=aEo4ieeQpm90WE2j8ljOuz_F3m5pD1e0wzGBTBA4MIQ,12365
21
+ django_postpone_index-0.0.3.dist-info/WHEEL,sha256=wUyA8OaulRlbfwMtmQsvNngGrxQHAvkKcvRmdizlJi0,92
22
+ django_postpone_index-0.0.3.dist-info/top_level.txt,sha256=B6q_TICqApvruf_NJfnLFNLZLpxMLebQF6cPdKrWm5A,162
23
+ django_postpone_index-0.0.3.dist-info/RECORD,,
@@ -0,0 +1,6 @@
1
+ postpone_index
2
+ postpone_index/contrib
3
+ postpone_index/contrib/postgis
4
+ postpone_index/contrib/postgres
5
+ postpone_index/management
6
+ postpone_index/management/commands
@@ -28,7 +28,7 @@ version_tuple: VERSION_TUPLE
28
28
  commit_id: COMMIT_ID
29
29
  __commit_id__: COMMIT_ID
30
30
 
31
- __version__ = version = '0.0.1'
32
- __version_tuple__ = version_tuple = (0, 0, 1)
31
+ __version__ = version = '0.0.3'
32
+ __version_tuple__ = version_tuple = (0, 0, 3)
33
33
 
34
34
  __commit_id__ = commit_id = None
File without changes
File without changes
@@ -0,0 +1,11 @@
1
+ from django.contrib.gis.db.backends.postgis.base import (
2
+ DatabaseWrapper as _DatabaseWrapper,
3
+ )
4
+
5
+ from postpone_index.contrib.postgis.schema import DatabaseSchemaEditor
6
+
7
+
8
+ class DatabaseWrapper(_DatabaseWrapper):
9
+ """Database wrapper"""
10
+
11
+ SchemaEditorClass = DatabaseSchemaEditor
@@ -0,0 +1,11 @@
1
+ """Schema editor with concurrent index creation support."""
2
+
3
+ from django.contrib.gis.db.backends.postgis.schema import (
4
+ PostGISSchemaEditor as _DatabaseSchemaEditor,
5
+ )
6
+
7
+ from postpone_index.contrib.postgres.schema import DatabaseSchemaEditorMixin
8
+
9
+
10
+ class DatabaseSchemaEditor(DatabaseSchemaEditorMixin, _DatabaseSchemaEditor):
11
+ pass
File without changes
@@ -0,0 +1,11 @@
1
+ from django.db.backends.postgresql.base import (
2
+ DatabaseWrapper as _DatabaseWrapper,
3
+ )
4
+
5
+ from postpone_index.contrib.postgres.schema import DatabaseSchemaEditor
6
+
7
+
8
+ class DatabaseWrapper(_DatabaseWrapper):
9
+ """Database wrapper"""
10
+
11
+ SchemaEditorClass = DatabaseSchemaEditor
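The wrapper above follows Django's backend convention (a package whose `base` module exposes `DatabaseWrapper`), so a project would presumably activate it through the `ENGINE` setting. A hedged sketch, assuming `postpone_index.contrib.postgres` is meant as a drop-in replacement for `django.db.backends.postgresql` (and `postpone_index.contrib.postgis` for the PostGIS backend); defer to the package README if it prescribes a different setup:

```python
# settings.py (sketch) -- assumed engine path, not confirmed by this diff
DATABASES = {
    "default": {
        "ENGINE": "postpone_index.contrib.postgres",  # assumption: wraps django.db.backends.postgresql
        "NAME": "mydb",        # placeholder
        "USER": "myuser",      # placeholder
        "PASSWORD": "secret",  # placeholder
        "HOST": "localhost",
        "PORT": "5432",
    }
}
```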
@@ -0,0 +1,220 @@
1
+ """Schema editor with concurrent index creation support."""
2
+ import logging
3
+ import os
4
+ import os.path
5
+
6
+ from django.conf import settings
7
+ from django.db.backends.postgresql.schema import (
8
+ DatabaseSchemaEditor as _DatabaseSchemaEditor,
9
+ )
10
+
11
+ from postpone_index.utils import Utils
12
+
13
+
14
+ logger = logging.getLogger(__name__)
15
+ package_folder = os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
16
+
17
+
18
+ class DatabaseSchemaEditorMixin(Utils):
19
+ """Mixin to embed into DatabaseSchemaEditor"""
20
+ _base_tables_created = False
21
+
22
+ def _create_base_tables(self):
23
+ """
24
+ Create base package tables on the first access to this function
25
+ """
26
+ if self._base_tables_created:
27
+ return
28
+ # Avoid circular import
29
+ from postpone_index.models import PostponedSQL
30
+
31
+ PostponedSQL.objects.using(self.connection.alias)._create_base_tables()
32
+ self._base_tables_created = True
33
+
34
+ def _ignore(self):
35
+ """Check whether to ignore the extension"""
36
+ return getattr(settings, 'POSTPONE_INDEX_IGNORE', False) or getattr(self, '_postpone_index_ignore', False)
37
+
38
+ def execute(self, sql, params=()):
39
+ """
40
+ Overriden for execute processing with special handling for index operations.
41
+
42
+ All indexing operations are postponed by writing to a special table instead of execution.
43
+
44
+ They will be executed later in CONCURRENTLY manner.
45
+ """
46
+ if self._ignore():
47
+ return super().execute(sql, params)
48
+
49
+ self._create_base_tables()
50
+
51
+ # Avoid circular import
52
+ from postpone_index.models import PostponedSQL
53
+
54
+ if match := self._create_index_re.fullmatch(str(sql)):
55
+ # Postpone index creation
56
+ index_name = match.group('index_nameq') or match.group('index_name')
57
+ table_name = match.group('table_nameq') or match.group('table_name')
58
+ columns = ','.join(self._extract_column_names(match.group('rest')))
59
+ description = 'Create Index "%s" on "%s" (%s)' % (
60
+ index_name,
61
+ table_name,
62
+ columns
63
+ )
64
+ PostponedSQL.objects.using(self.connection.alias).create(
65
+ sql=str(sql),
66
+ description=description,
67
+ table=table_name,
68
+ db_index=index_name,
69
+ fields=columns
70
+ )
71
+ logger.info('[%s] Postponed %s', self.connection.alias, description)
72
+ elif match := self._add_constraint_re.fullmatch(str(sql)):
73
+ # Postpone constraint creation
74
+ index_name = match.group('index_nameq') or match.group('index_name')
75
+ table_name = match.group('table_nameq') or match.group('table_name')
76
+ columns = ','.join(self._extract_column_names(match.group('rest')))
77
+ description = 'Add Unique Constraint "%s" on "%s" (%s)' % (
78
+ index_name,
79
+ table_name,
80
+ columns,
81
+ )
82
+ PostponedSQL.objects.using(self.connection.alias).create(
83
+ sql=str(sql),
84
+ description=description,
85
+ table=table_name,
86
+ db_index=index_name,
87
+ fields=columns
88
+ )
89
+ logger.info('[%s] Postponed %s', self.connection.alias, description)
90
+ else:
91
+ if match := self._drop_index_re.fullmatch(str(sql)):
92
+ # Override to ignore inexistent index drop error
93
+ index_name = match.group('index_nameq') or match.group('index_name')
94
+ if PostponedSQL.objects.using(self.connection.alias).filter(db_index=index_name, done=False).delete()[0]:
95
+ logger.info('[%s] Removed Index %s from postponed', self.connection.alias, index_name)
96
+ # Avoid errors on introspecting inexistent index
97
+ try:
98
+ result = super().execute(sql, params)
99
+ logger.info('[%s] Drop index %s success', self.connection.alias, index_name)
+ return result
100
+ except Exception as ex:
101
+ logger.info('[%s] Drop index %s ignored: %s', self.connection.alias, index_name, ex)
102
+ elif match := self._drop_constraint_re.fullmatch(str(sql)):
103
+ # Override to ignore inexistent constraint drop error
104
+ index_name = match.group('index_nameq') or match.group('index_name')
105
+ table_name = match.group('table_nameq') or match.group('table_name')
106
+ if PostponedSQL.objects.using(self.connection.alias).filter(db_index=index_name, table=table_name, done=False).delete()[0]:
107
+ logger.info('[%s] Removed Constraint %s on %s from postponed', self.connection.alias, index_name, table_name)
108
+ # Avoid errors on introspecting inexistent constraint
109
+ try:
110
+ result = super().execute(sql, params)
111
+ logger.info('[%s] Drop constraint %s success', self.connection.alias, index_name)
+ return result
112
+ except Exception as ex:
113
+ logger.info('[%s] Drop constraint %s ignored: %s', self.connection.alias, index_name, ex)
114
+ elif match := self._drop_table_re.fullmatch(str(sql)):
115
+ # Table dropped cancels all postponed operations related
116
+ table_name = match.group('table_nameq') or match.group('table_name')
117
+ if PostponedSQL.objects.using(self.connection.alias).filter(table=table_name, done=False).delete()[0]:
118
+ logger.info('[%s] Removed all indexes on table %s from postponed', self.connection.alias, table_name)
119
+ return super().execute(sql, params)
120
+ elif match := self._drop_column_re.fullmatch(str(sql)):
121
+ # Table dropped cancels all postponed operations related
122
+ table_name = match.group('table_nameq') or match.group('table_name')
123
+ column_name = match.group('column_nameq') or match.group('column_name')
124
+ if PostponedSQL.objects.using(self.connection.alias).filter(
125
+ table=table_name, fields__contains='"%s"' % column_name, done=False
126
+ ).delete()[0]:
127
+ logger.info(
128
+ '[%s] Removed all indexes on column %s of table %s from postponed',
129
+ self.connection.alias, column_name, table_name
130
+ )
131
+ return super().execute(sql, params)
132
+ else:
133
+ logger.debug('[%s] No special statements: %s', self.connection.alias, sql)
134
+ return super().execute(sql, params)
135
+
136
+ def _alter_field(
137
+ self, model, old_field, new_field, old_type, new_type,
138
+ old_db_params, new_db_params, *args, **kw
139
+ ):
140
+ """Overriden for special field attributes processing"""
141
+
142
+ # The original code uses database introspection to decide whether the
143
+ # index to be removed is present. We override the function before the introspection calls
144
+ # to remove the postponed index creation jobs instead.
145
+
146
+ if self._ignore():
147
+ return super()._alter_field(
148
+ model, old_field, new_field, old_type, new_type,
149
+ old_db_params, new_db_params, *args, **kw
150
+ )
151
+
152
+ self._create_base_tables()
153
+
154
+ # Avoid circular import
155
+ from postpone_index.models import PostponedSQL
156
+
157
+ suffixes = []
158
+
159
+ if old_field.unique and not new_field.unique:
160
+ # Delete unique field attribute cancels all postponed operations related
161
+ suffixes.append('_uniq')
162
+
163
+ if old_field.db_index and not new_field.db_index:
164
+ # Delete db_index field attribute cancels all postponed operations related
165
+ suffixes.append('')
166
+ suffixes.append('_like')
167
+
168
+ if old_db_params['check'] != new_db_params['check'] and old_db_params['check']:
169
+ # Delete check field attribute cancels all postponed operations related
170
+ suffixes.append('_check')
171
+
172
+ for suffix in suffixes:
173
+ index_name = self._create_index_name(
174
+ model._meta.db_table,
175
+ [old_field.column],
176
+ suffix
177
+ )
178
+ if PostponedSQL.objects.using(self.connection.alias).filter(table=model._meta.db_table, db_index=index_name, done=False).delete()[0]:
179
+ logger.info('[%s] Removed single index %s on %s from postponed', self.connection.alias, index_name, model._meta.db_table)
180
+
181
+ return super()._alter_field(
182
+ model, old_field, new_field, old_type, new_type,
183
+ old_db_params, new_db_params, *args, **kw
184
+ )
185
+
186
+ def _delete_composed_index(self, model, fields, constraint_kwargs, sql):
187
+ """Overriden for special composed index processing"""
188
+
189
+ # The original code uses database introspection to decide whether the
190
+ # index to be removed is present. We override the function before the introspection calls
191
+ # to remove the postponed index creation jobs instead.
192
+
193
+ if self._ignore():
194
+ return super()._delete_composed_index(model, fields, constraint_kwargs, sql)
195
+
196
+ self._create_base_tables()
197
+
198
+ # Avoid circular import
199
+ from postpone_index.models import PostponedSQL
200
+
201
+ # Delete composed index cancels all postponed operations related
202
+ index_name = self._create_index_name(
203
+ model._meta.db_table,
204
+ [model._meta.get_field(f).column for f in fields],
205
+ '_uniq' if (constraint_kwargs or {}).get('unique', False) else '_idx'
206
+ )
207
+
208
+ if PostponedSQL.objects.using(self.connection.alias).filter(table=model._meta.db_table, db_index=index_name, done=False).delete()[0]:
209
+ logger.info('[%s] Removed composed index %s on %s from postponed', self.connection.alias, index_name, model._meta.db_table)
210
+
211
+ # Avoid errors on introspecting inexistent compound index
212
+ try:
213
+ super()._delete_composed_index(model, fields, constraint_kwargs, sql)
214
+ logger.info('[%s] Delete composed %s on %s success', self.connection.alias, index_name, model._meta.db_table)
215
+ except Exception as ex:
216
+ logger.info('[%s] Delete composed %s on %s ignored: %s', self.connection.alias, index_name, model._meta.db_table, ex)
217
+
218
+
219
+ class DatabaseSchemaEditor(DatabaseSchemaEditorMixin, _DatabaseSchemaEditor):
220
+ pass
File without changes
File without changes
@@ -0,0 +1,251 @@
1
+ """
2
+ The apply_postponed command applies collected postponed index and constraint creation
3
+ in CONCURRENTLY manner.
4
+ """
5
+ import argparse
6
+ import logging
7
+ import sys
8
+
9
+ from django.core.management.base import BaseCommand
10
+ from django.db import connections
11
+
12
+ from postpone_index.models import PostponedSQL
13
+ from postpone_index.utils import ObjMap, Utils
14
+
15
+
16
+ logger = logging.getLogger(__name__)
17
+
18
+
19
+ class Command(Utils, BaseCommand):
20
+ __doc__ = __doc__
21
+ help = __doc__
22
+
23
+ _short_format = '%(d)s %(sql)s'
24
+ _descr_format = '%(i)04d: %(d)s %(description)s'
25
+ _long_format = '%(i)04d: %(d)s %(sql)s'
26
+
27
+ class formatter_class(argparse.RawTextHelpFormatter):
28
+ pass
29
+
30
+ def add_arguments(self, parser):
31
+ subparsers = parser.add_subparsers(
32
+ title='Commands',
33
+ dest='command',
34
+ help='Use --help with command to see the command-specific help',
35
+ )
36
+ show = subparsers.add_parser(
37
+ name='list',
38
+ formatter_class=self.formatter_class,
39
+ help='Show applied and not yet applied postponed index creation jobs'
40
+ )
41
+ show.add_argument(
42
+ '-db', '--database',
43
+ dest='database',
44
+ default='default',
45
+ help='Database alias to be applied, default is %(default)s'
46
+ )
47
+ show.add_argument(
48
+ '-r', '--reversed',
49
+ dest='reversed',
50
+ action='store_true',
51
+ help='Show records in reversed order'
52
+ )
53
+ show.add_argument(
54
+ '-u', '--unapplied',
55
+ dest='unapplied',
56
+ action='store_true',
57
+ help='Show only unapplied records'
58
+ )
59
+ show.add_argument(
60
+ '-e', '--erroneous',
61
+ dest='erroneous',
62
+ action='store_true',
63
+ help='Show only erroneous records having non-null error'
64
+ )
65
+ show.add_argument(
66
+ '-f', '--format',
67
+ dest='format',
68
+ default='s',
69
+ help=f"""Output format. Use either:
70
+ - `s` for short SQL-only format `{self._short_format}` (default)
71
+ - `d` for enumerated descriptions format `{self._descr_format}`
72
+ - `l` for enumerated SQL format `{self._long_format}`
73
+ - %-style named format
74
+ Use the PostponedSQL field names for %-style format.
75
+ Additional names are:
76
+ - `i` for enumeration index
77
+ - `d` for the `done` attribute in form of `+` or `-`""".replace('%', '%%')
78
+ )
79
+ run = subparsers.add_parser(
80
+ name='run',
81
+ formatter_class=self.formatter_class,
82
+ help='Apply not yet applied postponed index creation jobs'
83
+ )
84
+ run.add_argument(
85
+ '-x', '--exception',
86
+ dest='exception',
87
+ action='store_true',
88
+ help='Immediately stop on any exception. Continues with the next job by default'
89
+ )
90
+ run.add_argument(
91
+ '-db', '--database',
92
+ dest='database',
93
+ default='default',
94
+ help='Database alias to be applied, default is %(default)s'
95
+ )
96
+ cleanup = subparsers.add_parser(
97
+ name='cleanup',
98
+ formatter_class=self.formatter_class,
99
+ help='Cleanup stored applied index creation jobs'
100
+ )
101
+ cleanup.add_argument(
102
+ '--all',
103
+ dest='all',
104
+ action='store_true',
105
+ help='Cleanup all postponed SQL including not yet applied. Be careful!'
106
+ )
107
+ cleanup.add_argument(
108
+ '-db', '--database',
109
+ dest='database',
110
+ default='default',
111
+ help='Database alias to be applied, default is %(default)s'
112
+ )
113
+
114
+ def handle(self, *args, **options):
115
+ """Handling commands"""
116
+ if not options['command']:
117
+ return self.print_help(sys.argv[0], sys.argv[1])
118
+ return getattr(self, '_handle_%s' % options['command'])(*args, **options)
119
+
120
+ def _handle_list(self, *args, **options):
121
+ """Handle list command"""
122
+ try:
123
+ PostponedSQL.objects.using(options['database']).all().first()
124
+ except Exception:
125
+ # The table has not been created
126
+ return
127
+
128
+ q = PostponedSQL.objects.using(options['database']).all()
129
+ q = q.order_by('-ts' if options['reversed'] else 'ts')
130
+ if options['unapplied']:
131
+ q = q.filter(done=False)
132
+ if options['erroneous']:
133
+ q = q.filter(error__isnull=False)
134
+ match options['format']:
135
+ case 's':
136
+ format = self._short_format
137
+ case 'd':
138
+ format = self._descr_format
139
+ case 'l':
140
+ format = self._long_format
141
+ case _:
142
+ format = options['format']
143
+ for i, j in enumerate(q):
144
+ print(format % ObjMap(j, {
145
+ 'i': i,
146
+ }))
147
+
148
+ def _handle_run(self, *args, **options):
149
+ """Handle run command"""
150
+ try:
151
+ PostponedSQL.objects.using(options['database']).all().first()
152
+ except Exception:
153
+ # The table has not been created
154
+ return
155
+ q = PostponedSQL.objects.using(options['database']).filter(done=False).order_by('ts')
156
+ for j in q:
157
+ try:
158
+ j.error = None
159
+ j.done = False
160
+ self._handle_run_job(j, *args, **options)
161
+ j.save(update_fields=['error', 'done'])
162
+ except Exception as ex:
163
+ logger.warning('Error on running job: %s', ex)
164
+ if not j.error:
165
+ j.error = 'Exception: %s' % ex
166
+ j.save(update_fields=['error', 'done'])
167
+ if options['exception']:
168
+ raise
169
+
170
+ def _handle_cleanup(self, *args, **options):
171
+ """Handle cleanup command"""
172
+ try:
173
+ PostponedSQL.objects.using(options['database']).all().first()
174
+ except Exception:
175
+ # The table has not been created
176
+ return
177
+ q = PostponedSQL.objects.using(options['database'])
178
+ if not options['all']:
179
+ q = q.filter(done=True)
180
+ q.delete()
181
+
182
+ def _handle_run_job(self, job, *args, **options):
183
+ """Handle a single job for the run command"""
184
+ if match := self._create_index_re.fullmatch(job.sql):
185
+ unique = match.group('unique') or ''
186
+ index_name = match.group('index_nameq') or match.group('index_name')
187
+ table_name = match.group('table_nameq') or match.group('table_name')
188
+ rest = match.group('rest')
189
+ sql = 'DROP INDEX IF EXISTS "%s"' % (
190
+ index_name,
191
+ )
192
+ logger.info('[%s] SQL: %s', options['database'], sql)
193
+ cursor = connections[options['database']].cursor()
194
+ cursor.execute(sql)
195
+ cursor.close()
196
+ sql = 'CREATE %sINDEX CONCURRENTLY "%s" ON "%s" %s' % (
197
+ unique,
198
+ index_name,
199
+ table_name,
200
+ rest
201
+ )
202
+ logger.info('[%s] SQL: %s', options['database'], sql)
203
+ cursor = connections[options['database']].cursor()
204
+ cursor.execute(sql)
205
+ cursor.close()
206
+ elif match := self._add_constraint_re.fullmatch(job.sql):
207
+ unique = 'UNIQUE '
208
+ index_name = match.group('index_nameq') or match.group('index_name')
209
+ table_name = match.group('table_nameq') or match.group('table_name')
210
+ rest = match.group('rest')
211
+ sql = 'ALTER TABLE "%s" DROP CONSTRAINT IF EXISTS "%s"' % (
212
+ table_name,
213
+ index_name,
214
+ )
215
+ logger.info('[%s] SQL: %s', options['database'], sql)
216
+ cursor = connections[options['database']].cursor()
217
+ cursor.execute(sql)
218
+ cursor.close()
219
+ sql = 'DROP INDEX IF EXISTS "%s"' % (
220
+ index_name,
221
+ )
222
+ logger.info('[%s] SQL: %s', options['database'], sql)
223
+ cursor = connections[options['database']].cursor()
224
+ cursor.execute(sql)
225
+ cursor.close()
226
+ sql = 'CREATE %sINDEX CONCURRENTLY "%s" ON "%s" %s' % (
227
+ unique,
228
+ index_name,
229
+ table_name,
230
+ rest
231
+ )
232
+ logger.info('[%s] SQL: %s', options['database'], sql)
233
+ cursor = connections[options['database']].cursor()
234
+ cursor.execute(sql)
235
+ cursor.close()
236
+ sql = 'ALTER TABLE "%s" ADD CONSTRAINT "%s" UNIQUE USING INDEX "%s"' % (
237
+ table_name,
238
+ index_name,
239
+ index_name,
240
+ )
241
+ logger.info('[%s] SQL: %s', options['database'], sql)
242
+ cursor = connections[options['database']].cursor()
243
+ cursor.execute(sql)
244
+ cursor.close()
245
+ else:
246
+ logger.info('[%s] Unrecognized: %s', options['database'], job.sql)
247
+ cursor = connections[options['database']].cursor()
248
+ cursor.execute(job.sql)
249
+ cursor.close()
250
+ job.error = None
251
+ job.done = True
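As a usage note for the command above: the `list` subcommand's `%`-style format can reference any `PostponedSQL` field plus the `d` status mark and `i` enumeration index, which is convenient for digging into failures. A small sketch using only the options defined in the parser above:

```python
from django.core.management import call_command

# Show only entries that recorded an error, including the stored message.
call_command(
    "apply_postponed", "list",
    "--erroneous",
    "--format", "%(d)s %(description)s: %(error)s",
)

# Re-run the postponed jobs, stopping at the first failure instead of
# continuing with the next one.
call_command("apply_postponed", "run", "-x")
```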
postpone_index/models.py CHANGED
@@ -1,12 +1,78 @@
1
+ import logging
2
+ import os
1
3
  import time
2
4
 
3
5
  from unlimited_char.fields import CharField
4
6
 
5
- from django.db import models
7
+ from django.db import models, connections
6
8
  from django.utils.translation import gettext_lazy as _
7
9
 
10
+ logger = logging.getLogger(__name__)
11
+ package_folder = os.path.dirname(os.path.abspath(__file__))
12
+
13
+
14
+ class PostponedSQLQuerySet(models.QuerySet):
15
+ """
16
+ QuerySet for PostponedSQL model.
17
+
18
+ Redefined for admin view when table is absent.
19
+ """
20
+
21
+ def _create_base_tables(self):
22
+ """
23
+ Create base package table if absent
24
+ """
25
+ if self._is_present():
26
+ logger.debug('[%s] The PostponedSQL storage already exists', self.db)
27
+ return
28
+ logger.info('[%s] The PostponedSQL storage is absent, creating', self.db)
29
+ with open(os.path.join(package_folder, 'sql/start.sql')) as f:
30
+ sql = f.read()
31
+ cursor = connections[self.db].cursor()
32
+ cursor.execute(sql)
33
+
34
+ def count(self):
35
+ """
36
+ Count only if table is present
37
+ """
38
+ if not self._is_present():
39
+ return 0
40
+ return super().count()
41
+
42
+ def _is_present(self):
43
+ """
44
+ Check if table is present
45
+ """
46
+ cursor = connections[self.db].cursor()
47
+ cursor.execute(
48
+ """
49
+ SELECT 1 FROM information_schema.tables
50
+ WHERE table_schema = 'public' AND table_name = '%s' limit 1
51
+ """ % PostponedSQL._meta.db_table
52
+ )
53
+ return bool(cursor.fetchall())
54
+
55
+
56
+ class PostponedSQLManager(models.Manager):
57
+ """
58
+ Manager for PostponedSQL model.
59
+
60
+ Redefined for admin view when table is absent.
61
+ """
62
+
63
+ def get_queryset(self):
64
+ """
65
+ Get custom QuerySet
66
+ """
67
+ qs = PostponedSQLQuerySet(self.model, using=self._db)
68
+ if not qs._is_present():
69
+ return qs.none()
70
+ return qs
71
+
8
72
 
9
73
  class PostponedSQL(models.Model):
74
+ """Model to store postponed SQL commands"""
75
+
10
76
  ts = models.BigIntegerField(
11
77
  primary_key=True, editable=False,
12
78
  default=time.time_ns,
@@ -47,6 +113,8 @@ class PostponedSQL(models.Model):
47
113
  help_text=_('Last error reported when tried to apply')
48
114
  )
49
115
 
116
+ objects = PostponedSQLManager()
117
+
50
118
  @property
51
119
  def d(self):
52
120
  """Status mark"""
@@ -0,0 +1,14 @@
1
+ CREATE TABLE IF NOT EXISTS public.postpone_index_postponedsql (
2
+ ts bigint NOT NULL PRIMARY KEY,
3
+ description character varying NOT NULL,
4
+ "sql" text NOT NULL,
5
+ "table" character varying,
6
+ db_index character varying,
7
+ fields character varying,
8
+ "done" boolean NOT NULL DEFAULT FALSE,
9
+ "error" text
10
+ );
11
+ CREATE INDEX IF NOT EXISTS postpone_index_postponedsql_db_index ON public.postpone_index_postponedsql USING btree (db_index);
12
+ CREATE INDEX IF NOT EXISTS postpone_index_postponedsql_db_index_like ON public.postpone_index_postponedsql USING btree (db_index varchar_pattern_ops);
13
+ CREATE INDEX IF NOT EXISTS postpone_index_postponedsql_table ON public.postpone_index_postponedsql USING btree ("table");
14
+ CREATE INDEX IF NOT EXISTS postpone_index_postponedsql_table_like ON public.postpone_index_postponedsql USING btree ("table" varchar_pattern_ops);
@@ -1,12 +0,0 @@
1
- postpone_index/__init__.py,sha256=s1NgdtOn7SO7scs-uAaq1134F7Luh-JjB5Ghra3D2fQ,84
2
- postpone_index/_version.py,sha256=qf6R-J7-UyuABBo8c0HgaquJ8bejVbf07HodXgwAwgQ,704
3
- postpone_index/admin.py,sha256=TAZxaciQkEBNhReYulsU5VopPCyABtMLZvCSxk0CfYE,820
4
- postpone_index/apps.py,sha256=1ap2HY49uXVXjY9UWK3___fjPe22wqFOSOnuH2kIAxs,200
5
- postpone_index/migration_utils.py,sha256=02rm77mz_MAufl5vdqKPV1CAd1-3bRk6kSy4MIxKsEo,752
6
- postpone_index/models.py,sha256=jMsA5hdiL8Xgl-hq7bgVl5NUX6LXlSYpMuX9pD68BbU,1771
7
- postpone_index/testing_utils.py,sha256=ycxp7ovY6JceOvNpotK7OGAebhdHicXJ-0fWJmaq9fo,1224
8
- postpone_index/utils.py,sha256=wYVPsQRs3LN-ZtvfSWrDDqFFp0OCkVxW9j7w9yqd47o,3823
9
- django_postpone_index-0.0.1.dist-info/METADATA,sha256=f6r6FaLnag7wQ4NjfFRPhMDZAffRmXtXK7Bg7o4viLU,10415
10
- django_postpone_index-0.0.1.dist-info/WHEEL,sha256=wUyA8OaulRlbfwMtmQsvNngGrxQHAvkKcvRmdizlJi0,92
11
- django_postpone_index-0.0.1.dist-info/top_level.txt,sha256=SzF44QR8Fr9ArKXZPY7FSJUlTBKYo_dPNHn2hbqKOrc,15
12
- django_postpone_index-0.0.1.dist-info/RECORD,,
@@ -1 +0,0 @@
1
- postpone_index