bookmarks also don't work, since the URL doesn't stay the same
when visiting from the main page: http://www.tsumino.com/Book/Info/36176/astolfo-risei-jouhatsu-choukyou-
when exiting from the reader: http://www.tsumino.com/Book/Info/36176/astolfo-risei-jouhatsu-choukyou
DL http://www.tsumino.com/Book/Info/36623/melo-melo-melon-pan-melomelo-melon-bread-
--> link: www.tsumino.com/Download/Process?token=J72DzE4fQaizgA4gw%2FlQP5GUh4F2aEt21Do4DsDrF4gYim2x5r9Q9yjr38ofa5BV
valid for >30min, but definitely <90min, even without login (cookies)
DL http://www.tsumino.com/Book/Info/36445/boku-no-shiawase-na-shumi-kouhen-
---> http://www.tsumino.com/Download/Process?token=gpxr1ZlDVzp1xlAwB95zNcNdtla9im2oFf0tRPGR63lyho0JnS5WdNLuon6j49As
token -> JWT (JSON Web Token)?
info -> https://stackoverflow.com/questions/29452031/how-to-handle-file-downloads-with-jwt-based-authentication
- https://superuser.com/questions/27243/how-to-find-out-the-real-download-url-on-download-sites-that-use-redirects
https://stackoverflow.com/questions/24501358/how-to-set-a-header-for-a-http-get-request-and-trigger-file-download
probable flow -> tsumino as logged-in user -> reCaptcha -> token -> tsumino -> dl token generated -> dl
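quick structural check (my own sketch, not site code): JWTs are three base64url segments
separated by dots (header.payload.signature); the tokens above contain no dots, so probably not JWTs
import base64
import json

def looks_like_jwt(token):
    parts = token.split(".")
    if len(parts) != 3:
        return False
    try:
        # base64url comes without padding -> re-add it before decoding the header
        header = parts[0] + "=" * (-len(parts[0]) % 4)
        json.loads(base64.urlsafe_b64decode(header))
        return True
    except ValueError:
        return False

print(looks_like_jwt("J72DzE4fQaizgA4gw/lQP5GUh4F2aEt21Do4DsDrF4gYim2x5r9Q9yjr38ofa5BV"))  # False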
__________________________________________________________________________________________________________
#SQL(ite) Database Design for Tagging System
Tagging is a many-to-many (also "m:n") relation, since (here) a book can use several tags and a tag can be used
by several books.
(Besides that there is one-to-one (1:1), i.e. if (here) a book could only have one tag and a tag could only belong to one book,
and there is one-to-many (1:m), i.e. if (here) a book could have several tags, but these could only be used by that one
book)
##One Table (MySQLicious)
The simplest solution would be to put everything into one table
__books__
id title tags
0 Mama Para Straight Sex, Vanilla, ...
1 Kei Land Exhibitionism, Ahegao, ..
This would even still be performant (not really relevant in my case anyway, since there is only one user on a local SQLite DB, but I'm doing this
for the learning effect), if the number of tags were limited to e.g. 5 tags (stackoverflow used it like that for a long time)
To search by tags you would then either have to use SQLite's pattern matching with "LIKE '%, Straight Sex,%'"
(% -> wildcard for 0 or more chars, _ -> any one char) or load everything into RAM (id and tags) and then filter
the tags in code yourself.
MySQL query for intersection (AND):
SELECT *
FROM `books`
WHERE tags LIKE "%, Straight Sex,%"
AND tags LIKE "%, Vanilla,%"
==> quickly becomes unperformant, and questions like "What is the most popular tag? What does this tag mean? etc." can
only be answered awkwardly, if at all
==> in return, the queries are simple
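minimal sketch of that LIKE-based search in Python/sqlite3 (table layout as in the example
above; assumes tags are stored with leading/trailing separators like ', tag1, tag2,'):
import sqlite3

def books_with_all_tags(con, tags):
    conds = " AND ".join("tags LIKE ?" for _ in tags)  # one LIKE per tag, ANDed
    params = ["%, {},%".format(t) for t in tags]
    return con.execute("SELECT id, title FROM books WHERE " + conds, params).fetchall()

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT, tags TEXT)")
con.execute("INSERT INTO books VALUES (0, 'Mama Para', ', Straight Sex, Vanilla,')")
print(books_with_all_tags(con, ["Straight Sex", "Vanilla"]))  # [(0, 'Mama Para')]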
For the tags this violates 1NF (composite, set-valued or nested value domains (i.e. relation-valued attribute domains) are not allowed) of database normalization
-> """Wikipedia: Normalization of a relational data schema (table structure) means splitting attributes
(table columns) into multiple relations (tables) according to the normalization rules (see below), so that a form
results which no longer contains any avoidable redundancies.
A conceptual schema that contains data redundancies can cause changes to the database implemented with it to update
the multiply stored data only partially and incompletely instead of consistently, which can leave the data obsolete or
contradictory. One speaks of anomalies occurring. In addition, storing the same data multiple times needlessly occupies
storage space. To prevent redundancy, such tables are normalized.
There are different degrees to which a database schema can be safe from anomalies. Depending on the degree, one says
it is in first, second, third etc. normal form. These normal forms are defined by certain formal requirements on the
schema."""
##Two Tables (Scuttle)
An additional table that holds the bookid (FOREIGN KEY) and one tag per row:
(f) -> foreign key
__books__
id title
0 Mama Para
1 Kei Land
__tags__
id bookid(f) tag
0 0 Straight Sex
1 0 Vanilla
2 0 ...
3 1 Exhibitionism
4 1 Ahegao
5 1 ...
Or instead of an id you use a composite key of bookid and tag as the PRIMARY KEY
->
Code in SQLite for a composite key:
CREATE TABLE something (
column1 INTEGER NOT NULL,
column2 INTEGER NOT NULL,
value,
PRIMARY KEY ( column1, column2)
);
Doesn't Primary Key impose a NOT NULL?
-> In standard SQL, yes. In SQLite, NULL is allowed in primary keys. This answer emphasizes that if you want more standard
behavior, you need to add the NOT NULL yourself. My answer is just the very basic syntax for a multi-column primary key
Here it is already much easier to find the most popular tag etc., but of course data gets repeated (here only the tag name).
As soon as you want to add extra columns like tag descriptions etc., the amount of redundant (repeated/duplicated)
data naturally grows strongly.
+ You can use more tags than in variant 1.
MySQL query for intersection (AND) --> src: http://howto.philippkeller.com/2005/04/24/Tags-Database-schemas/
SELECT b.*
FROM books b, tags c
WHERE c.bookid = b.id
AND (c.tag IN ('Straight Sex', 'Femdom', 'Ahegao'))
GROUP BY b.id
HAVING COUNT( b.id )=3
First, all book-tag combinations are searched where the tag is 'Straight Sex', 'Femdom' or 'Ahegao'
(c.tag IN ('Straight Sex', 'Femdom', 'Ahegao')), then only the books that have all three searched-for
tags are taken into account (HAVING COUNT(b.id)=3)
##Three Tables (Toxi)
Uses a bridge(/junction/intersection/mapping/...) table to solve the m:n problem.
(f) -> foreign key
__books__
id title
0 Mama Para
1 Kei Land
__tagmap__
(id) bookid(f) tag_id(f)
0 0 0
1 0 1
2 0 ...
3 1 2
4 1 3
5 1 ...
^ OR use (bookid, tag_id) as composite key
__tag__
tag_id name
0 Straight Sex
1 Vanilla
2 Exhibitionism
3 Ahegao
The most strongly normalized (3NF) of the three variants.
MySQL query for intersection (AND):
SELECT b.*
FROM tagmap bt, books b, tag t
WHERE bt.tag_id = t.tag_id
AND (t.name IN ('Straight Sex', 'Femdom', 'Ahegao'))
AND b.id = bt.bookid
GROUP BY b.id
HAVING COUNT( b.id )=3
Easy to add additional info for tags.
When deleting or modifying entries in books you have to make sure that tagmap and tag get adjusted accordingly as well
-> otherwise tag orphans (rows that no longer belong to anything); see the cascade sketch below
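minimal sketch of the Toxi schema in sqlite3 with cascading foreign keys, so deleting a book
can't leave tagmap orphans behind (names as in the example above):
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # FK enforcement is OFF by default in SQLite!
con.executescript("""
CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE tag (tag_id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE tagmap (
    bookid INTEGER NOT NULL REFERENCES books(id) ON DELETE CASCADE,
    tag_id INTEGER NOT NULL REFERENCES tag(tag_id) ON DELETE CASCADE,
    PRIMARY KEY (bookid, tag_id)
);
""")
# deleting from books now also removes the matching tagmap rows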
"""
If you want to have more complicated queries like (bookmarks OR bookmark) AND (webservice or WS) AND NOT (semweb or semanticweb)
the queries tend to become very complicated. In these cases I suggest the following query/computation process:
1. Run a query for each tag appearing in your “tag-query”:
SELECT b.id FROM tagmap bt, bookmark b, tag t
WHERE bt.tag_id = t.tag_id AND b.id = bt.bookmark_id AND t.name = "semweb"
2. Put each id-set from the result into an array (that is: in your favourite coding language). You could cache these arrays if you want.
3. Constrain the arrays with union or intersection or whatever.
In this way, you can also do queries like (del.icio.us|delicious)+(semweb|semantic_web)-search. This type of queries (that is:
the brackets) cannot be done by using the denormalized MySQLicious solution.
"""
But INSERTs get noticeably slower the more tables you use; however:
"I guess it doesn’t really make sense to base your decision, which schema to take on the time for an insert: Bookmark inserts are
about 100 times as fast as the intersection queries"
- src: http://howto.philippkeller.com/2005/06/19/Tagsystems-performance-tests/
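sketch of the query/computation process quoted above, assuming the Toxi schema and an open
sqlite3 connection con; the per-tag id sets get combined with Python set operators:
def ids_for_tag(con, name):
    rows = con.execute(
        "SELECT b.id FROM tagmap bt, books b, tag t "
        "WHERE bt.tag_id = t.tag_id AND b.id = bt.bookid AND t.name = ?",
        (name,))
    return {r[0] for r in rows}

# (bookmarks OR bookmark) AND NOT (semweb OR semanticweb) then becomes:
# (ids_for_tag(con, "bookmarks") | ids_for_tag(con, "bookmark"))
#     - (ids_for_tag(con, "semweb") | ids_for_tag(con, "semanticweb"))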
Although variant 1 or 2 would be enough for me, I'm going with variant 3 (for the learning effect, since it is
effectively the most complicated setup)
__________________________________________________________________________________________________________
added favorite col in the table after I was already using it
added col with SQL:
ALTER TABLE Tsumino ADD COLUMN favorite INTEGER
for the db in the dev folder I had used SQLiteBrowser, but the second time I saw that this can
cause problems, since it doesn't detect the data type of the DATETIME col (specifically for python)
to be safe (even though no problems so far), better to work with the SQL statement
__________________________________________________________________________________________________________
SQL to populate the new column based on the tags (bridge table)
UPDATE Tsumino SET favorite = (Tsumino.id IN (SELECT Tsumino.id
FROM BookTags bt, Tsumino, Tags
WHERE bt.tag_id = Tags.tag_id
AND (Tags.name IN ('li_best'))
AND Tsumino.id = bt.book_id
GROUP BY Tsumino.id
HAVING COUNT( Tsumino.id ) = 1))
for downloaded
UPDATE Tsumino SET downloaded = (Tsumino.id IN (SELECT Tsumino.id
FROM BookTags bt, Tsumino, Tags
WHERE bt.tag_id = Tags.tag_id
AND (Tags.name IN ('li_downloaded'))
AND Tsumino.id = bt.book_id
GROUP BY Tsumino.id
HAVING COUNT( Tsumino.id ) = 1))
__________________________________________________________________________________________________________
you cannot have two different operations (INSERT, UPDATE) for the same trigger.
That is a syntax error, as the syntax diagram for SQLite triggers does not allow IF clauses or CASE WHEN constructs.
But you can achieve the same effect by defining two or three triggers that use the WHEN condition, see http://sqlite.org/lang_createtrigger.html
Suppose you use an UPDATE statement to update 10 rows in a table; the trigger associated with the table is fired 10
times. This trigger is called a FOR EACH ROW trigger. If the trigger associated with the table is fired one time, we call
this trigger a FOR EACH STATEMENT trigger.
At this time SQLite supports only FOR EACH ROW triggers, not FOR EACH STATEMENT triggers. FOR EACH ROW implies that the
SQL statements specified in the trigger may be executed (depending on the WHEN clause) for each database row being inserted,
updated or deleted by the statement causing the trigger to fire.
-> too many triggers decrease performance
=> rather only use add_tags to mark sth as downloaded (even though just using the trigger for updating the tags when
downloaded gets updated would probably be faster, since there is no subquery (SELECT) in the WHEN clause (the subquery
in the statement of the triggers below won't get executed if the WHEN clause doesn't apply))
!!!!!!!!these are untested!!!!!!!!
CREATE TRIGGER IF NOT EXISTS update_tags_on_downloaded_1
AFTER UPDATE ON Tsumino
WHEN OLD.downloaded <> NEW.downloaded AND NEW.downloaded = 1
BEGIN
INSERT OR IGNORE INTO BookTags(book_id, tag_id)
SELECT NEW.id, Tags.tag_id FROM Tags
WHERE Tags.name = 'li_downloaded';
END;
CREATE TRIGGER IF NOT EXISTS update_tags_on_downloaded_0
AFTER UPDATE ON Tsumino
WHEN OLD.downloaded <> NEW.downloaded AND NEW.downloaded = 0
BEGIN
DELETE FROM BookTags
WHERE BookTags.tag_id IN (SELECT Tags.tag_id FROM Tags
WHERE Tags.name = 'li_downloaded')
AND BookTags.book_id = NEW.id;
END;
__________________________________________________________________________________________________________
removed TRIGGER set_last_change_tags, since two triggers had the same name (but they got renamed now)
DROP TRIGGER IF EXISTS set_last_change_tags;
__________________________________________________________________________________________________________
import module from subdir
create __init__.py in folder (empty, but can have setup/init code) -> package
then (in py3) import with:
import subdir.submodule
__________________________________________________________________________________________________________
display Collections in webGUI
the collection list on tsumino is ordered and it would be possible to get the id (e.g. when getting tsuinfo)
in the collection, but that id could change if a preceding book gets added
(we could update it when adding a book to the db while other parts of the collection are already present, but that would
probably complicate the code too much for little benefit,
or we could do it manually (like tsumino probably does as well when adding the book), but then we'd need
a new col or, even better, another bridge table for collections etc.)
-> display books in collection but not in original order
__________________________________________________________________________________________________________
Handling a tsumino book being removed or rather replaced
When adding it via manga_db.py the old manga might be in the DB and INSERT INTO will fail due to the UNIQUE constraint on
url or title -> e.g. sqlite3.IntegrityError: UNIQUE constraint failed: Tsumino.url
I can't check anything other than the book url/the id contained in the url for duplicates when collecting links
(unless I already dl the site, which would not make sense)
on UNIQUE constraint fail -> provide options to replace (update everything with new vals except lists (only downloaded
gets updated)), keep_both (append (DUPLICATE nr) to the title of the old book and add the new one to the db) or keep_old
(do nothing); sketch below
still too dependent on tsumino, so heavy refactoring necessary to be compatible with other sites
-> a book in the db would have to store multiple external urls etc.
but the solution is okay for the current use case so it's fine
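minimal sketch of those three options in sqlite3 (simplified: assumes the collision is on
title; function/column names are made up, not the actual manga_db code):
import sqlite3

def add_book(con, title, url, on_duplicate="keep_old"):
    try:
        with con:  # commits on success, rolls back on exception
            con.execute("INSERT INTO Tsumino (title, url) VALUES (?, ?)", (title, url))
    except sqlite3.IntegrityError:
        if on_duplicate == "replace":
            with con:  # update everything with the new vals (abridged to url here)
                con.execute("UPDATE Tsumino SET url = ? WHERE title = ?", (url, title))
        elif on_duplicate == "keep_both":
            with con:  # rename the old book, then add the new one
                con.execute("UPDATE Tsumino SET title = title || ' (DUPLICATE 1)' "
                            "WHERE title = ?", (title,))
                con.execute("INSERT INTO Tsumino (title, url) VALUES (?, ?)", (title, url))
        # keep_old: leave the existing row untouched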
__________________________________________________________________________________________________________
N:\coding\tsu-info>manga_db\t.py
sys.argv == ['N:\\coding\\tsu-info\\manga_db\\t.py']
N:\coding\tsu-info>manga_db\t.py
current working dir == N:\coding\tsu-info
N:\coding\tsu-info>manga_db\t.py
sys.path[0] == N:\coding\tsu-info\manga_db
so when calling a script from the cmdline:
sys.argv[0] is the absolute path to the called script,
the cwd in python is the cwd of cmd.exe/the shell,
and sys.path[0] is the path to the dir that contains the script
with runner file t-runner.py that runs test\t.py
sys.path[0] == N:\coding\tsu-info
sys.argv[0] == N:\coding\tsu-info\t-runner.py
os.getcwd() == N:\coding\tsu-info
__________________________________________________________________________________________________________
tsumino is re-using old ids that were deleted:
http://www.tsumino.com/Book/Info/31726/mama-wa-roshia-haafu-no-akuyaku-resuraa-icha-rabu-kyuuaihen-my-mom-is-a-half-russian-heel-wrestler-the-sexy-love-dovey-courting-edition
is now:
http://www.tsumino.com/Book/Info/31726/secret-fakku
=> I can't count on id_onpage being unique!!!
__________________________________________________________________________________________________________
some collections have a "," in their name e.g. Anyway, I Want to Have Sex With...
=> need to either escape the comma or use a different separator
__________________________________________________________________________________________________________
# create index for imported_from,id_onpage so SQLite can access it
# with O(log n) instead of O(n) complexity when using WHERE id_onpage = ?
# (the same exists for the PRIMARY KEY, but using rowid/PK INTEGER ASC is still faster)
# order is important for a composite key index:
# to utilize a multicolumn index, the query must contain conditions
# on the columns in the same order as defined in the index
# querying by just imported_from will work, or by imported_from,id_onpage,
# but just id_onpage won't work
# by making the index UNIQUE we get an error if we try to insert values
# for imported_from,id_onpage that are already in the table as the same combo
# TODO but from sqlite.org: The left-most column is the primary key
# used for ordering the rows in the index. The second column is used to
# break ties in the left-most column. If there were a third column, it
# would be used to break ties for the first two columns
# -> makes more sense to have id_onpage first, since it will have few cases where
# there are still duplicates left, whereas imported_from will have TONS
# but then I can't sort by site with the speed bonus of the index; only
# id_onpage alone would work, which is of no use
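sketch of the statement the comment above describes (index name made up); shown with the
imported_from,id_onpage order the comment starts from:
import sqlite3

con = sqlite3.connect("manga_db.sqlite")  # path illustrative
con.execute("""CREATE UNIQUE INDEX IF NOT EXISTS idx_imported_from_id_onpage
               ON Tsumino (imported_from, id_onpage)""")
# with this order WHERE imported_from = ? alone can use the index, as can
# WHERE imported_from = ? AND id_onpage = ?, but WHERE id_onpage = ? alone
# can't; swap the two columns for the id_onpage-first trade-off from above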
__________________________________________________________________________________________________________
# create table if it doesn't exist
# group is a reserved keyword -> use groups for the col name
# SQLite does not have a separate Boolean type -> stored as integers 0 (false) and 1 (true)
# FOREIGN KEY ON DELETE/UPDATE RESTRICT disallows deleting/modifying the parent
# key if it has child key(s)
# title_foreign can't be UNIQUE since some uploads on tsumino had the same asian title
# but a different english title / were a different book; maybe because they're chapters of
# a larger book?
# for now let's assume the english title is always unique (at least it has been so far, for
# over 2k books); the alternative would be to only leave title UNIQUE and (which I have to
# do anyway) always store the title as english-title / foreign-title -- this may be the better
# approach anyway
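minimal sketch of a table along those lines (abridged; column set and the Category table are
illustrative, not the full real schema):
import sqlite3

con = sqlite3.connect("manga_db.sqlite")  # path illustrative
con.execute("""
CREATE TABLE IF NOT EXISTS Tsumino (
    id INTEGER PRIMARY KEY,
    title TEXT UNIQUE NOT NULL,           -- english-title / foreign-title
    title_foreign TEXT,                   -- deliberately NOT unique, see above
    favorite INTEGER NOT NULL DEFAULT 0,  -- boolean stored as 0/1
    groups TEXT,                          -- "group" is a reserved keyword
    category_id INTEGER,
    FOREIGN KEY (category_id) REFERENCES Category(id)
        ON DELETE RESTRICT ON UPDATE RESTRICT
)""")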
__________________________________________________________________________________________________________
get rows with duplicate values:
SELECT title_foreign, COUNT(*) c FROM Tsumino GROUP BY title_foreign HAVING c > 1;
__________________________________________________________________________________________________________
SQLAlchemy also loads all columns and relationships (e.g. associated columns (many-to-many), one-to-many
etc.) by default when an object/row is loaded
But there are several ways to increase performance by not doing that:
- the lazy kwarg/function when creating the relationship, with options to only load the
relationships once they're accessed
- deferred columns, which defer loading normal columns
-> use the same approach for my DBRow classes (sketch below)
furthermore SQLAlchemy doesn't use the class's __init__, so you can do
anything with a Model class that you could do with a regular class,
whereas my classes use the __init__ to load/represent the cols of the row
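minimal sketch of what that could look like for a DBRow-style class (table/column names as in
the queries above, rest made up): the tag list is only fetched on first access:
class Book:
    def __init__(self, con, row):
        self._con = con
        self.id, self.title = row  # normal columns loaded eagerly from the row
        self._tags = None          # relationship deferred

    @property
    def tags(self):
        if self._tags is None:     # first access -> emit the SELECT
            rows = self._con.execute(
                "SELECT Tags.name FROM Tags, BookTags bt "
                "WHERE bt.tag_id = Tags.tag_id AND bt.book_id = ?", (self.id,))
            self._tags = [r[0] for r in rows]
        return self._tags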
__________________________________________________________________________________________________________
# search assoc col with pattern OR
SELECT Books.*
FROM BookTag bx, Books, Tag
WHERE bx.tag_id = Tag.id
AND Tag.id IN (
SELECT Tag.id FROM Tag
WHERE Tag.name LIKE '%arge%'
OR Tag.name LIKE '%ale%'
)
AND Books.id = bx.book_id
GROUP BY Books.id
# for AND add
HAVING COUNT (Books.id) = NR_OF_PATTERNS
# LIKE pattern is slower than .name IN (...)
# search for multiple cols with AND (without HAVING COUNT it would be OR)
SELECT Books.*
FROM BookTag bx, Books, Tag, BookList, List
WHERE bx.tag_id = Tag.id
AND Tag.name IN ('Large Breasts', 'Nakadashi')
AND Books.id = bx.book_id
AND BookList.list_id = List.id
AND List.name in ('to-read', 'downloaded')
AND BookList.book_id = Books.id
GROUP BY Books.id
HAVING COUNT (Books.id) = 4/NR OF TAGS IN BOTH (..) SUMMED TOGETHER
# tags with join
SELECT Books.*, t1.name
FROM Books
LEFT JOIN BookTag as tt on tt.book_id = Books.id
LEFT JOIN Tag as t1 ON tt.tag_id = t1.id
WHERE t1.name in ('Large Breasts', 'Nakadashi')
GROUP BY Books.id
# for and
HAVING COUNT (BOOKS.ID) = 2/NR OF TAGS IN (..)
##############
SELECT Books.*
FROM BookTag bx, Books, Tag
WHERE bx.tag_id = Tag.id
AND Tag.name IN ('Large Breasts', 'Nakadashi')
AND Books.id = bx.book_id
GROUP BY Books.id
HAVING COUNT (Books.id) = 2
-> ~20ms on ~2700rows returning 1007
SELECT Books.*
FROM Books
LEFT JOIN BookTag as tt on tt.book_id = Books.id
LEFT JOIN Tag as t1 ON tt.tag_id = t1.id
WHERE t1.name in ('Large Breasts', 'Nakadashi')
GROUP BY Books.id
HAVING COUNT (BOOKS.ID) = 2
-> ~40ms on ~2700rows returning 1007
# multiple cols with join
SELECT Books.*, Tag.name, Category.name, List.name
FROM Books
LEFT JOIN BookTag bt ON Books.id = bt.book_id
LEFT JOIN Tag ON Tag.id = bt.tag_id
LEFT JOIN BookCategory bc ON Books.id = bc.book_id
LEFT JOIN Category ON Category.id = bc.category_id
LEFT JOIN BookList bl ON Books.id = bl.book_id
LEFT JOIN List ON List.id = bl.list_id
WHERE Tag.name in ('Large Breasts', 'Elf')
AND Category.name in ('Manga')
AND List.name in ('to-read', 'downloaded')
-> ~40ms on ~2700rows returning 1257
GROUP BY Books.id
HAVING COUNT(Books.id) = 4 / NR OF ELEMENTS IN THE (..)s MULTIPLIED: 2x1x2=4...
-> ~40ms on ~2700rows returning 13
# 3 cols with normal select
SELECT Books.*, Tag.name, List.name
FROM BookTag bx, Books, Tag, BookList, List, BookCategory, Category
WHERE bx.tag_id = Tag.id
AND Tag.name IN ('Large Breasts', 'Elf')
AND Books.id = bx.book_id
AND BookList.list_id = List.id
AND List.name in ('to-read', 'downloaded')
AND BookList.book_id = Books.id
AND Category.name in ('Manga')
AND BookCategory.category_id = Category.id
AND BookCategory.book_id = Books.id
-> ~140ms on ~2700rows returning 1257
GROUP BY Books.id
HAVING COUNT(Books.id) = 4
-> ~15ms on ~2700rows returning 13
==> join faster on big datasets, select faster when returning low nr of rows
############
multiple excludes for assoc cols
SELECT Books.*
FROM Books
WHERE Books.id NOT IN (
SELECT Books.id
FROM BookTag bx, Books, Tag
WHERE Books.id = bx.book_id
AND bx.tag_id = Tag.id
AND Tag.name IN ('Anal', 'Trap', 'Large Breasts', 'Nakadashi', 'Happy Sex')
)
AND Books.id NOT IN (
SELECT Books.id
FROM BookList bx, Books, List
WHERE Books.id = bx.book_id
AND bx.list_id = List.id
AND List.name IN ('to-read')
)
doing it in one select with Books.id NOT IN (...) doesn't work, since then a book has to have at least one of the
specified values to be included in the Books.id list
SELECT Books.*, Tag.name, List.name
FROM BookTag bx, Books, Tag, BookList, List
WHERE bx.tag_id = Tag.id
AND (Tag.name NOT IN ('Anal', 'Trap', 'Large Breasts', 'Nakadashi', 'Happy Sex'))
AND Books.id = bx.book_id
AND Books.id = BookList.book_id
AND BookList.list_id = List.id
AND (List.name NOT IN ('to-read'))
^^ this also doesn't work, since the result then contains a row for every combination of tag and list that is not in
the supplied values -> so the combinations of to-read with the supplied tags won't be selected,
but every other combination will be, e.g.:
Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen
tags: Ahegao BBM / Fat Man Blowjob Bukkake Cunnilingus Full Color Group Sex Harem Large Breasts Magical Girl Prostitution Virginity (Male)
list: to-read, downloaded
-> to-read and ('Anal', 'Trap', 'Large Breasts', 'Nakadashi', 'Happy Sex') are not included, but the rest of the
combinations are:
"Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen / 月火水木金土日フルカラー2 ホテルヴィーナス出張編" "Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen" "月火水木金土日フルカラー2 ホテルヴィーナス出張編" "1" "19" "1" "2018-10-04" "0" "Ahegao" "downloaded"
"Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen / 月火水木金土日フルカラー2 ホテルヴィーナス出張編" "Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen" "月火水木金土日フルカラー2 ホテルヴィーナス出張編" "1" "19" "1" "2018-10-04" "0" "BBM / Fat Man" "downloaded"
"Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen / 月火水木金土日フルカラー2 ホテルヴィーナス出張編" "Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen" "月火水木金土日フルカラー2 ホテルヴィーナス出張編" "1" "19" "1" "2018-10-04" "0" "Blowjob" "downloaded"
"Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen / 月火水木金土日フルカラー2 ホテルヴィーナス出張編" "Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen" "月火水木金土日フルカラー2 ホテルヴィーナス出張編" "1" "19" "1" "2018-10-04" "0" "Bukkake" "downloaded"
"Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen / 月火水木金土日フルカラー2 ホテルヴィーナス出張編" "Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen" "月火水木金土日フルカラー2 ホテルヴィーナス出張編" "1" "19" "1" "2018-10-04" "0" "Cunnilingus" "downloaded"
"Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen / 月火水木金土日フルカラー2 ホテルヴィーナス出張編" "Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen" "月火水木金土日フルカラー2 ホテルヴィーナス出張編" "1" "19" "1" "2018-10-04" "0" "Full Color" "downloaded"
"Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen / 月火水木金土日フルカラー2 ホテルヴィーナス出張編" "Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen" "月火水木金土日フルカラー2 ホテルヴィーナス出張編" "1" "19" "1" "2018-10-04" "0" "Group Sex" "downloaded"
"Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen / 月火水木金土日フルカラー2 ホテルヴィーナス出張編" "Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen" "月火水木金土日フルカラー2 ホテルヴィーナス出張編" "1" "19" "1" "2018-10-04" "0" "Harem" "downloaded"
"Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen / 月火水木金土日フルカラー2 ホテルヴィーナス出張編" "Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen" "月火水木金土日フルカラー2 ホテルヴィーナス出張編" "1" "19" "1" "2018-10-04" "0" "Magical Girl" "downloaded"
"Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen / 月火水木金土日フルカラー2 ホテルヴィーナス出張編" "Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen" "月火水木金土日フルカラー2 ホテルヴィーナス出張編" "1" "19" "1" "2018-10-04" "0" "Prostitution" "downloaded"
"Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen / 月火水木金土日フルカラー2 ホテルヴィーナス出張編" "Getsu Ka Sui Moku Kin Do Nichi FullColor 2 Hotel Venus Shucchou Hen" "月火水木金土日フルカラー2 ホテルヴィーナス出張編" "1" "19" "1" "2018-10-04" "0" "Virginity (Male)" "downloaded"
__________________________________________________________________________________________________________
LIMIT OFFSET is slow!
As JvdBerg said, indexes are not used in LIMIT/OFFSET. Simply adding 'ORDER BY indexed_field' will not help either.
To speed up pagination you should avoid LIMIT/OFFSET and use a WHERE clause instead. For example, if your primary key field is named 'id' and has no gaps, then your code above can be rewritten like this:
SELECT * FROM myTable WHERE id>=0 AND id<100 //very fast!
SELECT * FROM myTable WHERE id>=95000 AND id<95100 //as fast as previous line!
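sketch of the keyset ("seek") variant in Python, which also works when ids have gaps: remember
the last id of the previous page instead of counting an OFFSET (names made up):
def next_page(con, last_id, page_size=100):
    # the index on id is used to seek, no rows get skipped/counted like with OFFSET
    return con.execute(
        "SELECT * FROM myTable WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size)).fetchall()

# first page: next_page(con, 0); afterwards pass in the id of the last row you got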
__________________________________________________________________________________________________________
key in sqlalchemy (for the identity map) is a tuple of
(<class '__main__.BlogPost'>, (2,), None)
class, id or rather primary key in the table (might be composite, that's why it's a tuple), None is the identity_token (needed when using multiple databases, which means objs can have the same table and primary key)
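toy illustration of that key structure (stand-in class, not sqlalchemy's real code):
class BlogPost:  # stand-in model class
    def __init__(self, pk):
        self.pk = pk

identity_map = {}
obj = BlogPost(2)
key = (type(obj), (obj.pk,), None)  # (class, primary-key tuple, identity_token)
identity_map[key] = obj
assert identity_map[(BlogPost, (2,), None)] is obj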
__________________________________________________________________________________________________________
sqlalchemy has a function that handles turning rows into objects/instances:
check if in the id map
-> refresh attributes from the row
if not in the id map, create a new instance and populate it from the row
sqlalchemy uses the instance dict to set values on the instance (see loading.py -> _populate_full/_partial) using populators -> (attr name, operator.itemgetter(id in row)) tuples to set instance_dict[attr name] = getter(row)
several diff types of populators
...
also differentiates between quick populators -> normal columns (further differentiates between deferred cols etc.)
and populators for relationships etc. that might be lazy loaded
loading techniques:
The loading of relationships falls into three categories; lazy loading, eager loading, and no loading. Lazy
loading refers to objects returned from a query without the related objects loaded at first. When the
given collection or reference is first accessed on a particular object, an additional SELECT statement is
emitted such that the requested collection is loaded.
Eager loading refers to objects returned from a query with the related collection or scalar reference
already loaded up front. The Query achieves this either by augmenting the SELECT statement it would normally
emit with a JOIN to load in related rows simultaneously, or by emitting additional SELECT statements after
the primary one to load collections or scalar references at once.
“No” loading refers to the disabling of loading on a given relationship, either that the attribute is empty
and is just never loaded, or that it raises an error when it is accessed, in order to guard against unwanted
lazy loads.
##################
-> either leaves assoc cols alone if they're lazy loaded
-> or loads them if they're eager loaded
__________________________________________________________________________________________________________
GET or POST?
The GET Method
GET is used to request data from a specified resource.
GET is one of the most common HTTP methods.
Note that the query string (name/value pairs) is sent in the URL of a GET request:
/test/demo_form.php?name1=value1&name2=value2
Some other notes on GET requests:
GET requests can be cached
GET requests remain in the browser history
GET requests can be bookmarked
GET requests should never be used when dealing with sensitive data
GET requests have length restrictions
GET requests are only used to request data (not modify it)
The POST Method
POST is used to send data to a server to create/update a resource.
The data sent to the server with POST is stored in the request body of the HTTP request:
POST /test/demo_form.php HTTP/1.1
Host: w3schools.com
name1=value1&name2=value2
POST is one of the most common HTTP methods.
Some other notes on POST requests:
POST requests are never cached
POST requests do not remain in the browser history
POST requests cannot be bookmarked
POST requests have no restrictions on data length
==> should only use POST when I change data in MangaDB
for just displaying/querying data -> GET (sketch below)
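minimal sketch of that split in Flask (routes/names made up):
from flask import Flask, request

app = Flask(__name__)

@app.route("/books")               # GET by default: just querying/displaying
def list_books():
    tag = request.args.get("tag")  # query string -> cacheable, bookmarkable
    return "books tagged {}".format(tag)

@app.route("/book/<int:book_id>/favorite", methods=["POST"])
def set_favorite(book_id):         # state change -> POST
    value = request.form.get("favorite")  # data travels in the request body
    return "", 204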
webgui security:
CSRF protection is only needed for state-changing operations because of the same-origin policy. This policy states that:
a web browser permits scripts contained in a first web page to access data in a second web page, but only if both web
pages have the same origin.
So the CSRF attack will not be able to access the data it requests because it is a cross-site (that's the CS in CSRF)
request and prohibited by the same-origin policy. So illicit data access is not a problem with CSRF.
As a CSRF attack can execute commands but can't see their results, it is forced to act blindly. For example, a CSRF
attack can tell your browser to request your bank account balance, but it can't see that balance. This is obviously a
pointless attack (unless you're trying to DDoS the bank server or something). But it is not pointless if, for example,
the CSRF attack tells your browser to instruct your bank to transfer money from your account to the attacker's account.
The success or failure page for the transfer is inaccessible to the attacking script. Fortunately for the attacker, they
don't need to see the bank's response, they just want the money in their account.
As only state-changing operations are likely to be targets of CSRF attacks, only they need CSRF defenses.
__________________________________________________________________________________________________________
flask db access
if we use an app factory we can't create the db conn in the module code, otherwise a diff thread will try to use it
and the program crashes
the flask tutorial suggests saving the db connection in the g object, which is unique for every request
not viable in my case since I wouldn't be able to leverage the id_map of MangaDB, and recreating MangaDB for every request
would be inefficient
-> rather save the mdb instance in app_config directly in the factory func
or is saving it in a sep module as a var not as clean? can't init it in the factory func since current_app doesn't
exist at that point yet; I could create a func that inits the module var and takes app as an argument so it
can be called from the app factory
or as a var in a module that won't be imported or used in the factory func/__init__, so it could use current_app
-> doesn't work since we import blueprints and at that point we don't have an app context yet
-> both of these fail with sqlite3.ProgrammingError due to a connection being used by a thread that it wasn't created in,
since apparently every request is run on a sep thread
[2018-10-31 16:46:45,239] INFO on (Thread-1 ) in __init__: Creating app
[2018-10-31 16:46:46,766] WARNING on (MainThread) in _internal: * Debugger is active!
[2018-10-31 16:46:46,773] INFO on (MainThread) in _internal: * Debugger PIN: 177-547-265
[2018-10-31 16:46:46,785] INFO on (Thread-2 ) in _internal: * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
[2018-10-31 16:46:47,964] INFO on (Thread-3 ) in __init__: Getting DB
[2018-10-31 16:46:47,997] INFO on (Thread-3 ) in _internal: 127.0.0.1 - - [31/Oct/2018 16:46:47] "GET / HTTP/1.1" 200 -
[2018-10-31 16:48:12,934] INFO on (Thread-5 ) in __init__: Getting DB
[2018-10-31 16:48:12,944] INFO on (Thread-5 ) in _internal: 127.0.0.1 - - [31/Oct/2018 16:48:12] "GET / HTTP/1.1" 200 -
[2018-10-31 16:48:16,509] INFO on (Thread-6 ) in __init__: Getting DB
[2018-10-31 16:48:16,522] INFO on (Thread-6 ) in _internal: 127.0.0.1 - - [31/Oct/2018 16:48:16] "GET / HTTP/1.1" 200 -
even without debug mode:
* Debug mode: off
[2018-10-31 16:52:36,534] INFO on (MainThread) in __init__: Creating app
[2018-10-31 16:52:38,006] INFO on (MainThread) in _internal: * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
[2018-10-31 16:52:43,402] INFO on (Thread-1 ) in __init__: Getting DB
[2018-10-31 16:52:43,413] INFO on (Thread-1 ) in _internal: 127.0.0.1 - - [31/Oct/2018 16:52:43] "GET / HTTP/1.1" 200 -
[2018-10-31 16:52:48,438] INFO on (Thread-2 ) in __init__: Getting DB
[2018-10-31 16:52:48,450] INFO on (Thread-2 ) in _internal: 127.0.0.1 - - [31/Oct/2018 16:52:48] "GET / HTTP/1.1" 200 -
[2018-10-31 16:52:49,773] INFO on (Thread-3 ) in __init__: Getting DB
[2018-10-31 16:52:49,799] INFO on (Thread-3 ) in _internal: 127.0.0.1 - - [31/Oct/2018 16:52:49] "GET / HTTP/1.1" 200 -
[2018-10-31 16:54:24,125] INFO on (Thread-5 ) in __init__: Getting DB
[2018-10-31 16:54:24,147] INFO on (Thread-5 ) in _internal: 127.0.0.1 - - [31/Oct/2018 16:54:24] "GET / HTTP/1.1" 200 -
-> would lose the id_map benefits since mdb is only alive for one request if threaded
-> to be able to use flask threaded: rewrite of MangaDB needed so that it's similar to sqlalchemy in that it handles
new/delete/update itself in (scoped) sessions with a connection from a connection pool
sqlalchemy uses scoped_session for threaded use
flask-sqlalchemy uses the wrapper SQLAlchemy around sqlalchemy, through which you can access Query, Model, session
-> uses scoped_session
flask-sqlalchemy uses werkzeug.local.Local.__ident_func__ for sqlalchemy's scoped_session scopefunc by default
-> is greenlet.get_ident or thread.get_ident
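minimal sketch of the thread-scoped idea (roughly what ThreadLocalRegistry below does): each
request thread gets its own sqlite3 connection instead of sharing one across threads (names made up):
import sqlite3
import threading

_local = threading.local()

def get_con(db_path="manga_db.sqlite"):  # path illustrative
    con = getattr(_local, "con", None)
    if con is None:  # first use in this thread -> create its own connection
        con = _local.con = sqlite3.connect(db_path)
    return con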
flask-sqlalchemy:
def create_scoped_session(self, options=None):
    """Create a :class:`~sqlalchemy.orm.scoping.scoped_session`
    on the factory from :meth:`create_session`.

    An extra key ``'scopefunc'`` can be set on the ``options`` dict to
    specify a custom scope function. If it's not provided, Flask's app
    context stack identity is used. This will ensure that sessions are
    created and removed with the request/response cycle, and should be fine
    in most cases.

    :param options: dict of keyword arguments passed to session class in
        ``create_session``
    """
    if options is None:
        options = {}

    scopefunc = options.pop('scopefunc', _app_ctx_stack.__ident_func__)
    options.setdefault('query_cls', self.Query)
    return orm.scoped_session(
        self.create_session(options), scopefunc=scopefunc
    )
flask: app_ctx_stack
_app_ctx_stack = LocalStack()
LocalStack imported from werkzeug.local:
def release_local(local):
    """Releases the contents of the local for the current context.
    This makes it possible to use locals without a manager.

    Example::

        >>> loc = Local()
        >>> loc.foo = 42
        >>> release_local(loc)
        >>> hasattr(loc, 'foo')
        False

    With this function one can release :class:`Local` objects as well
    as :class:`LocalStack` objects. However it is not possible to
    release data held by proxies that way, one always has to retain
    a reference to the underlying local object in order to be able
    to release it.

    .. versionadded:: 0.6.1
    """
    local.__release_local__()
class Local(object):
    __slots__ = ('__storage__', '__ident_func__')

    def __init__(self):
        object.__setattr__(self, '__storage__', {})
        object.__setattr__(self, '__ident_func__', get_ident)

    def __iter__(self):
        return iter(self.__storage__.items())

    def __call__(self, proxy):
        """Create a proxy for a name."""
        return LocalProxy(self, proxy)

    def __release_local__(self):
        self.__storage__.pop(self.__ident_func__(), None)

    def __getattr__(self, name):
        try:
            return self.__storage__[self.__ident_func__()][name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        ident = self.__ident_func__()
        storage = self.__storage__
        try:
            storage[ident][name] = value
        except KeyError:
            storage[ident] = {name: value}

    def __delattr__(self, name):
        try:
            del self.__storage__[self.__ident_func__()][name]
        except KeyError:
            raise AttributeError(name)
class LocalStack(object):
    """This class works similar to a :class:`Local` but keeps a stack
    of objects instead. This is best explained with an example::

        >>> ls = LocalStack()
        >>> ls.push(42)
        >>> ls.top
        42
        >>> ls.push(23)
        >>> ls.top
        23
        >>> ls.pop()
        23
        >>> ls.top
        42

    They can be force released by using a :class:`LocalManager` or with
    the :func:`release_local` function but the correct way is to pop the
    item from the stack after using. When the stack is empty it will
    no longer be bound to the current context (and as such released).

    By calling the stack without arguments it returns a proxy that resolves to
    the topmost item on the stack.

    .. versionadded:: 0.6.1
    """

    def __init__(self):
        self._local = Local()

    def __release_local__(self):
        self._local.__release_local__()

    def _get__ident_func__(self):
        return self._local.__ident_func__

    def _set__ident_func__(self, value):
        object.__setattr__(self._local, '__ident_func__', value)
    __ident_func__ = property(_get__ident_func__, _set__ident_func__)
    del _get__ident_func__, _set__ident_func__

    def __call__(self):
        def _lookup():
            rv = self.top
            if rv is None:
                raise RuntimeError('object unbound')
            return rv
        return LocalProxy(_lookup)

    def push(self, obj):
        """Pushes a new item to the stack"""
        rv = getattr(self._local, 'stack', None)
        if rv is None:
            self._local.stack = rv = []
        rv.append(obj)
        return rv

    def pop(self):
        """Removes the topmost item from the stack, will return the
        old value or `None` if the stack was already empty.
        """
        stack = getattr(self._local, 'stack', None)
        if stack is None:
            return None
        elif len(stack) == 1:
            release_local(self._local)
            return stack[-1]
        else:
            return stack.pop()

    @property
    def top(self):
        """The topmost item on the stack. If the stack is empty,
        `None` is returned.
        """
        try:
            return self._local.stack[-1]
        except (AttributeError, IndexError):
            return None
sqlalchemy.orm.scoping.scoped_session:
class scoped_session(object):
    """Provides scoped management of :class:`.Session` objects.

    See :ref:`unitofwork_contextual` for a tutorial.
    """

    session_factory = None
    """The `session_factory` provided to `__init__` is stored in this
    attribute and may be accessed at a later time. This can be useful when
    a new non-scoped :class:`.Session` or :class:`.Connection` to the
    database is needed."""

    def __init__(self, session_factory, scopefunc=None):
        """Construct a new :class:`.scoped_session`.

        :param session_factory: a factory to create new :class:`.Session`
            instances. This is usually, but not necessarily, an instance
            of :class:`.sessionmaker`.

        :param scopefunc: optional function which defines
            the current scope. If not passed, the :class:`.scoped_session`
            object assumes "thread-local" scope, and will use
            a Python ``threading.local()`` in order to maintain the current
            :class:`.Session`. If passed, the function should return
            a hashable token; this token will be used as the key in a
            dictionary in order to store and retrieve the current
            :class:`.Session`.
        """
        self.session_factory = session_factory

        if scopefunc:
            self.registry = ScopedRegistry(session_factory, scopefunc)
        else:
            self.registry = ThreadLocalRegistry(session_factory)

    def __call__(self, **kw):
        r"""Return the current :class:`.Session`, creating it
        using the :attr:`.scoped_session.session_factory` if not present.

        :param \**kw: Keyword arguments will be passed to the
            :attr:`.scoped_session.session_factory` callable, if an existing
            :class:`.Session` is not present. If the :class:`.Session` is present
            and keyword arguments have been passed,
            :exc:`~sqlalchemy.exc.InvalidRequestError` is raised.
        """
        if kw:
            scope = kw.pop('scope', False)
            if scope is not None:
                if self.registry.has():
                    raise sa_exc.InvalidRequestError(
                        "Scoped session is already present; "
                        "no new arguments may be specified.")
                else:
                    sess = self.session_factory(**kw)
                    self.registry.set(sess)
                    return sess
            else:
                return self.session_factory(**kw)
        else:
            return self.registry()

    def remove(self):
        """Dispose of the current :class:`.Session`, if present.

        This will first call :meth:`.Session.close` method
        on the current :class:`.Session`, which releases any existing
        transactional/connection resources still being held; transactions
        specifically are rolled back. The :class:`.Session` is then
        discarded. Upon next usage within the same scope,
        the :class:`.scoped_session` will produce a new
        :class:`.Session` object.
        """
        if self.registry.has():
            self.registry().close()
        self.registry.clear()

    def configure(self, **kwargs):
        """reconfigure the :class:`.sessionmaker` used by this
        :class:`.scoped_session`.

        See :meth:`.sessionmaker.configure`.
        """
        if self.registry.has():
            warn('At least one scoped session is already present. '
                 ' configure() can not affect sessions that have '
                 'already been created.')

        self.session_factory.configure(**kwargs)

    def query_property(self, query_cls=None):
        """return a class property which produces a :class:`.Query` object
        against the class and the current :class:`.Session` when called.

        e.g.::

            Session = scoped_session(sessionmaker())

            class MyClass(object):
                query = Session.query_property()

            # after mappers are defined
            result = MyClass.query.filter(MyClass.name=='foo').all()

        Produces instances of the session's configured query class by
        default. To override and use a custom implementation, provide
        a ``query_cls`` callable. The callable will be invoked with
        the class's mapper as a positional argument and a session
        keyword argument.

        There is no limit to the number of query properties placed on
        a class.
        """
        class query(object):
            def __get__(s, instance, owner):
                try:
                    mapper = class_mapper(owner)
                    if mapper:
                        if query_cls:
                            # custom query class
                            return query_cls(mapper, session=self.registry())
                        else:
                            # session's configured query class
                            return self.registry().query(mapper)
                except orm_exc.UnmappedClassError:
                    return None
        return query()
if a scopefunc is passed, sqlalchemy uses ScopedRegistry, else ThreadLocalRegistry:
class ScopedRegistry(object):
    """A Registry that can store one or multiple instances of a single
    class on the basis of a "scope" function.

    The object implements ``__call__`` as the "getter", so by
    calling ``myregistry()`` the contained object is returned
    for the current scope.