@ponyorm

Page 61 of 75
Alexander
06.05.2018
19:55:58
To support Python 3.7 we need to fix some bytecode-related things. I agree that at this moment supporting Python 3.7 has become an important task. I will look into it during the next week

Jim
06.05.2018
19:59:40
Talking about it, wouldn't it be possible to have a prerelease on PyPI, "0.8.1a" or whatever? It would make it more convenient to use pip --pre instead of the git repo.

Alexander
06.05.2018
20:05:26
We need to finish some stuff before the release. Part of it is hybrid methods (the ability to define one-line methods and properties of entity classes and call them inside queries, so they are translated to SQL) and query composition (the ability to write select(x for x in previous_query)). It is almost done; I hope we can finish it during the next week. After that we can make a release

Jim
06.05.2018
20:07:06
Cool

Jim
07.05.2018
12:07:18
Hello: given order = "12,2,9,8,5"

what's the best way to do r = [Item[x] for x in order]? Does this hit the database each time, or is it a single SELECT?

Sorry, order = [12,2,8,8,5]. Not a string but a list

Alexander
07.05.2018
12:20:18
Hi, you can do

```python
from pony import orm
orm.set_sql_debug(True)
```

or with orm.sql_debugging: ... to see SQL queries. Pony translates a generator to SQL when the generator is passed to the select function. A list comprehension (with square brackets) is not a generator and cannot be translated to SQL, so in your expression each object is retrieved individually. You can write the following query in order to retrieve all objects at once:

```python
order = [12, 2, 9, 8, 5]
objects = select(item for item in Item if item.id in order)[:]
```

or, equivalently:

```python
objects = Item.select(lambda item: item.id in order)[:]
```

But you need to sort the resulting list in Python:

```python
objects.sort(key=lambda item: order.index(item.id))
```
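
The final sort is plain Python and can be sketched without Pony at all. A minimal example, where the Item dataclass is only a hypothetical stand-in for a Pony entity with an id attribute:

```python
from dataclasses import dataclass

@dataclass
class Item:  # stand-in for a Pony entity, only for illustration
    id: int

order = [12, 2, 9, 8, 5]
# Simulate objects coming back from the database in arbitrary order
objects = [Item(5), Item(9), Item(12), Item(8), Item(2)]

# Reorder them to match the original id list
objects.sort(key=lambda item: order.index(item.id))
ids = [item.id for item in objects]  # [12, 2, 9, 8, 5]
```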

Jim
07.05.2018
14:15:45
ok thank you

J J
08.05.2018
19:35:42
Is there any documentation on using marshmallow with pony?

Specifically for deserialisation

How do i set max connections to db?

I'd like to avoid the pony.orm.dbapiprovider.OperationalError: FATAL: too many connections for role "xxxxxxxx" error on Heroku

Alexander
09.05.2018
08:21:19
Pony creates a single connection per process (or per thread, if you use multi-threading) for each Database instance (the db variable). If you have too many connections, it may mean you have too many Python processes, or many Database instances pointing to the same database. Each db caches its connection for later use; you can call db.disconnect() to close that cached connection

pony doesn't have a specific integration with marshmallow, but I've heard some people use marshmallow with pony

Альберт
10.05.2018
06:31:43
Hello. I encountered a problem and I can't solve it. Pony ORM doesn't support a primary key > 2147483647. Will long int support be added to PonyORM?

Alexander
10.05.2018
07:50:22
You can specify id = PrimaryKey(int, size=64)

Альберт
10.05.2018
07:50:41
Thank you!

stsouko
10.05.2018
08:57:39
Hello! Is it possible to load lazy attrs in single select query or in prefetch procedure?

Alexander
10.05.2018
09:39:30
It should work: select(x for x in X).prefetch(X.lazy_attr1, X.lazy_attr2)

stsouko
10.05.2018
10:17:10
Good. But what if I want to prefetch a Set attribute which has a lazy attribute? This: .prefetch(X.set_attr.lazy_attr) doesn't work

Jim
10.05.2018
16:20:05
Hello, what's the difference in pony between flush and commit? I must say I don't really see a difference using one or the other.

Alexander
10.05.2018
16:25:13
What database do you use?

Jim
10.05.2018
16:25:55
pg and sqlite

Alexander
10.05.2018
16:30:35
When you do flush, objects are saved as table rows to the database, but they are still invisible to other processes accessing the database, and it is still possible to roll the changes back and return the database to its previous state. After commit (which can be performed explicitly or happens implicitly upon exit from db_session), inserted and updated rows become visible to other processes and cannot be rolled back anymore
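
Pony delegates these transaction semantics to the database itself. The visibility and rollback behavior described here can be sketched with the stdlib sqlite3 module and two connections to the same file, with no Pony involved:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.sqlite')
writer = sqlite3.connect(path)
reader = sqlite3.connect(path)
writer.execute('CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)')
writer.commit()

# "flush" stage: the row is written inside an open transaction...
writer.execute("INSERT INTO person (name) VALUES ('John')")
# ...so another connection cannot see it yet...
assert reader.execute('SELECT count(*) FROM person').fetchone()[0] == 0
# ...and it can still be rolled back
writer.rollback()

# After commit, the row becomes visible to every connection for good
writer.execute("INSERT INTO person (name) VALUES ('Jane')")
writer.commit()
assert reader.execute('SELECT count(*) FROM person').fetchone()[0] == 1
```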

Jim
10.05.2018
16:42:08
ok nice. Is there a way to know, before leaving a db_session, if some commit already happened?

Alexander
10.05.2018
16:45:05
I think not; it is typically not very useful information. In most cases it is not necessary to perform a manual commit. You need a manual commit if you are inserting millions of objects and want to split them into several transactions in order to reduce resource usage

Jim
10.05.2018
16:52:06
My use case was to speed up testing: not having to delete everything if nothing was committed, and instead doing a rollback between tests. Thanks for the explanation

One more thing considering this: why do the docs propose using commit to get the pk value rather than flush? https://docs.ponyorm.com/working_with_entity_instances.html#saving-objects-in-the-database

Matthew
10.05.2018
17:21:46
My understanding is that a commit is required for you to know the primary key value, as there can be multiple concurrent transactions, so until one commits, you wouldn't know which row was "first", and got the next primary key.

Alexander
10.05.2018
17:26:04
Actually, flush should be enough. The documentation does not state that commit is necessary for knowing the id; it just shows that after commit it is defined. I think we need to change the documentation to use flush instead of commit here

Alexander
10.05.2018
19:07:17
Hi) I'm sorry for the question, but I need help. Is it possible to safely generate models from existing tables?

Jim
10.05.2018
19:10:05
Yes for SQL views, so I think it should be possible for tables (I did not test it). Take a look here: https://github.com/ponyorm/pony/issues/160 You might find some help

Alexander
10.05.2018
19:10:21
Thank you

stsouko
11.05.2018
04:24:57
Hello! Is it possible to set data on lazy attrs of entities without loading them? I want to load lazy data in a single query and manually attach it to entities.

Alexander
11.05.2018
04:30:00
Hi! You mean lazy collection attributes?

What do you want to achieve, some performance optimization?

Alexander
11.05.2018
04:50:47
When Pony loads an object's non-collection attributes, it attaches them to the object using an internal _db_set_ method, like:

```python
person._db_set_({'name': 'John', 'age': 30, 'spouse': person2})
```

This call tells Pony that the object has the specified attribute values in the database at this moment. In principle, you can use this method to imitate loading attribute values from the database, but you need to be careful, because it is an internal method. For example, all values that you pass should have the correct type. For collection attributes there is no single method that can be invoked to imitate collection loading from the database, but in principle it is possible to introduce such a method by extracting it from the Set.load internal method

stsouko
11.05.2018
06:20:20
I got the next error:

```
s._db_set_({'data': d})
  File "env/lib/python3.5/site-packages/pony/orm/core.py", line 4519, in _db_set_
    assert attr.pk_offset is None
AttributeError: 'str' object has no attribute 'pk_offset'
```

Which type should attr be?

Alexander
11.05.2018
06:24:08
My bad, it should be something like person._db_set_({Person.name: 'John'}). The key of the dictionary is the attribute descriptor

stsouko
11.05.2018
06:27:08
Next error: TypeError: the JSON object must be str, not 'dict'. My data attr is JSON. How do I load raw data from the db? Or do I need to convert it to a string again before using _db_set_?

Alexander
11.05.2018
06:36:07
As I said, there are some peculiarities with different data types, because this is an internal method. It receives values that were read from the cursor and converted using the appropriate converter's sql2py method. For the JSON datatype, values from the database are received and stored as string values (for the optimistic check of JSON attributes we need to keep the exact string value received from the database and not just some parsed dictionary)

The loop for loading multiple objects from the cursor internally looks like:

```python
objects = []
for row in rows:
    real_entity_subclass, pkval, avdict = entity._parse_row_(row, attr_offsets)
    obj = real_entity_subclass._get_from_identity_map_(pkval, 'loaded')
    obj._db_set_(avdict)
    objects.append(obj)
```

I'm still not sure you need to deal with the internal mechanics manually. It may be error-prone, because the logic is pretty complex for some datatypes. What is your goal? Do you want to reduce the number of queries?

stsouko
11.05.2018
06:42:13
Yes. In some cases I need to load lazy data for a one-to-many relationship. I'm using the prefetch method, but it omits lazy data.

```python
with db_session:
    q = db.Reaction.select(lambda x: x.user_id == 1)\
        .order_by(lambda x: x.id)\
        .prefetch(db.Reaction.metadata)\
        .page(1, pagesize=10)[:]
```

This is my query.

db.Reaction.metadata = Set(db.ReactionConditions)

db.ReactionConditions.data is a lazy JSON attr

Alexander
11.05.2018
06:48:56
So the JSON attribute is lazy, but the Set attribute is not lazy

Ok, I understand your problem, and need some time to suggest a solution

stsouko
11.05.2018
07:09:31
Thank You!

Jim
11.05.2018
12:06:12
Hello, I finally did a PR about flush/commit to try to summarize what was said yesterday: https://github.com/ponyorm/pony-doc/pull/6

stsouko
11.05.2018
12:08:31
It means it will be peristed

Typo

Jim
11.05.2018
12:10:30
thx

Jim
13.05.2018
18:31:52
Did someone ever try to mock queries (lambda or generator)? Everything is easily mockable, but concerning queries I can't find any reliable way to do it.

Matthew
18.05.2018
16:16:09
I have a situation where I need to do approx 10k queries like A.get(z=1, b=2)


Matthew
18.05.2018
16:16:24
Individual queries are fast enough but overall it is very slow

How can I combine this into one select query with pony?

It needs to lookup with two attributes


Alexander
18.05.2018
17:17:38
So you have a list of pairs with attribute values?

Matthew
18.05.2018
17:29:56
Yes

Ideally I'd be able to zip up the list of pairs, and the select results, so it's easy to see which pairs have a result

Alexander
18.05.2018
17:39:28
Ideally, it should be expressed as:

```python
A.select(lambda a: (a.z, a.b) in pairs_list)
```

but I checked, and right now it is not working. I think we need to fix it. #todo In the meantime, you can use a query with a raw SQL fragment of the following structure:

```python
conditions = '(%s)' % ' or '.join(
    'a.z = %d and a.b = %d' % (val1, val2)
    for val1, val2 in pairs_list)
A.select(lambda a: raw_sql(conditions))
```

If the values are not numeric, it may be better to use $parameters to avoid SQL injection:

```python
conditions = '(%s)' % ' or '.join(
    'a.z = $(pairs_list[%d][0]) and a.b = $(pairs_list[%d][1])' % (i, i)
    for i, (val1, val2) in enumerate(pairs_list))
A.select(lambda a: raw_sql(conditions))
```
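
The string building in both variants is plain Python and can be checked on its own; the pairs_list values below are made up for illustration:

```python
# Hypothetical (z, b) value pairs to look up in one query
pairs_list = [(1, 2), (3, 4)]

# Numeric variant: values are inlined directly into the SQL fragment
numeric = '(%s)' % ' or '.join(
    'a.z = %d and a.b = %d' % (val1, val2)
    for val1, val2 in pairs_list)
# -> '(a.z = 1 and a.b = 2 or a.z = 3 and a.b = 4)'

# Parameterized variant: $() placeholders keep values out of the SQL text
parameterized = '(%s)' % ' or '.join(
    'a.z = $(pairs_list[%d][0]) and a.b = $(pairs_list[%d][1])' % (i, i)
    for i in range(len(pairs_list)))
```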

Etienne
19.05.2018
00:10:20
Is it possible to bind to a sqlite database from a stringio buffer?

Matthew
19.05.2018
09:52:05
Thanks Alexander, I look forward to the fix :)

You can use an in-memory SQLite database and probably put it into a StringIO object; what is your use case?

Etienne
19.05.2018
10:27:57
I want to do it the other way around.

I have an encrypted db file which I decrypt and I don't want to store the decrypted data on disk. I need a way to interface with that data.

Matthew
19.05.2018
11:24:19
https://stackoverflow.com/a/3850259/964375

encrypt the result of iterdump, load the decrypted data into an in-memory sqlite file

Etienne
19.05.2018
12:34:49
How do I bind to an in-memory database with pony? :memory: creates a new one from what I've seen

Which is why I wanted to use a StringIO file. The problem being pony uses a filename, not a file object.

Google
Alexander
19.05.2018
12:37:44
As far as I know, sqlite does not allow binding to an existing in-memory database. You can bind Pony to a new in-memory database and then load data into it. You can obtain the native sqlite connection from the database object using db.get_connection()

Etienne
19.05.2018
12:41:04
I have no idea

The problem with loading to the new memory database is that it just shifts the problem upstream

Alexander
19.05.2018
12:44:43
```python
db = Database('sqlite', ':memory:')
define_entities(db)
db.generate_mapping(create_tables=False)
connection = db.get_connection()
load_data(connection, backup_data)
```

Etienne
19.05.2018
12:48:13
Sorry, I can't find load_data in the docs; what kind of format does backup_data have to be in?

Alexander
19.05.2018
12:51:46
It's just some pseudocode. I mean, Pony allows you to obtain a native sqlite3 connection to an empty in-memory database, and then you can write a function which loads the decrypted data into it. I don't think the Python sqlite3 module accepts a file descriptor instead of a file name, so probably you need to load the decrypted data into an empty in-memory database, not the other way around

Etienne
19.05.2018
12:55:27
Yeah, seems sqlite doesn't allow it. But it just shifts the problem upstream: since I have to deal with the data myself, it defeats the purpose of using pony for my use case :(.

Either way, thanks for the insights, maybe I need to find a simpler way to deal with encryption

Alexander
19.05.2018
12:59:27
load_data may be some generic function, like in that stackoverflow example that Matthew posted. It is not necessary for load_data to know the definitions of the entities. But yeah, maybe there is some easier approach to encryption
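
A generic load_data along the lines of that StackOverflow answer can be sketched with the stdlib sqlite3 module alone. The encryption step is omitted here; the dump string simply stands in for the decrypted backup:

```python
import sqlite3

def load_data(connection, dump_sql):
    # Replay a textual SQL dump into an (empty) connection,
    # e.g. the one obtained from Pony via db.get_connection()
    connection.executescript(dump_sql)

# Build a throwaway source database standing in for the decrypted file
src = sqlite3.connect(':memory:')
src.execute('CREATE TABLE item (id INTEGER PRIMARY KEY, name TEXT)')
src.executemany('INSERT INTO item (name) VALUES (?)', [('apple',), ('pear',)])
src.commit()

# iterdump() yields the whole database as SQL text; this is the string
# one would encrypt to disk and later decrypt back into memory
dump_sql = '\n'.join(src.iterdump())

dst = sqlite3.connect(':memory:')
load_data(dst, dump_sql)
rows = dst.execute('SELECT name FROM item ORDER BY id').fetchall()
# rows == [('apple',), ('pear',)]
```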

Matthew
19.05.2018
13:05:56
SQLite likely doesn't allow Python file handles because it is a C library

Etienne
19.05.2018
13:09:57
Ooh yeah, I misunderstood your first reply; I thought you meant load the dump into an in-memory file and then connect to it using pony.
