Lucky
Is there some place else we could check them?
Alan
not really, I saw someone playing with __new__, but that seemed really odd and I didn't trust it
Lucky
Naah, don't use __new__
Alan
thanks
Valentin
Hi, everybody. I came here just to check if the community here is as dead as on GitHub. Don't want Pony to die. ;D
Alexander
Hi Xunto! The community is not dead, but it is not as big as Django's. But as the main developer I can assure you that Pony will not die before me :)
For the last few weeks I have been working on migrations and missed the bug on GitHub that bothered you. Actually, it was already fixed; you can take the development version of Pony from GitHub and check if it works as expected. On Monday I want to release version 0.7.2 of Pony, which will include this fix
Henri
🎉🎆🎉
Valentin
@akozlovsky I saw your answer on GitHub, thanks. But the unanswered issue wasn't the main cause of my worry. It's just too quiet there for an ORM that is this good. It's good to hear it's not dead.
Alexander
Lately releases have begun to appear less often; this is caused by the work on migrations. I hope that after we publish the migration tool, the frequency of releases with new features will increase
Matthew
https://ponyorm.com/css/img/top-img.png
Matthew
For some of my Pony queries I have Redis caching of the results, done on an ad-hoc basis. Has anyone thought about having optional caching built in on a per-query basis?
Alexander
Yes, we have such plans too, but currently other tasks have higher priority for us
Alexander
It will be great to have such a layer
Alexander
In cases where all work with the database is performed through Pony, it would be possible to have a consistent write-through Redis cache, which can completely avoid database hits for some db sessions
Matthew
Can you give an example session?
Alexander
The main problem with a caching layer is data invalidation. Many queries share the same information, so when an update is performed, it is hard to tell which query results need to be invalidated. One solution is to invalidate all queries for a specific entity if one instance of that entity was changed. Another option is to have a limited time-to-live for each query result, accepting the chance of getting an inconsistent result before that timeout.
I have in mind a cache in which each object is cached and invalidated separately by id. Then a query can be automatically transformed to retrieve only ids from the database, and the other fields will be received from Redis. In some cases, such a query will be more lightweight. If the db session retrieves objects by id or by reference through attributes and collections, then all information can be loaded from Redis
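A minimal sketch of that per-object cache in plain Python, with a dict standing in for Redis; query_ids and load_from_db are hypothetical placeholders, not Pony API:

```python
# Per-object cache sketch: a dict stands in for Redis,
# query_ids() and load_from_db() are hypothetical placeholders.
object_cache = {}  # id -> object fields

def query_ids():
    # The query would be rewritten to fetch only primary keys
    return [1, 2, 3]

def load_from_db(ids):
    # Fallback for ids missing from the cache
    return {i: {"id": i, "name": "obj%d" % i} for i in ids}

def fetch(ids):
    hits = {i: object_cache[i] for i in ids if i in object_cache}
    missing = [i for i in ids if i not in hits]
    if missing:
        loaded = load_from_db(missing)
        object_cache.update(loaded)  # write-through: keep cache in sync
        hits.update(loaded)
    return [hits[i] for i in ids]

result = fetch(query_ids())
```

A second fetch with the same ids would then be served entirely from the cache.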
Matthew
Might it be simplest (and highest ROI) to do something like passing an optional TTL to a given query, and if no TTL is passed, it's not cached?
Matthew
For me, it's often only a few queries in a session that are intensive
Matthew
there isn't too much value in pulling an entity by ID from redis rather than the database
Matthew
Maybe:
select(x for x in X if x.y == 10).cache(ttl=3600)
Alexander
Sure, it is simpler, but sometimes the query result can be incorrect before the TTL expires. If that is acceptable, then this approach is the most appropriate
Matthew
Right, if it's a manual tagging of queries, the programmer can know that staleness isn't an issue
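That opt-in could look roughly like this, assuming the raw SQL serves as the cache key and the query is wrapped in a callable:

```python
import time

_query_cache = {}  # sql -> (expires_at, result)

def run_cached(sql, run, ttl=None):
    # No TTL passed: execute directly, never cache
    if ttl is None:
        return run()
    now = time.time()
    hit = _query_cache.get(sql)
    if hit is not None and hit[0] > now:
        return hit[1]  # still fresh; staleness accepted by the caller
    result = run()
    _query_cache[sql] = (now + ttl, result)
    return result

calls = []
def fake_query():
    calls.append(1)
    return [42]

first = run_cached("select 1", fake_query, ttl=3600)
second = run_cached("select 1", fake_query, ttl=3600)
```

The second call hits the cache, so the underlying query runs only once.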
Matthew
Can then use machine learning if you want to work out which queries to cache :)
Alexander
lol
Matthew
One way I get around staleness in my custom caching is the ability to use a custom caching key. If the Pony caching key was the raw SQL of the query, a custom key could be appended, like '{date}', '{current_user_id}' or '{unix_time}'
Matthew
so the key might be "select * from x where y = 10.2017-07-10"
Matthew
after midnight, the old cache is no longer used
Matthew
so maybe query.cache(ttl=86400, key_template='{date}')
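A sketch of that key scheme; cache_key here is a hypothetical helper, not Pony API:

```python
import time
from datetime import date

def cache_key(sql, key_template=None):
    # Append the rendered template so the key rolls over on its own
    # (e.g. a '{date}' key stops matching after midnight)
    if not key_template:
        return sql
    suffix = key_template.format(date=date.today().isoformat(),
                                 unix_time=int(time.time()))
    return "%s.%s" % (sql, suffix)

key = cache_key("select * from x where y = 10", "{date}")
```

With the '{date}' template the key looks like "select * from x where y = 10.2017-07-10", matching the example above.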
Matthew
the results of certain queries
Matthew
https://gist.github.com/db6c492ac320b8608190b2fcf055866f
Matthew
I do some weird stuff with functools.partial, not proposing that for pony :)
Matthew
I just use cPickle to store the query results in redis
Matthew
the gist I gave is probably a bad example, as it is quite complex
Matthew
https://gist.github.com/anonymous/608bc523c0410ad73b5463581981252e
Matthew
this seems to work
Matthew
models.redis is a redis connection object
Matthew
the query sql could be hashed if memory usage was a concern
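For example, hashing would keep every key at a fixed length:

```python
import hashlib

def hashed_key(sql):
    # 64-char hex digest regardless of how long the query is
    return hashlib.sha256(sql.encode("utf-8")).hexdigest()

key = hashed_key("select * from x where y = 10")
```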
Matthew
It would need to only be used for SELECTs of course
Lucky
Caching inserts 🤣
Matthew
Is there a way to access get_sql() on a query that ends in count() ?
Matthew
since count() is not lazy
Alexander
Right now, only if you rewrite the query as select(count() for x in ...). Also, right after query execution you can see db.last_sql, but that is a bit too late
Matthew
If it was integrated into pony as query.cache() then I don't think subclassing would work
Matthew
I think that caching code is at the wrong layer, it needs to be within pony, after sql is generated but before it is executed
Matthew
Then count(), first(), limit() etc would work
Alexander
I think some time later we will add such a layer
Anonymous
Hi Guys,
We're trying to determine a way to add columns to a table on the fly with Pony.
For example, remapping:
class Customers(db.Entity):
    email = Required(str, unique=True)
    name = Required(str)
... to:
class Customers(Customers):
    surname = Required(str)
Anonymous
Is there a clever way to do this?
Alexander
In order to do this you need a migration tool. I'm working on it right now and it should be ready soon. At first I plan to put an experimental version on GitHub, and then we will include it in an official release
Anonymous
Nice.
Lucky
Permanent link to the luckydonald/pony_up project you mentioned. (?)
Alexey
Hi guys,
We are currently working on an upgrade of our Entity-Relationship Editor. Here you can take a look at the current beta version of the interface: https://beta.editor.ponyorm.com/. You can use the same login and password as for the current editor, as it is connected to the production database.
Your feedback is greatly appreciated!
Alexey
Before releasing migrations we had to fix some bugs.
And we are happy to release Pony version 0.7.2!
Alexey
https://blog.ponyorm.com/2017/07/17/pony-orm-release-0-7-2/
Valentin
@alexeymalashkevich yey cool
Henri
Great to have a dictionary for db_bind!
Matthew
Thank you!
Alexander
Matthew, I think we fixed the memory leak. You can try using db_session without the strict=True option; it should work without any problems
Matthew
Happy to test that, is that listed in the above notes?
Alexander
Yes,
> Fixes #276: Memory leak
Matthew
was it the cache building up objects, or something else?
Matthew
memory usage seems stable without strict=True :)
Alexander
The problem was in handling expressions like
query = query.filter(lambda obj: obj.field == x)
We have an internal function get_lambda_args which returns the names of a function's arguments, like
get_lambda_args(lambda obj: obj.field == x) -> ["obj"]
That function caches lambdas for speed. But a function can have closures with values of external variables, like x in this example. Those values become cached too.
If x is an entity instance, the function sitting in the lambda cache holds a reference to that object indefinitely. And the object has a reference to the whole db_session cache with all loaded objects. As a result, all previously loaded objects stay in memory.
To fix that memory leak, I now cache the function's code object instead of the function itself. A code object is not linked to any closure, so now all objects can be garbage collected properly
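The fix can be sketched like this (a simplified stand-in, not Pony's actual internals):

```python
_args_cache = {}

def get_lambda_args(func):
    # Key the cache by the code object: unlike the function itself,
    # a code object carries no __closure__, so cached entries don't
    # pin captured objects (like x below) in memory
    code = func.__code__
    if code not in _args_cache:
        _args_cache[code] = code.co_varnames[:code.co_argcount]
    return _args_cache[code]

x = object()  # would be an entity instance in Pony
args = get_lambda_args(lambda obj: obj.field == x)
```

Every evaluation of the same lambda expression creates a new function object but shares one code object, so the cache still hits while the closure over x stays collectable.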
Matthew
I'm very glad that's fixed!
Alexander
Me too :)
Matthew
So big db_sessions can still use up a lot of memory, but it'll be reclaimed upon exiting the session?
Matthew
or is strict=True still needed in that case?