Alexander
I'm not sure the latter code is bullet-proof. If some exception like NetworkError arises after commit() but before redis_enqueue, the object will not be added to the queue, but will have an updated scheduled attribute value.
Also, I don't see how the latter code will prevent the error. It may shorten the transaction time and reduce the chance of conflict, but it probably will not prevent it absolutely. Also, operations with Redis should be fast and probably don't have a big impact on transaction time.
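To illustrate the window (a minimal sketch with a stub in place of the real Redis call, following the snippet under discussion):

from pony.orm import db_session, commit

def redis_enqueue(obj_id):
    # stub standing in for the real Redis call from the snippet
    pass

@db_session
def schedule(x, next_run):
    x.scheduled = next_run
    commit()  # the new scheduled value is now durable in the database
    # a NetworkError raised here leaves the object marked as scheduled
    # but never enqueued - the failure mode described above
    redis_enqueue(x.id)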
> I think attributes other than scheduled are being changed, so not sure making scheduled volatile would help?
If your code snippet is complete, the code reads only two attributes of the object: x.id and x.scheduled. x.id is a primary key and cannot be updated, so the only attribute that can cause OptimisticError is x.scheduled
Alexander
OptimisticError can arise only if two different processes access the same attribute of the same object: process 1 reads the attribute, process 2 updates the same attribute, and then process 1 tries to update any attribute of the same object
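For reference, this is how the scheduled attribute could be marked volatile (a sketch with a hypothetical Task entity; as I understand the volatile option, such an attribute is excluded from the optimistic check):

from datetime import datetime
from pony.orm import Database, Optional, Required

db = Database('sqlite', ':memory:')

class Task(db.Entity):
    name = Required(str)
    # volatile=True: the value may be changed by another process,
    # so Pony does not include it in the optimistic check on UPDATE
    scheduled = Optional(datetime, volatile=True)

db.generate_mapping(create_tables=True)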
Matthew
I wonder if this is two scheduler processes actually running at the same time then, as it uses cron. I think I'm going to try switching it to a process running under supervisor; that might stop the errors
Alexander
> redis_enqueue should only ever be run once in a given time period (hours).
It may be possible that when it starts once per hour, it starts in multiple processes at the same time
Matthew
a slow query seems to have been causing multiple schedulers to run at once. Thanks for your help!
Matthew
Is there a way to update an attribute without loading the full model data? For example:
x = X[1]
x.y = 2
Matthew
it seems a major part of my execution time is loading a lot of data for each instance
Alexander
What is the type of the attribute?
Matthew
the type I want to update is datetime
Matthew
there are a lot of Unicode attributes which seem to take a long time to load
Alexander
At this moment you have three options:
1) You can specify the lazy=True option for all string attributes you don't want to load. The drawback is that if you later access such attributes, they will be loaded one by one using separate SQL queries (see the sketch after this list).
2) Maybe the slowdown is caused not by loading string attributes, but by multiple SQL queries. It may be possible to reorganize the code to load all objects of the same type in a single query.
3) You can use the internal method _get_by_raw_pkval_ to create an object in memory without loading it. Then Pony will not load object attributes from the database until you access any attribute. But you need to be sure that the object really exists in the database, because Pony will not check it for you:
x = X._get_by_raw_pkval_((1,))
x.y = 2
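A sketch of options 1 and 2 together (hypothetical Doc entity):

from datetime import datetime
from pony.orm import Database, Optional, db_session, select

db = Database('sqlite', ':memory:')

class Doc(db.Entity):
    updated = Optional(datetime)
    # option 1: a lazy column is not loaded together with the object;
    # accessing doc.body later issues a separate SELECT
    body = Optional(str, lazy=True)

db.generate_mapping(create_tables=True)

with db_session:
    Doc(body='x' * 10000)

with db_session:
    # option 2: one query loads all objects at once
    # instead of fetching them one by one via Doc[pk]
    for doc in select(d for d in Doc):
        doc.updated = datetime.now()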
Alexander
I fixed the typo in the method name
Anonymous
Hi, I want to ask about the Pony ORM license. Is it okay if I make proprietary software using PonyORM under its free Apache license?
Alexander
Sure
Alexander
Previously it was dual-licensed (AGPL and commercial), but now it's just Apache 2.0
Anonymous
so it's okay
Anonymous
I read it,
Anonymous
I just need to make sure
Anonymous
because I am developing a Point of Sale system and I am going to use PonyORM for my commercial products
Valentin
Is it ready for REAL production? Did anybody make something big and complex with it?
Matthew
I use Pony for multiple projects that have plenty of scale and paying customers
Matthew
It's a fairly minor part of a complex project really
Matthew
important for productivity etc, but projects are possible without it
Anonymous
I use raw SQL in my projects, but I want to migrate to Pony ORM as well.
Anonymous
What makes you think that Pony ORM isn't quite ready for deployment / production?
Alan
So - I'm trying to define a date in my models, and 'Required(date)' as shown in the docs is giving me a NameError. Any idea why that would be?
Juan Antonio
I'm using datetime with no issues
Alan
are you doing any import other than from pony.orm import * ?
Alan
yes... you're importing date from datetime because that's the type used... that's my bad
Juan Antonio
Yes, you got it right :D
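For anyone else hitting this, the fix is just the extra import (hypothetical Event entity):

from datetime import date
from pony.orm import Database, Required

db = Database('sqlite', ':memory:')

class Event(db.Entity):
    # without 'from datetime import date', Required(date) raises NameError
    when = Required(date)

db.generate_mapping(create_tables=True)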
Matthew
If I do the exact same query twice within the same database session, but within different model methods, is the result cached?
Matthew
def x(self):
    query()

def y(self):
    query()
Matthew
instance.x(); instance.y()
Alexander
If these queries use different generators (even with identical text), then no. To be cached, they need to be based on the same code object:
def x(self):
    return self._get_query()

def y(self):
    return self._get_query()

def _get_query(self):
    return select(obj for obj in X)
Matthew
ah thank you
Matthew
is it the same with count() and first() queries?
Alexander
yes
Alexander
I think
Juan Antonio
Is there a direct way to "jsonify" the result from a query?
Alexander
I want to add it, but at this moment I'm busy with migrations. Right now you can use something like
jsonify([obj.to_dict() for obj in query])
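A fuller sketch, assuming Flask's jsonify and a hypothetical Item entity (Flask 0.11+ accepts a top-level list):

from flask import Flask, jsonify
from pony.orm import Database, Required, db_session, select

app = Flask(__name__)
db = Database('sqlite', ':memory:')

class Item(db.Entity):
    name = Required(str)

db.generate_mapping(create_tables=True)

@app.route('/items')
@db_session
def items():
    # to_dict() converts each entity instance into a plain dict
    return jsonify([obj.to_dict() for obj in select(i for i in Item)])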
Juan Antonio
Thank you as usual :D
Lucky
Alexander
I think it's Flask.jsonify();
Alexander
select(<generator>).extract(<hierarchical query>)
Matthew
https://gist.github.com/anonymous/49f28115700ea3db3a5bd346839ced3f
This seems to be "leaking" memory, like before the db_session bug was fixed. Could it be because of the start / stop methods using a db_session? The generate function does a large number of Pony queries, but it should release the cache once it exits. The process is currently using 37GB of memory.
Matthew
That's after 3 hours of the process running
Alexander
Maybe the code holds a reference to some object from each generate_estimade_data() call, and so prevents garbage collection of db_session data. I think I can change the Pony code to remove the link from the object to the session cache upon db_session exit; it may help to garbage collect at least some objects.
Alexander
We just pushed to GitHub some fixes which should improve garbage collection. So, Matthew, you can try your code again using the GitHub version of Pony.
ichux
Hello @akozlovsky, I've used Pony in the past but I just got back to checking it out again. Its license has changed since I last used it. Well done for all the hard work.
Alexander
Thanks for the kind words :)
Micaiah
What would be the easiest way to make to_dict return only JSON compatible types?
Micaiah
Ignore that, no longer relevant
Lucky
Alexander
Lucky
Nothing specific. It's more that condensed boredom led to the creation of this fact.
Alexander
Ok then.
Alan
weird question: how often is the PyPI package updated compared to GitHub?
Alexander
The PyPI package is updated at each new release, while the GitHub repository is updated on each bug fix. Currently Pony releases are irregular, maybe one release every several months
Alan
got it- thanks
Juan Antonio
Do I need to change my MariaDB engine to fix this error?
(1071, 'Specified key was too long; max key length is 767 bytes')
Valentin
No, you just need a shorter table or field name, I think.
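For what it's worth, this error usually comes from an index over a long string column: with the utf8mb4 charset MariaDB counts 4 bytes per character, so an indexed VARCHAR must stay at or below 191 characters (191 * 4 = 764 <= 767). A sketch of the usual fix in Pony (hypothetical entity):

from pony.orm import Database, Required

db = Database()

class Account(db.Entity):
    # 191 chars keeps the unique index key under the 767-byte limit
    email = Required(str, 191, unique=True)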
Святослав
Lucky
Yeah, the 3rd digit
Matthew
If you need a specific bug fix, you can easily reference a specific git commit until the next proper release
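For example (assuming the ponyorm/pony repository on GitHub; substitute the commit you need):

pip install git+https://github.com/ponyorm/pony.git@<commit>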
Святослав
Release frequency is a part of "production ready" status
Matthew
File "/root/app/.env/local/lib/python2.7/site-packages/pony/orm/core.py", line 427, in new_func
finally: db_session.__exit__(exc_type, exc, tb)
File "/root/app/.env/local/lib/python2.7/site-packages/pony/orm/core.py", line 396, in __exit__
for cache in _get_caches(): cache.release()
File "/root/app/.env/local/lib/python2.7/site-packages/pony/orm/core.py", line 1566, in release
cache.close(rollback=False)
File "/root/app/.env/local/lib/python2.7/site-packages/pony/orm/core.py", line 1592, in close
for obj in cache.objects:
TypeError: 'NoneType' object is not iterable
Matthew
is this a known pony bug?
Alexander
Did you use strict=True param for db_session?
Matthew
yes
Alexander
Yes, this bug is known. The fix is already on GitHub and will be part of the next release. You can use the GitHub version or not use this param.
Matthew
thank you!
Alexander
No problem
Anonymous
Howdy - just wondering if I'm wrong in noticing there is no way to tick "unsigned" on an int in the online designer tool? Should I create an issue on GitHub?
Alexander
You are right, we need to fix it
Anonymous
Ok - thanks :)
Anonymous
And thanks for Pony :D
Alexander
You are welcome :)
Alexander
Hey, @mr_agb. We fixed all the bugs you mentioned:
- incorrect index and foreign key names when a schema name is specified for a table;
- the problem with a primary key column which is part of a secondary index.
Also, you are now able to explicitly specify fk_name for relationship attributes.
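Presumably usage looks like this (my assumption of the option's placement, based on the note above; check the release notes for the exact signature; hypothetical entities):

from pony.orm import Database, Required, Set

db = Database()

class Customer(db.Entity):
    orders = Set('Order')

class Order(db.Entity):
    # fk_name overrides the auto-generated foreign key constraint name
    customer = Required(Customer, fk_name='fk_order_customer')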
Alexander
Also, we plan to deprecate the sql_debug function in favor of a set_sql_debug function, to avoid confusion between the sql_debug and sql_debugging names