М
Hello. Is it possible to run some raw SQL code when Pony creates tables? For example, if I want it to create some stored procedures. Or is that not possible, or a bad idea in general? :)
Alexander
Maybe this can help? https://docs.ponyorm.org/database.html?highlight=connect#customizing-connection-behavior
Andrea
I think you can use db.on_connect to run some queries after the connection has been established
Andrea
at least, I have been using it like this to create some stored procedures
М
@sashaaero, @turingcomplete Yeah, it's useful, thanks. I hoped there was a solution that doesn't require running that code on every connection, but if not, this is a good one too.
Nikolay
Do you have some specific solution in mind? How about adding a pool option to db_session?
1) If pool is not specified, Pony keeps the current behavior where each connection is tied to the current thread, suitable for a single-threaded application or an application with long-lived threads.
2) With db_session(pool=None), Pony will close the connection after exiting from db_session.
3) With db_session(pool=some_pool), it becomes possible to use pools with different semantics, for example to reuse the same connection in different threads.
No, nothing specific, aside from maybe looking into how SQLAlchemy handles this. I'd say pool (or rather the pool type?) should be an argument of Database.bind rather than db_session, unless changing pools on the fly makes sense. I also like it when arg=None results in default behavior, so perhaps the no-pooling pool could be specified by its own pool class rather than a literal None. Are there downsides to reusing connections between different threads? I know SQLAlchemy has issues with connections being shared across processes when forking [1][2], but they also show a nice automated fix for the engine-sharing one, and the others arise in situations that should be avoided.
Alexander
Some database drivers (MySQLdb for example) cannot use same connection from different threads
Alexander
But probably we can consider MySQLdb as obsolete
Nikolay
And IIRC sqlite :memory: connections can't be shared across threads, but making a new connection for each thread doesn't help much either. Edit: it looks like sqlite does have shareable in-memory databases nowadays, though I'm not sure if connections are shareable across threads, or databases across connections.
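The shareable in-memory databases mentioned above can be demonstrated with sqlite's shared-cache URI form: two distinct connections to the same named in-memory database see each other's data. A small sketch using only the stdlib:

```python
import sqlite3

# Two distinct connections to the same named in-memory database,
# via the shared-cache URI form.
uri = 'file:shared_demo?mode=memory&cache=shared'
a = sqlite3.connect(uri, uri=True)
b = sqlite3.connect(uri, uri=True)  # keeps the db alive alongside `a`

a.execute('CREATE TABLE t (x INTEGER)')
a.execute('INSERT INTO t VALUES (42)')
a.commit()  # release the write transaction so `b` can read

value = b.execute('SELECT x FROM t').fetchone()[0]
print(value)  # -> 42
```

Whether such connections can safely move between threads is a separate question (it depends on the driver's check_same_thread setting and sqlite's threading mode).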
Nikolay
Reusing connections between threads would probably be an option alongside using a unique connection for each thread, so the appropriate one can be selected.
Nikolay
Thread locals are garbage collected on thread termination, so perhaps this is enough and there's actually no issue. At least I wasn't able to produce any leaks using psycopg2. If this is not psycopg2-specific, then perhaps there's no point in allowing different connection pool types.
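The pool option sketched earlier in this thread (in particular variant 3, a pool whose connections can be handed to whichever thread asks for one) could look roughly like this minimal illustration; the SimplePool class and its acquire/release interface are hypothetical, not Pony API:

```python
import queue
import sqlite3

class SimplePool:
    """Minimal illustration of a connection pool that is not tied
    to threads: whichever thread asks gets a free connection."""
    def __init__(self, factory, size):
        self._q = queue.Queue()
        for _ in range(size):
            self._q.put(factory())

    def acquire(self):
        return self._q.get()   # blocks until a connection is free

    def release(self, conn):
        self._q.put(conn)

# check_same_thread=False lets the connection move between threads,
# which is exactly the driver capability this discussion hinges on
pool = SimplePool(lambda: sqlite3.connect(':memory:', check_same_thread=False),
                  size=2)

conn = pool.acquire()
result = conn.execute('SELECT 1').fetchone()[0]
pool.release(conn)
print(result)  # -> 1
```

As noted above, whether this is safe depends on the driver: psycopg2 connections are thread-safe at this level, while older MySQLdb connections are not.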
Anonymous
Hi there, has anyone tried to use PonyORM together with Pandas? I'm currently refactoring a program I wrote a while ago, where I'm trying to get rid of SQLAlchemy in the new version, but I forgot that in the old version I wrote the dataframe directly to the database using an engine created with SQLAlchemy: df.to_sql(tablename, con=destination_engine). Any hints on this? Many thanks in advance!
Anonymous
Hi guys! Is this still the best approach to use the same model with different connections? https://github.com/ponyorm/pony/issues/330#issuecomment-470153594
Alexander
I think yes.
Anonymous
👍
André
Pony expects that concurrent requests are processed in transactions, each parallel transaction decorated with db_session and having a separate connection to the database. Pony handles connections under the hood; each connection is associated with the current thread (or the current process if the application is single-threaded).

In an async framework all coroutines use the same thread, so the approach where connections are tied to specific threads does not work. It may lead to a situation where different coroutines use the same connection, which may break transaction boundaries. Also, async code can lead to an enormous number of concurrent coroutines with parallel transactions, which are very long compared with transactions in a usual application. A database server does not work well when there are thousands of concurrent long-living transactions.

But it seems that many people who choose the async approach do not understand what it means from the database point of view. They just say, "hey, async code should definitely be better, because it's async", and don't think about transactions and all that database stuff. For an async application it is better to have a separate database access layer which communicates with the rest of the application using queues, but such an architecture is much more complex than the usual synchronous approach.
Hi, Alexander! You said that async code can lead to an enormous amount of parallel transactions and that a database server doesn't work well in this situation. I understand it, but instead of serializing database access through queues, we could improve the database side in some way, such as using database replicas and load-balancing connections. I mean, getting to hundreds or thousands of connections is a database problem and it should be solved there. From the application's point of view, we need to respond to as many requests as we can, and asynchronous programming is perfect for that. I love Pony and I really wish I could use it in my new async apps, but it's been very hard to fit it into them. Since asynchronous programming is here to stay, I wonder if you could reconsider a plan for an asynchronous version of Pony ORM.
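The queue-based database access layer described above (coroutines never touch the connection; a single dedicated thread owns it and results come back via futures) can be sketched with only the stdlib. This is an architecture illustration, not Pony code:

```python
import asyncio
import queue
import sqlite3
import threading

def db_worker(tasks):
    """Single thread that owns the connection; coroutines never touch it."""
    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE kv (k TEXT, v TEXT)')
    while True:
        item = tasks.get()
        if item is None:          # shutdown sentinel
            break
        sql, params, loop, fut = item
        rows = conn.execute(sql, params).fetchall()
        conn.commit()
        # hand the result back to the event loop thread safely
        loop.call_soon_threadsafe(fut.set_result, rows)

async def run_sql(tasks, sql, params=()):
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    tasks.put((sql, params, loop, fut))
    return await fut

async def main():
    tasks = queue.Queue()
    threading.Thread(target=db_worker, args=(tasks,), daemon=True).start()
    await run_sql(tasks, 'INSERT INTO kv VALUES (?, ?)', ('a', '1'))
    rows = await run_sql(tasks, 'SELECT v FROM kv WHERE k = ?', ('a',))
    tasks.put(None)
    return rows

rows = asyncio.run(main())
print(rows)  # -> [('1',)]
```

A side effect of this design is exactly the point made above: the worker serializes transactions, so the database never sees thousands of concurrent long-living ones.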
Vladyslav 🇺🇦🇪🇺
Folks
Vladyslav 🇺🇦🇪🇺
Pony orm got migrations?
Alexey
Pony orm got migrations?
Almost there. PostgreSQL first. MySQL would take longer, as its ALTER TABLE commands are not as capable as PostgreSQL's. SQLite also requires some polishing. Oracle is not the priority for migrations at the moment.
Vladyslav 🇺🇦🇪🇺
Thx
Alexey
Thx
What database do you use?
Henri
To get the primary key value of a newly created object with an autoincremented primary key, is it better to use flush() or commit()? The docs mention both...
Henri
I guess entity.flush() is the most efficient...
Alexander
Yes, flush should be more efficient
Matthew
Good morning. This expression seems to work in Python, but is giving an error in Pony?
Matthew
Incomparable types 'datetime' and 'bool' in expression: a.last_oauth_error > (a.last_updated or a.created)
Matthew
all 3 fields are datetime
Matthew
and Optional
Matthew
I managed to work around it, but maybe it’s a bug in pony?
Matthew
Hi, try this ( a.last_oauth_error > a.last_updated ) or a.created
I don’t think this would have the same logical meaning
Andrea
Of course, because you are querying a datetime to be greater than Boolean. Maybe you want to compare two datetime 😉
Matthew
It isn’t boolean though. All attributes mentioned are datetimes
Matthew
In normal python:
Matthew
In [2]: None or datetime.date.today()
Out[2]: datetime.date(2020, 5, 25)
Matthew
In [3]: datetime.date(2000, 1, 1) or datetime.date.today()
Out[3]: datetime.date(2000, 1, 1)
Andrea
Oh, so sorry 😳😳 I'm still reading this error: Incomparable types 'datetime' and 'bool' in expression: a.last_oauth_error > (a.last_updated or a.created). I'm not totally sure what you are looking for 🤔, but I figured out that if the two dates are correctly in order OR the object was just created, then do something. I think the answer is to look at it from MySQL's point of view: compare the two datetimes and then the bool value. I'm curious to know the solution 😉
Jim
maybe show us the sql ?
Matthew
This ended up working:
Matthew
a.last_oauth_error and ((a.last_updated and a.last_oauth_error > a.last_updated) or not a.last_updated)
Andrea
😱woa, this is amazing for me. Thank you
Henri
I have the following class:
class PatientCollection:
    def __init__(self, user):
        self.user = user

    def query(self):
        return Patient.select(user=self.user)[:]
When testing it I get the error
> return Patient.select(user=self.user)[:]
E TypeError: select() got an unexpected keyword argument 'user'
When using
def query(self):
    return Patient.select(lambda p: p.user == self.user)[:]
everything is fine. Is this a bug or something wrong in my code?
Jake A
select accepts a generator, filter() accepts kwargs like you're trying to do afaik
Jake A
I could be wrong though, just clicked this channel by accident and saw your Q, havent actually used pony in over a year haha :)
Henri
select accepts a generator, filter() accepts kwargs like you're trying to do afaik
Normally select() also accepts kwargs and in other places it works.
Jake A
Ah, probably not that then :)
Henri
The entity is
class Patient(db.Entity):
    user = Required(User)
    data = Required(str)  # encrypted JSON
    repertorizations = Set("Repertorization")
    created_at = Required(datetime, default=lambda: datetime.utcnow())
    updated_at = Required(datetime, default=lambda: datetime.utcnow())

    def before_update(self):
        self.updated_at = datetime.utcnow()
Matthew
Normally select() also accepts kwargs and in other places it works.
I just tested and select doesn’t accept kwargs.
Henri
I just tested and select doesn’t accept kwargs.
https://docs.ponyorm.org/api_reference.html#Entity.select
Henri
I just tested and select doesn’t accept kwargs.
But yeah, that could be right. I just took a look and the places where I use it are not tested yet. But that would rather be a bug, either in the documentation or the implementation.
Matthew
Are you using at least 0.7.7?
Henri
I think the current. It's 0.7.13
Henri
I just started the project.
Henri
Maybe it doesn't work because user is a relationship.
Alexander
Hi! It seems that for some reason Entity.select(**kwargs) was not actually added to Pony in the 0.7.7 release, by mistake. I think we'll add it in the next release. For now you can do Entity.select().filter(**kwargs) instead
Evgeniy
Hi everyone! Can I make asynchronous requests to the database via await with Pony? Or maybe there are third-party extensions for Pony with this feature?
Alexander
Hi, Pony does not make async requests at this moment
Evgeniy
Thanks! And when is such an improvement tentatively planned?
Alexander
Not in the near future I think, we need to finish migrations & upserts before
Evgeniy
Yeah, migrations are also very important. Good luck to you, your ORM is the best.
Alexander
Thanks
Drop
Hi there! 1. Do you have a bitcoin wallet for donations? 2. Any updates about migrations, maybe some release dates? :)
Alexander
Hi Andrei! We don't have a bitcoin wallet for donations, but I think we can create one.

We are working on migrations right now after some pause; it is hard to estimate a release date, as there are some tricky things with migrations. Specifically, we need to change the structure of some tables to be migrations-friendly. Mostly it's about renaming some columns, many-to-many tables and indexes, but other things too. So the previous tables will be incompatible with the new release.

I want to implement upgrade functionality, where upon calling python migrate.py upgrade Pony will automatically do all necessary changes. But such an upgrade is tricky to implement correctly, because some default object names depend on other object & column names, and we need to cascade-rename multiple objects.

So my concern is that someone starts the app and imports the new version of Pony with the old version of the application code, and then calls db.generate_mapping(create_tables=True) without performing the upgrade first. Pony sees that some tables are missing (they are not missing actually, but their names are old-style) and creates the "missing" tables as the create_tables=True option suggests. After that there will be a mess of old many-to-many tables (filled with data) and empty new tables in the same database, and it can lead to problems if someone does this in production: there will be a sudden loss of all many-to-many relationships between objects, and new many-to-many relationships will go to a separate table and will be unused after the user sees that something is wrong and rolls back to the previous version of Pony.

So I want to avoid data loss on incorrectly updating to the new Pony version when someone forgets to perform the db upgrade, and this is the reason why the migrations implementation is taking more time than initially expected.
Drop
Hi Andrei! We don't have bitcoin wallet for donations, but I think we can create it […]
🧐 Thanks for the information! P.S. It would be great if you added some new donation methods (bitcoin etc., for example) in the future.
Jim
Hi, query.page(row, pagesize) returns a slice. Is there a way to retrieve all those different slices without looping (and then one query per page), or is it just more efficient to fetch all and then create the slices in Python?
Nikolay
Hi Andrei! We don't have bitcoin wallet for donations, but I think we can create it […]
If migrations are going to be integrated into Pony, as opposed to living in a separate package, it would be reasonable for Pony to automatically create a table with meta info like the schema version and the Pony version used to generate the tables, or some "Pony naming convention version". The new incompatible Pony release could check for the existence of this table and raise an error instructing the user to run the upgrade if it's missing.
Alexander
That is exactly how it's developed right now
Matthew
If so, I don’t think using the page function makes sense
Jim
it means I have [[a,b],[c,d]] and I want [a,b] and [c,d]
Matthew
Can you explain further?
Jim
class TableauSection(Section):
    lignes = Required(int, default=0)
    colonnes = Required(int, default=0)
    cells = Set("TableauCell")

    def after_insert(self):
        for r in range(self.lignes):
            for c in range(self.colonnes):
                TableauCell(tableau=self, y=r, x=c)

    def get_cells(self):
        return self.cells.select().sort_by(TableauCell.y, TableauCell.x)

    def get_cells_by_row(self, row):
        return self.get_cells().page(row + 1, self.colonnes)  # page starts at 1 in Pony

class TableauCell(db.Entity):
    x = Required(int)
    y = Required(int)
    tableau = Required(TableauSection)
    PrimaryKey(tableau, y, x)

Sometimes I just want a "row with the same y", so page is very good. But let's say at init time, I would need all those chunks at once.
Matthew
I think doing a query for all of them makes sense
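The fetch-once-then-slice approach suggested above can be sketched in plain Python: fetch the whole ordered result set with a single query, then chunk it locally instead of calling page() once per row. The helper name and sample data are illustrative:

```python
def chunk_rows(rows, row_len):
    """Split a flat, ordered result list into per-row slices,
    i.e. turn [a, b, c, d] into [[a, b], [c, d]] for row_len=2."""
    return [rows[i:i + row_len] for i in range(0, len(rows), row_len)]

# e.g. cells = tableau.get_cells()[:] fetched once inside a db_session
cells = ['a', 'b', 'c', 'd']
rows = chunk_rows(cells, 2)
print(rows)  # -> [['a', 'b'], ['c', 'd']]
```

This works because get_cells() already sorts by (y, x), so consecutive runs of `colonnes` items are exactly the table rows.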
Young
How can I configure a Heroku Postgres database with Pony? Using the env variable?
J J
How can I configure a Heroku Postgres database with Pony? Using the env variable?
db.bind(provider="postgres", dsn=os.getenv('DATABASE_URL'))
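If passing dsn= through doesn't work with your driver version, another option is to split the Heroku-style URL into explicit parameters with the stdlib and pass those to db.bind. The pg_params helper and the sample URL are hypothetical:

```python
from urllib.parse import urlsplit

def pg_params(database_url):
    """Split a postgres://user:pass@host:port/dbname URL (the shape
    Heroku puts in DATABASE_URL) into keyword arguments for db.bind(),
    which forwards them to the psycopg2 driver."""
    u = urlsplit(database_url)
    return dict(user=u.username, password=u.password,
                host=u.hostname, port=u.port or 5432,
                database=u.path.lstrip('/'))

params = pg_params('postgres://alice:secret@db.example.com:5432/mydb')
print(params['database'])  # -> mydb

# usage (assuming a bound-to-be Database object):
# db.bind(provider='postgres', **params)
```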
Matthew
I have a model with active = Required(bool)
Matthew
when I do Model.select() I get ValueError: Attribute Model.active is required